New pages

  • 14:40, 20 October 2025 Unicode: Concepts, Planes & 5-Layer Architecture (ACR/CCS/CEF/CES/TES) (hist | edit) [4,905 bytes] Bfh-sts (talk | contribs) (Created page with "= Unicode and Character Encoding = Unicode is the global standard that unifies all text characters across languages and writing systems. It assigns every symbol — letters, digits, punctuation, emojis, and control marks — a unique number called a *code point*. This allows all languages to coexist in a single consistent system. == Why Unicode was needed == Before Unicode, computers used many different and incompatible encodings such as ASCII, ISO-8859, and Windows...")
  • 14:40, 20 October 2025 Legacy Encodings & Code Pages (EBCDIC, ISO-8859, CP1252) (hist | edit) [4,049 bytes] Bfh-sts (talk | contribs) (Created page with "= Legacy Encodings & Code Pages (EBCDIC, ISO-8859, CP1252) = ASCII was revolutionary but limited to English. As computing spread internationally, new encodings extended ASCII to support more characters. This page explains these historical encodings, how they evolved, and why Unicode replaced them. == The problem with ASCII == ASCII uses 7 bits per character, giving only 128 symbols. This excludes letters with accents (é, ä, ñ), non-Latin alphabets (Cyrillic, Gree...")
  • 14:40, 20 October 2025 ASCII: Code Chart, Control Codes & End-of-Line Conventions (hist | edit) [3,823 bytes] Bfh-sts (talk | contribs) (Created page with "= ASCII: Code Chart, Control Codes & End-of-Line Conventions = This page explains the ASCII standard — how it encodes characters as numbers, the role of control codes, and how text structure (like newlines) is represented. == What is ASCII == ASCII (American Standard Code for Information Interchange) was developed in the early 1960s to unify how computers represent text. Before ASCII, each manufacturer used its own incompatible encoding. ASCII defines a mapping bet...")
  • 14:40, 20 October 2025 Typographic Concepts: Graphemes, Glyphs, Ligatures, Fonts (hist | edit) [3,047 bytes] Bfh-sts (talk | contribs) (Created page with "= Typographic Concepts: Graphemes, Glyphs, Ligatures, Fonts = Before text can be stored digitally, it must first be understood linguistically and visually. This page introduces the key typographic and linguistic terms that underlie all character encoding systems. == Grapheme == A grapheme is the smallest unit of a writing system of a given language. In computing, it roughly corresponds to what we call a **character**. * Examples: * “A” — Latin alphabet grap...")
  • 14:40, 20 October 2025 Text & Data Types: Interpreting Bits (hist | edit) [4,756 bytes] Bfh-sts (talk | contribs) (Created page with "= Text & Data Types: Interpreting Bits = This page explains what a data type is, how the same bit pattern can mean very different things, and why consistent interpretation is critical for text processing and security. == Why data types matter == Inside a computer, everything is stored as 0s and 1s. The meaning comes from the data type: how we interpret the bit pattern and which operations we allow on it. Typical questions: * 01000010011010010110010101101100 — is this...")
  • 14:39, 20 October 2025 Floating-Point in Practice: Absorption, Non-Associativity and Comparisons (hist | edit) [3,548 bytes] Bfh-sts (talk | contribs) (Created page with "= Floating-Point in Practice: Absorption, Non-Associativity and Comparisons = Floating-point arithmetic in computers is limited by precision, which can lead to rounding errors and inconsistent results when performing sequential operations. This page discusses key pitfalls such as absorption and equality comparison issues, and how to handle them properly. == Non-Associativity == Floating-point operations are **not associative**, meaning that `(a + b) + c` may produce a d...")
  • 14:39, 20 October 2025 IEEE 754 Formats, Special Values and Rounding Modes (hist | edit) [3,973 bytes] Bfh-sts (talk | contribs) (Created page with "= IEEE 754 Formats, Special Values and Rounding Modes = This page covers the standard IEEE 754 formats for floating-point numbers, including their bit structure, special values, and rounding behaviors. == IEEE 754 Formats == IEEE 754 defines several binary floating-point formats. The three most common are: <syntaxhighlight lang='text'> Format | Bits | Sign | Exponent | Mantissa | Bias ------- | ---- | ---- | -------- | -------- | ---- Single pr...")
  • 14:38, 20 October 2025 IEEE 754 Floating Point: Mantissa, Exponent, Bias and Hidden Bit (hist | edit) [3,941 bytes] Bfh-sts (talk | contribs) (Created page with "= IEEE 754 Floating Point: Mantissa, Exponent, Bias and Hidden Bit = This page explains the IEEE 754 standard for floating-point numbers, which defines how real numbers are represented and calculated on modern CPUs. == Motivation == Fixed-point numbers have constant precision, which limits range and accuracy. For scientific and engineering calculations, numbers may vary from very large to very small. To handle this efficiently, computers use *floating-point represen...")
  • 14:38, 20 October 2025 Fixed-Point Numbers & Binary Fractions: Representation and Conversion (hist | edit) [3,030 bytes] Bfh-sts (talk | contribs) (Created page with "= Fixed-Point Numbers & Binary Fractions: Representation and Conversion = This page explains fixed-point numbers, how fractions are represented in binary, and how to convert between decimal and binary fixed-point notation. == Fixed-point idea == Fixed-point numbers are used to represent fractional values when floating-point hardware is unavailable or unnecessary. The idea: reserve part of the bits for the integer part and part for the fractional part. The position o...")
  • 14:38, 20 October 2025 Integer Overflow & Bit-Pattern Interpretation (Signed vs Unsigned) (hist | edit) [2,672 bytes] Bfh-sts (talk | contribs) (Created page with "= Integer Overflow & Bit-Pattern Interpretation (Signed vs Unsigned) = This page explains integer overflow, how bit patterns can represent different values depending on interpretation, and why signed and unsigned integers share the same arithmetic operations. == Integer overflow == An integer has a fixed number of bits. When an arithmetic operation exceeds the largest representable value, it wraps around to the smallest one. This is called overflow. Example with 8-...")
  • 14:37, 20 October 2025 Signed Integers & Two’s Complement: Concept, Negation, Arithmetic, Ranges (hist | edit) [3,139 bytes] Bfh-sts (talk | contribs) (Created page with "= Signed Integers & Two’s Complement: Concept, Negation, Arithmetic, Ranges = This page explains in depth how signed integers are represented in two’s complement form, how to perform calculations with them, and how to interpret value ranges. == Concept of two’s complement == Two’s complement is the standard system used by modern computers to represent signed integers. It allows addition and subtraction to work identically for positive and negative numbers. Th...")
  • 14:37, 20 October 2025 Representing Numbers: Signed, Unsigned, Fixed- and Floating-Point (hist | edit) [3,691 bytes] Bfh-sts (talk | contribs) (Created page with "= Representing Numbers: Signed, Unsigned, Fixed- and Floating-Point = This page introduces how numerical values are represented in computers. It explains unsigned and signed integers, overflow, fixed-point and floating-point numbers, and why different representations exist. == Positive and negative numbers == So far, we have considered only positive integers. Example: 15₁₀ = 0x0F (8-bit) or 0x0000000F (32-bit). But how do we represent negative values such as −1...")
  • 14:36, 20 October 2025 Endianness: Byte Order & Practice (hist | edit) [1,831 bytes] Bfh-sts (talk | contribs) (Created page with "= Endianness: Byte Order & Practice = This page explains endianness, the order in which bytes of a multi-byte value are stored or transmitted, and why it matters in computing. == Word size and bytes == Modern computers usually have a word size of 32 or 64 bits. If a value is larger than one byte, it must be split across multiple bytes. The question is: which byte comes first when storing or transmitting the value? == Big endian and little endian == * Big endian: th...")
  • 14:36, 20 October 2025 Memory Sizes & Binary Prefixes (hist | edit) [1,967 bytes] Bfh-sts (talk | contribs) (Created page with "= Memory Sizes & Binary Prefixes = This page explains how memory sizes are measured, why binary prefixes exist, and how to calculate with them. == Bytes and addressing == The basic unit of memory in computers is the byte (8 bits). Each byte has its own address, independent of the CPU word size. Larger values are built by grouping multiple bytes. == Decimal vs binary multiples == In everyday life, kilo means 1000, mega means 1 000 000, and so on. In computing, pow...")
  • 14:36, 20 October 2025 Base Conversions: Decimal, Binary, Octal, Hex (hist | edit) [2,734 bytes] Bfh-sts (talk | contribs) (Created page with "= Base Conversions: Decimal, Binary, Octal, Hex = This page explains how to convert numbers between the most important bases: decimal (10), binary (2), octal (8), and hexadecimal (16). == General method: from any base to decimal == To convert from another base into decimal, expand the number into positional values. Example: 725₈ = 7 × 8² + 2 × 8¹ + 5 × 8⁰ = 7 × 64 + 2 × 8 + 5 × 1 = 448 + 16 + 5 = 469₁₀. == General method: from decimal to anothe...")
  • 14:36, 20 October 2025 Hexadecimal: Reading, Arithmetic & Shorthand for Binary (hist | edit) [2,566 bytes] Bfh-sts (talk | contribs) (Created page with "= Hexadecimal: Reading, Arithmetic & Shorthand for Binary = This page introduces the hexadecimal system, explains its role as a shorthand for binary, and shows how to calculate with it. == Introduction == Hexadecimal (often called hex) uses base 16. Digits are 0–9 and A–F, where A = 10, B = 11, C = 12, D = 13, E = 14, F = 15. The number written as 10₁₆ means 16 in decimal. One hex digit represents exactly 4 binary digits (bits). This makes hexadecimal e...")
  • 14:36, 20 October 2025 Octal: Reading & Conversion (hist | edit) [2,308 bytes] Bfh-sts (talk | contribs) (Created page with "= Octal: Reading & Conversion = This page introduces the octal numeral system, explains how it relates to binary, and shows how to convert between octal, decimal, and binary. == Introduction == Octal uses base 8. Digits are 0–7. The number written as 10₈ means 8 in decimal. Octal was more common in early computers, especially before hexadecimal became dominant. It is still useful for compactly representing binary numbers grouped in 3 bits. == Positional va...")
  • 14:35, 20 October 2025 Binary: Bits, Grouping & Powers of Two (hist | edit) [3,330 bytes] Bfh-sts (talk | contribs) (Created page with "= Binary: Bits, Grouping & Powers of Two = This page explains the binary numeral system in detail, how to read and write binary numbers, and how to use powers of two for calculation. == Introduction == Binary is the most fundamental numeral system in computing. It uses base 2 with the digits 0 and 1 only. Because computer hardware operates with two states (voltage on/off), binary is a natural choice. Examples: * 0₂ = 0₁₀ * 1₂ = 1₁₀ * 10₂ = 2₁₀ * 1...")
  • 14:35, 20 October 2025 Numeral Systems: Overview & Positional Notation (hist | edit) [1,895 bytes] Bfh-sts (talk | contribs) (Created page with "= Numeral Systems: Overview & Positional Notation = This page introduces numeral systems, explains why different bases exist, and outlines the principle of positional notation. == Decimal system == Humans use the decimal system with base 10. * Digits are 0–9. * A number is a sequence of digits, where each digit has a positional value. * Example: 123 = 1 × 100 + 2 × 10 + 3 × 1. Other bases are also used in daily life: * Base 12 for hours on a clock. * Base 60 for m...")
  • 14:34, 20 October 2025 GNU/Linux Basics: Kernel, Distros & Install (hist | edit) [1,518 bytes] Bfh-sts (talk | contribs) (Created page with "= GNU/Linux Basics: Kernel, Distros & Install = This page introduces GNU/Linux as a major Unix-like operating system, explains its structure, and provides a basic overview of installation. == Kernel vs userland == Linux is technically only the kernel, which manages CPU, memory, and I/O resources. The GNU project provides essential tools such as compilers, libraries, and shells. Together they form the GNU/Linux operating system. The kernel is separated from user s...")
  • 14:34, 20 October 2025 Operating Systems: Evolution, Models & Families (hist | edit) [1,828 bytes] Bfh-sts (talk | contribs) (Created page with "= Operating Systems: Evolution, Models & Families = This page introduces the responsibilities of an operating system and traces its historical development from early loaders to modern multiuser systems. == Responsibilities of an operating system == An OS manages and abstracts hardware resources. Its core functions include: * Managing processes and CPU time. * Managing memory and enforcing protection. * Handling files and devices. * Providing user interfaces and networki...")
  • 14:33, 20 October 2025 DMA & System Architectures (Amiga→PC→SoC) (hist | edit) [1,492 bytes] Bfh-sts (talk | contribs) (Created page with "= DMA & System Architectures (Amiga→PC→SoC) = This page explains how Direct Memory Access (DMA) improves system performance and shows examples of different hardware architectures. == Direct Memory Access (DMA) == In a system without DMA, the CPU must handle every data transfer: # Read data from peripheral into a register. # Write the data from the register into memory. This creates overhead and slows down performance. With DMA, a peripheral can transfer data direc...")
  • 14:33, 20 October 2025 CPU & Buses: How the CPU Talks to the World (hist | edit) [1,779 bytes] Bfh-sts (talk | contribs) (Created page with "= CPU & Buses: How the CPU Talks to the World = This page introduces the internal structure of the CPU and explains how it communicates with memory and peripherals through buses. == CPU building blocks == The CPU is the central unit that executes instructions. Its main components are: * Arithmetic Logic Unit (ALU) for arithmetic and logical operations. * Registers for storing temporary values and instruction pointers. * Control logic for decoding instructions and direct...")
  • 14:33, 20 October 2025 Memory in Practice: Organization, Caches, Alignment, ROM & Storage (hist | edit) [3,163 bytes] Bfh-sts (talk | contribs) (Created page with "__TOC__ = Memory in Practice: Organization, Caches, Alignment, ROM & Storage = This page explains how modern memory works in practice: how dynamic RAM is organized, why caches are necessary, how alignment affects performance, and how ROM and storage devices differ from RAM. == DRAM organization and access time == Dynamic RAM (DRAM) is arranged in a grid of rows and columns. * To read a bit, a whole row (often thousands of bits) must first be activated. * After activati...")
  • 14:33, 20 October 2025 Memory: From Relays to DRAM (hist | edit) [3,917 bytes] Bfh-sts (talk | contribs) (Created page with "= Memory: From Relays to DRAM = This page traces the evolution of memory technology: how information was stored, from mechanical relays to modern transistor-based RAM. It explains why each step was necessary and what trade-offs were involved. == What Is Memory? == In computer science, '''memory''' is any container that holds a pattern (0/1) until it is changed. * Everyday example: a light switch. The switch remembers its position until flipped. * In computers: voltag...")
  • 14:33, 20 October 2025 Hardware & OS: Overview (hist | edit) [2,318 bytes] Bfh-sts (talk | contribs) (Created page with "= Hardware & OS: Overview = This page introduces the basic model of a computer and explains how the '''hardware''' and the '''operating system''' interact. It provides the mental map for the following pages. == What is a Computer Built Of? == A computer can be very simple (a microcontroller in an electric toothbrush) or highly complex (a multi-CPU server). In both cases, the core elements are the same: * '''Central Processing Unit (CPU)''': executes instructions (c...")
  • 14:32, 20 October 2025 Tautologies and contradictions (hist | edit) [1,234 bytes] Bfh-sts (talk | contribs) (Created page with "= Tautologies and contradictions = Tautologies and contradictions are special types of logical formulas that are always true or always false, regardless of the truth values of their components. == Tautologies == A tautology is a statement that evaluates to true under all possible interpretations. Tautologies are useful in proofs and as logical identities. === Examples === * p ∨ ¬p (Law of excluded middle) * (p → q) ∨ (q → p) * (p ∨ q) → (q ∨ p)...")
  • 14:32, 20 October 2025 Contraposition (hist | edit) [724 bytes] Bfh-sts (talk | contribs) (Created page with "= Contraposition = Contraposition is a transformation of an implication that produces a logically equivalent statement by reversing and negating its components. == Statement == * p → q ≡ ¬q → ¬p == Explanation == The implication "If p, then q" is equivalent to "If not q, then not p". This is often used in mathematical proofs. == Example == * "If it rains, then the ground is wet" ≡ "If the ground is not wet, then it does not rain". == Truth Table == {...")
  • 14:31, 20 October 2025 Implication transformations (hist | edit) [986 bytes] Bfh-sts (talk | contribs) (Created page with "= Implication transformations = Implication can be expressed using other logical operators. These transformations allow ''p → q'' to be rewritten in equivalent forms. == Statements == * p → q ≡ ¬p ∨ q * ¬(p → q) ≡ p ∧ ¬q * p ↔ q ≡ (p → q) ∧ (q → p) == Explanation == * The implication ''p → q'' is equivalent to "not p or q". * The negation of ''p → q'' is equivalent to "p and not q". * Equivalence ''p ↔ q'' can be defined using tw...")
  • 14:31, 20 October 2025 Neutral and dominance laws (hist | edit) [1,098 bytes] Bfh-sts (talk | contribs) (Created page with "= Absorption laws = The absorption laws show how certain combinations of conjunction and disjunction can be simplified by "absorbing" one proposition into another. == Statements == * p ∨ (p ∧ q) ≡ p * p ∧ (p ∨ q) ≡ p == Explanation == Adding extra conditions that are already implied by ''p'' does not change the truth value. These laws allow expressions to be reduced in complexity. == Examples == * "I study OR (I study AND I rest)" is logically equivalen...")
  • 14:31, 20 October 2025 Absorption laws (hist | edit) [1,033 bytes] Bfh-sts (talk | contribs) (Created page with "= Absorption laws = The absorption laws show how certain combinations of conjunction and disjunction can be simplified by "absorbing" one proposition into another. == Statements == * p ∨ (p ∧ q) ≡ p * p ∧ (p ∨ q) ≡ p == Explanation == Adding extra conditions that are already implied by ''p'' does not change the truth value. These laws allow expressions to be reduced in complexity. == Examples == * "I study OR (I study AND I rest)" is logically equivalen...")
  • 14:31, 20 October 2025 De Morgan's laws (hist | edit) [1,273 bytes] Bfh-sts (talk | contribs) (Created page with "= De Morgan's laws = De Morgan's laws describe the interaction between negation, conjunction, and disjunction. They provide rules for transforming logical statements into equivalent forms. == Statements == * ¬(p ∧ q) ≡ ¬p ∨ ¬q * ¬(p ∨ q) ≡ ¬p ∧ ¬q == Explanation == Negating a conjunction is equivalent to the disjunction of the negations. Negating a disjunction is equivalent to the conjunction of the negations. These transformations are widely u...")
  • 14:30, 20 October 2025 Double negation (hist | edit) [689 bytes] Bfh-sts (talk | contribs) (Created page with "= Double negation = The law of double negation states that the negation of a negation returns the original proposition. == Statement == * ¬(¬p) ≡ p == Explanation == If it is not the case that ''p'' is false, then ''p'' must be true. This allows simplification of expressions with two consecutive negations. == Example == * "It is not true that it is not raining" is equivalent to "It is raining". * In Python: <code>not (not p)</code> evaluates to the same as <cod...")
  • 14:30, 20 October 2025 Law of non-contradiction (hist | edit) [727 bytes] Bfh-sts (talk | contribs) (Created page with "= Law of non-contradiction = The law of non-contradiction states that a proposition and its negation cannot both be true at the same time. == Statement == * p ∧ ¬p ≡ falsch (false) == Explanation == No proposition can be simultaneously true and false. This principle is a cornerstone of classical logic and prevents contradictions in reasoning. == Example == * For ''p'' = "It is raining", it cannot be both "It is raining" and "It is not raining" at the same time...")
  • 14:30, 20 October 2025 Law of excluded middle (hist | edit) [802 bytes] Bfh-sts (talk | contribs) (Created page with "= Law of excluded middle = The law of excluded middle states that for any proposition ''p'', either ''p'' is true or its negation ''¬p'' is true. There is no third possibility. == Statement == * p ∨ ¬p ≡ wahr (true) == Explanation == Every proposition is either true or false, never both, and never something in between. This principle is central to classical logic, but is not accepted in some non-classical logics (e.g. intuitionistic logic). == Example == *...")
  • 14:30, 20 October 2025 Distributive laws (hist | edit) [1,515 bytes] Bfh-sts (talk | contribs) (Created page with "= Distributive laws = The distributive laws describe how conjunction and disjunction distribute over each other. They show that a conjunction can be distributed over a disjunction, and a disjunction can be distributed over a conjunction. == Statements == * p ∧ (q ∨ r) ≡ (p ∧ q) ∨ (p ∧ r) * p ∨ (q ∧ r) ≡ (p ∨ q) ∧ (p ∨ r) == Explanation == These rules are similar to the distributive property in arithmetic. They allow logical formulas to be r...")
  • 14:30, 20 October 2025 Associative laws (hist | edit) [1,353 bytes] Bfh-sts (talk | contribs) (Created page with "= Associative laws = The associative laws state that when combining three or more propositions with conjunction or disjunction, the grouping of the operations does not affect the truth value. == Statements == * (p ∧ q) ∧ r ≡ p ∧ (q ∧ r) * (p ∨ q) ∨ r ≡ p ∨ (q ∨ r) == Explanation == Parentheses can be rearranged without changing the logical meaning. == Examples == * "((I study AND I practice) AND I succeed)" is equivalent to "(I study AND (I practi...")
  • 14:29, 20 October 2025 Commutative laws (hist | edit) [952 bytes] Bfh-sts (talk | contribs) (Created page with "= Commutative laws = The commutative laws state that the order of propositions does not affect the truth value of conjunction or disjunction. == Statements == * p ∧ q ≡ q ∧ p * p ∨ q ≡ q ∨ p == Explanation == The logical value of a conjunction or disjunction remains unchanged when the order of the operands is swapped. == Examples == * "I study AND I pass the exam" is equivalent to "I pass the exam AND I study". * "It rains OR it snows" is equivalent to "I...")
  • 14:29, 20 October 2025 Idempotent laws (hist | edit) [750 bytes] Bfh-sts (talk | contribs) (Created page with "= Idempotent laws = The idempotent laws state that combining a proposition with itself using either conjunction or disjunction does not change its truth value. == Statements == * p ∧ p ≡ p * p ∨ p ≡ p == Explanation == A proposition combined with itself is logically equivalent to the proposition alone. == Examples == * "I study AND I study" is logically the same as "I study". * "I study OR I study" is also logically the same as "I study". == Truth Table (Con...")
  • 14:29, 20 October 2025 Peirce arrow (hist | edit) [889 bytes] Bfh-sts (talk | contribs) (Created page with "= Peirce arrow = The Peirce arrow (also called ''NOR'') is a logical operation that returns true only when both propositions are false. It is functionally complete, meaning all other logical operations can be expressed in terms of it. == Symbols == * p ↓ q (mathematical notation) * p NOR q (common name) * ¬(p ∨ q) (definition) == Definition == The Peirce arrow produces the negation of disjunction. == Truth Table == {| class="wikitable" ! p !! q !! p ↓ q |- |...")
  • 14:29, 20 October 2025 Exclusive disjunction (hist | edit) [959 bytes] Bfh-sts (talk | contribs) (Created page with "= Exclusive disjunction = Exclusive disjunction (often abbreviated as XOR) is the logical operation that returns true if exactly one of the propositions is true, but not both. == Symbols == * p ⊕ q (standard notation) * p XOR q (common in computer science) * (p ∨ q) ∧ ¬(p ∧ q) (definition using basic operators) == Definition == The exclusive disjunction ''p ⊕ q'' is true if either ''p'' or ''q'' is true, but false if both are true or both are false. == Tru...")
  • 14:28, 20 October 2025 Sheffer stroke (hist | edit) [991 bytes] Bfh-sts (talk | contribs) (Created page with "= Sheffer stroke = The Sheffer stroke (also called ''NAND'') is a logical operation that returns true unless both propositions are true. It is functionally complete, meaning all other logical operations can be built from it. == Symbols == * p ↑ q (mathematical notation) * p NAND q (common name) * NOT (p ∧ q) (definition) == Definition == The Sheffer stroke produces the negation of conjunction. == Truth Table == {| class="wikitab...")
  • 14:28, 20 October 2025 Equivalence (hist | edit) [897 bytes] Bfh-sts (talk | contribs) (Created page with "= Equivalence = Equivalence is the logical operation corresponding to "IF AND ONLY IF". It states that two propositions are logically identical in truth value. == Symbols == * p ↔ q (standard notation) * p ⇔ q (alternative) * p IFF q (short for "if and only if") == Definition == The equivalence ''p ↔ q'' is true if ''p'' and ''q'' have the same truth value. It is false if their truth values differ. == Truth Table == {| class="wikitable" ! p !! q !! p ↔ q...")
  • 14:28, 20 October 2025 Implication (hist | edit) [943 bytes] Bfh-sts (talk | contribs) (Created page with "= Implication = Implication is the logical operation corresponding to "IF ... THEN". It expresses that if one proposition holds, then another must also hold. == Symbols == * p → q (standard notation) * p ⊃ q (alternative) * IF p THEN q (verbal) == Definition == The implication ''p → q'' is false only when ''p'' is true and ''q'' is false. In all other cases it is true. Additionally, ''p → q'' can be reformed into ''¬ p ∨ q'' == Truth Table == {| class...")
  • 14:28, 20 October 2025 Disjunction (hist | edit) [766 bytes] Bfh-sts (talk | contribs) (Created page with "= Disjunction = Disjunction is the logical operation corresponding to "OR". It returns true if at least one of the propositions is true. == Symbols == * p ∨ q (standard notation) * p | q (alternative) * p OR q (in programming) == Definition == The disjunction of ''p'' and ''q'' is true if either ''p'', or ''q'', or both are true. == Truth Table == {| class="wikitable" ! p !! q !! p ∨ q |- | T || T || T |- | T || F || T |- | F || T || T |- | F || F || F |} == E...")
  • 14:28, 20 October 2025 Conjunction (hist | edit) [748 bytes] Bfh-sts (talk | contribs) (Created page with "= Conjunction = Conjunction is the logical operation corresponding to "AND". It returns true only if both propositions are true. == Symbols == * p ∧ q (standard notation) * p & q (common alternative) * p AND q (in programming) == Definition == The conjunction of ''p'' and ''q'' is true if and only if both are true. == Truth Table == {| class="wikitable" ! p !! q !! p ∧ q |- | T || T || T |- | T || F || F |- | F || T || F |- | F || F || F |} == Examples == * If...")
  • 14:27, 20 October 2025 Negation (hist | edit) [681 bytes] Bfh-sts (talk | contribs) (Created page with "= Negation = Negation is the logical operation that inverts the truth value of a proposition. If a proposition ''p'' is true, then its negation ''¬p'' is false, and vice versa. == Symbols == * ¬p (standard notation) * ~p (alternative notation) * NOT p (common in programming) == Definition == Negation produces the opposite truth value of its operand. == Truth Table == {| class="wikitable" ! p !! ¬p |- | T || F |- | F || T |} == Examples == * If ''p'' = "It is ra...")
  • 14:27, 20 October 2025 Data Types (hist | edit) [13,061 bytes] Bfh-sts (talk | contribs) (Created page with "= Java Data Type Categories = Java groups data into two families: '''primitive types''' (hold simple values) and '''reference types''' (hold references to objects on the heap). Unlike Python’s dynamic typing, Java is '''statically typed''': every variable has a declared type, checked at compile time. If you’ve used Go, that idea will feel familiar. == Quick map (what to reach for) == * Whole numbers: int (default) → long for timestamps/big ranges * Real numbers: d...")
  • 13:53, 20 October 2025 MediaWiki cheat sheet (hist | edit) [1,981 bytes] Bfh-sts (talk | contribs) (Created page with "= Cheat Sheet for the Mediawiki = = Overview = A list of commonly used MediaWiki formats. = Tips = To create a newline, you need to press Enter twice: <pre> This must be on a new line </pre> This must be on a new line <pre> This must be on a new line </pre> This must be on a new line = Lists = <pre> * One ** Two *** Three </pre> One ** Two *** Three <pre> # One # Two ## Two.one ### Two.one.one </pre> One Two Two.one Two.one.one = Bold and italic = <...")
  • 09:50, 20 October 2025 Main Page (hist | edit) [3,637 bytes] Bfh-sts (talk | contribs) (Created page with "This is a category for all BFH Related documentations == Module == * Category: Kommunikation 1 Deutsch für die Informatik (BZG3110p) 25/26 * Category: Programming 1 with Java (BTI1001q) 25/26 * Category: Diskrete Mathematik I (BZG1155pa) 25/26 * Category: Computer Science Basics (BTI1021p) 25/26")