In the past 70 years, electronic computing has fundamentally changed how we live our lives, and we believe it’s just getting started. From ubiquitous computing, artificial intelligence, and self-driving cars to brain-computer interfaces, wearable computers, and maybe even the singularity, there is so much amazing potential on the horizon.
Today we’re going to go a little meta and talk about how computer science can support learning with educational technology. We here at Crash Course are big fans of interactive in-class learning and hands-on experiences, but we also believe in the additive power of educational technology inside and outside the classroom, starting with the Internet itself.
So today, we’re going to discuss some psychological considerations in building computers, like how to make them easier for humans to use, the uncanny valley problem that arises when humanoid robots get more and more humanlike, and strategies to make our devices work better with us by incorporating our emotions and even altering our gaze.
Robots are often thought of as a technology of the future, but they're already here by the millions: in the workplace, in our homes, and pretty soon on our roads. We'll trace robotics from its origins to its proliferation, and even look at some common control designs that were implemented to make robots more useful in the workplace.
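To give a flavor of those control designs, here is a minimal sketch of a proportional feedback controller, one of the simplest schemes used to steer robot motors toward a goal. All the names and constants below are illustrative, not taken from any real robot API.

def p_controller(target, current, gain=0.5):
    """Return a motor command proportional to the remaining error."""
    error = target - current
    return gain * error

position = 0.0   # where the robot arm currently is
target = 10.0    # where we want it to be
for step in range(20):
    position += p_controller(target, position)
print(round(position, 3))  # converges toward 10.0

Each loop iteration nudges the arm by a fraction of the remaining error, so big errors produce big corrections and small errors produce gentle ones.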
As computers play an increasing role in our daily lives, there has been a growing demand for voice user interfaces, but speech is terribly complicated. Vocabularies are diverse, sentence structures can often dictate the meaning of certain words, and computers also have to deal with accents, mispronunciations, and many common linguistic faux pas.
We’ve long known that our digital cameras and smartphones can take incredibly detailed images, but taking pictures is not quite the same thing as seeing. For the past half-century, computer scientists have been working to help our computing devices understand the imagery they capture, leading to advancements everywhere, from tracking hands and whole bodies to biometrics that unlock our phones.
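One of the oldest tricks for helping a computer "see" is sliding a small kernel over pixel brightness values and flagging big changes. Here is a toy sketch of that idea, with made-up pixel values, not a production vision pipeline.

row = [10, 10, 10, 200, 200, 200]   # brightness across one row of pixels
kernel = [-1, 0, 1]                  # a simple horizontal edge detector

for i in range(1, len(row) - 1):
    response = sum(k * p for k, p in zip(kernel, row[i-1:i+2]))
    if abs(response) > 50:
        print(f"edge near pixel {i} (response {response})")

Wherever neighboring pixels differ sharply, the kernel's response spikes, and that spike is what the computer treats as an edge.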
From spam filters and self-driving cars to cutting-edge medical diagnosis and real-time language translation, there has been an increasing need for our computers to learn from data and apply that knowledge to make predictions and decisions. This is the heart of machine learning, which sits inside the more ambitious goal of artificial intelligence.
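To make that "learn from data" idea concrete, here is a minimal sketch of a 1-nearest-neighbor classifier: predict the label of whichever known example is closest. The data points and labels are entirely made up for illustration.

examples = [((1.0, 1.0), "spam"), ((9.0, 8.0), "not spam"), ((1.5, 0.5), "spam")]

def classify(point):
    """Predict the label of the closest labeled example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(examples, key=lambda ex: dist(ex[0], point))
    return closest[1]

print(classify((1.2, 0.8)))  # -> "spam"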
We’re going to walk you through some common encryption techniques, such as the Advanced Encryption Standard (AES), Diffie-Hellman key exchange, and RSA, which are employed to keep your information safe, private, and secure.
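Here is a toy Diffie-Hellman exchange with tiny numbers, just to show the core idea: both parties arrive at the same shared secret without ever transmitting it. Real implementations use enormous primes, not these illustrative values.

p, g = 23, 5                 # public prime modulus and base
alice_secret = 6             # private values, never transmitted
bob_secret = 15

A = pow(g, alice_secret, p)  # Alice sends A = g^a mod p
B = pow(g, bob_secret, p)    # Bob sends B = g^b mod p

alice_key = pow(B, alice_secret, p)  # (g^b)^a mod p
bob_key = pow(A, bob_secret, p)      # (g^a)^b mod p
print(alice_key == bob_key)          # True -- same shared secret

An eavesdropper sees p, g, A, and B, but recovering the secret exponents from those values is the hard problem the scheme rests on.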
Now, not all hackers are malicious cybercriminals intent on stealing your data (those people are known as Black Hats). There are also White Hats, who hunt for bugs, close security holes, and perform security evaluations for companies. And hackers have a lot of different motivations.
In today’s episode, we’re going to unpack these three goals and talk through some strategies we use, like passwords, biometrics, and access privileges, to keep our information as secure, but also as accessible, as possible.
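As a small taste of the password strategy, here is a minimal sketch of salted password hashing: the system never stores your password, only a salted hash of it. This uses Python's standard hashlib for illustration; real systems should reach for a dedicated, deliberately slow password-hashing function.

import hashlib, os

def hash_password(password, salt=None):
    """Hash a password with a random salt; return (salt, digest)."""
    salt = salt or os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

salt, stored = hash_password("hunter2")
# At login, re-hash the attempt with the same salt and compare.
print(hash_password("hunter2", salt)[1] == stored)  # True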
The World Wide Web is built on the foundation of simply linking pages to other pages with hyperlinks, and it is this massive interconnectedness that makes it so powerful. But before the web could become a thing, Tim Berners-Lee would need to invent it, along with the first web browser, at CERN, and search engines would need to be created to navigate these massive directories of information.
Specifically, we're going to look at how that stream of characters you punch into your browser's address bar, like "youtube.com", returns a website. Just to clarify: we're talking in a broader sense about that massive network of networks connecting millions of computers together.
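The very first step in that journey is asking DNS to translate the name into an IP address. Here is a small sketch using Python's standard socket module; the exact address you get back will vary.

import socket

# Ask DNS: which IP address serves this hostname?
ip = socket.gethostbyname("youtube.com")
print(ip)  # one of YouTube's addresses, e.g. something like 142.250.x.x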
We’re going to begin with computer networks, and how they grew from small groups of connected computers on local area networks, or LANs, to eventually larger, worldwide networks like the ARPANET and even the Internet we know today.
From polygon counts and meshes to lighting and texturing, there are a lot of considerations in building the 3D objects we see in our movies and video games, and displaying those 3D objects on a 2D surface adds a number of additional challenges. So we’ll talk about some of the reasons you see occasional glitches in your video games, as well as why computers have a dedicated graphics processing unit, or GPU.
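Here is a minimal sketch of one step in that 3D-to-2D process: a perspective divide, which maps a 3D point onto screen coordinates so that farther objects appear smaller. The function name and numbers are illustrative.

def project(x, y, z, focal_length=1.0):
    """Map a 3D point to 2D screen coordinates: farther means smaller."""
    return (focal_length * x / z, focal_length * y / z)

print(project(2.0, 1.0, 4.0))  # (0.5, 0.25)
print(project(2.0, 1.0, 8.0))  # (0.25, 0.125) -- same point, farther away

Doing this divide, plus lighting and texturing, for millions of polygons every frame is exactly the kind of repetitive math a GPU is built to parallelize.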
Today, we're going to discuss the critical role graphical user interfaces, or GUIs, played in the adoption of computers. Before the mid-1980s, the most common way people could interact with their devices was through command line interfaces, which, though efficient, aren't really designed for casual users. This all changed with the introduction of the Macintosh by Apple in 1984.
Today we're going to talk about the birth of personal computing. Up until the early 1970s, components were just too expensive, or too underpowered, to make a useful computer for an individual, but this began to change with the introduction of the Altair 8800 in 1975. In the years that followed, we'd see the founding of Microsoft and Apple and the creation of the 1977 Trinity: the Apple II, the Tandy TRS-80, and the Commodore PET.
Today we’re going to step back from hardware and software, and take a closer look at how the backdrop of the Cold War and the Space Race, along with the rise of consumerism and globalization, brought us from huge, expensive codebreaking machines in the 1940s to affordable handhelds and personal computers in the 1970s. This is an era that saw huge government-funded projects, like the race to the Moon.
Today we begin our discussion of computer graphics. We ended last episode with the proliferation of command line (or text) interfaces, which sometimes used screens but more typically printed onto paper via electronic typewriters or teletypes. But by the early 1960s, a number of technologies were introduced that made screens much more useful, from cathode ray tubes and graphics cards to ASCII art and light pens.
Today, we are going to start our discussion on user experience. We've talked a lot in this series about how computers move data around internally, but not so much about our role in the process. So today, we're going to look at our earliest form of interaction: the keyboard. We'll talk about how the keyboard got its QWERTY layout, and then we'll track its evolution in electronic typewriters.
So last episode we talked about some basic file formats, but what we didn’t talk about is compression. Often files are way too large to be easily stored on hard drives or transferred over the Internet, and the solution, unsurprisingly, is to make them smaller. Today, we’re going to talk about lossless compression, which will give you the exact same thing when reassembled, as well as lossy compression, which throws away some data to make files even smaller.
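Here is a tiny sketch of lossless compression in action: run-length encoding, which replaces repeated values with (value, count) pairs, then rebuilds the exact original. The pixel data is purely illustrative.

def rle_encode(data):
    """Collapse runs of repeated values into [value, count] pairs."""
    encoded = []
    for value in data:
        if encoded and encoded[-1][0] == value:
            encoded[-1][1] += 1
        else:
            encoded.append([value, 1])
    return encoded

def rle_decode(encoded):
    """Expand [value, count] pairs back into the original sequence."""
    return [v for v, count in encoded for _ in range(count)]

pixels = ["white"] * 6 + ["black"] * 2
packed = rle_encode(pixels)          # [['white', 6], ['black', 2]]
print(rle_decode(packed) == pixels)  # True -- the exact same thing back

A lossy scheme, by contrast, would be free to merge similar values and never get the original back bit-for-bit, trading fidelity for even smaller files.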