diff --git a/_includes/event-image.html b/_includes/event-image.html
index a530371..2145d8d 100644
--- a/_includes/event-image.html
+++ b/_includes/event-image.html
@@ -1,7 +1,7 @@
Unhackathon
- October 15 & 16, 2016 | New York City
+ Spring 2017 Coming Soon! | New York City
diff --git a/_includes/flora-starter.md b/_includes/flora-starter.md
index e35990f..e31e437 100644
--- a/_includes/flora-starter.md
+++ b/_includes/flora-starter.md
@@ -1,10 +1,10 @@
### What is a microcontroller, anyway?
-A microcontroller is nothing more than a very small, low powered computer on a chip. They are generally small enough to run off of no more than the USB power off you computer or a few triple-A batteries, and they tend to have very little processing power, with clock speeds measuring in the MegaHertz. However, because of their simple instruction set and lower power consumption, they are very useful in applications where something needs be constantly running, whether on wall power or batteries.
+A microcontroller is nothing more than a very small, low-powered computer on a chip. They are generally small enough to run off no more than the USB power from your laptop, or a few triple-A batteries. They tend to have very little processing power, with clock speeds measured in megahertz. Because of their simple instruction set and lower power consumption, they are very useful in applications where something needs to be constantly running, whether on wall power or batteries.
-Microcontrollers deal with their relative lack of power in several ways. The first is that in general microcontrollers never run an operating system. Code is run directly on the hardware, and only one program at a time is running, giving it complete control over system memory. Because of the need for complete control, microcontroller software is written in C/C++, ensuring that every byte of memory is counted for, as most microcontrollers only have a few KB (Yes, you read that correctly) of memory. Even though the microcontrollers lack the power of modern day computers or even your phone, because of their durability, versatility, and affordability (around $20 for a whole board), they are frequently used in DIY electronics and art projects.
+Microcontrollers deal with their relative lack of power in several ways. The first is that in general microcontrollers never run an operating system. Code is run directly on the hardware, and only one program at a time is running, giving it complete control over system memory. Because of the need for complete control, microcontroller software is written in C/C++, ensuring that every byte of memory is accounted for, as most microcontrollers only have a few KB of memory. Even though microcontrollers lack the power of modern-day computers or even your phone, because of their durability, versatility, and affordability (around $20 for a whole board), they are frequently used in DIY electronics and art projects.
-Most hobyists don't use microcontrollers directly. Instead, they are generally mounted on a printed circuit board, which has headers that allow wires to be plugged in, or easily solderable connectors to interface with the board. They also generally have a USB port to allow the chips to be easily reprogrammed from a computer. All of these connections let us connect all sorts of fun devices to our microcontroller, including:
+Most hobbyists don't use microcontrollers directly. Instead, microcontrollers are generally mounted on printed circuit boards, which make it easy to connect wires by plugging them in or soldering. They also generally have a USB port to allow the chips to be easily reprogrammed from a computer. All of these connections let us connect all sorts of fun devices to our microcontroller, including:
* switches
* buttons
diff --git a/_sass/_index.scss b/_sass/_index.scss
index f9f314c..afd85c9 100644
--- a/_sass/_index.scss
+++ b/_sass/_index.scss
@@ -30,6 +30,9 @@
margin: 20px;
margin-bottom: 10px;
margin-top: 0;
+ &.right {
+ float: right;
+ }
}
.clearboth {
diff --git a/_sass/_layout.scss b/_sass/_layout.scss
index 399caca..8629e94 100644
--- a/_sass/_layout.scss
+++ b/_sass/_layout.scss
@@ -82,7 +82,7 @@ p .apply-span {
}
a {
- color: #E1001A;
+ color: #34BCA3;
}
.schedule {
@@ -97,7 +97,7 @@ a {
}
a:visited {
- color: #E1001A;
+ color: #34BCA3;
}
#softheon-logo {
diff --git a/_sass/_springboard.scss b/_sass/_springboard.scss
index 4cd3468..9536f67 100644
--- a/_sass/_springboard.scss
+++ b/_sass/_springboard.scss
@@ -14,7 +14,7 @@ $springboard-width: 600px;
margin-left: auto;
margin-right: auto;
}
- >p,>ul,>pre,>div,>ol {
+ >p,>ul,>pre,>div,>ol,>figure {
max-width: $springboard-width;
margin-left: auto;
margin-right: auto;
diff --git a/_sass/typography/_base.scss b/_sass/typography/_base.scss
index 96f4ac6..f74def8 100644
--- a/_sass/typography/_base.scss
+++ b/_sass/typography/_base.scss
@@ -121,7 +121,7 @@ $code-padding: rem-calc(2 5 1) !default;
/// Default color for links.
/// @type Color
-$anchor-color: $primary-color !default;
+$anchor-color: #34BCA3 !default;
/// Default color for links on hover.
/// @type Color
@@ -418,16 +418,6 @@ $abbr-underline: 1px dotted $black !default;
border-bottom: $abbr-underline;
}
- // Code
- code {
- font-family: $code-font-family;
- font-weight: $code-font-weight;
- color: $code-color;
- background-color: $code-background;
- border: $code-border;
- padding: $code-padding;
- }
-
// Keystrokes
kbd {
padding: $keystroke-padding;
diff --git a/css/main.scss b/css/main.scss
index 8dd3ae1..6a61e71 100755
--- a/css/main.scss
+++ b/css/main.scss
@@ -28,8 +28,6 @@ $content-width: 800px;
$on-palm: 600px;
$on-laptop: 800px;
-
-
// Using media queries like this:
// @include media-query($on-palm) {
// .wrapper {
diff --git a/img/index-pic2.jpg b/img/index-pic2.jpg
new file mode 100644
index 0000000..ca346b9
Binary files /dev/null and b/img/index-pic2.jpg differ
diff --git a/index.md b/index.md
index e9d2a01..6320d9d 100644
--- a/index.md
+++ b/index.md
@@ -1,13 +1,17 @@
---
layout: index
-title: A New Kind of Hackathon: October 15 & 16, NYC
+title: "A New Kind of Hackathon: NYC"
---
-This fall’s Unhackathon will take place over two days! Saturday will be an afternoon of workshops, tech talks, meeting other hackers, and learning new skills you've always wanted to have. Many hackers will start Springboard Projects, laying the groundwork for epic hacks the next day. We will support you as you learn and experiment, getting ready to hit the ground running the next morning. Sunday will be a full day of hacking on projects, ending in demos and prizes.
+**We're very sad to announce that, due to our venue becoming unavailable at the last minute, our next two-day event will be postponed to Spring 2017.** As always, thank you to everyone for your enthusiasm for Unhackathon; we're sorry we'll have to wait until the spring to see you all together! This fall, our team will instead be busy developing new [Springboard Projects](/springboard-projects/) and volunteering at hackathons around the Northeast. If you are organizing a hackathon or technology learning event (no matter how small) for students from elementary school on up, we'd love to help through workshops, springboard projects, mentorship, and more. [Contact us](mailto:team@unhackathon.org) any time!
-Unhackathon welcomes hackers of all backgrounds and experience levels. Students in college and high school are eligible to be Unhackathon hackers. If you're a middle school student and would like to attend, you are welcome as long as you bring a parent with you! Please see our Welcome Statement for more information on our dedication to inclusivity, and our Springboard Projects for hackspiration!
+Our next Unhackathon will take place over two days! Saturday will be an afternoon of workshops, tech talks, meeting other hackers, and learning new skills you’ve always wanted to have. Many hackers will start Springboard Projects, laying the groundwork for epic hacks the next day. We will support you as you learn and experiment, getting ready to hit the ground running the next morning. Sunday will be a full day of hacking on projects, ending in demos and prizes.
+
+
+
+Unhackathon welcomes hackers of all backgrounds and experience levels. Students in college and high school are eligible to be Unhackathon hackers. If you’re a middle school student and would like to attend, you are welcome as long as you bring a parent with you! Please see our Welcome Statement for more information on our dedication to inclusivity, and our Springboard Projects for hackspiration!
diff --git a/springboard-projects/web-crawler.md b/springboard-projects/web-crawler.md
index a3d1ac6..3686652 100644
--- a/springboard-projects/web-crawler.md
+++ b/springboard-projects/web-crawler.md
@@ -2,111 +2,118 @@
layout: springboard
title: Write You a Web Crawler
---
-# Write You a Web Crawler
+**Note:** This tutorial uses the Unhackathon website (http://unhackathon.org/) as our example starting point for web crawling. Please be aware that some of the sample outputs may be a bit different, since the Unhackathon website is updated occasionally.
-## Introduction
+# Write You a Web Crawler
This springboard project will have you build a simple web crawler in Python using the Requests library. Once you have implemented a basic web crawler and understand how it works, you will have numerous opportunities to expand your crawler to solve interesting problems.
-If you're feeling adventurous or dislike Python, you may decide to implement your web crawler using different technologies. We've suggested some additional resources for this in an appendix.
+# Tutorial
-## Tutorial
+## Assumptions
-### Assumptions
-
-This tutorial assumes that you have Python 3 installed on your machine. If you do not have Python installed (or you have an earlier version installed) you can find the latest Python builds at [https://www.python.org/downloads/](https://wwww.python.org/downloads/). Make sure you have the correct version in your environment variables.
+This tutorial assumes that you have Python 3 installed on your machine. If you do not have Python installed (or you have an earlier version installed) you can find the latest Python builds at [https://www.python.org/downloads/](https://www.python.org/downloads/). Make sure you have the correct version in your environment variables.
We will use pip to install packages. Make sure you have that installed as well. Sometimes this is installed as pip3 to differentiate between versions of pip built with Python 2 or Python 3; if this is the case, be mindful to use the pip3 command instead of pip while following along in the tutorial.
We also assume that you’ll be working from the command line. You may use an IDE if you choose, but some aspects of this guide will not apply.
-This guide assumes only basic programming ability and knowledge of data structures and Python. If you’re more advanced, feel free to use it as a reference rather than a step by step tutorial. If you haven’t used Python and can’t follow along, check out the official Python tutorial at [https://docs.python.org/3/tutorial/](https://docs.python.org/3/tutorial/) and/or Codecademy’s Python class at [https://www.codecademy.com/tracks/python](https://www.codecademy.com/tracks/python).
+This guide assumes only basic programming ability and knowledge of data structures and Python. If you’re more advanced, feel free to use it as a reference rather than a step-by-step tutorial. If you haven’t used Python and can’t follow along, check out the official Python tutorial at [https://docs.python.org/3/tutorial/](https://docs.python.org/3/tutorial/) and/or Codecademy’s Python class at [https://www.codecademy.com/tracks/python](https://www.codecademy.com/tracks/python).
-### Setting up your project
+## Setting up your project
Let’s get the basic setup out of the way now. (Next we’ll give a general overview of the project, and then we’ll jump into writing some code.)
-Type the following in terminal:
+If you're on OS X or Linux, type the following in terminal:
-{% highlight bash %}
+```bash
mkdir webcrawler
cd webcrawler
pip3 install virtualenv
-virtualenv venv
+virtualenv -p python3 venv
source venv/bin/activate
pip3 install requests
-{% endhighlight %}
+```
+
+If you are a Windows user, replace `source venv/bin/activate` with `\path\to\venv\Scripts\activate`.
+
+If pip and/or virtualenv cannot be found, you'll need to update your `$PATH` variable or use the full path to the program.
You’ve just made a directory to hold your project, set up a virtual environment in which your Python packages won’t interfere with those in your system environment, and installed Requests, the “HTTP for Humans” library for Python, which is the primary library we’ll be using to build our web crawler. If you’re confused by any of this you may want to ask a mentor to explain bash and/or package managers. You might also have issues due to system differences; let us know if you get stuck.
-### Web crawler overview
+## Web crawler overview
-Web crawlers are pretty simple. Starting from a certain URL (or a list of URLs), they will check the HTML at that URL for links (and other information) and then follow those links to repeat the process. A web crawler is the basis of many popular tools such as search engines (though search engines such as Google have much harder problems such as “How do we index this information so that it is searchable?”).
+Web crawlers are pretty simple, at least at first pass (like most things, they get harder once you take scalability and performance into consideration). Starting from a certain URL (or a list of URLs), they will check the HTML document at each URL for links (and other information) and then follow those links to repeat the process. A web crawler is the basis of many popular tools such as search engines (though search engines such as Google have much harder problems such as “How do we index this information so that it is searchable?”).
-### Making our first HTTP request
+## Making our first HTTP request
Before we can continue, we need to know how to make an HTTP request using the Requests library, and how to manipulate the data we receive from the response to that request.
-In a text editor, create a file `webcrawler.py` and we’ll now edit that file:
+In a text editor, create a file `webcrawler.py`. We’ll now edit that file:
-{% highlight python %}
+```python
import requests
r = requests.get('http://unhackathon.org/')
-{% endhighlight %}
+```
This code gives us access to the Requests library on line one and uses the `get` method from that library to create a `Response` object called `r`.
If we enter the same code in the Python interactive shell (type `python3` in the terminal to access the Python shell) we can examine `r` in more depth:
-{% highlight python %}
+```
+python3
+>>> import requests
+>>> r = requests.get('http://unhackathon.org/')
>>> r
-{% endhighlight %}
+```
Entering the variable `r`, we get told that we have a response object with status code 200. (If you get a different status code, the Unhackathon website might be down.) 200 is the standard response for successful HTTP requests. The actual response will depend on the request method used. In a GET request, the response will contain an entity corresponding to the requested resource. In a POST request, the response will contain an entity describing or containing the result of the action. Our `requests.get` method in Python is making an HTTP GET request under the surface, so our response contains the home page of unhackathon.org and associated metadata.
-{% highlight python %}
+```python
>>> dir(r)
['__attrs__', '__bool__', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getstate__', '__gt__', '__hash__', '__init__', '__iter__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__nonzero__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setstate__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_content', '_content_consumed', 'apparent_encoding', 'close', 'connection', 'content', 'cookies', 'elapsed', 'encoding', 'headers', 'history', 'is_permanent_redirect', 'is_redirect', 'iter_content', 'iter_lines', 'json', 'links', 'ok', 'raise_for_status', 'raw', 'reason', 'request', 'status_code', 'text', 'url']
-{% endhighlight %} {:class='wrap-code'}
+```
We can examine `r` in more detail with Python’s built-in function `dir`, as above. From the resulting list of properties, it looks like `content` might be of interest.
-{% highlight python %}
+```python
>>> r.content
-b'\n\n\n \n \n \n \n \n \n \n \n \n\n Unhackathon\n…
-{% endhighlight %} {:class='wrap-code'}
+b'\n\n\n \n \n \n \n \n \n \n \n \n\n Unhackathon\n…
+```
Indeed, it is. Our `r.content` object contains the HTML from the Unhackathon home page. This will be helpful in the next section.
-Before we go on though, perhaps you’d like to become acquainted with some of the other properties. Know what data is available might help you as you think of ways to expand this project after the tutorial is completed.
-Finding URLs
+Before we go on though, perhaps you’d like to become acquainted with some of the other properties. Knowing what data is available might help you as you think of ways to expand this project after the tutorial is completed.
+
+## Finding URLs
Before we can follow new links, we must extract the URLs from the HTML of the page we’ve already requested. The regular expressions library in Python is useful for this.
-{% highlight python %}
+```python
>>> import re
>>> links = re.findall('<a href="(.*?)">(.*?)</a>', str(r.content))
>>> links
[('/code-of-conduct/', 'Code of Conduct'), ('mailto:sponsorship@unhackathon.org', 'sponsorship@unhackathon.org'), ('/faq/', 'frequently asked questions'), ('mailto:team@unhackathon.org', 'team@unhackathon.org')]
-{% endhighlight %} {:class='wrap-code'}
+```
We pass the `re.findall` method our regular expression, which captures the URL and the link text, though we only really need the former. We also pass it the `r.content` object we are searching, which needs to be converted to a string. We get back a list of tuples containing the strings captured by the regular expression.
-There are two main things to note here, the first being that some of our URLs are for other protocols than HTTP. It doesn’t make sense to access these with an HTTP request, and doing so will only result in an exception. So before we move on, you’ll want to eliminate the strings beginning with “mailto”, “ftp”, “file”, etc. Similarly, links pointing to `127.0.0.1` or `localhost` are of little use to us if we want our results to be publicly accessible.
+There are two main things to note here, the first being that some of our URLs are for other protocols than HTTP. It doesn’t make sense to access these with an HTTP request, and doing so will only result in an exception. So before we move on, you’ll want to eliminate the strings beginning with “mailto”, “ftp”, “file”, etc. Similarly, links pointing to `127.0.0.1` or `localhost` are of little use to us if we want our results to be publicly accessible.
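One way to sketch that filtering (the `is_crawlable` helper and the exact prefix list are our own, not part of the tutorial's code):

```python
# Hypothetical filtering helper; the tuples mirror the (URL, link text)
# pairs that re.findall returned above.
links = [('/code-of-conduct/', 'Code of Conduct'),
         ('mailto:sponsorship@unhackathon.org', 'sponsorship@unhackathon.org'),
         ('/faq/', 'frequently asked questions')]

def is_crawlable(url):
    # Reject non-HTTP schemes and loopback hosts.
    bad_prefixes = ('mailto:', 'ftp:', 'file:', 'javascript:')
    bad_hosts = ('127.0.0.1', 'localhost')
    return not url.startswith(bad_prefixes) and not any(host in url for host in bad_hosts)

urls = [url for url, text in links if is_crawlable(url)]
print(urls)  # ['/code-of-conduct/', '/faq/']
```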
-The second thing to note is that some of our URLs are relative so we can’t use that as our argument to `requests.get`. We will need to take the relative URLs and append them to the URL of the page we are on when we find those relative URLs. For example, we found an `‘/faq/’` URL above. This will need to become `‘http://unhackathon.org/faq/’`. This is usually simple string concatenation, but be careful: a relative URL may indicate a parent directory using two dots and then things become more complicated.
+The second thing to note is that some of our URLs are relative so we can’t use that as our argument to `requests.get`. We will need to take the relative URLs and append them to the URL of the page we are on when we find those relative URLs. For example, we found an `‘/faq/’` URL above. This will need to become `‘http://unhackathon.org/faq/’`. This is usually simple string concatenation, but be careful: a relative URL may indicate a parent directory (using two dots) and then things become more complicated. See if you can come up with a solution.
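If you'd rather not handle the parent-directory case by hand, Python's standard library can resolve relative URLs for you. A minimal sketch (the base URL below is just an example):

```python
from urllib.parse import urljoin

# urljoin resolves a relative URL against the page it was found on,
# including parent-directory (..) segments.
base = 'http://unhackathon.org/springboard-projects/web-crawler/'
print(urljoin(base, '/faq/'))           # http://unhackathon.org/faq/
print(urljoin(base, '../websockets/'))  # http://unhackathon.org/springboard-projects/websockets/
```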
-### Following URLs
+## Following URLs
-Assume we now have a list of full URLs. We’ll now recursively request the content at those URLs and extract new URLs from that content.
+We hopefully now have a list of full URLs. We’ll now recursively request the content at those URLs and extract new URLs from that content.
We’ll want to maintain a list of URLs we’ve already requested (this is mainly what we’re after at this point, and it also helps prevent us from getting stuck in a non-terminating loop) and a list of valid URLs we’ve discovered but have yet to request.
-{% highlight python %}
+```python
import requests
import re
@@ -118,7 +125,7 @@ def crawl_web(initial_url):
        current_url = to_crawl.pop(0)
        r = requests.get(current_url)
        crawled.append(current_url)
-        for url in re.findall('<a href=(.*?)>', str(r.content)):
+        for url in re.findall('<a href="(.*?)">', str(r.content)):
            if url[0] == '/':
                url = current_url + url
            pattern = re.compile('https?')
@@ -127,25 +134,25 @@ def crawl_web(initial_url):
return crawled
print(crawl_web('http://unhackathon.org'))
-{% endhighlight %}
+```
This code will probably have issues if we feed it a page for the initial URL that is part of a site that isn’t self-contained. (Say, for instance, that `unhackathon.org` links to `yahoo.com`; we’ll never reach the last page since we’ll always be adding new URLs to our `to_crawl` list.) We’ll discuss several strategies to deal with this issue in the next section.
-### Strategies to prevent never ending processes
+## Strategies to prevent never ending processes
-#### Counter
+### Counter
-Perhaps we are satisfied once we have crawled n number of sites. Modify `crawl_web `to take a second argument (an integer n) and add logic before the loop so that we return our list once it’s length meets our requirements.
+Perhaps we are satisfied once we have crawled n sites. Modify `crawl_web` to take a second argument (an integer n) and add logic before the loop so that we return our list once its length meets our requirements.
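A sketch of what that might look like (the `fetch_links` callback and the fake site are ours, so the control flow runs without network access; in your crawler the links would come from `requests.get` plus the regular expression above):

```python
# Hedged sketch of a capped crawl: stop once we've collected max_pages URLs.
def crawl_web(initial_url, max_pages, fetch_links):
    crawled = []
    to_crawl = [initial_url]
    while to_crawl:
        if len(crawled) >= max_pages:  # the new stopping condition
            break
        current_url = to_crawl.pop(0)
        if current_url in crawled:
            continue
        crawled.append(current_url)
        to_crawl.extend(fetch_links(current_url))
    return crawled

# A tiny fake site so the sketch runs offline.
site = {'a': ['b', 'c'], 'b': ['c'], 'c': ['a']}
print(crawl_web('a', 2, lambda url: site.get(url, [])))  # ['a', 'b']
```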
-#### Timer
+### Timer
Perhaps we have an allotted time in which to run our program. Instead of passing in a maximum number of URLs, we can pass in a number of seconds and revise our program to return the list of crawled URLs once that time has passed.
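Sketched the same way (again with a stand-in `fetch_links` so it runs offline; substitute the real request-and-extract logic):

```python
import time

def crawl_web_timed(initial_url, seconds, fetch_links):
    # Stop pulling new URLs off the queue once the deadline passes.
    deadline = time.monotonic() + seconds
    crawled, to_crawl = [], [initial_url]
    while to_crawl and time.monotonic() < deadline:
        current_url = to_crawl.pop(0)
        if current_url in crawled:
            continue
        crawled.append(current_url)
        to_crawl.extend(fetch_links(current_url))
    return crawled

site = {'a': ['b', 'c'], 'b': ['c'], 'c': ['a']}
print(crawl_web_timed('a', 5.0, lambda url: site.get(url, [])))  # ['a', 'b', 'c']
```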
-#### Generators
+### Generators
We can also use generators to receive results as they are discovered, instead of waiting for all of our code to run and return a list at the end. This doesn’t solve the issue of a long-running process (though one could terminate the process by hand with Ctrl-C) but it is useful for seeing our script’s progress.
-{% highlight python %}
+```python
import requests
import re
@@ -166,16 +173,16 @@ def crawl_web(initial_url):
yield current_url
-crawl_web_generator = crawl_web('http://ctogden.com')
+crawl_web_generator = crawl_web('http://unhackathon.org')
for result in crawl_web_generator:
print(result)
-{% endhighlight %}
+```
-Generators were introduced with PEP 255 ([https://www.python.org/dev/peps/pep-0255/](https://www.python.org/dev/peps/pep-0255)). You can find more about them by Googling for ‘python generators’ or ‘python yield’.
+Generators were introduced with PEP 255 ([https://www.python.org/dev/peps/pep-0255/](https://www.python.org/dev/peps/pep-0255/)). You can find more about them by Googling for ‘python generators’ or ‘python yield’.
Perhaps you’ll want to combine the generator approach with one of the others. Also, can you think of any other methods that may be of use?
-### Robots.txt
+## Robots.txt
Robots.txt is a standard for asking “robots” (web crawlers and similar tools) not to crawl certain sites or pages. While it’s easy to ignore these requests, it’s generally a nice thing to account for. Robots.txt files are found in the root directory of a site, so before you crawl `example.com/` it’s a simple matter to check `example.com/robots.txt` for any exclusions. To keep things simple you are looking for the following directives:
@@ -184,30 +191,30 @@ User-agent: *
Disallow: /
```
-Pages may also allow/disallow certain pages instead of all pages. Check out [https://en.wikipedia.org/robots.txt](https://en.wikipedia.org/robots.txt) for an example.
+Sites may also allow/disallow certain pages instead of all pages. Check out [https://en.wikipedia.org/robots.txt](https://en.wikipedia.org/robots.txt) for an example.
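Python ships a parser for this format in the standard library, so you don't have to match the directives yourself. A small sketch with a made-up robots.txt (in the crawler you'd fetch the real file from the site's root first):

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
# parse() accepts the file's lines; these two are a hypothetical example.
rp.parse(['User-agent: *', 'Disallow: /private/'])
print(rp.can_fetch('*', 'http://example.com/faq/'))       # True
print(rp.can_fetch('*', 'http://example.com/private/x'))  # False
```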
-### Further Exercises
+## Further Exercises
1. Modify your program to follow `robots.txt` rules if found.
-Right now, our regular expression will not capture links that are more complicated than `<a href="/faq/">` or `<a href="http://unhackathon.org/">`. For example, `<a class="nav" href="/faq/">` will fail because we do not allow for anything but a space between `a` and `href`. Modify the regular expression to make sure we’re following all the links. Check out [https://regex101.com/](https://regex101.com/) if you’re having trouble.
-2. Our program involves a graph traversal. Right now our algorithm resembles bread-first search. What simple change can we make to get depth-first search?
-3. In addition to each page’s URL, also print its title and the number of child links, in CSV format.
-4. Instead of CSV format, print results in JSON. Can you print a single JSON document while using generators? You can validate your JSON at [http://pro.jsonlint.com/](http://pro.jsonlint.com/).
-5. There are some bugs in our code above. Can you find them and fix them?
+2. Right now, our regular expression will not capture links that are more complicated than `<a href="/faq/">` or `<a href="http://unhackathon.org/">`. For example, `<a class="nav" href="/faq/">` will fail because we do not allow for anything but a space between `a` and `href`. Modify the regular expression to make sure we’re following all the links. Check out [https://regex101.com/](https://regex101.com/) if you’re having trouble.
+3. Our program involves a graph traversal. Right now our algorithm resembles breadth-first search. What simple change can we make to get depth-first search? Can you think of a scenario where this makes a difference?
+4. In addition to each page’s URL, also print its title and the number of child links, in CSV format.
+5. Instead of CSV format, print results in JSON. Can you print a single JSON document while using generators? You can validate your JSON at [http://pro.jsonlint.com/](http://pro.jsonlint.com/).
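For the breadth-first versus depth-first question, the queue-versus-stack view is the whole trick; a sketch over a fake link graph (all names here are ours):

```python
# Popping from the front of the frontier gives breadth-first order (a queue);
# popping from the back gives depth-first order (a stack).
def traverse(start, neighbors, depth_first=False):
    seen, frontier = [], [start]
    while frontier:
        node = frontier.pop() if depth_first else frontier.pop(0)
        if node in seen:
            continue
        seen.append(node)
        frontier.extend(neighbors.get(node, []))
    return seen

site = {'a': ['b', 'c'], 'b': ['d'], 'c': [], 'd': []}
print(traverse('a', site))                    # ['a', 'b', 'c', 'd']
print(traverse('a', site, depth_first=True))  # ['a', 'c', 'b', 'd']
```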
-### Conclusion
+## Conclusion
This concludes the tutorial. We hope it illustrated the basic concepts at work in building a web crawler. Perhaps now is a good time to step back and review your code. You might want to do some refactoring, or even write some tests to help prevent you from breaking what you have working now as you modify it to expand its functionality.
As you consider where to go next, remember we’re available to answer any questions you might have. Cheers!
-## What Next?
+# What Next?
This tutorial above was just intended to get you started. Now that you’ve completed it, there are many options for branching off and creating something of your own. Here are some ideas:
-* Import your JSON into a RethinkDB database and then create an app that queries against that database. Or analyze the data with a number of queries and visualize the results.
+
+* Import your JSON into a [RethinkDB](https://www.rethinkdb.com/) database and then create an app that queries against that database. Or analyze the data with a number of queries and visualize the results.
* Analyze HTTP header fields. For example, one could compile statistics on different languages used on the backend using the X-Powered-By field.
-* Implement the functionality of Scrapy using a lower level library, such as Requests.
+* Implement the functionality of [Scrapy](http://scrapy.org/) using a lower level library, such as Requests.
* Set up your web crawler to repeatedly crawl a site at set intervals to check for new pages or changes to content. List the URLs of changed/added/deleted pages or perhaps even a diff of the changes. This could be part of a tool to detect malicious changes on hacked websites or to hold news sites accountable for unannounced edits or retractions.
* Use your crawler to monitor a site or forum for mentions of your name or internet handle. Trigger an email or text notification whenever someone uses your name.
* Maintain a graph of the links you crawl and visualize the connectedness of certain websites.
@@ -218,8 +225,3 @@ This tutorial above was just intended to get you started. Now that you’ve comp
* Starting from a list of startups, crawl their sites for pages mentioning “job”/”careers”/”hiring” and from those scrape job listings. Use these to create a job board.
Or perhaps you have an idea of your own. If so, we look forward to hearing about it!
-
-## Additional Resources
-
-* [Scrapy](http://scrapy.org/) is a scraping library for Python. It is higher level than Requests.
-* [Request](https://www.npmjs.com/package/request) is a "Simplified HTTP request client" for Node.js. It might be useful if you’d rather use JavaScript/Node than Python.
diff --git a/springboard-projects/websockets.md b/springboard-projects/websockets.md
index 79e9262..c5c1844 100644
--- a/springboard-projects/websockets.md
+++ b/springboard-projects/websockets.md
@@ -127,7 +127,7 @@ Now when you run main.go, and hit localhost:8080, you should see a red "Hello, W
### Javascript drawer
-As our next step, we need the actual 'drawing' part of our application. We will be using a javascript "canvas" element to custom drawing. Canvases come in two flavors, 2D and 3D, but since we're just creating a drawing App, we will stick to the 2D version for now. F
+As our next step, we need the actual 'drawing' part of our application. We will be using a JavaScript "canvas" element to do custom drawing. Canvases come in two flavors, 2D and 3D, but since we're just creating a drawing app, we will stick to the 2D version for now.
For the HTML, we just need to have a plain 'canvas' element on a blank page.
{% highlight html %}