Making Instagram video with PowerPoint

Audio slideshows are something I’ve included in my practical teaching for a little while. The combination of images and well-recorded audio is, for me, a compelling form of content and it can be an easy video win for non-broadcast shops.

When I work with students and journalists exploring the concept, I try to look for free or cheap solutions to the production process. In the past I’ve used everything from Windows Movie Maker to YouTube’s simple editor app to put packages together. But this year, when I was putting the workshops together, I wanted to focus on social platforms and go native with video on Instagram.

Video on Instagram

It’s not the first time I’ve looked at Instagram video. A few years ago, having seen a presentation about the BBC’s Instafax project (in 2014!), I had a look at cheap and free tools for creating video for Instagram. But things have moved on — like the BBC’s use of Instagram.

So I started to look at how I might use that combination of accessible tools, with a view to doing an update on that post. I found myself thinking about PowerPoint.

Why PowerPoint?

When I talk to students about video graphics, I often point them to presentation apps like Google Slides and PowerPoint as simple ways to create graphic files for their video packages. They have loads of fonts, shapes and editing tools in a format students are familiar with (more of them have made a PowerPoint presentation than have worked with a video titling tool!). The standard widescreen templates work in pretty much any video editing package, and you can export single slides as images. So I took a quick look at PowerPoint to remind myself of the editing tools. Whilst I was playing around with the export options, I discovered that it could export to video. So I opened up PowerPoint to see how far I could go, and about an hour of playing around later I had the video below.

I worked through the process on a Windows version of PowerPoint, but the basic steps are pretty much the same on a Mac. If you’re on a Mac then Keynote is also a good alternative: it will do all of the stuff you can do with PowerPoint, with the added bonus that it will also handle video.

Here’s what I did. (You can download the PowerPoint file and have a look — I’m making it available as CCZero.)

You can see a video walk-through of parts of the process or scroll down for more details.

The process

  • Open Powerpoint and start with a basic template
  • Click the Design tab and then select Slide Size > Custom Slide Size (Page Setup on Mac)
  • Set the width and height to the same value to give you Instagram’s square aspect ratio. Click OK, and don’t worry about the scaling warning

You can set a custom slide size in PowerPoint, which means you can create slides that fit Instagram and other platforms.

You can now play around with the editing tools to place text, images and other elements on each slide.
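As an aside, if you find yourself making a lot of these, the square set-up can even be scripted. Here’s a minimal sketch using the python-pptx library (the 7.5-inch square and the placeholder text are my own choices, and you’d still do the video export from PowerPoint itself):

from pptx import Presentation
from pptx.util import Inches, Pt

prs = Presentation()
# A square canvas for Instagram
prs.slide_width = Inches(7.5)
prs.slide_height = Inches(7.5)

# Layout 6 is the blank layout in the default template
slide = prs.slides.add_slide(prs.slide_layouts[6])
box = slide.shapes.add_textbox(Inches(0.5), Inches(3), Inches(6.5), Inches(1.5))
box.text_frame.text = 'Your headline here'
box.text_frame.paragraphs[0].font.size = Pt(40)

prs.save('instagram-square.pptx')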

Animating elements

The tools for adding shapes and text are pretty straightforward, but one effect that seems popular is ‘typewriter’ style text, where the words animate onscreen. Luckily that’s built into PowerPoint.

  1. Add a text box and enter the text. Make sure you have the text box selected, not the text
  2. Go to the Animations tab and, with the text box selected, click Appear
  3. Open the Animation Pane from the toolbar
  4. In the Animation Pane, right-click the text box’s animation (it will be named with any text you’ve added) and select Effect Options
  5. Under Animate text, select By word. You can speed the text up using the delay setting (note: you can’t do this in the Mac version)

The typewriter effect is a common one in many social videos, and one that PowerPoint makes short work of.

For the rest, it’s worth experimenting with basic transitions and animations before you try anything too complex. Once you start to get separate elements moving around, you’ll need to think of your text as separate elements — you’ll end up with ‘layers’ of text, but that’s no different from a video editor.

Adding Audio

You can add audio to individual slides or to play as an audio ‘bed’ across all the slides.

A common feature of audio slideshows on Instagram (and other social platforms) is that the text drives the story; the audio is often music or location sound that adds a feel for the story. In this example I used sound I recorded at the scene, but you could use any audio, e.g. a music track.

You can also adjust the timing of slides to match the audio or just to give you control over the way slides transition and display.

Transitions and timings give you control over how long content appears and how it appears

Exporting your video

Once you’re happy with your presentation you can create a video version:

  • Click the File tab
  • Select Export > Create a Video

You have a few choices here. The quality setting allows you to scale the video: Presentation Quality exports at 1080×1080, Internet Quality at 720×720 and Low Quality at 480×480. I went for Internet Quality as it kept the file size down without compromising the quality too much.

You can also set the video to use the timings you set up in each slide or to automatically assign a set time to each slide. Which one you pick will depend on the type of video you want to make.

Exporting to video is one of the default options in PowerPoint. Both the PC and Mac versions will save to MP4.

Getting video on Instagram

Instagram has no browser interface for uploading, so once the video is exported you’ll need to transfer the final file to your mobile device. I got by emailing files around, but you might want to look at alternatives like WeTransfer or Google Drive for moving files from desktop to mobile.

Beyond Instagram

It’s worth noting, even belatedly, that your video doesn’t have to be square. Instagram is happy with standard video resolutions; you could use a standard 16×9 template and Instagram will be fine. I just wanted to be a bit more ‘native video’. But there is nothing stopping you setting up templates for Twitter video (W 10cm × H 5.6cm — landscape video) or Snapchat (W 8.4cm × H 15cm — portrait video).

Conclusions

There are limitations to using PowerPoint:

  • You need PowerPoint — it’s an obvious one, but I recognise that not everyone has access to Office. That said, it can also be the only thing people do have! It’s a trade-off.
  • It’s not happy with video — if I embed a video into the presentation, PowerPoint won’t export it as part of the video. According to the help file there are codec issues. I haven’t experimented with Windows-native video formats, which may help, but it seems like a bit of a mess. It’s a shame: it will take an MP4 from an iPhone and play it well, and it will spit out an MP4, but it won’t mix the two! Those of you on a Mac, this is the point to move to Keynote, which is quite happy to include video.
  • Effects can get complicated — once you get beyond a few layers of text, the process of animation can be tricky. In reality it’s no more or less tricky than layering titles in Premiere Pro, and the Animation Pane makes it a little easier by giving you a timeline of sorts.
  • Audio can be a faff — the trick with anything other than background sound is timing. Knowing how long each slide needs to be to track with the audio adds a layer of planning that the timeline interface of an editing package makes more intuitive.
  • It’s all about timing — without a timeline, making sure your video runs to length is a pain. With the limitations of some platforms, that could mean some trial and error to get the correct runtime.

But problems aside, once you’ve got a presentation working, I could see it easily being used as a template on which to build others. The slideshows are also pretty transferable, as the media is packaged up in the PPT file.

It’s not an ‘ideal’ solution but it was fun seeing just where you could take the package as an alternative platform for social video.

Don’t forget, you can download the PPT file I used and have a dig around (CCZero). Let me know if you find it useful.

Mapping Drone near misses in Google Earth*

My colleague Andrew Heaton from the Civic Drone Centre set me off on a little adventure with mapping tools when he showed me a spreadsheet of airprox reports involving drones.

In my head an airprox report describes what is often called a ‘near miss’, but the UK Airprox Board more accurately describes it like this…

An Airprox is a situation in which, in the opinion of a pilot or air traffic services personnel, the distance between aircraft as well as their relative positions and speed have been such that the safety of the aircraft involved may have been compromised.

The board produces very detailed reports (all in PDF!) on all events reported to them, not just drones, and packs it all up in a very detailed spreadsheet each year. You can also get a sheet that has all reports from 2000–2016! (h/t Owen Boswarva). If you look at those sheets and you just want drone reports, look for ‘UAV’. There is also a very detailed interactive map of UK Airprox locations you can look through.

But given I’m on a bit of a spreadsheet/maps thing at the moment, I thought it would be fun to see if I could get the data from the spreadsheet into Google Earth. Why? Well, why not. But I did think it would be cool to be able to fly through the flight data!

Getting started.

The Airprox spreadsheet

At first glance the data from the Airprox board looks good. The first thing to do is tidy it up a bit. The bottom twenty or so rows are reports that have yet to go to the ‘board’, so the details on location are missing; I’ve just deleted them. Each log also has latitude and longitude data, which means mapping should be easy with things like Google Maps. But a look over it shows the default lat and long units are not in the format I’d expected.

This sheet uses a kind of shorthand that packs the degrees and minutes of latitude and longitude together. These are co-ordinates based on distance north of the equator — the N you can see in the latitude — and distance to the west and east of the Greenwich Meridian line — the W and the E you can see in the longitude. To get it to work with stuff like Google Maps and other off-the-shelf tools, it would be more useful to have it in decimal co-ordinates, e.g. 51.323 and -2.134.

Converting the lat and long

This turned out to be not that straightforward. Although there are plenty of resources around for converting coordinate systems, the particular notation used here tripped me up a little. A bit of digging around, including a very helpful spreadsheet and guide from the Ordnance Survey, and some trial and error sorted me out with a formula I could use in a spreadsheet.

Decimal coordinates = (((secs/60) + mins)/60) + degrees

If the longitude is W then multiply by -1, e.g. ((((secs/60) + mins)/60) + degrees)*-1. So to convert 5113N 00200W to decimal:

Latitude =((((00/60)+13)/60)+51) = 51.21666667
Longitude =((((00/60)+00)/60)+2)*-1 = -2

Running that formula through the spreadsheet gave me a set of co-ordinates in decimal form. To test it I ran them through Google Maps.
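If spreadsheets aren’t your thing, the same conversion is only a few lines of script. Here’s a minimal sketch in Python; the function name is mine, and it assumes the shorthand carries degrees and minutes only (the seconds in the examples above are always zero):

def airprox_to_decimal(token):
    # e.g. "5113N" -> 51.2166..., "00200W" -> -2.0
    # Assumes degrees + minutes with no seconds, as in the Airprox sheet
    hemisphere = token[-1]
    digits = token[:-1]
    degrees = int(digits[:-2])  # everything before the final two digits
    minutes = int(digits[-2:])  # the final two digits are minutes
    value = degrees + minutes / 60.0
    return -value if hemisphere in ("S", "W") else value

print(airprox_to_decimal("5113N"))   # 51.21666666...
print(airprox_to_decimal("00200W"))  # -2.0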

Getting off the ground.

Google Maps is great, but it’s a bit flat. Literally. The Airprox data also contains altitude information, and that seems like an important part of the data to reflect in any visualization of things that fly! That’s why Google Earth sprang to mind.

To get data to display in Google Earth you need to create KML files. At their most basic these are pretty simple. You can add a point to a map with a simple text editor and a few basic lines like the ones below. Just save the file with a .kml extension, e.g. map.kml

<?xml version="1.0" encoding="UTF-8"?> 
<kml xmlns="http://earth.google.com/kml/2.0"> 
<Document>
<Placemark> 
 <name>Here is the treasure</name> 
 <Point>
  <coordinates>
    -0.1246, 51.5007
  </coordinates>
 </Point>
</Placemark>
</Document> 
</kml>

KML files usually open in Google Earth by default, and when it opens it should settle on something a bit like the shot below.

Google Earth jumps to the point defined in the KML file.

Adding some altitude to the point is pretty straightforward. The height, measured in metres, is added as a third co-ordinate. You also need to set the altitudeMode of the point, “which specifies a distance above the ground level, sea level, or sea floor”.

<?xml version="1.0" encoding="UTF-8"?> 
<kml xmlns="http://earth.google.com/kml/2.0"> 
<Document>
<Placemark> 
 <name>Here is the treasure</name> 
 <Point>
  <coordinates>
    -0.1246, 51.5007, 96 
  </coordinates>
   <altitudeMode>relativeToGround</altitudeMode>
 </Point>
</Placemark>
</Document> 
</kml>

The result looks something like this.

Setting the altitudeMode and setting an altitude co-ordinate gives your point a lift.

But hold your horses! There’s a problem.

The Altitude column in the Airprox sheet is not in metres. It’s in feet.

When it comes to distances, aviation guidance mixes its units. Take this advice from the Civil Aviation Authority’s DroneCode as an example:

Make sure you can see your drone at all times and don’t fly higher than 400 feet

Always keep your drone away from aircraft, helicopters, airports and airfields

Use your common sense and fly safely; you could be prosecuted if you don’t.

Drones fitted with cameras must not be flown:

within 50 metres of people, vehicles, buildings or structures, over congested areas or large gatherings such as concerts and sports events

On the ground it’s metres, but height is in feet! So the altitude data in our sheet will need converting. Luckily Google Sheets comes to the rescue with a simple formula:

=CONVERT(A1,"ft","m")

A1 = altitude in feet
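For anyone scripting rather than spreadsheeting, the same conversion is a single multiplication, since 1 foot is 0.3048 metres. In Python:

def feet_to_metres(ft):
    # the same job as =CONVERT(A1,"ft","m")
    return ft * 0.3048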

Once we’ve sorted that out, we can look at creating a more complete KML file from a spreadsheet with more rows.

Creating a KML file from the spreadsheet

The process of creating a KML file from the Airprox data was threatening to become a mammoth session of cut-and-paste, typing co-ordinates into a text editor. So anything that could automate the process would be great.

As a quick fix I got the spreadsheet to write the important bits of code using the =concatenate formula.

=CONCATENATE("<Placemark> <name>",A1,"</name><Point> <coordinates>", B1,",",C1,",",D1,"</coordinates> <altitudeMode>relativeToGround</altitudeMode> </Point> </Placemark>")

Where 
A1 = the text you want to appear as the marker
B1 = the longitude
C1 = the latitude
D1 = the altitude
The spreadsheet can do most of the coding for you using the =concatenate formula to build up the string (click the image to see the spreadsheet)

To finish the KML file, you select all the cells with the KML code in them and then paste that into a text file between the standard text that makes up a KML header and footer.

<?xml version="1.0" encoding="UTF-8"?> 
<kml xmlns="http://earth.google.com/kml/2.0"> 
<Document>

paste the code from the cells here.

</Document> 
</kml>

Your file will look something like the code below. There’ll be a lot more of it and don’t worry about the formatting.

<?xml version="1.0" encoding="UTF-8"?> 
<kml xmlns="http://earth.google.com/kml/2.0"> 
<Document>
<Placemark> <name>Drone</name><Point> <coordinates>-2,51.2166667,91.44</coordinates> <altitudeMode>relativeToGround</altitudeMode> </Point> </Placemark><Placemark> <name>Drone</name><Point> <coordinates>-2.0166667,51.2333333,91.44</coordinates> <altitudeMode>relativeToGround</altitudeMode> </Point> </Placemark><Placemark> <name>Unknown</name><Point> <coordinates>-2.6833333,51.55,2133.6</coordinates> <altitudeMode>relativeToGround</altitudeMode> </Point> </Placemark><Placemark> <name>Model Aircraft</name><Point> <coordinates>0.25,52.2,259.08</coordinates> <altitudeMode>relativeToGround</altitudeMode> </Point> </Placemark>
</Document> 
</kml>

The result of the file above looks something like this.

With a simple file you can add lots of points with quite a bit of detail.
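For the record, here’s roughly what that cut-and-paste assembly looks like as a script. A minimal sketch in Python: the CSV file name and column names are my own assumptions, and the feet-to-metres line does the job of the =CONVERT formula above.

import csv

HEADER = ('<?xml version="1.0" encoding="UTF-8"?>\n'
          '<kml xmlns="http://earth.google.com/kml/2.0">\n<Document>')
FOOTER = '</Document>\n</kml>'
POINT = ('<Placemark> <name>{name}</name><Point> '
         '<coordinates>{lon},{lat},{alt:.2f}</coordinates> '
         '<altitudeMode>relativeToGround</altitudeMode> </Point> </Placemark>')

placemarks = []
with open('airprox.csv') as f:
    for row in csv.DictReader(f):
        placemarks.append(POINT.format(
            name=row['name'],
            lon=row['longitude'],
            lat=row['latitude'],
            alt=float(row['altitude_ft']) * 0.3048))  # feet to metres

with open('airprox.kml', 'w') as out:
    out.write(HEADER + '\n' + '\n'.join(placemarks) + '\n' + FOOTER)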

Is it floating?

When we zoom in to a point it can be hard to tell whether the marker is off the ground or not, especially if we have no reference point like Big Ben! Luckily you can set the KML file to draw a line between the ground and the point to make it clearer. You need to set the <extrude> option by adding it to the point data:

<Placemark> <name>Unknown</name><Point> <coordinates>-2.6833333,51.55,2133.6</coordinates> <altitudeMode>relativeToGround</altitudeMode> <extrude>1</extrude></Point> </Placemark>

The result looks a little like this:

Wrapping up, some conclusions (and an admission)

There is more we could do here to get our KML file really working for us: getting more data onto the map, maybe a different icon. But for now we have pretty solid mapping of the points and a good framework from which to explore how we can tweak the file (and maybe the spreadsheet formula) to get more complex mapping.

Working it out raised some immediate points to ponder:

  • It was an interesting exercise, but it started to push the limits of a spreadsheet. Ideally the conversion to KML (and some of the data work) would be better done with a script. But I’m trying to be a bit strict and keep any examples I try as simple as possible for people to have a go.
  • The data from the Airprox board is, erm, problematic. The data is good, but it needs a clean and some standard units wouldn’t go amiss. It could also do with some clear licensing or terms of use on the site. I could be breaking all kinds of rules just writing this up.
  • The data doesn’t tell a story yet. There needs to be more data added, and it needs to be seen in context, i.e. its relationship to flight paths and other information.

And now the admission. I found a pretty immediate solution to this exercise in the shape of a website called Earth Point. It has a load of tools that make this whole process easier, including an option to batch convert the odd lat/long notation. It also has a tool that will convert a spreadsheet into a KML file (with loads of options). The snag is that it charges a subscription for doing batches of stuff. However, Bill Clark at Earth Point does offer free accounts for education and humanitarian use, which is very nice of him.

So I used the Earth Point tools to do a little more tweaking, with some pleasing (to me) results.

You can download the KML file and have a look yourself. Let me know what you think and if you have a go.

Thanks to Andrew Heaton for advice and helpful navigation round the quirks of all things drones and aviation. If you have any interest in that area I can really recommend him and the work the CDC do.

*Yes, I’m pretty sure ‘near misses’ isn’t the right word but forgive me a little link bait.

Don’t postmortem journalism. It’s not dead. Fix it.

In the aftermath of the Trump win many in the media are looking inward to understand what went wrong. But is it too soon to write off journalism as a failed project?

In the very short time we’ve had to get used to the idea that Donald Trump will be the 45th President of The United States, the hand-wringing for journalism has already started.

‘We did this’.

‘We didn’t see this coming’.

‘We trusted the data and not the people’.

‘We’ve lost touch with proper journalism’.

There is no doubt that what we know as the modern media is breaking apart. The strands of a profession that hold it together, that define it, are impossibly stretched by digital fragmentation and an economy that now sells choice over balance. More than any recent events, even post-Brexit here in the UK, the industry seems shaken to its core by its lack of foresight. The simmering existential crisis that dogs journalism now risks becoming full blown, crippling self-doubt for those that find their powerful journalistic tools and practice are ineffectual.

The knives are not just out for a postmortem. Many in journalism are taking the opportunity to cut down some tall poppies. Data journalism is already the main target for traditional journalists championing a return to ‘proper journalism’ — with all the self-righteous confidence of a Trump supporter mandated by the win to call foul on liberal thinking.

But now is not the time to ‘fix journalism’.

Journalism — this election was not about you. In the next few weeks, we’ll need you to explain what’s happening. God knows what the repercussions of today will be. No one has a clue. That’s your job.

Don’t fill the airwaves with conversations about the role of the media. Don’t cram the pages of your papers with handwringing. This wasn’t a surprise. This was the outcome we couldn’t sell to ourselves.

We know what the lessons are.

Time to learn them by doing.

Why social media isn’t blogging.

I’m teaching first year journalism students at the moment and talking to them about a professional online presence. A phrase I’ve been using a lot is blogging. The idea of a ‘blog’ and its value to an aspiring journalist is one I’m really comfortable with, but I checked myself and wondered just what it might mean to the students.

As part of that, I had a look at Google Trends to see how the term blog was faring. As I noted on Twitter:

If you read all of this post, the irony that I put this on Twitter before I wrote this post — before I blogged it — will not be lost. As many pointed out in the conversation around the tweet, by putting it on Twitter I was blogging. Maybe it’s the terminology that’s changed.

But for me there is something more about the idea of blogging; something more about what that term means.

There is a very mechanical element to the idea of a blog. At their heart is a mechanism by which anyone (with little more than the time to google their way through the set-up process) can set up a dedicated publishing platform for their content and share it with people — the press tools Jay Rosen talked about. In this context, it’s easy to see how the idea of blogs can be subsumed into contemporary platforms and practice. Twitter and other social media platforms do the same thing. Don’t they?

Blogger has also become a proper noun (beyond the Google platform*). It’s a job title. It must be a proper job because we now differentiate between types of blogger — celebrity bloggers, fashion bloggers (it’s a kind of differential journalism). And to be frank, the amount of money many of them earn certainly qualifies it as ‘a living’.

But, and I realise this is where I make this quite parochial and personal, in the journalism sphere, blogging has always meant more to me than simply the process.

Blogging as critical practice.

As digital disrupts, those in the industry who innovate, explore or just honestly talk about the challenges of the day-to-day are pushed apart. Connections are lost. So the value of social media in holding together and sustaining communities of practice is immeasurable. But social media is prone to echo chambers, and it’s hard for new voices to break in and disrupt the same old conversations. More fundamentally, social media has no collective memory. The mistakes, learning and context are lost in the stream of news. The echo chamber reverberates to a constant churn of the same questions popping up again and again.

Blogging, for me, was a way of setting that down — the collective wisdom of a community; a way for the community to archive its learning and insights. But more than that, it was a way for us to share the working out, not just the result. It was and continues to be a way for me to test my thoughts.

It also has been one of the key activities that has driven me to get enough profile that you’re reading this at all. It’s allowed me to build a presence alongside the chatter of social media. Something that underpins my transitory interactions with something more substantial (but maybe no less sensible!). An opportunity that is still there for aspiring journalists to grasp and exploit.

There isn’t the time, space or traction for that level of depth or reflection on social media.

So, as much as blogging may be becoming a bit of a legacy term, I still hold to my thought that “a blog is about the space to say why you think something in a world of people saying what they think in 140 chars or less.”

For me blogging was and still is a critical and thoughtful process.

*Just having to clarify that says something about the collective memory of social media.

Mapping street level crime in an area

A little while ago I was playing around with the API at data.police.uk, looking at a way to pull the data into a Google spreadsheet (and at some of the issues around the way policing areas are constructed).

Yesterday I found myself playing with the API again and looking at quick and easy ways to pull data out based on a particular area.

Before I go any further, I’d recommend that if you’re going to do anything with crime data from data.police.uk, you read the About pages for more information on what the data means and where the limitations are.

Back to the project…

I know that the data.police.uk API can deliver street-level crime reports based on a number of criteria, including multiple latitude and longitude points that describe a shape.

https://data.police.uk/api/crimes-street/all-crime?poly=52.268,0.543:52.794,0.238:52.130,0.478&date=2013-01

I wondered how easy it would be to get the points of a custom polygon, like the one below, so I could get more specific data.

So I created a basic polygon using Google MyMaps and set about seeing if I could get the data out.

Making the shape

The easiest way to get at the data used to describe the polygons is by exporting the map as a KML file. In Google My Maps:

  1. In the left panel, click Menu (it looks like three dots on top of each other)
  2. Select Export as KML.
  3. You can choose the layer you want to export, or click Entire map. I just picked the layer with the Polygon on.
  4. Click Export.

Sorting out the lat and long points

The exported file is a text file, so we can open it in any text editor; it will look something like this (I’ve just included the first part). It’s those co-ordinates that I want to get at.

<?xml version='1.0' encoding='UTF-8'?>
<kml xmlns='http://www.opengis.net/kml/2.2'>
 <Document>
  <name>Crime Layer</name>
  <Placemark>
   <name>Crime area</name>
   <styleUrl>#poly-000000-1-77-nodesc</styleUrl>
   <Polygon>
    <outerBoundaryIs>
     <LinearRing>
      <tessellate>1</tessellate>
      <coordinates>-2.7231503,53.7637821,0.0 -2.7239227,53.763021,0.0 -2.720747,53.7586067,0.0 -2.7239227,53.7518067,0.0 -2.7229786,53.7493706,0.0 -2.7213478,53.7495229,0.0 -2.7176571,53.7501319,0.0 -2.715168,53.7485078,0.0 -2.7113915,53.7475942,0.0 -2.7094174,53.7476957,0.0 -2.7033234,53.7507917,0.0 -2.6967144,53.7516544,0.0 -2.6905346,53.7486093,0.0 -2.6857281,53.7488631,0.0 -2.6790333,53.7531769,0.0 -2.6811791,53.7566277,0.0 -2.6800633,53.7606363,0.0 -2.6809216,53.7612959,0.0 -2.6774883,53.7620063,0.0 -2.6780892,53.7630717,0.0 -2.6846123,53.7693626,0.0 -2.6918221,53.7693626,0.0 -2.7057266,53.7690583,0.0 -2.7167988,53.7671305,0.0 -2.7231503,53.7637821,0.0</coordinates>
     </LinearRing>
    </outerBoundaryIs>
   </Polygon>
  </Placemark>...

Sadly the co-ordinates are in the wrong format for data.police.uk:

  1. The lat and long are reversed
  2. The data.police.uk API wants each pair (lat and long that describes a point) separated by a colon (:)

So we are going to need to clean the data up a bit. You could take the data points and use various filters, formulas and other things (regex etc.). There are plenty of ways to do this, but to be honest, with such a small set of points I did it by hand.

The biggest issue is getting each pair onto a new line. If you can do that, then they should cut and paste into a spreadsheet, and you can use the SPLIT command in Google Sheets to break the data down. Once you’ve got the lat and long in adjacent columns, the CONCATENATE formula will help rebuild things in the right format, and then the JOIN formula will shunt them back into one line.

The SPLIT formula can be used to separate lat and long using the comma as the delimiter (the thing you split on). Adding TRUE means it will split on consecutive commas.

The CONCATENATE formula can be used to join the Lat and Long back together again in the right order, separated by a comma

Finally the JOIN formula helps shunt them all together onto one line, separated by the colon that data.police.uk wants for the API call.
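If you’d rather not do the clean-up by hand, it scripts neatly too. A minimal sketch in Python; the file name is mine, and it assumes a single polygon in the exported KML:

import re

with open('crime-area.kml') as f:
    kml = f.read()

# Grab everything between the <coordinates> tags
coords = re.search(r'<coordinates>(.*?)</coordinates>', kml, re.S).group(1)

pairs = []
for triple in coords.split():
    lon, lat, _alt = triple.split(',')  # KML order is lon,lat,alt
    pairs.append(lat + ',' + lon)       # the API wants lat,lng

poly = ':'.join(pairs)
print('https://data.police.uk/api/crimes-street/all-crime?poly=' + poly)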

Some final cutting and pasting and I ended up with this URL to call the API

https://data.police.uk/api/crimes-street/all-crime?poly=53.7637821,-2.7226353:53.763021,-2.7234077:53.7586067,-2.720232:53.7518067,-2.7234077:53.7493706,-2.7224636:53.7495229,-2.7208328:53.7501319,-2.7171421:53.7485078,-2.714653:53.7475942,-2.7108765:53.7476957,-2.7089024:53.7507917,-2.7028084:53.7516544,-2.6961994:53.7486093,-2.6900196:53.7488631,-2.6852131:53.7531769,-2.6785183:53.7566277,-2.6806641:53.7606363,-2.6795483:53.7612959,-2.6804066:53.7620063,-2.6769733:53.7630717,-2.6775742:53.7693626,-2.6840973:53.7693626,-2.6913071:53.7690583,-2.7052116:53.7671305,-2.7162838:53.7637821,-2.7226353

Notice that there is no trailing : and I’ve left the date option off. That will give me any street-level crime reports in the defined area for the most recent month available. Plug that URL into a new browser tab and you get a page full of JSON data:

[{"category":"anti-social-behaviour","location_type":"Force","location":{"latitude":"53.764959","street":{"id":863936,"name":"On or near Carrol Street"},"longitude":"-2.690727"},"context":"","outcome_status":null,"persistent_id":"725ed090a9eda01c7b53e2e474005e78077bb6e9521a600d90b8a10383fbd05e","id":50943777,"location_subtype":"","month":"2016-08"},{"category":"anti-social-behaviour","location_type":"Force","location":{"latitude":"53.762666","street":{"id":862106,"name":"On or near Driscoll Street"},"longitude":"-2.690796"},"context":"","outcome_status":null,"persistent_id":"463cc6c50d3d8464a4f05d1e9f9d9e18d2138d0ba4b3d843daba7419660ddbaf","id":50939501,"location_subtype":"","month":"2016-08"},

Pulling the data into a spreadsheet

There are lots of applications and scripts that can read the JSON output from the Police API. But I wanted something that required minimal coding and could output something pretty easily, so I pulled the data into a Google spreadsheet using the importJSON script. Making the script work is dead easy thanks to Paul Gambill’s guide, How to import JSON data into Google Spreadsheets in less than 5 minutes.

Using the importJSON script, we can use the data.police.uk API call to populate a spreadsheet (you should be able to click the image and go through to the spreadsheet).
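If you’d rather skip the spreadsheet altogether, the same JSON can be pulled down and flattened with a short script. A minimal sketch in Python using the requests library; the output file name and the handful of fields I’ve kept are my own choices:

import csv
import requests

# The colon-separated point string built earlier (truncated here to three points)
poly = '53.7637821,-2.7226353:53.763021,-2.7234077:53.7586067,-2.720232'

url = 'https://data.police.uk/api/crimes-street/all-crime'
crimes = requests.get(url, params={'poly': poly}).json()

with open('crimes.csv', 'w') as f:
    writer = csv.writer(f)
    writer.writerow(['category', 'street', 'latitude', 'longitude', 'month'])
    for crime in crimes:
        location = crime['location']
        writer.writerow([crime['category'],
                         location['street']['name'],
                         location['latitude'],
                         location['longitude'],
                         crime['month']])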

Visualizing the data

Now that we have the data as a spreadsheet we could start to do some analysis, filtering etc. But we can get a quick win by using the spreadsheet to drive a map.

I went back to the map I used to create the polygon shape, added a new layer and then imported my crime layer spreadsheet into the map. A bit of crunching later and each crime was mapped as a point.

Conclusions

The API isn’t perfect — the data isn’t as fresh as I would like and the geolocation isn’t always accurate (they do say this, to be fair). Google Maps also has its quirks, especially when you’re dealing with lots of data points. But being able to export to KML is a nice feature, not only for pulling out polygon data. If you have Google Earth on your computer you can open the KML file and fly around the crimes in your area!

Exporting your Google Map as KML data means you can pull the data into Google Earth and fly around the crime locations.

It’s clunky, and no doubt there are more elegant solutions out there (please tell me if you know of them), but, a bit of messing with the format of the data aside, it worked how I thought it would: a ‘well, I can do this, so if I can do that it should work’ way of piecing together the tools. As a quick and dirty visualization tool (and an exploration of what APIs can do), I think it works well.

Let me know if you try it!

Note: The data from data.police.uk is made available under the Open Government Licence. That means you’re free to do pretty much anything with it, but you must link back to the source where you can.


How open is open data journalism?

Simon Rogers published a post last week that asked “What does data journalism look like in 2016?”. For Rogers, the winners of the data journalism awards “give us a great sense of where the industry is right now.”

He’s right: the range and depth of the use of data is reassuring, and the points Simon raises are well made and offer much food for thought.

But I did find myself getting snagged on one of his points: Open data is still vital.

The awards had a specific category for Open data:

Open data award. Using freedom of information and/or other levers to make crucial databases open and accessible for re-use and for creating data-based stories.

The language used here sits comfortably next to generally accepted definitions of open data. Here’s the definition of open data from http://opendefinition.org/ for example:

“Open data and content can be freely used, modified, and shared by anyone for any purpose”

The Open Data Handbook definition is helpful in highlighting the sharing element:

Open data is data that can be freely used, re-used and redistributed by anyone — subject only, at most, to the requirement to attribute and sharealike.

The winner of the open data category, LA NACION DATA — OPEN DATA Journalism for change is, as Rogers notes in his post:

“a model of open data journalism and this year won the prize for its approach to opening up public datasets in a country with no FOI laws and a long history of limiting media access to government information.”

It does everything required of it by both the definition and the category description. A well deserved win.

Rogers also cites Excesses Unpunished, by Convoca in Peru, which “opened up public data to help its users understand the country’s mining industry better.” The project is a media-rich and superbly executed investigation and presentation; it pulls together multiple data sources and offers a deeply informative view, making the issue and the information accessible. That’s different from open.

By the definition of open data (and the category criteria) the Convoca report didn’t fully open up public data. Where is the data that means I can check the work or make my own stories? The data they have created isn’t open and accessible for re-use.

And there is the snag.

If you look at the other entries on the shortlist in the category, it’s a similar story.

THE EXPRESS TRIBUNE (Pakistan) — a nice piece of data-driven investigation into the health issues caused by urban pollution that builds on existing research with solid reporting. Sadly the study conducted by Khyber Teaching Hospital and Peshawar Traffic Police isn’t linked. Neither is the Nature report. VERDICT: CLOSED DATA

Trinity Mirror (UK) — a great piece of local journalism with a nice level of interaction. But the data is from a commercial supplier with paid-for access to the original data. VERDICT: CLOSED DATA

Modern Investor magazine (UK) — a deep and focussed investigation into local government pension schemes that, for a small team, packs a punch. The investigation, done in part with data derived from hundreds of FOI requests, has created a “unique database”…that isn’t open. VERDICT: CLOSED DATA

Le Monde (France) — a great piece of work, in particular their partnership with journalism students, but where is the data? VERDICT: CLOSED DATA

It’s not all bad news though. The IndiaSpend (India) project is a great piece of sensor-driven data journalism. I love it. But where is the data that drives the map? The umbrella IndiaSpend project does have a “data room” which shows a plan to make the data open. VERDICT: OPEN (SUSPENDED)

For me, the only other shortlisted project, besides La Nacion, that makes the grade in terms of open is MWAZNA (Egypt). Their attempt to “explain and visualize government budget for everyone” is admirable and works well. Best of all, the data is available to download with a clear licence and in an open format. VERDICT: OPEN

MWAZNA’s budget ins and outs interactive links to the data, which is clearly open. Exemplary stuff.

All but two of the projects on this list (three if we accept the direction of travel IndiaSpend is taking) fail to make their data open. Remember, this is the shortlist, not all entries. So these are deemed open data by the judges.

So what’s the problem?

It’s fair to argue that resources and technology are an issue when it comes to making data open; they are. But MWAZNA entered in the small newsroom category, and Le Monde is clearly not short of resources in comparison. So you can’t say it’s size.

Privacy and data protection are also appropriate concerns I’ve heard voiced around opening up newsroom data — especially in a world where protecting sources and responsible use of data are often linked. This is a fair concern as far as it goes, but as open data advocates are fond of telling government and other bodies, opening up data doesn’t have to mean all your data. If you have a dataset driving a visualization, then that dataset shouldn’t have data protection or privacy issues associated with it.

What is open data journalism?

I think the real problem is the use of the word open. As I have noted elsewhere, open is really about where you put the pipe:

  • open| data journalism — data journalism done in an open way.
  • open data | journalism — journalism done with open data.

Either way, the shortlist reflects, at best, a patchy approach to both views.

There is an all too common confusion by journalists between the use of FOI to get data and open data. Using FOI is not open data. It’s using a mechanism of open government to get data. Yes, the data you get may well be delivered in an open way; it may even be open data. But using FOI to “open up data” to do journalism and then not sharing the data you use is not open data or open journalism.

Open data journalism should be using open data, FOIs or any other sources to collect data to tell a story and then sharing THAT data with your audience.

Does it matter?

Just to be very clear here: I’m not saying that any of the work here is bad journalism. So perhaps I’m being dogmatic or even a little pedantic about the use of the term open data. When there is clearly such good journalism going on, shouldn’t we just get on with it? Well, maybe.

But if the practice of data journalism is to deliver on transparency and openness, then openness needs to be part of the process. The data it holds needs to be open and, especially when it judges itself, it needs to respect the full extent of what that means rather than simply adopting the phrase in such an uncritical way.

I think if journalism really started to embrace the broader meaning of open data, it would be better off for it.

The Panama Papers & trickle down journalism

I’ve been reading a lot about the Panama Papers.

As a ‘thing’, the Panama Papers is an amazing project. It’s pretty much written, overnight, the textbook on how to run a 21st-century journalism investigation. The networked nature, the secrecy, the recognition of a global perspective — all of those elements have been robustly tested over nearly two years of investigation. It’s massively valuable.

The involvement of the ICIJ has been a really interesting part for me. I’ve been watching the emergence of organisations like ProPublica (and, in some respects, Wikileaks) for a while, and the role of allied journalistic organisations has been fascinating to see. It goes beyond philanthropy and, to some extent, advocacy. The intermediary role of these organisations is a vital pivot point for pulling together investigations like this.

I’ve also been reading that this is the breakthrough for data journalism.

If we see data journalism as a process — the mechanics of using data — then the Panama Papers is inarguably proof that modern investigative journalism needs data journalism skills.

But if you believe that data journalism reflects something more — a broad approach to journalism that is ‘new’ or different from the old — then it’s a powerful hook on which to hang that view. I’ve certainly seen enough conversation to suggest that the Panama Papers represent a vindication of data journalism — the resignation of Sigmundur Davíð Gunnlaugsson has been used to invoke Watergate — the head on a spike that proves data journalism can do what ‘traditional journalism’ can do and bring down presidents.

The impact, especially for what it means for data journalism, has been measured and discussed in a quite rarefied way. It’s exciting for journalism insiders and the sheer scope of the story makes it ‘feel’ important — and yes. It is important.

But as the ‘story’ percolates into the national context, it moves beyond the broad shock (or lack of it) at the extent to which dictators, war criminals and others break the law to hide their ill-gotten gains. In the UK at least, it’s fast become an ideological issue — people aren’t breaking the law, but is it right? — it has become political. In the academic sense it remains elite.

What impact it might have, or the extent to which it will move further down the ‘accountability’ chain to a regional or local level, is yet to be seen. Will we be seeing the impact of the Panama Papers at local council level? Maybe. But I do think there is a risk that the Panama Papers could end up a whole new form of trickle-down journalism; the impact and benefits remain in the elite journalism sphere and don’t find their way down the chain*. Perhaps that’s more about the state of the channels for accountability further down the chain — there are fewer places for this stuff to trickle.

I’d hope the sheer weight and scale of the story would apply enough pressure to shift some of the blockages. Once the raw information starts to flow (and I hope it will) and we can begin to look for more ‘local’ angles, then we will really see if the lessons learned, as well as the story, will have the impact they deserve.

That’s where I also think data journalism as a broad concept, rather than just a description of a mechanical process, has the best opportunity to show its value. As much as the Panama Papers add to an enviable canon of big wins for data journalism, there is a chance here to show the lessons can scale down as well as up.

*Just to be clear: I know there has been some criticism, couched in these terms, of the lack of transparency from organizations like Wikileaks. I think the approach so far of not opening up all the ‘data’ has been sensible and appropriate. That said, I do think it is a bullet they are going to have to bite sooner rather than later.

MOJO on Android?

Me and the HTC One. Getting to know each other.

I’m spending some time with Android doing audio and video. I know, why would I do that when there’s the iPhone?

You get strong vibes from the community that Android is very unfriendly to #mojo. I’m often suspicious of that kind of thing, on the basis that it’s often over-familiarity with other platforms that makes for some entrenched thinking. But that assumption aside, given that nearly 50% of my j-students are non-Apple, it’s a concern for me that the response to #mojo issues is ‘wait for an upgrade and buy an iPhone’. This industry doesn’t do waiting very well.

Don’t get me wrong, I wrote this on an iPhone and will no doubt check in later on an iPad. I’m not having a downer on iOS or mojo for that matter (although what’s this obsession with plugging things in and bolting stuff on? Seems a bit Freudian if you ask me!). It’s more that it feels like mojo is becoming a single-platform phenomenon. In effect we have iMOJO but without the MOJroid to balance it out. It’s at risk of creating another exclusivity that we don’t need in journalism.

That said, and to be honest, I have to say Android is proving hard to like. Some of it is me; that over-familiarity with Apple thing. I’ve lost count of the times I’ve gone to click the home button, for example. I know I’m out of a comfort zone. But some of it is just the strangeness of Android. As a user, it feels like an OS where all the parts are designed by different people and then bolted together… Oh. Wait…

Anyway, perhaps the best I can say at the moment is that it’s not going out of its way to win my support, but perhaps it’s not as actively difficult as people might say.

Let’s say we are at the ‘it’s just different’ stage.

I’m not advocating that we all make MOJroid work (or, God forbid, adopt the term), but maybe ‘mojo’ could work a little harder at alternatives, stretch the comfort zone a little. Perhaps different is OK. Supply and demand and all that.

I’ll keep you informed of my progress but in the meantime. Tried and tested so far (with the help of the amazing Mr @documentally) are:

  • KineMaster: feature-full but pricey video editor
  • VivaVideo: gimmicky in places but very workable editing, and a better price than KineMaster
  • Picmotion: good photo slideshow app that lets you record your own VO. There’s a weird audio quality issue when using the phone mic, which can be bypassed by using a hands-free mic.
  • Cinema FV-5 Lite: a nice app to open up the features of the video camera. More controls, better tweaking etc. Higher-res video options (1280×960 and above) are in the pro version

UPDATE: These apps and others appear on Bernhard Lill’s excellent Thinglink for basic android mojo

Journalism Ethics in a digital world: In a nutshell

DIGITAL JOURNALISM ETHICS IN A NUTSHELL
For the last six years or so I’ve done a guest lecture on a colleague’s Journalism Ethics module under the title ‘ethics in a digital world’. All the lectures tend to be around the same theme – if you want to be treated as a journalist, you need to behave like one, and that might be at odds with the way everyone else does it.

The slide above represents the lightbulb moment when I realised what those six years really boiled down to.

The detail – what that means, who sets the behaviour etc. – is the lecture; those sitting through it might have preferred the slide! It’s also worth noting that my attempts to wrestle with the issue have resulted in a little devil’s-advocate challenging of ideas. This is presented in the same spirit.

So, here’s a bit of meat on the bones of the slide (not the whole lecture), for which I am massively indebted to Wil Wheaton and https://dontbeadickday.com/

Background

As I’ve researched the lectures, one thing that’s become clear is that most conversations around journalism ethics fall into one of two categories:

  1. Legal issues – most of what are considered ethical issues are actually legal issues. This seems especially common around the issue of comments and using user-generated content and social media.
  2. Most ethics codes are about the process not of being fair and good, but of not looking like a dick/idiot.

A good example of number 2 is the use of material from social media. A lot of ethical guidelines focus on the way you can verify images and multimedia from social media so that you don’t fall foul of hoaxes or people with agendas. Outside of a, frankly academic, debate about the difference between professional ethics and, well, ethics, I don’t see fact-checking as an “ethical issue”. Integrity? Brand protection? Yes. Ethics? No.

Ethics asks you why you did something, not how.

On being an idiot

It could be argued that no one wants to look like an idiot, but experience has taught me that this very much depends on the audience, and if nothing else, as journalists we play to an audience. The web, and social media in particular, have been instrumental in giving the audience a voice, but they have also raised the curtain on journalism and allowed all journalists, not just the chosen few, a channel for their own voice and the audience that comes with the social and cultural capital journalism as a profession gives us. More chance to be seen and heard, and more chance to be ‘unethical’.

Katie Hopkins: The Ethical Journalist?

You may look, for example, at Katie Hopkins and think ‘Idiot!’* But there’s a huge audience who think she isn’t. So she doesn’t care if you think she’s an idiot, and neither do they. You may, fully believing that you’re right and she isn’t, make this point clear to her and her followers (who by association you think are idiots too); feel good about it, but reap the vitriolic whirlwind that follows.

If you did it because you genuinely think they are idiots and you need them to know that – well done! Stick to your guns and fight your corner; that’s an ethical decision. Do it to get a few articles/retweets and follows out of it – less so. Ethically you’re in danger of being as much of an idiot as they are.

Increasingly the underlying argument by many media ethics people (interpreting journalists’ actions and responses) is that in a digital world, to be ‘ethical’ you have to ask yourself why you’re doing something, not just rely on the existing structures around you – in many people’s eyes the underpinning principles of those structures (balance etc.) aren’t fit for purpose anymore. You also need to be happy that you’ve thought deeply about it, and be prepared to live with the consequences.

I realise that by this definition, it could be argued that Katie Hopkins is actually quite ethical. On two of my counts. Maybe. But she falls foul of the last one. Because no matter how existential your own reasoning may be, as a human being (not a journalist) you still have a duty of care – we’re not Iain Duncan Smith! So if what you do intentionally causes harm to others and you know it will, that’s not ethical.

Here’s a less contentious/cleaner version for your newsroom.

DIGITAL JOURNALISM ETHICS IN A NUTSHELL (1)

(*I’m using idiot and dick interchangeably here) 

Ernest Hemingway on Adblocking

Whilst doing my daily read-around of various things, I found myself at Forbes’ website. I was greeted by the usual pithy ‘quote of the day’ pop-up, but with an added element – it was asking me to turn off my adblocker. To be honest, I didn’t even know I had one turned on.

Forbes Welcome

What struck me in this instance was not the message; I get the reasoning and I hear the arguments for and against this strategy. What got me was the juxtaposition of the quote and the request. Problem and solution all in one.