Honk, Honk! Automating the Counting of Car Horns to Tell a Story About Road Safety | Field Test by David S. Joachim

The honking is driving me crazy. When we moved into our home in Pelham, N.Y., two years ago, we knew we were choosing a busy street — Boston Post Road, a part of U.S. Route 1, which traces the path of the colonial-era postal route from New York to Boston. The street has a double-yellow line and is a conduit between two major highways, the Hutchinson River Parkway and Interstate 95.

Soon after we moved in, though, we noticed a problem. The road on our side of the street, traveling eastbound from the Hutch to I-95, narrows from two lanes to one just before traffic reaches my house (see oncoming traffic ⬆️ and map ⬇️). The speed limit is 30 miles per hour, but that doesn’t stop motorists from speeding up to battle for the lead position as they approach the merge into one lane. This dangerous game of chicken often results in near-misses, punctuated by drivers leaning on their horns. It’s quite jarring when it happens, I can tell you.

[Image: BostonPost.jpg]

My local government, the Village of Pelham Manor, was responsive after I called attention to the problem. Local officials added signs warning eastbound motorists of the approaching merge. That has helped, but the problem persists. We still hear close calls all the time.

Village records show no accidents at that intersection, so the officials say they can’t justify further intervention. The trouble is, I haven’t been able to prove how treacherous that stretch of road is. My evidence is purely anecdotal. I’d love quantitative proof, but who has the time to sit by the road for hours at a time, listening for car horns and tallying the honks?

A computer does, that’s who! I got to wondering: What if I could assemble a few circuits and write some code that would listen for instances of honking and keep a running count with time stamps? What if I equipped this box with its own storage media and power supply and made it small enough to affix to a utility pole or other permanent object by the side of the road? What if I made several of these sound sensors and placed them at multiple locations along Boston Post Road, to test whether near-accidents are truly more frequent in front of my house than they are down the road from me, as I suspect?

My hypothesis is that this sound sensor could be programmed to distinguish a car horn from other loud noises and that the resulting data would tell a story about the potential danger of a traffic pattern before tragedy strikes.

Equipment

The hardware components proved to be rather simple, inexpensive and adaptable, but I didn’t know that at the beginning. I didn’t even know where to start, so I consulted several people with varying levels of expertise. They included Rick Lehrbaum, a longtime technology journalist and the founder of a website called DeviceGuru; Jason Hillman, a nephew with an engineering degree; SparkFun Electronics customer support; and Professor Dan Pacheco of Syracuse University, the instructor for this course. After cross-checking their advice and reading about (somewhat) similar do-it-yourself projects, I purchased the following equipment from SparkFun ⬇️:

[Image: Unknown.png]

The Arduino Uno is the brains of the operation. It’s a flexible microcontroller board that can be expanded with various input devices (temperature, moisture and sound sensors, for example) and output devices (LED lights, speakers and storage cards, for instance). Code, in the form of “sketches,” is written using open-source software running on a standard computer and then uploaded to the Arduino Uno over a USB cable. The sketches give the Arduino Uno and the add-on devices their instructions. The Uno is powered by the USB cable from a computer or by a 9-volt battery.

A breadboard is used to create prototypes, or temporary circuits, without having to solder. The existence of this product was welcome news to me.

The SparkFun Sound Detector is a simple microphone with three outputs. “With headers” means it comes with pins attached to add to the breadboard, again to avoid soldering. Thank goodness.

The Qwiic OpenLog is a data-storage device that uses microSD cards. The Qwiic cable connects the OpenLog device to the Arduino Uno. The 9V battery holder is meant to provide a portable power supply to my car-horn sensor. The jumper wires are used to create circuits between the Arduino Uno and the add-on “shields” for electric current and data transfer. And the resistors limit how much current flows through parts of the circuit, in measured increments.

I decided to wait before buying a smaller breadboard to miniaturize the device and a case to protect it from the elements. First I had to see if I could make a prototype do something. Anything.

Assembly

My first challenge was that I knew nothing about building circuits. Zero. I must have been absent the day they taught that in elementary science class. After reviewing several dozen schematics to try to understand the basics of delivering power to the breadboard, and after hours of missteps, frantic text messages and curse words, I finally managed to run power to the breadboard using a 9-volt battery. This felt miraculous. The LED light on the sound detector is responding to my voice here. ⬇️

[Image: 1.jpg]

Then, to simplify things, I tried to run power to the breadboard using the Arduino as a daisy-chained intermediary, and that worked too. The Arduino is powered by my laptop via USB here. ⬇️

[Image: 2.jpg]

Next I needed to connect the Qwiic OpenLog microSD device to the system (it’s the small device in the bottom-right corner with the LED light activated ⬇️). I was able to get power to it, but because the online instructions didn’t match the printing on the device, I had to guess about which wires to connect for the data input and output. The instructions had the TX connected to the RX, and the RX connected to the TX (the standard crossover for serial wiring, since one device’s transmit pin feeds the other’s receive pin), indicating to me that those were the data input and output wires. I moved on, comfortable that I’d have a chance later to figure out which was which. A mistake like that wouldn’t be fatal in the same way that mixing up the positive and negative power wires might be.

[Image: 3.jpg]

Software

The next step was to get acquainted with the concept of “sketches,” which are the mini-programs that tell the hardware what to do. They’re written inside the open-source Arduino Integrated Development Environment (IDE). Eventually I’ll need the equipment to perform a series of operations — listen for sound, identify a car horn and record the honk to the storage device. But first, baby steps.

To my frustration, even the simplest built-in sketches (like the “Blink” program that lights up an LED for defined increments of time ⬇️) wouldn’t load onto the Arduino Uno. The code compiled correctly, as confirmed by the Arduino IDE’s code-checker, but for some reason it couldn’t be sent to the board. The error message wasn’t helpful. I reinstalled the Arduino IDE software, thinking that maybe the directory paths got messed up when I moved folders around, but still nothing.

[Image: Screen Shot 2018-09-24 at 7.55.58 PM.png]

Poking around an Arduino forum, I learned that a preference setting called “Show verbose output during: upload” isn’t checked by default in this software but is needed to diagnose this kind of error. So I checked it. That cleared up some of the problem — I could now see the port that the board was connected to, but the Arduino still wouldn’t accept my sketches.

To troubleshoot further, I took out my wife’s laptop and duplicated the setup I had with my laptop. Success! The board was visible on the serial port, and the code for the “Blink” sketch uploaded successfully to the board, causing the LED indicator light to blink for one second, turn off for one second, and then repeat.
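For the record, the stock Blink example (File > Examples > 01.Basics in the IDE) is only a few lines long, and every sketch shares its two-part shape: setup() runs once at power-up, then loop() repeats forever.

```cpp
// The Blink example that ships with the Arduino IDE, lightly commented.
void setup() {
  pinMode(LED_BUILTIN, OUTPUT);    // the Uno's on-board LED
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH); // LED on
  delay(1000);                     // wait one second
  digitalWrite(LED_BUILTIN, LOW);  // LED off
  delay(1000);                     // wait one second, then loop() runs again
}
```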

Now that I was able to upload sketches successfully, it was time to see what I could make the sound detector do. On the advice of Professor Pacheco, I read a healthy chunk of a book called “Arduino Workshop: A Hands-On Introduction.” This crash course gave me the coding basics and helped me understand the syntax of sketches. With this foundation, I could analyze pieces of code on public forums with the hope of finding the right combination of instructions to make my project work.

I then went back to basics by consulting the SparkFun Sound Detector Hookup Guide, which instructed me to reconfigure my wiring:

(Sound Detector → Arduino)

  • GND → Supply Ground
  • VCC → Power supply voltage between 3.5 and 5.5 Volts
  • Gate → Pin 2
  • Envelope → A0

Then I loaded the sample sketch on that page, which demonstrates the sound detector’s two main outputs. The gate is a binary indicator: it registers “high” when sound is present and “low” when it detects none, with the “high” condition triggering an LED to light up. The envelope measures the amplitude of a sound and sends the numerical level to a serial monitor that is part of the Arduino IDE software.
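SparkFun’s sample sketch is more elaborate than this, but its core behavior can be paraphrased in a few lines. Here is my simplified reconstruction (not SparkFun’s exact code), using the pin assignments from the list above:

```cpp
// Simplified paraphrase of the hookup guide demo, not SparkFun's exact code.
const int PIN_GATE = 2;       // gate output: HIGH while sound is present
const int PIN_ENVELOPE = A0;  // envelope output: amplitude of the sound
const int PIN_LED = 13;       // the Uno's on-board LED pin

void setup() {
  pinMode(PIN_GATE, INPUT);
  pinMode(PIN_LED, OUTPUT);
  Serial.begin(9600);         // open the connection to the IDE's serial monitor
}

void loop() {
  // Mirror the gate on the LED: lit whenever the detector hears something.
  digitalWrite(PIN_LED, digitalRead(PIN_GATE));

  // Report the amplitude (0-1023) to the serial monitor.
  Serial.println(analogRead(PIN_ENVELOPE));

  delay(100);                 // roughly ten readings per second
}
```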

Ultimately I’ll want the envelope to deliver information about the intensity of sound to my microSD card rather than light up an LED. But for now the serial monitor in the Arduino IDE is sufficient to collect and present the output, because my laptop is connected to and providing power to the board.

So that code would, I hoped, take care of the input side of things. Now I needed to replace the output instructions in my sketch so that, rather than merely lighting up an LED, it would send the data about the intensity of sudden sounds to the serial monitor.

I found some code that seemed suited for that operation on a site called TheoryCircuit, on a project page for a device designed to “amplify the sounds of door knocks, claps, voice or any other sounds loud enough to [be] picked up” by the SparkFun Sound Detector. After pasting the output portion of that sketch into my sketch and then making some fixes to the code so that it wouldn’t conflict with my existing code, I was able to get the serial monitor to report instances of loud sounds. When I clapped near the microphone, the serial monitor would read out, “Knock Knock.” ⬇️

[Image: knock.png]
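Stripped of the parts I didn’t need, the borrowed output logic amounts to a threshold test on the envelope reading. This is my reconstruction; the pin matches my wiring, but the threshold value is an illustrative guess, not TheoryCircuit’s exact figure:

```cpp
// Reconstruction of the knock-detector output logic; threshold is illustrative.
const int PIN_ENVELOPE = A0;
const int KNOCK_THRESHOLD = 200;  // envelope level that counts as a loud burst

void setup() {
  Serial.begin(9600);
}

void loop() {
  if (analogRead(PIN_ENVELOPE) > KNOCK_THRESHOLD) {
    Serial.println("Knock Knock");
    delay(300);                   // crude debounce so one clap isn't reported repeatedly
  }
}
```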

Next I found a project on GitHub in which a developer created a decibel-level meter with a numerical output to an LCD screen. I modified her sketch to send a time-stamped notification to the serial monitor whenever the microphone picked up a noise above 80 decibels. That’s the low end of the intensity range for car horns; they can go as high as 110 dB at close proximity. The modified sketch seemed to work. ⬇️

[Image: db.png]
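My modified version boiled down to something like the sketch below. Two cautions: the conversion from the envelope’s raw analog value to decibels is a crude linear approximation I’m assuming for illustration, which would need calibration against a real sound-level meter; and millis() only counts time since power-on, so wall-clock stamps would require the serial monitor’s timestamp option or a real-time clock module.

```cpp
// Approximation of my modified decibel logger, not the GitHub author's code.
// The raw-to-dB mapping is an uncalibrated linear guess.
const int PIN_ENVELOPE = A0;
const int DB_THRESHOLD = 80;  // log anything estimated above this level

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(PIN_ENVELOPE);   // 0-1023
  int db = map(raw, 0, 1023, 30, 140);  // rough guess: quiet room up to very loud

  if (db >= DB_THRESHOLD) {
    Serial.print(millis());             // milliseconds since the board powered on
    Serial.print(" ms: ~");
    Serial.print(db);
    Serial.println(" dB");
    delay(500);                         // avoid counting one honk many times
  }
}
```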

I found other promising sketches, too, like one titled “Measure Sound/Noise Level in dB with Microphone and Arduino.” But this one and others like it required additional hardware, in this case a “low voltage audio power amplifier.” It’s possible that I do need such a device to make up for the low power of what one commenter called a “cheap little microphone.” That chip is sold by Texas Instruments for just 80 cents. But I didn’t have time to complicate the project, so I moved on.

In fact, at this point, with time running out, I made the executive decision to simplify the project further by setting aside the OpenLog microSD component, relying instead on the serial monitor to present the output on my laptop.

Now I was almost ready to take my prototype outside to the street. First, I conducted a few tests indoors by clapping and screaming close to the microphone. Each sound burst was reported successfully in the serial monitor.

So it was time to go outside. ⬇️

[Image: DJsidewalk.jpg]

Experiment

As soon as I set myself up on the sidewalk and started the initial test, I knew something was wrong. The serial monitor was going off constantly. I raised the threshold to 90 dB in an attempt to decrease the microphone’s sensitivity, and it still kept going off. Then I tried 100 dB and even 150 dB. Same problem. ⬇️

[Image: Screen Shot 2018-10-02 at 8.00.20 PM.png]

The most suspicious thing about the number pattern, to me, was that the output readings weren’t fluctuating, whereas the noise on Boston Post Road fluctuates widely.

Even back inside the house, the serial monitor was scrolling like a gas pump.

As I was rebooting, I noticed that the ground wire into the breadboard had come loose, which might have explained the erratic readings; a disconnected ground can leave an analog input “floating” and returning junk values. Upon reboot, the mic was much less sensitive, logging only my claps but not any ambient sound.

I went back outside with settings at 80 dB, this time on my porch steps, about 40 feet from the road, rather than on the sidewalk ⬇️. For one thing, I wanted to see just how sensitive the microphone was. For another, I had noticed during my earlier test that the local police were taking an interest in what I was doing, and at this stage I didn’t want the hassle.

[Image: IMG-6351.JPG]

Now, ordinary ambient road noise wasn’t registering in the serial monitor. That was a good sign. I thought it made sense at that point to stick around to see what did register, and to await my first honking horn.

I clapped my hands to give myself a “start” timestamp of 16:49:32.997, and then I waited.

From there I manually recorded any loud noise I heard, giving a 1 to any sound that was picked up by the serial monitor, and a 0 to any loud sound that I heard but wasn’t picked up by the serial monitor.

For the next two hours or so, between around 5 p.m. and 7 p.m., here’s what I recorded manually:

Clap: 1
Multiple cars passing: 0
City bus: 0
City bus: 0
Heavy truck: 0
Double-long city bus: 0
Big diesel school bus: 0
Tricked-out Honda Civic with no muffler: 0
Utility truck: 0
Ridiculously loud double city bus: 0
Acorn hitting gutter directly above: 0
Giant Pepsi truck: 0
Double city bus on this side of street: 0
UPS 18-wheeler: 0
Short honk, opposite side of street: 0
Harley: 0
Screaming brat on sidewalk: 0
Loud bang as truck passed (triggered LED light but no tally): 0
Cawing bird of prey (I think): 0
Convertible with radio blaring: 0
Car transport carrying multiple vehicles (triggered LED light numerous times but no tally): 0
Tractor-trailer: 0
Loud UPS truck: 0

Frustrated, I texted my wife, who was heading home from a birthday party with my kids. I asked her to honk on my signal as she pulled up the driveway. And that’s what she did ⬇️.

[Image: IMG-6361.JPG]

First honk: 0
Second honk: 0
Third honk: 0

Then she said, “Want me to honk again?”

I replied, “No, that’s OK.”

To my shock, my voice, at a moderately loud tone, set off the serial monitor. From this I theorize that proximity matters a lot, and that it’s quite possible that an amplifier may, indeed, be needed to detect honks from dozens of feet away.

Working from the premise that proximity was important, I moved to my backyard for a more controlled test of whether the serial monitor would report an instance of a honking horn from close range ⬇️.

[Image: IMG-6362.JPG]

At this point, I figured that I would start again at 80 dB and then move up 10 dB each time to see where the sensitivity of the microphone kicked in. As it turned out, the 80 dB threshold worked: Two honks generated three hits. Then 90 dB, 100 dB and 110 dB also generated two or three hits each for two honks of about a second in duration.

So I decided to go up to 200 dB and then work my way down, to see if the microphone was merely picking up any loud sound or if it would filter out sounds that were too loud to be car horns. No hits at 200 dB, none at 190 dB or 180 dB or 170 dB or 160 dB. But then at 150 dB I got one hit for two honks. And then I got two hits for two honks at 140 dB ⬇️.

[Image: Screen Shot 2018-10-02 at 8.40.39 PM.png]

Unconvinced by the findings of that test, a couple of days later I ran the same controlled test again in the backyard. As before, I started at 80 dB and planned to go up 10 dB each time to see where the sensitivity kicked in. Once again, 80 dB worked, recording three hits for two honks ⬇️.

[Image: 80.png]

Then 90 dB, 100 dB and 110 dB again generated two or three hits each for two honks of about one second in duration.

So, as before, I went up to 200 dB and worked my way down. I recorded nothing at 200 dB, nothing at 190 or 180 or 170 or 160. But then at 150 dB I got one hit for two honks ⬇️.

[Image: 150.png]

And then I recorded two hits for two honks at 140 dB and lower ⬇️.

[Image: 140.png]

This series of field tests seemed to suggest that my improvised honk detector was at least able to listen for sounds, count each sound burst within a range of intensities and reject sound bursts outside of that range.
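If that reading is right, the counting logic is effectively a band check rather than a single cutoff. A minimal sketch of the idea, with both bounds made up for illustration:

```cpp
// Hypothetical band filter: count a burst only if its estimated level falls
// inside a plausible car-horn range; bounds and mapping are illustrative.
const int PIN_ENVELOPE = A0;
const int DB_MIN = 80;        // quieter than this is ambient noise: ignore
const int DB_MAX = 120;       // louder than this is probably not a horn: ignore
long burstCount = 0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int db = map(analogRead(PIN_ENVELOPE), 0, 1023, 30, 140);

  if (db >= DB_MIN && db <= DB_MAX) {
    burstCount++;
    Serial.print(millis());
    Serial.print(" ms: burst #");
    Serial.println(burstCount);
    delay(500);               // crude debounce between bursts
  }
}
```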

Was this the right range? Was the code really measuring decibels with any precision? I can’t be sure without further testing. But the system’s ability to isolate and memorialize sound bursts within a set range felt like a genuine success, and perhaps the beginning of something useful.

Conclusions

The concept of an electronic sensor that can listen for a certain kind of sound event and collect data about that event is a commercial reality — witness ShotSpotter Inc., the vendor whose gunshot detector listens for the acoustic signature of gunfire. It’s used by law enforcement across the country and was the basis of an investigative project by The Washington Post.

The commercial success of ShotSpotter, together with my small success in cobbling together a device that would listen for car horns, tells me that my approach may be viable.

But I suspect that I would need to calibrate the device to be much more precise in identifying the particular signature of a car horn while also accounting for differences among horn types.

This would mean adding capabilities to the device, such as an amplifier and a separate device to isolate frequency. Then I would want to fine-tune the code and the positioning of resistors to regulate the current. Through testing, testing and more testing, I’d seek to find the optimal combination of settings to minimize missed car horns as well as false positives. To be a viable product, it should either always work or at least have a very low, and predictable, failure rate.

Finally, to tell the story about the potential danger of the traffic pattern in front of my house, I’d want to miniaturize the device, attach a battery and microSD reader, and weatherproof it. This way it could operate independently for many hours at a time. Then I’d want to duplicate the system at least two more times, I think, so that I could run three of these devices concurrently at different sections of Boston Post Road in Pelham. One possible test would have the three devices listening for at least 12 hours, from around 6 a.m. to 6 p.m., to account for the morning and evening rushes.

Once the device is proved reliable, it could be applied to any number of traffic situations in all kinds of settings, for use by journalists to report on the potential hazards or by local governments to take steps to prevent accidents.▪️

Think the Last 10 Years Have Disrupted Journalism? Wait Till You See the Next 10

[Image: hologram-1506020_1920.png]

Photo via Pixabay

I have a way of looking at time that may seem strange. For example, if you tell me that the web browser was introduced 25 years ago, I’ll be tempted to tell you that 25 years before that was 1968, the year of the first manned Apollo mission, the assassinations of the Rev. Dr. Martin Luther King Jr. and Robert Kennedy and the installation of the first automated teller machine. It’s my way of thinking about the passage of time, and the extent to which progress is accelerating or not. My wife makes fun of me.

So if I’m asked where digital media will be in 10 years — and how a journalist like me will have to adapt over that time — I look first to 10 years ago. Facebook had 100 million active monthly users at that time, compared with more than 2.2 billion today. The iPhone had just been introduced, ushering in the mobile-content era. Broadcast and cable TV advertising was still a much bigger business than digital, whereas today digital is bigger. The Rocky Mountain News, Seattle Post-Intelligencer and Pittsburgh Tribune-Review were still being published.

Those and other changes have felt jarring to me. People now experience most of their digital content (news and otherwise) on mobile devices rather than computers. That alone has changed how newsrooms think about content delivery.

By that measure, the next 10 years are sure to be even more revolutionary for the news business. Virtual reality, augmented reality, 360-degree video and other immersive technologies are maturing to the point that big newsrooms are producing exceptionally high-quality content using them.

Once I experienced immersive journalism, I wondered how long it would be before conventional screens go away. Screens make no sense long term. They give us headaches and neck aches and eye problems and repetitive-strain injury. They’re also passive. Once we’re able to throw a hologram into our living room and experience a story — fiction or nonfiction — as a virtual participant, won’t a flatscreen monitor seem quaint?

The same goes for work. Instead of interacting with computer mice and monitors and keyboards, we’ll have a tactile relationship with data, using motion and voice commands and artificial intelligence to lead us to the information that is most relevant to us in that moment.

Any company that delivers news and data for a living will be disrupted by this. In turn, anyone who does the kind of work I do — sifting through court documents and other public records and then crafting stories using those documents — will be handling and delivering information in whole new ways.

We go where our audience goes. Twenty years ago, that was the web. Ten years ago it was social media. Ten years from now, there’s a good chance that it’ll be VR and AR.

Enhancing Coverage of a Half-Marathon Charity Event Using Drone Video

[Image: Screen Shot 2018-09-08 at 10.49.58 AM.png]

The Pelham (N.Y.) Half Marathon and 10K is an annual charity event, held over Thanksgiving weekend, that has grown from nine runners in 2011 to an expected 1,600 participants this year. Net proceeds from registration fees and sponsorships go to the Pelham Civic Association, which helps neighbors in need. I’m a proud member of this organization.

The 13-mile route covers nearly the entire town of Pelham, but local news media haven’t had a way to capture that sprawl. The promotional media, particularly the videos, are lively and inviting but are limited to what can fit in a frame from the ground.

An inexpensive drone with video capability would enhance this coverage significantly, giving viewers a sense of proportion and context. More than that, video from the sky might encourage a whole different type of engagement, with viewers trying to figure out where their houses and other landmarks are on the course.

There’s been some discussion on the local Facebook discussion groups about doing this. The hurdle, according to the discussions, isn’t cost but rather the FAA’s certification requirements. If I’m reading the aeronautical chart correctly on vfrmap.com, Pelham is within six nautical miles of LaGuardia Airport, requiring drone operators to contact the air tower for any flights above 500 feet. Commercial flights terminating at LaGuardia often use a flight pattern that takes them directly above the Town of Pelham.

Even so, drone photography of the event could be field-tested in advance by a licensed operator using a flight path below 500 feet. Once an ideal flight path is charted, the drone could be programmed to follow the same route during the event itself, as long as the operator maintains line of sight during the entire flight, in keeping with government rules.

The hypothesis would be that aerial images could give Pelham residents a better sense of place and a compelling view of the race to complement the video and still photography from the ground.

Mind-Blowing: Using Brainwave Measurement for News Reporting

[Image: Screen Shot 2018-09-01 at 1.02.31 PM.png]

We’ve all seen the brain-scan images that are supposed to tell us something about which parts of our brain are most and least stimulated during certain activities. Doctors use the technology, known as electroencephalography, or EEG, to test for brain disorders like epilepsy or to gauge the extent of trauma from a brain injury, according to the Mayo Clinic. EEG is also often used in attempts to better understand how certain stimuli, like video games, affect our brains.

Almost all of the news reporting involving EEG technology comes from studies performed by academic or medical researchers in a lab setting. But what if journalists could conduct their own brain-activity studies pegged to the news?

Enter the MindWave Mobile 2 from a company called NeuroSky Inc. It looks like a phone headset, but it measures the alpha and beta waves emanating from the brain and feeds that data to a mobile phone app.

The marketing materials for the product say that it can “monitor your levels of attention and relaxation” in response to music or other prompts. In addition to 100 or so brain-training and educational apps, there are free developer tools to allow users to devise their own uses for the product.

The MindWave Mobile 2 could be useful as a way to crowdsource a study about the effects of social media on our brains. A reporter could seek volunteer readers to wear the headset during certain hours a day for a certain amount of time.

To capture a before-and-after picture of the data, the reporter could have two groups wear the headset during defined periods for a month. Neither group would use social media during that month. Then Group 1 could start using social media again, while Group 2 continued to abstain. Then attentiveness and other qualities could be compared between the two groups.

This technology is highly imperfect, and its data shouldn’t be seen as conclusive. But on a micro level, it could be used to inform the policies of school districts. For example, if my district were proposing to do away with physical books in favor of digital books, I might want to conduct a small study on local schoolchildren, comparing the brainwaves of those using paper books vs. e-books. The results might influence the local debate over whether the additional screen time is healthy or not for that community.

Beep Beeeeeeeep!!! Using the SparkFun Sound Detector to Test a Traffic Theory

[Image: unnamed.jpg]

I haven’t been in local journalism since the mid-1990s, when I left Newsday to join a national tech magazine. Ever since, my jobs have been national or international in scope.

But for my field test for Emerging Media Platforms, I’m planning to go local. HYPERlocal.

Like, my house local.

My family and I moved into this house in Pelham, N.Y., two years ago. We knew we were choosing a busy street — Boston Post Road, also known as Route 1, which traces the path of the colonial postal route from New York to Boston. It has a double-yellow line and is a conduit between two major highways, the Hutchinson River Parkway and Interstate 95.

Soon after we moved in, we noticed a problem. The road on the south side (our side), heading east from the Hutch to I-95, narrows from two lanes to one just before traffic reaches my house. Even though the speed limit is 30 miles per hour, motorists often battle for the lead position as they approach the merge into one lane. This dangerous game of chicken often results in near-misses, punctuated by drivers leaning on their horns.

Soon after we moved in, I petitioned the Pelham Manor village board to address the problem. The village responded by putting up a temporary speed sign showing motorists how fast they were going, and that slowed down traffic a bit. Then the village installed a sign about a block short of my house, warning drivers that a merge was approaching.

This has helped, but we still hear close calls on a regular basis. The village is reluctant to take any additional action, citing the fact that no accidents have been reported at that section of road.

I call BS. To me, it’s only a matter of time before someone gets killed or seriously hurt along this half-block stretch of road. We still hear a chorus of car horns several times a day, many of which sound like close calls. But until now we didn’t know of a way to prove that so many close calls were occurring, short of sitting on our stoop for hours on end, keeping a log.

Now I’m hoping to use the SparkFun Sound Detector to strengthen my case that more intervention is needed to mitigate the danger in this section of Boston Post Road. This device measures the amplitude of sound waves. One reviewer confirmed that the device could be set to measure noise above a certain decibel level.

Car horns have a unique signature, both in terms of decibel level (110 decibels at 10 meters) and other qualities like duration. My thesis is that I could set up several of these sound detectors along my stretch of Boston Post Road, with settings to specifically pick up car horns and perhaps even multiple car horns blowing at the same time. After a set period of time, to be determined, I would then compare the number and frequency of car horns blowing to illustrate just how treacherous (or not!) this section of Boston Post Road is compared with the other sections.

A preliminary review of the product’s instructions and reviews tells me that this citizen-data project can be accomplished at minimal cost, though the learning curve may be steep.

 

How It Feels to Be the ‘Enemy of the People’: A 360 Video Proposal

[Image: Screen Shot 2018-08-22 at 8.58.22 AM.png]

Other presidents — all presidents, in fact — have grumbled aloud about their treatment by the journalists who cover them. But Donald Trump is different. For him, having the biggest microphone in the competition of ideas isn’t enough, so he sows doubt and mistrust about the people and institutions who report the truth about him. This has the (much-desired) effect of turning truth-telling into a battle of he-said, she-said. Who cares what the truth is, Trump seems to say. The more important question is, Who’s winning?

Calling us “fake” and the “enemy of the people,” Trump “has taken what has been a longtime Republican complaint about media bias in the mainstream and amped it up by a thousand,” the New York Times media columnist Jim Rutenberg has said.

This may explain why the news media are the bogeymen of virtually every Trump rally in his never-ending campaign. This was true in 2016, and it’s true now. Some of Trump’s most, uh, spirited supporters seem to interpret his words as an invitation to heckle, jeer or even intimidate journalists at these rallies. Some reporters on the receiving end have said they worry that the intense hostility could one day turn violent.

We’ve all seen video of some of these clashes — as a refresher, click on any of the links above. But I suspect that a 360 video of such an encounter might provide a truer sense of what it feels like to be verbally attacked while you’re trying to do your job.

An ideal field test would provide a 360 view of a minute or two of a Trump rally, from the perspective of the traveling press corps. Being immersed in the experience would allow a viewer to choose where to look — and where to listen. This field test could include the use of directional microphones, with one pointed at Trump at the lectern and another pointed at the hecklers. This would allow the volume of each to be raised and lowered depending on where the viewer is looking, providing a sense for the distraction created by the hecklers.

In the modern world of self-selected media, I don’t propose that an immersive experience like this one would change hearts and minds on any measurable scale. And I’ll readily admit that there are many classes of people more worthy of empathy than journalists. But I suspect there’s a sizable number of Americans who are on the fence about whom to believe, and who have begun to caricature or even dehumanize reporters in their minds. An immersive video that gives viewers the sense of being there might get through to some of these people.

A survey would seem to be the best way to measure the effectiveness of such a video experience. Respondents could be asked about their sentiments toward reporters and their work before and after experiencing the video. The results could be further parsed by sorting the results according to where viewers spent most of their time looking and listening.

News in 3D: Transporting Audiences Using Reality-Capture Technology

[Image: Screen Shot 2018-08-11 at 8.02.27 PM.png]

This reality-capture stuff is trippy. Using the Trnio app, I made a 3D model of a municipal garbage can on the corner of Lexington Avenue and East 57th Street in Manhattan, about a block from my office. And using the Game Avatar app, I made a quasi-3D model of my head. The app scanned only my face and filled in the rest, so it wasn’t really a 3D scan.

I could think of at least a few ways that this kind of reality-capture technology could help tell news stories. When a new bridge is being built, for example, one phase before construction is for municipal engineers or private contractors to create a miniature model. Because these are often scale models, they could be scanned and reproduced digitally so that viewers could get a glimpse of what the bridge might look like from their vantage point. The same could be done with new skyscrapers or sports stadiums.

Using aerial photography, virtual models could be made of almost any location of interest: a war zone, a walled-off city like Pyongyang, an African jungle where elephants are slaughtered for their tusks, the site of the Olympic Games.

It seems clear that field testing the making of a virtual model from a miniature model would be easier and less costly than field tests involving flyovers. Some of that extra expense could be minimized using unmanned drones.

360 Video to Tell a Business Story? Let Me Count the Ways

[Image: adult-african-afro-1059115.jpg]

Photo via Pexels

At this stage in my career I’m not about to do a 180 and specialize in 360. But as another potential tool in the proverbial journalistic toolbox, I can envision some ways that 360 video could enhance the coverage of business and finance.

One would be to better represent physical context. For example, a traditional photo or video of a commercial port or a big farm doesn’t do justice to their scale. Sure, the video camera can pan, but even then we’re only seeing a rectangle at a time. Immersing a viewer inside an Amazon warehouse, for example, would tell a much more complex story about Amazon’s innovations than would a traditional video.

In these cases, the viewer can look at whatever s/he wants in the scene. This is both good and bad from a content creator’s perspective, I suspect. On one hand, it’s a powerful new capability that we’re putting in the hands of our audiences. On the other, we can’t express journalistic value judgments as easily. Think about the tools we use in traditional video: editors use B roll to create a mood and direct the viewer’s attention to focal points. Video producers and editors relinquish that power to some degree with 360, it would seem.

I can think of several 360 story types that I’d want to see. We’ve already seen some uses of immersive video to bring greater attention to the effects of global warming. Another would be to show what it’s like inside a Chinese factory while it’s operating. Dangerous work conditions are hard to capture in words or rectangular images. What’s it like to work in a plant in Shenzhen? Just how difficult are the conditions? Even a short 360 video — taken surreptitiously, one presumes — would speak volumes.

Also, health care coverage could use more intimate portraits of people who must cope with chronic illness and uncertain finances. When these stories are told in print, they’re too easily forgotten. But placing viewers into a person’s life encourages empathy, and that’s the kind of information and experience that leads to changes in policy. Knowing about a problem is one thing; being made to feel it is quite another.

Other kinds of business stories for 360 could be classified as luxury porn — a tour of a Scottish mansion, a look around a private mega-yacht, a flyover of Pebble Beach Golf Links, a seat on a South African safari. How about a view from inside the ultra-exclusive World Economic Forum in Davos?

That stuff is not particularly serious, but luxury has long been an essential category of coverage, and advertising, for any business-oriented news organization. Real estate, both for news and for advertising, seems perfect for 360 video as well.

That Time My Kids Learned New Curse Words, Thanks to Unity

Unity just kicked my ass. Professor Pacheco’s tutorial video had made it all look so easy. And for a moment here or there, it did seem pretty easy, and I was able to actually do some creating. But then the pinwheel would start spinning, the processor would churn and Unity would go BOOM.

And the curse words would fly.

The first installer downloaded just fine. But then when I tried to run the application after installation, Unity told me there was a newer version that I needed to download instead. Why didn’t Unity tell me that the first time? We’ll never know.

In all, I had to download and install Unity no fewer than six times.

Unfortunately I don’t have a lot to show for the roughly nine hours over two days that I spent trying to make something happen with Unity. Ultimately I was able to get to this point, but I couldn’t make my character move. All it could do was stand there and look at mossy hills.
[Image: Screen Shot 2018-07-31 at 7.02.14 PM.png]
I know, it’s not much. It’s not my best work, to be sure. But it’s all I got.

You won’t believe me, but the earlier scenes — you know, the ones I lost? — were much more sophisticated. But each time, whenever I’d hit “Play” to navigate around my scene, Unity would tell me that I couldn’t use that mode until I fixed the “compiler errors,” whatever those were. Unity showed me a list of files containing the error, each with a gobbledygook name.

How the hell do I fix those? Unity wasn’t telling.

So I belted out more curses — introduced my kids to the really bad ones this time. I’m a terrible father.

Then there was that whole Asset Store thing. Every time I downloaded the software, it would make a different set of assets available to me. The last download came with no assets at all. So what you see above came from the Asset Store, not from any preloaded assets.

Overall, it was a terrifically humbling experience. I blame it mostly on my seven-year-old Mac, which is just fine for my purposes generally but just wasn’t up to this task. One day soon I hope to take another shot at Unity, with the right gear.

The News Innovator’s Dilemma: Automation Shows We’re Learning

[Image: place-name-sign-1647341_960_720.jpg]

The news industry overall isn’t known for its forward thinking. We let a little bully named Craigslist steal our lunch money without putting up much of a fight. We resisted charging for access to our news, only to see people become accustomed to free. We even let “news aggregators” get away with virtual larceny and then claim more traffic than we had — for content that *we* had created.

One area where at least a few news organizations are confronting the Innovator’s Dilemma is in news automation. That’s computers writing the news. This has proven effective for producing simple stories using structured data — a ballgame recap, a corporate earnings story, a weather report.

The technology is rudimentary right now. Sometimes it even messes up simple stories. But the investment that news agencies like The AP and Bloomberg are putting into news automation shows that they’ve learned from past mistakes and are prepared to cannibalize at least part of what they do — because if they don’t do it, someone else will. In fact, others already are.

Of course, the prospect of algorithms writing the news strikes cold fear into journalists of a certain vintage. Yes, some roles will be taken over by these algorithms. But there are so many other stories out there that require arms and legs and human judgment and emotion.

Let the computers have the boring stuff while we humans go after the Next Great Story.