
Jacob Nieveen
[email protected]
December 6, 2012

Don Cripps
Electrical and Computer Engineering Department
Utah State University

Dr. Cripps,

I have attached the final report for my senior design project. This report gives details regarding a new type of piano synthesizer. The novel aspect of this synthesizer is that it gives the user the ability to control the tone of the synthesized piano in a manner that is not seen in today's synthesis devices. It allows the user to choose tones from four distinct pianos, and any mixture of these pianos. Another important aspect of the project is the dual-purposing of the hardware involved. The hardware will also be used as part of a research project called Direct Input Digital Speech Synthesis (DIDSS). Using the hardware for the DIDSS project allows me and the other researchers to focus our time and energy on more difficult and useful aspects of that project. I hope that you will find the contents of my report in order, and that as you read through it you will take the time to contact me with any questions or comments. Thank you.

Sincerely,
Jacob Nieveen

Senior Design Final Report

Variable Tone Piano Synthesizer

December 6, 2012

Jacob Nieveen

Instructor Approval

_______________________________________          _________________
Donald Cripps                                     Date
Electrical and Computer Engineering Department
Utah State University


Abstract

The Variable Tone Piano Synthesizer (VTPS) project was created to provide musicians

with a new type of piano synthesizer capable of producing a wider range of tones than other

synthesizers on the market today. The VTPS allows a user to play three octaves of notes with

tones chosen from four base piano tones, as well as any mixture of these tones. Generally

musical keyboards ship with a few preset piano tones. The VTPS also has a few preset tones, but

can create thousands more at runtime.

The user also has the ability to control the sound of the attack – to synthesize sounds

ranging from a hard key strike to a gentle tap on the keyboard.

The user controls the synthesizer with resistive pads and strips wired to a microcontroller.

This microcontroller communicates with a PC via USB, which gives the PC the necessary

information to synthesize the piano waveforms.


Table of Contents

1 – Introduction and Background ........................................................................ 1
2 - Review of Conceptual and Preliminary Design ................................................ 2
2.1 - Problem Analysis ....................................................................................... 3
2.2 – Decision Analysis ...................................................................................... 6
2.2.1 - PC Platform Selection ............................................................................. 7
2.2.2 – USB Raw HID Usage .............................................................................. 7
2.2.3 – PortAudio Library ................................................................................... 8
2.2.4 – Teensy Microcontroller ........................................................................... 9
2.2.5 – Synthesis Method ................................................................................... 9
3 - Basic Solution Description ........................................................................... 10
4 - Performance Optimization and Design of System Components ...................... 11
4.1 - Microcontroller Subsystem ....................................................................... 11
4.1.1 – User Interface ....................................................................................... 11
4.1.2 - Microcontroller Subsystem Hardware .................................................... 13
4.1.3 – Microcontroller Subsystem Software ..................................................... 15
4.2 - PC Subsystem .......................................................................................... 16
4.2.1 - USB Interfacing ..................................................................................... 16
4.2.2 - Audio Interfacing ................................................................................... 16
4.2.3 - Signal Processing .................................................................................. 17
5 - Project Implementation/Operation and Assessment ...................................... 25
5.1 - Teensy and User Interface Hardware Testing ............................................. 25
5.2 – PortAudio Testing .................................................................................... 26
5.3 – Signal Processing Testing ......................................................................... 27
5.4 – Full System Testing .................................................................................. 28
6 – Final Scope of Work Statement ................................................................... 28
7 - Other Issues ................................................................................................ 30
8 - Cost Estimate .............................................................................................. 31
9 - Project Management Summary .................................................................... 32
9.1 - Tasks Completed ...................................................................................... 32
9.2 - Personnel ................................................................................................. 33
10 - Conclusion ................................................................................................ 33


List of Tables

Table 4.1 – Pianos and their weights ................................................................. 22
Table 8.1 – Cost estimate of hardware .............................................................. 30


List of Figures

Figure 1.1 – VTPS layout .................................................................................... 1
Figure 2.1 – Top level overview ........................................................................... 4
Figure 4.1 – User interface layout ...................................................................... 11
Figure 4.2 – User interface hardware ................................................................. 13
Figure 4.3 – Pitch shifting ................................................................................. 18
Figure 4.4 – Two separate periods from the same piano waveform ...................... 19
Figure 4.5 – Synthesis of waveform from single string ........................................ 20
Figure 4.6a – Reconstructed waveform .............................................................. 20
Figure 4.6b – Typical piano waveform ................................................................ 20
Figure 4.7 – Volume and attack pad and its effect on the waveform ..................... 22
Figure 9.1 – Gantt chart .................................................................................... 31


1 – Introduction and Background

There are a large number of piano synthesizers on the market today. Generally these

synthesizers use the same layout as that of a piano, i.e., they have rows of black and white keys

corresponding to notes of different frequencies. This is a tried and true layout, which explains its

popularity. However, by using a different layout and using other types of user inputs, additional

features can be built into a synthesis device. This document gives details of a synthesizer with a

new user interface, the Variable Tone Piano Synthesizer (VTPS). This new interface varies

significantly from a standard piano layout.

The VTPS layout, shown in Figure 1.1, makes use of three resistive strips and two 2-d

pads.

Figure 1.1 – VTPS Layout


The strips will control the pitch of the output notes. There is one strip for each octave, and each

strip is divided into twelve subsections, one for each note in the octave.

One pad will control piano acoustics, allowing the user to choose the tone of the

synthesizer. This pad is the main point of interest in the VTPS. Most piano synthesizers on the

market today have a few pre-loaded piano tones, allowing the user to select from four or five

different types of pianos. The VTPS is different: it allows the user to select from literally

thousands of different piano tones.

The other pad will control volume and the strength of an “attack,” or the force with

which the synthesized piano string was hit. This feature allows for an extra measure of variability

in tone.

All these controls in conjunction allow for many types of piano sounds. This variability

makes the VTPS a useful addition to the world of musical instruments, and as such, a marketable

item.

In addition to its value as a musical instrument, the VTPS has a similar layout to the Direct Input

Digital Speech Synthesis (DIDSS) device, a separate project being researched at USU. The hardware used

in the piano synthesizer and an early prototype of the DIDSS device are the same. As such, both projects

benefit from the work done on the piano synthesizer.

2 - Review of Conceptual and Preliminary Design

This section describes the problem the VTPS was built to solve, as well as the reasons

for the design decisions that were made. It is subdivided into two

subsections: Problem Analysis and Decision Analysis.


2.1 - Problem Analysis

In order to allow a user to control the tone of a synthesized piano waveform, several

subsystems were interfaced. There are two major subsystems of the VTPS: the microcontroller

subsystem and a PC. The microcontroller subsystem has three major subsystems:

• The user interface

• Microcontroller and user interface hardware

• Software

The user interface subsystem deals primarily with the layout of the user controls and how

they are intended to function. The hardware subsystem includes all the hardware used in the user

interface, as well as the particular microcontroller used in this project. The software subsystem is

the code written for the microcontroller.

Like the microcontroller subsystem, the computer subsystem has three subsystems as well:

• USB interfacing

• Audio interfacing

• Signal processing

The USB subsystem communicates with the microcontroller via USB. The Audio interface

connects with the PC's audio hardware, and the signal processing subsystem applies many

processing algorithms to the piano waveforms. There are signal processing algorithms

involved in both initialization of the VTPS system and during playback.


A top level overview of the VTPS system is shown as Figure 2.1.

Figure 2.1 – Top level overview

These systems are based on the following specifications:

• The user interface is laid out as shown in Figure 1.1

• The Teensy microcontroller (ATMEGA32U4) is used to interface between the user input

devices and USB

• The computer subsystem is based around a Windows PC

• The Audio hardware of the PC is accessed by using the PortAudio library

• Piano waveform synthesis is done by recreation of recorded waveforms, as discussed in

Section 3


In addition to the preceding specifications, the following specifications were determined

to ensure proper functionality of the synthesizer:

• The sampling rate of the synthesizer is 48 kHz

• The microcontroller system reads the user input every 27 ms

• The USB system transmits user data every 27 ms

• The audio system's buffer updates every 27 ms

• The user interface allows the user to control tone, volume, and attack strength of three

octaves of piano notes

The first constraint sets the sampling frequency at 48 kHz. This sampling rate is

frequently used in audio applications, and is considered a quality sampling rate for audio. This

constraint affects only the PC subsystem.

The next constraint forces the system to respond “smoothly” when the user input changes

suddenly. This prevents discernible lags in the responsiveness of the system. To do this, the user

interface system must communicate with the PC at least every 27 ms, and the audio system is

updated just as frequently.

Originally, I expected that this communication between the PC and microcontroller

should occur much more frequently. My original design constraint for the system was that the PC

and microcontroller should communicate at least once every 5 ms. I was unable to implement the

system to meet this constraint, possibly because of speed restrictions from the microcontroller.

Despite this loosening of the constraint, the delay between user input and synthesizer output is

not noticeable to human listeners. The 5 ms constraint was not necessary.


The final specification deals with the user interface. In order to allow the user control

over tone, volume, and attack strength of synthesized piano notes, three 1-d resistive strips and

two 2-d pads are used. Their functions as part of the user interface are explained in the following

paragraphs.

The three 1-d strips correspond to three octaves of notes, with the standard 12 semitones

(pitches of notes) in each. When one or more of these strips is pressed, a piano note will play the

pitch corresponding to the location of the finger on the strip. Low notes correspond to the left of

a strip, and high notes to the right. These strips give the basic functionality that would be

expected in any musical instrument. These strips are shown in the upper right corner of Figure

1.1.

More interesting functions of the VTPS are implemented using the 2-d pads. The first pad

in the upper left of Figure 1.1 controls volume and velocity of the simulated note attack. When a

pianist plays a piano, he or she can create different tones and volumes by striking the keys in

different ways and with differing velocities.

The second pad, shown in the bottom of Figure 1.1, allows the user to synthesize tones

from pianos with differing acoustics. Differences in piano construction lead to differences in the

tones of the notes that these pianos produce. The synthesized waveforms are based on four

different pianos. Each corner of the second pad corresponds to one of these pianos. All the area

between the corners of the pad corresponds to a mixture of the tones of these pianos.

2.2 – Decision Analysis

I made many decisions while determining system specifications. The most noteworthy

decisions were the use of:


• A PC

• USB Raw HID

• The PortAudio audio library

• The Teensy microcontroller

• Synthesis by transitioning from period to period

These decisions will be discussed in detail in the following sections.

2.2.1 - PC Platform Selection

Possibly my most arbitrary design decision was to use a Windows PC as the platform. I

chose it because Windows is still the most common operating system, and most people have

access to a Windows PC.

However, there are alternatives to using a full-fledged computer, such as using a DSP

development board, a Raspberry Pi, or a tablet. The PC was chosen in part because the DIDSS

prototype, which uses the same user interface hardware as the proposed piano synthesizer, uses a

PC. The PC is a good choice because of its ease-of-use and my familiarity with it. Also, by using

my personal PC as the platform, there is no extra cost for platform hardware.

2.2.2 – USB Raw HID Usage

USB is by far the most common means of connecting a peripheral to a computer in use

today. It is used cross-platform and is well-supported in nearly any modern computer system.

Because of this, it was a natural choice for the VTPS project.


Communication between a microcontroller and a PC's serial port was considered, but

rejected, due to the antiquated nature of the serial port, and the slower speeds at which it runs. In

my initial thoughts about this project, I thought that a serial port may be easier to interface with

than USB. Upon discovery of the Teensy, all such thoughts were discounted due to the ease of

use of the Teensy with USB.

I also considered methods other than Raw HID for USB transfer, such as USB serial or

device-specific drivers. I chose Raw HID because of ease-of-use and because it has no important

drawbacks. As an added benefit, it is also highly portable. If I ever desire to port my project to

another platform, such as Linux or OS X, porting the USB capabilities will be simple.

2.2.3 – PortAudio Library

For much of the development of the piano synthesizer, I was using the Simple Fast

Multimedia Library (SFML) to interface with the audio hardware on my PC. SFML was

attractive because it seemed simple to use. However, as I tried to resolve audio issues I was

having with streaming through SFML, I realized that SFML has a major weakness in streaming.

There were many audio glitches and hiccups that I could not resolve easily (if at all), and I was

definitely pushing at the boundaries of SFML’s usefulness. One of the main problems I had with

SFML was that it required the streaming buffer size to be very large. I was being forced to make

a choice between having an unresponsive system (since the output did not update frequently

enough) and trying to make SFML behave better than was easily achievable.

Rather than try to get SFML to do something it was not designed to do well, I decided to

port my code to the PortAudio Library. This decision was made after realizing that reputable

programs, such as Audacity (an open source audio editing program) use PortAudio to stream. I


did not have any of the issues I had with SFML while using PortAudio. PortAudio allowed me

to make the buffer size as large or as small as I wanted.

Another option I had was to use the Windows Audio API. However, it is a lower-level

library than PortAudio, and as such, has a much more complicated interface. It requires a much

deeper understanding of the audio hardware and Windows operating system. It is also more

powerful than either SFML or PortAudio. PortAudio was chosen over it because of its high-level

nature, ease of use, and because it had sufficient power for my needs. Using the Windows Audio

API would have added hours of overhead to my project.

2.2.4 – Teensy Microcontroller

There are several types of microcontrollers and development boards that allow for

relatively easy USB interfacing. The Teensy was chosen because it is inexpensive and has all

required functionality. Of all the pre-built, development type USB capable microcontrollers I

looked at, the Teensy was the simplest to use and most inexpensive to buy.

2.2.5 – Synthesis Method

As will be discussed in detail in section 3, I used a method of synthesis that creates piano

notes by transitioning through recorded piano waveforms. This synthesis method was chosen

because of its simplicity and the fairly high-quality piano tones it produces.

This synthesis method is in contrast to a commonly used synthesis method called additive

synthesis. The idea behind additive synthesis is that any signal can be made by adding sine

waves with appropriate magnitudes and phases together. For certain instruments, such as organs,

this is an easy and good way to synthesize. Pianos, however, have highly variable and complex


waveforms, and as such additive synthesis becomes a much more difficult process to implement

well. I preferred my method over additive synthesis because it has the capability to produce more

natural sounding waveforms in the amount of time I allotted for this project.

3 - Basic Solution Description

The VTPS is divided into two subsystems, the microcontroller and PC subsystems, as

shown in Figure 2.1. The microcontroller subsystem has three subsystems as well: the user

interface, hardware, and software. The PC also has three subsystems: USB interface, Audio

interface, and Signal processing. This section gives a basic description of each of these

subsystems.

The user interface and hardware subsystems of the microcontroller system are closely

related. The user interface, consisting of three resistive strips and two 2-d resistive pads, is

connected to the ADC inputs of a Teensy microcontroller. The user is able to control the volume,

tone, attack, and pitch of a piano note by using these inputs correctly.

The Teensy is programmed to check for user data from the inputs, record the data, and

send it over USB to the PC subsystem. It waits for a command from the PC before it transmits

the data.

The PC subsystem then reads the data by using the Teensy as a Raw Human Interface Device

(HID) USB device. This is the same protocol that USB mice and keyboards use to communicate

with a PC. The data received via USB is then used to control various elements of the signal

processing, such as volume, pitch, attack strength, and tone. The resulting synthesized waveform

is then outputted via the audio subsystem of the PC system.


4 - Performance Optimization and Design of System Components

The two main subsystems of the VTPS, the microcontroller subsystem and the PC

subsystem, are discussed in detail in this section.

4.1 - Microcontroller Subsystem

This section discusses the user interface and hardware, as well as the software involved in

the microcontroller subsystem of the VTPS project.

4.1.1 – User Interface


The user interface consists of two 2-d resistive pads and three resistive strips. Figures 1.1

and 4.1 show the interface.

Figure 4.1 – User interface layout

The volume and attack pad give the user the ability to control the volume and strength of

attack of the output piano waveforms. The signal processing behind this is described in section

4.2.3.2. The tone pad allows the user to control the tone of the synthesizer by choosing from any

mixture of 4 different base piano tones. More information regarding how this is accomplished is

given in section 4.2.3.2. Each of the three strips, shown in the upper right corner of Figure 4.1,

corresponds to an octave of notes, with the lowest octave on the bottom, and the highest octave

on the top. The red boxes inside each strip correspond to notes, with the lowest note, C, on the

far left and the highest note, B, on the far right. The user plays these notes by pressing at the

corresponding location on the strip.


4.1.2 - Microcontroller Subsystem Hardware

The resistive pads and SoftPot potentiometer strips are used as inputs on a Teensy

Development Board, a low-cost solution that uses an ATMEGA32U4 microcontroller to allow

for easy USB interfacing. Many of the capabilities needed for using the Teensy as a Raw Human

Interface Device (Raw HID) over USB are built into the device.

A voltage is applied across the resistive pads and SoftPots. Depending on the

location of pressure on these devices, a certain voltage will occur at the output pins of these

devices. By reading the voltage output, and converting it to a binary representation (using the

ADC), the location of the pressure on the device can be determined by the Teensy

microcontroller or the PC. Because of this, the potentiometer devices are connected to the ADC

inputs on the Teensy, as shown in Figure 4.2.


Figure 4.2 – User interface hardware

The labels TP1 and TP2 shown in the figure correspond to the two 2-d pads, and the

labels P1, P2, and P3 correspond to the resistive strips. As can be seen, the strips connect directly

to VCC and ground, with one output measurement pin connected to an ADC input on the

microcontroller. For these strips, the voltage at the ADC input at any moment of time reflects the

location of the user’s finger on the strip. This is a simple setup that gives great results.

Unfortunately, the 2-d pads require a more complicated system to measure the location of

pressure on them properly.

Each of the pads has four pins. By setting pins 1 and 3 as VCC and ground respectively,

the voltage measured at pin 2 corresponds to the location of the user’s finger on the x dimension


of the pad. Pin 4 has to float to get a good reading. By setting pins 2 and 4 as VCC and ground,

the location of the user’s finger in the y dimension can be found from the voltage read from pin

1. Both the x and y dimensions cannot be read at the same time. Because of this, I alternate the

voltages from pins 1 and 3 to pins 2 and 4 as necessary. This alternation is the biggest source of lag

in the microcontroller subsystem.

4.1.3 – Microcontroller Subsystem Software

I programmed the Teensy to be used as a Raw HID device. This simplified interfacing

with a PC because most modern operating systems, including Windows, are programmed to be

able to use Raw HID devices without additional device drivers.

The Teensy microcontroller can be programmed in both C and a language that is similar

to the Arduino programming language. I used C as the programming language because the

makers of the Teensy board recommend using C when using the Raw HID capabilities of the

device. Open source example code, written in C, ships with the Teensy device. Much of the USB

and ADC code used in this project is a modification of that example code, adapted for my own

purposes.

The software I wrote configures the Teensy as a Raw HID device and sets up a unique

device and manufacturer ID. These IDs are used by the PC to identify and communicate with the

Teensy. Once the Teensy is configured properly, it continuously polls the inputs from the ADCs

corresponding to the user inputs. It also waits for a command from the PC.

Upon receiving the read command from the PC, the Teensy will send the ADC data over

USB to the PC, and will then return to polling the ADC inputs. In this manner, it communicates

with the PC once every 27 ms.


4.2 - PC Subsystem

This section describes the USB interfacing, audio hardware interfacing, and signal

processing involved in implementing the PC subsystem.

4.2.1 - USB Interfacing

The PC treats the user interface system as a Raw HID device, as discussed in sections

4.1.2 and 4.1.3. Raw HID makes the USB interfacing code highly portable to both Linux and

OS X because both operating systems are capable of using Raw HID devices. Should the need

ever arise, implementation on another operating system would not be difficult.

The USB subsystem initializes by searching for the manufacturer and device IDs. After

initialization, if the correct USB device is found, the PC will send a request for user input data to

the microcontroller. The microcontroller will send back a packet of user input data.

The PC requests user input data approximately once every 27 ms.
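
To make this request/response cycle concrete, the following sketch shows what the PC-side polling loop might look like. It assumes the raw HID helper functions (rawhid_open, rawhid_send, rawhid_recv) distributed with PJRC's example code; the vendor ID, product ID, packet size, and command byte shown here are placeholders rather than the values used in the actual project.

    // Sketch of the PC-side polling loop. The rawhid_* helpers are assumed to
    // come from PJRC's example hid.c/hid.h; the IDs, packet size, and command
    // byte are placeholders, not the project's real values.
    #include <cstdio>
    #include "hid.h"

    int main()
    {
        // Open one device matching the (placeholder) vendor/product IDs.
        if (rawhid_open(1, 0x16C0, 0x0480, 0xFFAB, 0x0200) <= 0) {
            std::fprintf(stderr, "Teensy not found\n");
            return 1;
        }

        unsigned char packet[64];
        for (;;) {
            packet[0] = 0x01;                           // placeholder "send data" command
            rawhid_send(0, packet, sizeof(packet), 100);

            // The Teensy answers with one packet of ADC readings.
            int n = rawhid_recv(0, packet, sizeof(packet), 100);
            if (n > 0) {
                // ...hand the packet to the signal processing code...
            }
            // In the real system this cycle repeats roughly every 27 ms,
            // paced by the audio buffer updates.
        }
    }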

4.2.2 - Audio Interfacing

The PC uses the PortAudio libraries to interface with its sound hardware. PortAudio is

written in C and is very portable to platforms other than Windows, such as Linux and OS X. It is

a very good library for streaming.

The project's streaming sampling rate was set to 48 kHz. An audio buffer of 1300

samples (27 ms of data) is loaded with float values between zero and one once every 27 ms.


Because PortAudio is a C library, it had to be compiled and linked together with my

C++ code. I used C++ for my main language because dynamic memory allocation is easier in

C++, and my project relies heavily upon it.
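
As a rough illustration of the numbers above (and not the project's actual code), the sketch below opens a 48 kHz, mono, 32-bit float output stream with a 1300-frame buffer through PortAudio. The callback here only writes silence where the synthesizer would write its mixed piano samples.

    // Minimal PortAudio sketch: 48 kHz mono float output, 1300 frames per
    // buffer (about 27 ms). The callback writes silence where the real
    // program writes synthesized piano samples.
    #include <portaudio.h>

    static int audioCallback(const void*, void* output, unsigned long frameCount,
                             const PaStreamCallbackTimeInfo*, PaStreamCallbackFlags, void*)
    {
        float* out = static_cast<float*>(output);
        for (unsigned long i = 0; i < frameCount; ++i)
            out[i] = 0.0f;                      // synthesized samples would go here
        return paContinue;
    }

    int main()
    {
        Pa_Initialize();

        PaStream* stream = nullptr;
        Pa_OpenDefaultStream(&stream,
                             0,                 // no input channels
                             1,                 // mono output
                             paFloat32,         // float samples
                             48000,             // sampling rate
                             1300,              // frames per buffer (about 27 ms)
                             audioCallback,
                             nullptr);

        Pa_StartStream(stream);
        Pa_Sleep(2000);                         // let the stream run for two seconds
        Pa_StopStream(stream);
        Pa_CloseStream(stream);
        Pa_Terminate();
        return 0;
    }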

4.2.3 - Signal Processing

The signal processing relating to this project occurred in two phases: pre-runtime

preparation, and runtime processing.

4.2.3.1 - Pre-Runtime Signal Preparation

In order to create synthesized piano notes based on actual piano recordings, I recorded

three octaves of C notes from three separate pianos, and a grand piano synthesizer I had access

to. These recordings were obtained using condenser microphones and other audio equipment.

Because I was unable to record in a recording studio, there was a fair amount of noise in the

recordings. I did the best I could to remove as much noise from the environment and equipment

as I could. I could not prevent all of it, so once the pianos were recorded I applied a low pass

filter to the recordings to reduce noise in the waveforms.

For the pitches I recorded, most pianos (including all of those recorded except for the

synthesizer) use three strings to produce each note. These strings are tuned to approximately the

same frequency. However, it is impossible to tune these strings exactly to the same frequency.

Because I wanted to model only a single string at a time, I muted two of the strings for each note.

By using these recordings to model a single string, I can synthesize a three string sound in a later

step (see section 4.2.3.2). I am not sure that muting two of the strings was necessary, but I am

convinced that it did not cause any problems in my project.


I then applied a pitch shifting algorithm to each note to model other notes that are not C.

For instance, the note C# is one half-step above C. This means that the frequency of C# is the

frequency of C multiplied by 2^(1/12). By removing samples (downsampling) from the waveform of C, I was

able to shift the pitch upward. In the case of C to C#, I needed to remove (1 - 1/2^(1/12)) * 100% of the

samples. This means that I removed a sample approximately once every 18 samples.

In order to keep the length of the note constant, I computed the period length in samples

for each C note. Once I removed the same number of samples as there are in a period, I copied an

entire period to that section of waveform, so that the exact same period would occur twice in a

row. This does not create any audible problems in the pitch-shifted waveform, and in this manner

I was able to keep the length of the note approximately the same.
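
The sketch below illustrates the sample-removal step just described; it is my own illustration of the idea, not the code used in the project. It drops one sample roughly every 18 samples for a one-semitone upward shift, and the period-duplication step that keeps the note at its original length is noted in a comment rather than shown.

    // Illustrative upward pitch shift by sample removal (one semitone).
    #include <cmath>
    #include <cstddef>
    #include <vector>

    std::vector<float> shiftUpOneSemitone(const std::vector<float>& in)
    {
        // Fraction of samples to remove: 1 - 1/2^(1/12), about 5.6%,
        // i.e. roughly one sample out of every 18.
        const double removeFraction = 1.0 - 1.0 / std::pow(2.0, 1.0 / 12.0);
        const double dropEvery = 1.0 / removeFraction;          // ~17.8 samples

        std::vector<float> out;
        out.reserve(in.size());

        double counter = 0.0;
        for (std::size_t i = 0; i < in.size(); ++i) {
            counter += 1.0;
            if (counter >= dropEvery) {
                counter -= dropEvery;                            // remove this sample
                continue;
            }
            out.push_back(in[i]);
        }
        // In the real processing, each time a full period's worth of samples
        // has been removed, one complete period is duplicated so the note
        // keeps its original length.
        return out;
    }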

I also created pitch shifted versions of notes from higher octaves to lower octaves. I did

this for two reasons. The first is that I noticed during testing that the further away in pitch a note

was from the base C note, the more synthetic it sounded. By shifting from a higher note to a

lower note, I was able to prevent the highest notes in an octave from sounding too synthetic. I

also noticed during testing that there was a major difference in tone from a B in one octave to a C

in the next octave. By creating these pitch shifted notes from higher notes to lower notes, I was

able to create a smooth transition from octave to octave. This transition was created by mixing

properly scaled notes that were pitch shifted from below with properly scaled notes that were

pitch shifted from above.

In order to shift a C pitch lower, I added samples to the waveform. Instead of taking away

samples, I add them. Other than this, the process behind pitch shifting down is very similar to the

process described for shifting a pitch upward. Take, for example, C and a note I want to create

that is a half-step lower than it: B. To create the B note, I add a sample once every 18 samples.


To add these samples, the values of the samples directly preceding it and directly after it must be

taken into account to prevent audible buzzing or clicking when the note is played. By failing to

take these other samples into account, I would create large jumps between the values of the

samples. Those jumps would cause audible problems in the sound of the waveforms. To prevent

this, I set the value of the sample I wanted to add to ½ of the preceding sample plus ½ of the

sample directly after it.
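
A similar sketch of the downward shift is shown below (again only an illustration of the idea, not the project's code). One new sample is inserted roughly every 18 samples, and its value is set to half of the preceding sample plus half of the following sample.

    // Illustrative downward pitch shift by sample insertion (one semitone).
    #include <cmath>
    #include <cstddef>
    #include <vector>

    std::vector<float> shiftDownOneSemitone(const std::vector<float>& in)
    {
        const double insertEvery = 1.0 / (1.0 - 1.0 / std::pow(2.0, 1.0 / 12.0));

        std::vector<float> out;
        if (in.empty())
            return out;
        out.reserve(in.size() + in.size() / 17);

        double counter = 0.0;
        for (std::size_t i = 0; i + 1 < in.size(); ++i) {
            out.push_back(in[i]);
            counter += 1.0;
            if (counter >= insertEvery) {
                counter -= insertEvery;
                // New sample: half the preceding sample plus half the next one,
                // which avoids large jumps that would be heard as clicks.
                out.push_back(0.5f * in[i] + 0.5f * in[i + 1]);
            }
        }
        out.push_back(in.back());
        return out;
    }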

Finally, I mixed the two sets of pitch-shifted notes. In order to make transitions between

octaves smooth, I applied different weight to each pitch-shifted waveform. For example, C# is

11/12 of the pitch-shifted version of the lower C and only 1/12 of the pitch-shifted version of the

higher C. A half-step higher, D is 10/12 of the lower C and 2/12 of the higher

C, and D# is 9/12 of the lower C and 3/12 of the higher C. This pattern continues all the

way up to B. Figure 4.3 illustrates an example of generating a C# from two piano notes an octave

apart. This figure uses small sections (10 ms) of the waveforms in question to illustrate the

overarching idea.

Figure 4.3 – Pitch shifting
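
This weighting pattern can be summarized in a short sketch (illustrative only, with names of my own choosing): a note n semitones above the lower C is built from the two pitch-shifted waveforms with weights (12 - n)/12 and n/12.

    // Illustrative octave blend: mix the version shifted up from the lower C
    // with the version shifted down from the higher C. Both inputs are
    // assumed to have the same length.
    #include <cstddef>
    #include <vector>

    std::vector<float> blendOctaves(const std::vector<float>& fromLowerC,
                                    const std::vector<float>& fromHigherC,
                                    int semitonesAboveLowerC)      // 1 = C#, 2 = D, ...
    {
        const float wLow  = (12.0f - semitonesAboveLowerC) / 12.0f;
        const float wHigh = semitonesAboveLowerC / 12.0f;

        std::vector<float> out(fromLowerC.size());
        for (std::size_t i = 0; i < out.size(); ++i)
            out[i] = wLow * fromLowerC[i] + wHigh * fromHigherC[i];
        return out;
    }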

Once all 36 piano waveforms (3 Cs and 33 pitch-shifted waveforms) are obtained for

each piano, individual periods of the piano waveform are extracted from them and normalized.

These periods come from various locations within the entire waveform. I extracted one period


from every 16000 samples of waveform. This resulted in about 6 periods extracted from every

piano waveform. These periods are the basis for the VTPS’s waveform synthesis.

4.2.3.2 - Runtime Signal Processing

When the VTPS initializes, it begins by recreating the piano waveforms from the

individual periods I obtained. Waveforms representing those from a single string (as opposed to

three piano strings) are then recreated by transitioning from one period to another period slowly.

Figure 4.4 shows two separate periods, one from the beginning of a waveform, and one from

further on in the waveform.

Figure 4.4 – Two separate periods from the same piano waveform

Suppose I start with the period in the left of the figure and I want to transition to the

period on the right. The first period of the resulting waveform will be exactly the same as the

period on the left. The next period will be mostly the period on the left (say 99%), but will also

be mixed with a little bit (1%) of the period on the right. The next period will have even less of

the left period and even more of the right. This is repeated until the resulting waveform is made

entirely of the right period and none of the left.


Each resynthesized note is made by combining several of these transitions between

periods and is made to be exactly two seconds, or 96000 samples, long. An exponential decay

factor is applied to the entire waveform to model the decay of an actual piano note. Figure 4.4

shows a plot of the resulting waveform, with a zoom on a certain section.

Figure 4.5 – Synthesis of waveform from single string
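
A simplified sketch of this period-to-period transition, with the exponential decay applied as the samples are produced, is given below. It is an illustration of the technique rather than the project's actual code, and it assumes the two periods have the same length.

    // Illustrative period-to-period transition with exponential decay.
    #include <cstddef>
    #include <vector>

    std::vector<float> transitionBetweenPeriods(const std::vector<float>& periodA,
                                                const std::vector<float>& periodB,
                                                int repetitions,        // must be at least 2
                                                float decayPerSample)   // slightly below 1.0
    {
        std::vector<float> out;
        out.reserve(periodA.size() * repetitions);

        float envelope = 1.0f;
        for (int r = 0; r < repetitions; ++r) {
            // The mix moves from all period A (r = 0) to all period B (last repetition).
            float mixB = static_cast<float>(r) / (repetitions - 1);
            for (std::size_t i = 0; i < periodA.size(); ++i) {
                float sample = (1.0f - mixB) * periodA[i] + mixB * periodB[i];
                out.push_back(envelope * sample);
                envelope *= decayPerSample;                     // exponential decay
            }
        }
        return out;
    }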


After the entire waveform is created, it is filtered with a low-pass filter to remove any

audio impurities from the recreation process. The waveform is then mixed with two very slightly

pitch-shifted versions of itself to model three separate strings, vibrating at slightly different

frequencies. Figure 4.6a shows a waveform after this mixing has occurred. Figure 4.6b shows a

typical piano waveform for comparison.

Figure 4.6a – Reconstructed Waveform Figure 4.6b – Typical Piano Waveform

The reconstructed waveforms approximate the originals fairly well. After reconstruction,

these final reconstructed waveforms are stored in memory. An array of pointers holds the

pointers that point to the starting element of each note array. This array of pointers is 4x3x12,

holding twelve notes from three octaves, from 4 pianos.
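
The storage scheme might look something like the sketch below; the names and exact layout are illustrative, not the project's actual data structures.

    // Illustrative storage for the reconstructed notes: 4 pianos, 3 octaves,
    // 12 notes, each entry pointing at a two-second (96000-sample) waveform.
    float* noteTable[4][3][12];

    const float* lookupNote(int piano, int octave, int note)   // all zero-based
    {
        return noteTable[piano][octave][note];
    }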

Once the program is done creating the synthesized waveforms and initializing, it waits for

user input from the USB subsystem. The program will not output any audio unless one of the

resistive strips is touched. When one of the strips is touched, a note with a pitch corresponding to

a location on the strip will be played, as discussed in section 4.1.1. The PC determines which

note to play by applying a simple formula to the value obtained from the ADC. Because the

maximum value from the ADC is just under 1024 (it is a 10-bit converter), dividing the value from the ADC by 85.33 and rounding

upwards will give a value 1-12. This value is used in the array of pointers, along with the octave


information, to locate the proper pitch. How this indexing information is used in conjunction

with the index corresponding to a particular piano is discussed in a few paragraphs.
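
That formula can be written out as a small sketch (the function name here is mine, for illustration only):

    // Map a strip reading (roughly 1 to 1024) to a note index from 1 (C) to 12 (B).
    #include <cmath>

    int noteFromStripReading(int adcValue)
    {
        int note = static_cast<int>(std::ceil(adcValue / 85.33));
        if (note < 1)  note = 1;      // clamp, just in case
        if (note > 12) note = 12;
        return note;
    }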

Because the strips are resistive, it is not possible to play more than one note from an

octave at a time. This is because the ADC cannot read more than one voltage at a time. However,

it is possible to play notes from 2 or 3 different octaves at a time. The program simply checks if

more than one strip is being pushed at the same time, and adds the waveforms corresponding to

notes of different octaves together if necessary.

The data from the tone pad is used to determine the tone that should be played. If the pad

has not been touched since initialization, it uses the default tone of just one piano. If the pad has

been touched, the tone of the note is determined by the location the pad was pressed last. The x

and y locations of a finger on the pad are determined from the ADC inputs and normalized, so

that the location has a value from zero to one. These locations are used to determine the weights

of the individual piano waveforms that are added to create the mixed tone waveform. Table 4.1

shows the weights of each individual piano, given the x and y locations on the tone pad.

Table 4.1- Pianos and their weights

Piano       Weight
Piano 1     X*Y
Piano 2     X*(1-Y)
Piano 3     (1-X)*Y
Piano 4     (1-X)*(1-Y)
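
In code, the weights in Table 4.1 amount to a simple bilinear interpolation between the four corner pianos. The sketch below is an illustration of that calculation (not the project's actual code); the four weights always sum to one, and an output sample is then the sum of the four pianos' samples, each multiplied by its weight.

    // Illustrative tone-mixing weights from Table 4.1. X and Y are the
    // normalized finger coordinates on the tone pad, each between 0 and 1.
    struct ToneWeights {
        float piano1, piano2, piano3, piano4;
    };

    ToneWeights weightsFromTonePad(float x, float y)
    {
        ToneWeights w;
        w.piano1 = x * y;
        w.piano2 = x * (1.0f - y);
        w.piano3 = (1.0f - x) * y;
        w.piano4 = (1.0f - x) * (1.0f - y);
        return w;
    }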

The volume and attack pad also gives user data that controls signal processing. The y

location of this pad gives the overall volume of the output waveform. After the y location is

calculated, every sample is simply multiplied by this value before it is placed in the output buffer.


The x location of the pad controls a more interesting function of the VTPS. It controls the

strength of the synthesized attack by controlling the starting index of the waveform. Figure 4.6

illustrates this.

Figure 4.7 – Volume and attack pad and its effect on the waveform

The x location of the volume and attack pad is a float value between 0 and 1. This value

is multiplied by 48000 and cast as an int. The result of this multiplication is used as the new

starting index. By default, all notes start at index zero, until the volume/attack pad is pushed to

set a new value.
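
Put as a sketch (illustrative only, with a name of my own choosing), the attack control reduces to the following, where starting further into the waveform skips part of the note's initial attack.

    // The pad's x coordinate (0.0 to 1.0) selects how far into the
    // two-second waveform playback begins, skipping up to one second.
    int attackStartIndex(float padX)
    {
        return static_cast<int>(padX * 48000.0f);
    }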

Because PortAudio’s buffer is not large enough to hold an entire piano waveform, my

program tracks the index of every note that is currently being played by the system. Every time

PortAudio’s buffer reloads, the index is updated. If the system detects a change in the user input

from the pitch strips, it will reset the index. If a user holds his or her finger in the same place on

one strip past the two seconds allotted for every note, the system will output zeros.


5 - Project Implementation/Operation and Assessment

The VTPS has been tested extensively. Each subsystem was tested by itself prior

to integration into the whole system.

5.1 - Teensy and User Interface Hardware Testing

I wired the Teensy to the user interface hardware, and programmed it with the code

necessary to send and receive data via USB. I wrote a simple test program to determine whether

or not the Teensy was communicating properly, as well as if the ADC inputs were reading the

voltages from the pads and strips properly. After solving a few minor issues, I concluded that all

systems were working properly.

The most difficult element of USB interfacing with the Teensy was that USB interrupts

are extremely complicated, and in the case of the Teensy, are not well-documented. Initially I

planned to use timer interrupts and send the data to the PC every so often. This led to other

difficulties, however, because the PC has a buffer where it stores USB signals it receives.

Because of this, it was fairly easy for the PC to become backlogged and read USB signals less

frequently than the Teensy sent them. If the PC had not read the data from the Teensy, when the

Teensy sent new data, it would get stuck behind the old data in the PC's buffer. This meant that

the PC could easily become out of sync and start to use old data.

The easiest solution for this was to make the PC request data from the Teensy, and have

the Teensy continually poll for commands from the PC. Polling was much simpler to implement


than USB interrupts. Unfortunately, by using polling, I introduced a lag in the Teensy system,

because the Teensy also has to poll for ADC inputs.

5.2 – PortAudio testing

I tested PortAudio’s capabilities first by streaming a triangle wave using it. It performed

as expected. I then implemented it in the larger system, and after many modifications, found it to

stream well.

The biggest issue I had with PortAudio was that it initially crashed a lot. To determine

the cause of the issue, I left out parts of my code and then ran it. When it worked without

crashing, I added back in small parts of my code until it started crashing again. In this manner, I

was able to determine that my USB code and the PortAudio code were fighting with each other

and causing my program to crash. I still do not know what the exact problem was, but it was

resolved when I changed the audio buffer type of PortAudio from integers to floats. I had been

using integers because that is what SFML used, but conversion to floats was simple enough and

resolved the issues.

Another issue that is not resolved is that some transitions between notes of the same

octave cause audible clicks in the output waveform. These clicks are obvious when playing notes

from the same octave in rapid succession. My first suspicion was that this is caused by large

jumps between sample values from one sample to the next in the output waveform, around the

time of a note transition. However, after looking carefully at the output waveforms, I believe

there may be another underlying problem with the system as well. The waveforms surrounding

note transitions have jumps that occur in places other than the transition itself. This issue

remains unresolved because of time constraints.


Another small issue is the buffer size I use to stream data. It is slightly larger than I

would like. This creates a slight delay in the responsiveness of the system. This is

definitely not a serious issue, but the delay is still barely perceptible, and given more time I would try to

make the buffer size smaller. This issue could be caused by any of three things: a PortAudio

library issue, my USB communication speed (Teensy speed constraints, as already discussed), or

the amount of time it takes to do signal processing on a buffer once the user input data is

received. I am not sure where the problem lies, but I do know that the audio sounds garbled at

buffer sizes much smaller than 1300 samples.

5.3 – Signal Processing Testing

All signal processing techniques were tested in MATLAB before I implemented them in

my system. In both MATLAB and the final system, all of the techniques used gave the results I expected (with the

exception of the click problem mentioned in section 5.2).

Despite these mostly positive results, the highest octave of piano notes sounds very

synthetic. I believe that this is in part because I was only able to apply a pitch shift (as discussed

in section 4.2.3.1) to a lower C, and not from a higher one to obtain these notes. For the other

octaves, I was able to apply a pitch shift to both a lower and higher C and mix the waveforms

resulting from both pitch shifts.

It should also be noted that the recreation algorithm is not perfect. Five distinct periods

do not entirely define two seconds of a piano waveform. However, the synthesized piano

waveforms sound very much like pianos, and they are a good approximation of the original

sounds.


The most important feature of the VTPS, the interpolation between tones of different

pianos, seems to work well.

5.4 – Full System Testing

Once all of the subsystems were built, I tested them all together. During my first full-

system testing phase, I was still using SFML as the streaming library. It was through this testing

that I determined SFML was inadequate for the project and began using PortAudio. Once

PortAudio was in place, I tested all functions of the synthesis device. During my tests, many

bugs were resolved. The program used to crash occasionally, but has not crashed at all in any

recent tests. I believe my system is functioning properly, except for the minor issues I have

mentioned in other sections.

6 – Final Scope of Work Statement

I have accomplished many things in the course of this project. The most important

accomplishments are the following:

• I created a new user interface that allows a user to control volume, tone, strength of

attack, and pitch of synthesized piano waveforms

• I interfaced several subsystems including input devices, a microcontroller, and PC audio

hardware using a USB interface


• I created a piano synthesis algorithm that synthesizes piano waveforms based around real

piano recordings

• I implemented a piano tone mixing algorithm

• I implemented an algorithm that allows a user to control the force of a synthesized piano

attack

Despite these accomplishments, there are many other modifications that could make the

project better. Some of these are:

• Fix the clicking issue between note changes

• Implement a full attack for each note, including mechanical noise of the hammer hitting

the strings

• Integrate the tone pad with a full keyboard to get the best of both interfaces

• Improve the highest octave piano tones

• Implement a sustain pedal and synthesize the effects of strings on each other

• Implement reverberation control

• Find pianos that sound more dissimilar to base the project on

• Build an app for tablets that has a similar user interface via the touch screen

• Build a DSP processor based system to use with the user interface in order to make the

project more portable

• Add more octaves of notes

• Integrate a dampening control to model the effect of the dampening pedal on a piano

• Implement a pitch bending control


• Use capacitive switches instead of the resistive strips, to allow for more than one note of

the same octave to be played at once

Some of these modifications, such as implementing a full attack for each note, are still

underdeveloped in professional synthesizers.

Despite these many possible improvements, the VTPS project is successful. The piano is an

extremely complicated instrument to synthesize (perhaps more complicated than any other

instrument – except for the human voice). By designing and building this project, I learned quite

a bit about pianos, how they function, and how to model them. I used a simple period-recreation

synthesis model, but there are other synthesis models that I would like to try to

implement. I also learned some very useful USB and audio interfacing techniques.

In its current state, the VTPS is little more than a prototype or a proof of concept. It is my

belief that implementing the tone pad in conjunction with a full-sized keyboard would make this

device much more marketable. As far as I know, there are no devices on the market with the

same tone-changing capabilities. Another option that would make the product more accessible to

people is to market it as an app.

If either of these changes were implemented, it could be sold to musicians and

keyboard/piano enthusiasts everywhere.

7 - Other Issues

During the course of building the project, a minor issue arose regarding the 2-d

touchpads. The touchpads used in this design were bought from Sparkfun under the product

name “DS touchpad.” The clips that Sparkfun sells to be used to breakout the touchpad’s wires

do not fit the connectors properly. I had to improvise and shove paper in one clip, under the wire,


and jam a toothpick in the other clip to get the contacts to connect properly. Those clips were a

genuine hassle; I would advise finding another touchpad manufacturer. There should be other

manufacturers making similar devices; unfortunately, I was unable to find any when I searched

for one.

The SoftPot strips are manufactured by Spectra Symbol (and purchased from Sparkfun)

and work very well. The Teensy also functions properly. It was purchased from pjrc.com. There

are other similar devices available elsewhere. Adafruit.com has a viable alternative to the Teensy.

It is also possible to use the ATMEGA32U4 microcontroller directly.

The main social impact of this project is that of any other musical instrument: it helps

musicians to make music. This project will help them make piano music in a new way that is not

seen in other instruments.

8 - Cost Estimate

Table 8.1 contains an estimate of the cost of the physical hardware of the project:

Table 8.1 – Cost estimate of hardware

Item                         Number of Pieces Required    Cost for One    Total Cost of Item

Teensy                       1                            $16.00          $16.00
2-d Pads                     2                            $13.90          $27.80
1-d Strips                   3                            $14.95          $44.85
Project Housing/Enclosure    1                            $10.00          $10.00

Total: $98.65


I spent approximately 250 hours working on this project. Much of this time was spent

researching, designing, and documenting the system. Building and implementing the system,

given the information conveyed in this document, should take much less time.

9 - Project Management Summary

This section provides information regarding project management. In particular it

discusses the tasks associated with the completion of this project and the personnel involved.

9.1 - Tasks Completed

I completed the following tasks in creating this project:

• I recorded necessary piano waveforms

• I processed these waveforms and derived individual periods

• I created and implemented a synthesis algorithm

• I prototyped the hardware

• I interfaced the hardware and a PC via USB

• I created a hardware enclosure

• I programmed a microcontroller to collect ADC input data

• I interfaced with PC Audio


A Gantt chart showing my activities is given as Figure 9.1.

Figure 9.1 – Gantt chart

9.2 - Personnel

I, Jacob Nieveen, built the VTPS. I am a graduate student of Electrical Engineering and I

am very interested in Audio Signal Processing as well as pianos. This interest gave me the drive

to complete this project. I am familiar with many aspects of Digital Signal Processing, and this

background helped me significantly in completing the project.

10 - Conclusion

The VTPS is a piano synthesizer that allows the user fine-tuned control over the tone of

synthesized notes. This control comes through a 2-d touchpad which the user uses to select

between tones. This interface brings a new type of instrument to the market. A good next step for

the VTPS would be to implement the tone control pad with a full-sized piano keyboard. The

ideas that apply to the VTPS could also be used with many different types of instruments. The

functionality of many commercial keyboard synthesizers could be extended.

Questions or comments regarding the VTPS project should be directed to me, Jacob

Nieveen, through my email: [email protected].
