The simplest type of human communication consists of nonverbal signals: things like posture, facial expression, gestures, tone of voice. They are in effect contagious: if you are sad, I will feel a little sad; if I then cheer up, you may too. The signals are indications of emotional states, and we tend to react to another’s emotional state with a sort of mimicry that puts us in sync with them. We can carry on a kind of emotional conversation in this way. Music appears to use this form of emotional communication – it causes emotions in us without any accompanying semantic messages. It appears to produce that contagion through three aspects of sound: the rhythmic rate, the sound envelope and the timbre. For example, a happy musical message has a fairly fast rhythm, a flat loudness envelope with sharp ends, lots of pitch variation and a simple timbre with few harmonics. Language seems to use the same system for emotion, or at least for some emotions. The same rhythm, sound envelope and timbre are used in the delivery of oral language, and they carry the same emotional signals. Whether it is music or language, this sound specification cuts right past the semantic and cognitive processes and goes straight to the emotional ones. Language seems to share these emotional signals with music, but not the semantic meaning that language contains.
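To make those acoustic parameters concrete, here is a minimal Python sketch that synthesizes a short phrase in the "happy" style described: a fast note rate, a flat amplitude envelope with sharp edges, wide pitch variation, and a single sine wave per note for a simple timbre. All numeric values (tempo, pitches, edge lengths) are illustrative assumptions, not figures taken from any study.

```python
# A minimal sketch of the "happy" parameters described above. All numbers are
# illustrative assumptions, not values from any study.
import numpy as np
import wave

SR = 44100                    # sample rate in Hz
note_dur = 0.18               # short notes -> fast rhythmic rate
pitches = [523, 659, 784, 659, 880, 784, 659, 523]   # wide pitch variation (Hz)

def tone(freq, dur, sr=SR):
    t = np.linspace(0, dur, int(sr * dur), endpoint=False)
    env = np.ones_like(t)                      # flat loudness envelope...
    edge = int(0.005 * sr)                     # ...with sharp 5 ms edges
    env[:edge] = np.linspace(0, 1, edge)
    env[-edge:] = np.linspace(1, 0, edge)
    return 0.5 * env * np.sin(2 * np.pi * freq * t)   # one sine = few harmonics

signal = np.concatenate([tone(f, note_dur) for f in pitches])
pcm = (signal * 32767).astype(np.int16)

with wave.open("happy_sketch.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes(pcm.tobytes())
```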
In recent years, numerous studies have shown how music hijacks our relationship with everyday time. For instance, more drinks are sold in bars when slow-tempo music is playing, which seems to make the bar a more enjoyable environment, one in which patrons want to linger—and order another round. Similarly, consumers spend 38 percent more time in the grocery store when the background music is slow. Familiarity is also a factor. Shoppers perceive longer shopping times when they are familiar with the background music in the store, but actually spend more time shopping when the music is novel. Novel music is perceived as more pleasurable, making the time seem to pass more quickly, and so shoppers stay in the stores longer than they may imagine. […]
While music usurps our sensation of time, technology can play a role in altering music’s power to hijack our perception. The advent of audio recording not only changed the way music was disseminated; it also changed time perception for generations. Thomas Edison’s cylinder recordings held about four minutes of music. This technological constraint set a standard that dictated the duration of popular music long after that constraint was surpassed. In fact, this average duration persists in popular music as the modus operandi today. […]
Neuroscience gives us insights into how music creates an alternate temporal universe. During periods of intense perceptual engagement, such as being enraptured by music, activity in the prefrontal cortex, which generally focuses on introspection, shuts down. The sensory cortex becomes the focal area of processing and the “self-related” cortex essentially switches off. As neuroscientist Ilan Goldberg describes, “the term ‘losing yourself’ receives here a clear neuronal correlate.” […]
But it is Schubert, more than any other composer, who succeeded in radically commandeering temporal perception. Nowhere is this powerful control of time perception more forceful than in the String Quintet. Schubert composed the four-movement work in 1828, during the feverish last two months of his life. (He died at age 31.) In the work, he turns contrasting distortions of perceptual time into musical structure. Following the opening melody in the first Allegro ma non troppo movement, the second Adagio movement seems to move slowly and be far longer than it really is, then hastens and shortens before returning to a perception of long and slow. The Scherzo that follows reverses the pattern, creating the perception of brevity and speed, followed by a section that feels longer and slower, before returning to a percept of short and fast. The conflict of objective and subjective time is so forcefully felt in the work that it ultimately becomes unified in terms of structural organization.
Although there has been some empirical research on earworms, songs that become caught and replayed in one’s memory over and over again, there has been surprisingly little empirical research on the more general concept of the musical hook, the most salient moment in a piece of music, or the even more general concept of what may make music ‘catchy’. […]
Every piece of music will have a hook – the catchiest part of the piece, whatever that may be – but some pieces of music clearly have much catchier hooks than others. […]
One study has shown that after only 400 ms, listeners can identify familiar music with a significantly greater frequency than one would expect from chance. […]
We have designed an experiment that we believe will help to quantify the effect of catchiness on musical memory. […] Hooked, as we have named the game, comprises three essential tasks: a recognition task, a verification task, and a prediction task. Each of them responds to a scientific need in what we felt was the most entertaining fashion possible. In this way, we hope to be able to recruit the largest number of subjects possible without sacrificing scientific quality.
The explosion in music consumption over the last century has made ‘what you listen to’ an important personality construct – as well as the root of many social and cultural tribes – and, for many people, their self-perception is closely associated with musical preference. We would perhaps be reluctant to admit that our taste in music alters - softens even - as we get older.
Now, a new study suggests that - while our engagement with it may decline - music stays important to us as we get older, but the music we like adapts to the particular ‘life challenges’ we face at different stages of our lives.
It would seem that, unless you die before you get old, your taste in music will probably change to meet social and psychological needs.
One theory put forward by researchers, based on the study, is that we come to music to experiment with identity and define ourselves, and then use it as a social vehicle to establish our group and find a mate, and later as a more solitary expression of our intellect, status and greater emotional understanding.
No one had previously looked specifically at the differing responses in the brain to poetry and prose.
In research published in the Journal of Consciousness Studies, the team found activity in a “reading network” of brain areas which was activated in response to any written material. But they also found that poetry aroused several of the regions in the brain which respond to music. These areas, predominantly on the right side of the brain, had previously been shown to give rise to the “shivers down the spine” caused by an emotional reaction to music.
Since the size, density, and even shape of a person’s skull are somewhat unique, that resonance will vary across individuals. Our current research was designed to explore whether this uniqueness in skull resonance might have a direct influence on the kinds of music a person prefers. […] this research suggests that the skull [shape and size] might influence the music that a person dislikes rather than the music a person likes.
Mozart’s opera, whose proper Italian title is Il dissoluto punito ossia il Don Giovanni (The Punishment of the Libertine or Don Giovanni), has been admired by many enthusiastic opera-goers ever since its first performance in Prague on October 29, 1787. […]
Kierkegaard offers a deep meditation on the meaning of Mozart’s Don Giovanni in a splendid treatise entitled “The Immediate Erotic Stages or the Musical Erotic” found in his book Either/Or. […]
George Price offers this fine description of the “aesthetic” stage of life as he thinks Kierkegaard sought to depict it:
By its very nature it is the most fragile and least stable of all forms of existence. […] [The aesthetic man] is merged into the crowd, and does what they do; he reflects their tastes, their ideas, prejudices, clothing and manner of speech. The entire liturgy of his life is dictated by them. His only special quality is greater or less discrimination of what he himself shall ‘enjoy’, for his outlook is an uncomplicated, unsophisticated Hedonism: he does what pleases him, he avoids what does not. His life’s theme is a simple one, ‘one must enjoy life’. […] He is also, characteristically, a man with a minimum of reflection. […]
Kierkegaard also uses Faust as Goethe interpreted him, and Ahasuerus, the Wandering Jew, as exemplars and variations of the aesthetic stage of existence. “First, Don Juan, the simple, exuberant, uncomplicated, unreflective man; then Faust, the bored, puzzled, mixed-up, wistful man; and the third, the inevitable climax, the man in despair—Ahasuerus, the Wandering Jew.” Kierkegaard’s discussion of this aesthetic aspect of life “is mainly a sustained exposition of a universal level of human experience, and as such it is a story as old as man. Here is life at its simplest, most general level […] the life of easy sanctions and unimaginative indulgences. It is also a totally uncommitted and ‘choiceless’ life [Don Juan]. But, for reasons inexplicable to itself, it cannot remain there. The inner need for integration brings its contentment to an end. Boredom intervenes; and boredom followed by an abortive attempt to overcome it by more discrimination about pleasures and diversions, about friends, habits and surroundings [Faust]. But the dialectical structure of the self gives rise to a profounder disturbance than boredom; and finally the man is aware of a frustration which nothing can annul [Ahasuerus, the Wandering Jew]. Were he constituted differently, says Kierkegaard, he would not suffer in this fashion. But being what he is, suffer he must—in diminishing hope and in growing staleness of existence.
{ Chris Cunningham’s original photo for Aphex Twin’s Windowlicker cover }
{ A spectrogram of “Windowlicker” reveals a spiral at the end of the song. This spiral is more impressive when viewed with an X-Y scatter graph, X and Y being the amplitudes of the L and R channels, which shows expanding and contracting concentric circles and spirals. The effect was achieved through use of the Mac-based program MetaSynth. This program allows the user to insert a digital image as the spectrogram. MetaSynth will then convert the spectrogram to digital sound and “play” the picture. | Wikipedia }
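For readers who want to reproduce the X-Y view described in the caption, here is a minimal sketch that simply plots the left-channel amplitude against the right-channel amplitude, sample by sample. It assumes a local stereo copy of the track; the file name, the soundfile and matplotlib libraries, and the 30-second window are all assumptions for illustration.

```python
# A minimal sketch of an X-Y (goniometer) view of a stereo signal.
# Assumes a local stereo file "windowlicker.wav"; file name is a placeholder.
import soundfile as sf
import matplotlib.pyplot as plt

data, sr = sf.read("windowlicker.wav")    # shape: (n_samples, 2) for stereo
left, right = data[:, 0], data[:, 1]

# Restrict to the final stretch of the track, where the spiral is said to appear.
tail = int(30 * sr)                       # last ~30 seconds (illustrative choice)
plt.figure(figsize=(6, 6))
plt.plot(left[-tail:], right[-tail:], linewidth=0.2, alpha=0.5)
plt.xlabel("Left channel amplitude")
plt.ylabel("Right channel amplitude")
plt.title("X-Y view of the stereo signal")
plt.axis("equal")
plt.show()
```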
The Internet sells itself as an improved content-delivery service, giving you whatever you want, whenever you want—by no small coincidence is its premier streaming-music site called Pandora—but there is an increasingly clear downside to opening the on-demand box. We no longer feel compelled to “own” music, because it has no scarcity value. Music has become ether, navigable by desire or impulse, and so the need to patronize musicians, whom we were previously cowed into compensating by a protectorate of record labels, becomes not only optional, but indistinct.
We assume musicians are taken care of, because their music is getting to us, and in that way, they have succeeded—they have communicated, and they may even be as famous as Joanna Newsom—but they will never profit, because neither they nor their ostensible parent labels control the medium by which we increasingly receive and interpret their work. To the extent their fame is driven and/or sustained online, artists are subservient to the Internet, and must engage with that audience on its terms, begging for donations—tithing—or prostituting themselves via cost-denominated signed copies, telephone calls, personal concerts and personalized songs.
From an office on Sunset Boulevard, a dapper 69-year-old has emerged as a go-to guy for musicians and songwriters looking for quick cash.
His name is Parviz Omidvar, and over the past two decades, he has been lending to artists and securing those debts with royalty payments his clients earn from their work. Michael Jackson was a customer, as is the son of late Motown legend Marvin Gaye. Omidvar’s website carries an old testimonial from Rock and Roll Hall of Fame member Bobby Womack: “Thank you so much for always being there for me.”
Today, Womack is suing Omidvar for fraud. He alleges the financier tricked him into selling for $40,000 full control of a royalty stream that annually pays many times that amount on Womack-penned hits, including blaxploitation classic “Across 110th Street” and “It’s All Over Now,” the first U.S. No. 1 record for the Rolling Stones. Womack’s lawyer says the 68-year-old musician was misled into signing the deal in April last year, when he was incapacitated by painkillers following prostate cancer surgery.
Omidvar calls Womack’s claim “a simple case of buyer’s remorse.” Womack understood he was selling his royalties, and his allegations are “a complete lie,” Omidvar says.
Omidvar’s quick cash can come at a steep price. Reuters found scores of loans with interest rates ranging from 1.5 to 2.5 percent every 10 to 15 days - annualized rates potentially ranging from 43 percent to 81 percent.
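Reuters does not spell out how it annualized these figures, but a rough sketch shows how per-period rates in that range grow over a year. The compounding formula and the 365-day year below are standard assumptions, not the agency's stated method.

```python
# A rough sketch (not Reuters' actual methodology, which the article does not
# specify) of how per-period rates like these annualize.
def annualized(rate_per_period, period_days, compound=True):
    periods_per_year = 365 / period_days
    if compound:
        return (1 + rate_per_period) ** periods_per_year - 1
    return rate_per_period * periods_per_year

# 1.5% every 15 days, compounded: roughly 44% per year
print(f"{annualized(0.015, 15):.0%}")
# 2.5% every 10 days, compounded: well over 100% per year
print(f"{annualized(0.025, 10):.0%}")
```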
{ My Bloody Valentine have announced a headlining slot at Japan’s Tokyo Rocks festival in May 2013, where they will be playing exclusive material from a brand new album. The album, the very-long-awaited follow-up to 1991’s classic ‘Loveless’, has been 21 years in the making. | NME }
According to a songwriting blogger named Graham English, a typical pop song has anywhere from 100 to 300 words, with the Beatles at the low end of that scale and the verbose Bruce Springsteen at the high end. (Don McLean’s epic “American Pie,” for those who wonder, clocks in at 324 words.) […]
[Rihanna’s “Diamonds,”] 67 words. Underwhelming. But at least it’s more complex than “Where Have You Been,” […] 40 distinct words.
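Counts like these are straightforward to reproduce. Here is a minimal sketch of counting total and distinct words in a lyric, using a short placeholder string rather than the actual copyrighted text.

```python
# A minimal sketch of the counting behind figures like "324 words" or
# "40 distinct words". The lyrics string is a placeholder, not a real lyric.
import re

lyrics = "shine bright like a diamond shine bright like a diamond"

tokens = re.findall(r"[a-z']+", lyrics.lower())   # naive tokenization
print("total words:   ", len(tokens))
print("distinct words:", len(set(tokens)))
```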
This article investigates how the law is perceived in hip-hop music. Lawyers solve concrete legal problems on the basis of certain presuppositions about morality, legality and justice that are not always shared by non-lawyers. This is why a thriving part of academic scholarship deals with what we can learn about laymen’s perceptions of law from studying novels (law and literature) or other types of popular culture. This article offers an inventory and analysis of how the law is perceived in a representative sample of hip-hop lyrics from 5 US artists (Eminem, 50 Cent, Dr. Dre, Ludacris and Jay-Z) and 6 UK artists (Ms Dynamite, Dizzee Rascal, Plan B, Tinie Tempah, Professor Green and N-Dubz).
After a methodological part, the article identifies four principles of hip-hop law. First, criminal justice is based on the age-old adage of an eye for an eye, reflecting the desire to retaliate proportionately. Second, self-justice and self-government reign supreme in a hip-hop version of the law: instead of waiting for a presumably inaccurate community response, one is allowed to take the law into one’s own hands. Third, there is an overriding obligation to respect others within the hip-hop community: any form of ‘dissing’ will be severely punished. Finally, the law is seen as an instrument to be used to one’s advantage where possible, and to be ignored if not useful. All four principles can be related to a view of the law as a way to survive in the urban jungle.
Most of the occasional singers sang as accurately as the professional singers. Thus, singing appears to be a universal human trait.
However, two of the occasional singers maintained a high rate of pitch errors at the slower tempo. This poor performance was not due to impaired pitch perception, thus suggesting the existence of a purely vocal form of tone deafness.
Researchers have confirmed what many suspected - pop music over the last five decades has grown progressively more sad-sounding and emotionally ambiguous. They analyzed the tempo (fast or slow) and mode (major or minor) of the 1,010 most popular pop songs, identified using year-end lists published by Billboard magazine in the USA from 1965 to 2009. Tempo was determined using the beats per minute of a song, and where this was ambiguous the researchers used the rate at which you’d clap along. The mode of a song was identified from its tonic chord - the three notes played together at the outset, in either major or minor. Happy-sounding songs are typically fast in tempo and in a major mode, whilst sad songs are slow and in minor. Songs can also be emotionally ambiguous, having a tempo that’s fast but a minor mode, or vice versa.
The researchers found that the proportion of songs recorded in a minor mode has increased, doubling over the last fifty years. The proportion of slow-tempo hits has also increased linearly, reaching a peak in the 90s. There’s also been a decrease in unambiguously happy-sounding songs and an increase in emotionally ambiguous songs.
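As a rough illustration of the classification scheme described above, here is a minimal sketch that maps a song's tempo and mode onto the happy, sad, and ambiguous categories. The 100 BPM cutoff between fast and slow is an assumption for illustration, not a threshold reported by the researchers.

```python
# A minimal sketch of the tempo/mode classification described above.
# The 100 BPM fast/slow cutoff is an illustrative assumption.
def emotional_cue(bpm: float, mode: str) -> str:
    fast = bpm >= 100
    major = mode.lower() == "major"
    if fast and major:
        return "happy"
    if not fast and not major:
        return "sad"
    return "ambiguous"

print(emotional_cue(128, "major"))   # happy
print(emotional_cue(72, "minor"))    # sad
print(emotional_cue(140, "minor"))   # ambiguous
```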
Scientists have worked out that modern pop music really is louder and does all sound the same.
Researchers in Spain used a huge archive known as the Million Song Dataset, which breaks down audio and lyrical content into data that can be crunched, to study pop songs from 1955 to 2010. A team led by artificial intelligence specialist Joan Serra at the Spanish National Research Council ran music from the last 50 years through some complex algorithms and found that pop songs have become intrinsically louder and more bland in terms of the chords, melodies and types of sound used.
“Rocket Queen” is the closing song on American hard rock band Guns N’ Roses’ debut studio album Appetite for Destruction. […]
Axl wanted some pornographic sounds on Rocket Queen, so he brought a girl in and they had sex in the studio. We wound up recording about 30 minutes of sex noises. If you listen to the break on Rocket Queen it’s in there.