Secrets to writing a winning grant
Experienced scientists reveal how to avoid application pitfalls to submit successful proposals.
When Kylie Ball begins a grant-writing workshop, she often alludes to the funding successes and failures that she has experienced in her career. “I say, ‘I’ve attracted more than $25 million in grant funding and have had more than 60 competitive grants funded. But I’ve also had probably twice as many rejected.’ A lot of early-career researchers often find those rejections really tough to take. But I actually think you learn so much from the rejected grants.”
Grant writing is a job requirement for research scientists who need to fund projects year after year. Most proposals end in rejection, but missteps give researchers a chance to learn how to find other opportunities, write better proposals and navigate the system. Taking time to learn from the setbacks and successes of others can help to increase the chances of securing funds, says Ball, who runs workshops alongside her role as a behavioural scientist at Deakin University in Melbourne, Australia.
Do your research
Competition for grants has never been more intense. The European Commission’s Horizon 2020 programme is the European Union’s largest-ever research and innovation programme, with nearly €80 billion (US$89 billion) in funding set aside between 2014 and 2020. It reported a 14% success rate for its first 100 calls for proposals, although submissions to some categories had lower success rates. The commission has published its proposal for Horizon Europe, the €100-billion programme that will succeed Horizon 2020. In Australia, since 2017, the National Health and Medical Research Council has been funding less than 20% of proposals it receives. And the US National Science Foundation (NSF) received 49,415 proposals and funded 11,447 of them in 2017 — less than 25%. That’s tens of thousands of rejections in a single year from the NSF alone.
Being a renowned scientist doesn’t ensure success. On the same day that molecular biologist Carol Greider won a Nobel prize in 2009, she learnt that her recently submitted grant proposal had been rejected. “Even on the day when you win the Nobel prize,” she said in a 2017 graduation speech at Cold Spring Harbor Laboratory in New York, “sceptics may question whether you really know what you’re doing.”
To increase the likelihood of funding success, scientists suggest doing an extensive search of available grants and noting differences in the types of project financed by various funding bodies. Government agencies such as the NSF tend to be interested in basic science that addresses big, conceptual questions, says Leslie Rissler, programme director at the NSF’s Division of Environmental Biology in Alexandria, Virginia. A private foundation, however, might prioritize projects that inform social change or that have practical implications that fit into one of its specific missions.
Pitching a proposal
Before beginning an application, you should read descriptions and directions carefully, advises Ball, who recently pored over 200 pages of online material before starting a proposal. That effort can save time in the end, helping researchers to work out which awards are a good fit and which aren’t. “If you’re not absolutely spot on with what they’re looking for, it may not be worth your time in writing that grant,” she says.
Experienced scientists suggest studying successful proposals, which can often be acquired from trusted colleagues and supervisors, university libraries or online databases. A website called Open Grants, for example, includes more than 200 grants, both successful and unsuccessful, that are free to peruse.
Grant writers shouldn’t fear e-mailing or calling a grants agency to talk through their potential interest in a project, advises Amanda Stanley, executive director at COMPASS, a non-profit organization based in Portland, Oregon, that supports environmental scientists. For six years, she worked as a programme officer for the Wilburforce Foundation in Seattle, Washington, which supports conservation science. At this and other private foundations, the application process often begins with a ‘soft pitch’ that presents a brief case for the project. Those pitches should cover several main points, Stanley says: “‘Here’s what I’m trying to do. Here’s why it’s important. Here’s a little bit about me and the people I’m collaborating with. Would you like to talk further?’” She notes that a successful proposal must closely align with a foundation’s strategic goals.
Each organization has its own process, but next steps typically include a phone conversation, a written summary and, finally, an invitation to submit a formal application. “Once you’ve gotten that invitation to submit a proposal from the programme officer, your chances of getting funded are really, really high,” Stanley says.
The write stuff
Applicants should put themselves in the shoes of grant reviewers, who might need to read dozens of applications about complicated subjects that lie outside their own fields of expertise, often while juggling their own research.
“Imagine you’re tired, grumpy and hungry. You’ve got 50 applications to get through,” says Cheryl Smythe, international grants manager at the Babraham Institute, a life-sciences research institution in Cambridge, UK. “Think about how you as an applicant can make it as easy as possible for them.”
Formatting is an important consideration, says Aerin Jacob, a conservation scientist at the Yellowstone to Yukon Conservation Initiative in Canmore, Canada. White space and bold headings can make proposals easier to read, as can illustrations. “Students are tempted and sometimes encouraged to squeeze in as much information as possible, so there are all kinds of tricks to fiddle with the margin size, or to make the font a little bit smaller so that you can squeeze in that one last sentence,” Jacob says. “For a reviewer, that’s exhausting to read.”
Ball advises avoiding basic deal-breakers, such as spelling errors, grammatical slips and lengthy proposals that exceed word limits. Those kinds of mistake can cast doubt on how rigorous applicants will be in their research, she says. A list of key words, crucial for indexes and search engines, should be more than an afterthought, Ball adds. For a project on promoting physical activity among women, she tagged her proposal with the word ‘women’. The descriptor was too broad, and her application ended up with a reviewer whose expertise appeared to be in sociology and gender studies instead of in exercise or nutrition. The grant didn’t score well in that round of review.
To prevent a reviewer’s eyes from glazing over, Jacob says, use clear language instead of multisyllabic jargon. When technical details are necessary, follow up a complex sentence with one that sums up the big picture. Thinking back to her early proposals, Jacob remembers cramming in words instead of getting to the point. “It was probably something like, ‘I propose to study the heterogeneity of forest landscapes in spatial and temporal recovery after multiple disturbances,’ rather than, ‘I want to see what happens when a forest has been logged, burnt and farmed, and grows back,’” she says.
Grants can be more speculative and more self-promotional than papers are, Rissler adds. “A grant is about convincing a jury that your ideas are worthy and exciting,” she says. “You can make some pretty sweeping generalizations about what your proposed ideas might do for science and society in the long run. A paper is much more rigid in terms of what you can say and in what you must say.”
Getting some science communication training can be a worthwhile strategy for strengthening grant-writing skills, Stanley says. When she was reviewing pitch letters for a private foundation, she recalls that lots of scientists couldn’t fully explain why their work mattered. But when she received pitches that were clear and compelling, she was more willing to help those scientists brainstorm other possible funding agencies if her foundation wasn’t the right fit. Scientists who sent strong — albeit unsuccessful — applications were also more likely to get funding from the foundation for later projects.
To refine project pitches and proposals, Stanley recommends that scientists use a free communication tool from COMPASS called the Message Box Workbook, which can help to identify key points and answer the crucial question for every audience: ‘So what?’ Scientific conferences often provide symposia or sessions that include funders and offer helpful tips for writing grants. And development officers at institutions can help scientists to connect with funders. “A good development officer is worth their weight in gold,” Stanley says. “Make friends with them.”
Jacob has taken science-communication training through COMPASS, The Story Collider (a science-storytelling organization) and other such organizations. She has learnt how to talk about her work in the manner of a storyteller. In proposals and interviews, she now includes personal details, when relevant, that explain the problems she wants to address and why she decided to speak out about conservation — an example of the kind of conflict and resolution that builds a good story. Jacob senses that the approach strikes a chord. “As a reviewer, you remember somebody’s proposal just that little bit more,” she says. “If you have a stack of proposals, you want to find the one that you connect with.”
A clear focus can help to boost a grant to the top of a reviewer’s pile, Ball adds. In one of the first large grants that she applied for, she proposed collecting information on the key factors that prevent weight gain as well as designing and implementing an obesity-intervention programme. In retrospect, it was too much within the grant’s two-year time frame. She didn’t get the funding, and the feedback she received was that it would have worked better as two separate proposals. “While it’s tempting to want to claim that you can solve these enormous, challenging and complex problems in a single project,” Ball says, “realistically, that’s usually not the case.”
Teaming up with collaborators can also increase the chance of success. Earlier this year, Ball was funded by the Diabetes Australia Research Program for a study that she proposed in collaboration with hospital clinicians, helping disadvantaged people with type 2 diabetes to eat healthy diets. Earlier in her career, she had written grants based on her own ideas, rather than on suggestions from clinicians or other non-academic partners. This time, she says, she focused on a real-world need rather than on her own ideas for a study. Instead of overreaching, she kept the study small and preliminary, allowing her to test the approach before trying to get funding for larger trials.
It is acceptable — even advisable — to admit a study’s limitations instead of trying to meet preconceived expectations, Jacob adds. In 2016, she had a proposal rejected for a study on spatial planning on the west coast of Canada that would, crucially, be informed by knowledge from Indigenous communities. She resubmitted the same proposal the next year to the same reviewers, but with a more confident and transparent approach: she was straightforward about her desire to take a different tack from the type of research that had been tried before. This time, she made it clear that she wanted to listen to Indigenous peoples and use their priorities to guide her work. She got the funding. “I saw that if I tried to change it to meet what I thought funders wanted, I might not be accurately representing what I was doing,” she says. “I just wanted to be really clear with myself and really clear with the interviewers that this is who I am, and this is what I want to do.”
What not to do
Writing is hard, and experienced grant writers recommend devoting plenty of time to the task. Smythe recommends setting aside a week for each page of a proposal, noting that some applications require only a few pages while major collaborative proposals for multi-year projects can run to more than 100 pages. “It can take months to get one of these together,” she says.
Scheduling should include time for rewrites, proofreads and secondary reads by friends, colleagues and family members, experts say. Working right up to the deadline can undo weeks to months of hard work. At the last minute, Jacob once accidentally submitted an earlier draft instead of the final version. It included sections that were bolded and highlighted, with comments such as, “NOTE TO SELF: MAKE THIS PART SOUND BETTER.” She didn’t get that one, and has never made the same mistake again.
Add an extra buffer for technology malfunctions, adds Smythe, who once got a call from a scientist at another organization who was in a panic because his computer had stopped working while he was trying to submit a grant proposal half an hour before the deadline. She submitted it for him with 23 seconds to spare. “My hand was shaking,” she says. That proposal was not successful, although the scientist sent her a nice bottle of champagne afterwards.
Grant writing doesn’t necessarily end with a proposal’s submission. Applicants might receive requests for rewrites or more information. Rejections can also come with feedback, and if they don’t, applicants can request it.
Luiz Nunes de Oliveira, a physicist at the University of São Paulo, Brazil, also works as a programme coordinator at the São Paulo Research Foundation. In this role, he sometimes meets with applicants who want to follow up on rejected proposals. “We sit down and go through their résumé, and then you find out that they had lots of interesting stuff to say about themselves and they missed the opportunity,” he says. “All it takes is to write an e-mail message asking [the funder] for an interview.”
Jacob recommends paying attention to such feedback to strengthen future proposals. To fund her master’s programme, she applied for a grant from the Natural Sciences and Engineering Research Council of Canada (NSERC), but didn’t get it on her first try. After requesting feedback by e-mail (to an address she found buried on NSERC’s website), she was able to see her scores by category, which revealed that a few bad grades early in her undergraduate programme were her limiting factor.
There was nothing she could do about her past, but the information pushed her to work harder on other parts of her application. After gaining more research and field experience, co-authoring a paper and establishing relationships with senior colleagues who would vouch for her as referees, she finally secured funding from NSERC on her third try, two years after her first rejection.
Negative feedback can be one of the best learning experiences, Rissler adds. She kept the worst review she ever received, a scathing response to a grant proposal she submitted to the NSF in 2003, when she was a postdoc studying comparative phylogeography. The feedback, she says, was painful to read. It included comments that her application was incomprehensible and filled with platitudes.
After she received that letter, which is now crinkled up in her desk for posterity, Rissler called a programme officer to ask why they let her see such a negative review. She was told that the critical commenter was an outlier and that the panel had gone on to recommend her project for the grant, which she ultimately received. “I learnt that you do need to be tough,” says Rissler, who now helps to make final decisions on funding for other scientists. She emphasizes that whereas reviewers’ opinions can vary, all proposals undergo multiple independent expert reviews, followed by panel discussions and additional oversight by programme directors.
Grant writing tends to provoke anxiety among early-career scientists, but opportunities exist for people who are willing to take the time to develop ideas and push past rejections and negative feedback, she says. “We can’t review proposals that we don’t get.”
Originally posted on nature.com on 20th December 2019 - https://www.nature.com/articles/d41586-019-03914-5
- Sponsored Content Article
Working Scientist podcast: Inside the NIH grant-review process
Julie Gould and Elizabeth Pier discuss how the US National Institutes of Health grant review process works. In this first episode of a six-part weekly series about funding, Julie Gould outlines the US National Institutes of Health (NIH) grant-review process and the extent to which reviewers evaluating the same applications agree or disagree. Is the current system the best way, she asks Elizabeth Pier, lead author of a March 2018 paper published in Proceedings of the National Academy of Sciences, ‘Low agreement among reviewers evaluating the same NIH grant applications’.

Paid content: this episode concludes with a slot sponsored by the European Research Council. Jean-Pierre Bourguignon, its president, outlines the organization’s role and remit as a grant funder.

TRANSCRIPT

Julie Gould: Happy 2019! I hope you’ve all managed to take some time to celebrate. As it’s a new year and new years often come with a makeover in one form or another, the Nature Careers team decided to give the podcast a makeover. As well as a new name, we’ve also got a new format. So, instead of our monthly episodes, we’re going to be producing more episodes in 2019 and grouping them together into different series, featuring six weekly episodes followed by a short break. So, here’s series one – funding – and as an added extra, each episode in this series will end with a ten-minute sponsored slot from the European Research Council. So, without any further ado, let’s go.... (Theme music)

Hello, I’m Julie Gould and this is Working Scientist, a Nature Careers podcast. Grant funding plays such an overwhelming role in the career of an academic scientist, and the funders are all too aware of it.
Now, I know that all researchers spend many sleepless nights and cups of coffee writing grant proposals, so when I first started doing the research for this series, I wanted to find the best experts to give you the best tips on how to write the best grant proposals to make things a little bit easier for you.

But then I came across a research paper that made me stop and reflect. In March 2018, Elizabeth Pier – who was then a PhD student in the Educational Psychology department at the University of Wisconsin, Madison – published a paper as part of her thesis in PNAS. The paper was entitled “Low agreement among reviewers evaluating the same NIH grant applications.” When I first read this title, I thought, “Wait a minute, I thought the idea of the whole funding process was that the top proposals were being funded, the ones where everyone in the peer review system agreed that these were the best ideas supported by the best researchers to do the work.” But clearly, this title shows that there’s something else going on in the background, so I wanted to find out more.

The research was funded by the National Institutes of Health, which commissioned an independent study to examine the potential for bias to enter into the peer review process. The overarching goal of the whole project was to look for evidence of gender or racial bias, based on the characteristics of the PI or the application, and where in the process these biases might enter. Now, this particular piece of research from Pier is just one of the studies. It recreated a peer review panel to see how these meetings unfold and how they affect the decision-making process, using previously accepted NIH project proposals.

But before we go any further, it’s worth me outlining some of the basic steps of how the NIH proposal review system works.
Now, just so you’re clear, these steps are the bare bones and they miss out a lot of the details, but they should give a flavour of what happens once you hit submit.

So, the NIH uses a two-stage review process. In the first stage, between two and five reviewers individually evaluate each grant application, and they rate them using the NIH’s nine-point scale, with one for exceptional and nine for poor. They also record what they feel are the application’s strengths and weaknesses. The reviewers will then meet for what’s called a study section meeting to discuss their preliminary ratings. The discussion only looks at the top half of all the applications they have evaluated. The study section members then collectively assign a final rating, and this is averaged into a final priority score. So, that’s stage one. Then in the second stage, members of the NIH advisory councils use this priority score and the written critiques from the reviewers to make funding recommendations to the Director of the NIH institute or centre that awards the funding.

So, given all that, I spoke to Elizabeth Pier, who now works as a research manager at Education Analytics, to find out more about her research.

Elizabeth Pier: In this particular study, I was really interested in looking at the degree of agreement between different reviewers and what is even happening before the reviewers come together, and how reviewers go about scoring these applications based on their assessments. So, another way of putting that is: Are the reviewers agreeing not only on the score that they assign, but are they also identifying similar strengths and weaknesses in the critiques that they write prior to the meeting, and also what’s the relationship between that numeric score and the written evaluation?

Julie Gould: So, there were some sobering results...
Elizabeth Pier: We found that, numerically speaking, there really was no agreement between the different individual reviewers in the score that they assigned to the proposals. We also found that when we were looking at the relationship between the strengths and weaknesses identified in the written critiques and the score that was assigned, we did see a relationship between the number of weaknesses that a reviewer would identify in their critique and the score that the reviewer assigned, but that relationship between the weaknesses and the score doesn’t hold up between different reviewers.

Julie Gould: Another way of saying this is that the individual reviewers were really consistent – the more weaknesses they identified in the proposal, the lower the score awarded. But unfortunately, it appeared that each reviewer had a different idea of what a weakness is and what score that meant the proposal would ultimately be given. So, what this means is…

Elizabeth Pier: ...we can’t really compare the evaluations of different reviewers, and the degree of disagreement that we see in the scores seems to be a reflection of a different sense of calibration in what constitutes a bad score versus a good score.

Julie Gould: The reviewers do come together for a meeting to discuss the papers based on the initial reviews, and in the meetings that Elizabeth Pier recreated…

Elizabeth Pier: As you would predict, and as people told us based on their intuition participating in these kinds of meetings, the range of scores does get smaller after discussions, so there’s a degree of consensus building within individual peer review panels. But the agreement between different panels actually got wider after discussion, and we had a unique opportunity here because we had four different panels that were evaluating the same applications.
So in practice each application is only evaluated by one study section, but for the purposes of this study we exploited the fact that we had these four different groups looking at the same proposals. And so, in the process of building consensus within a given panel, different panels actually went further apart.

Julie Gould: So really the outcome that you’re coming to is that it’s potentially better that these reviewers don’t meet?

Elizabeth Pier: Our studies haven’t indicated any value or benefit in the sense of improving the consistency or reliability of the process.

Julie Gould: But what about the variability in the quality of the proposal being discussed, doesn’t that make a difference?

Elizabeth Pier: We had to ask people to donate their applications and the summary statement that they received to us, and the donations that we received just happened to be funded. And so, we tried to say that above a certain quality threshold, our results suggest that it’s essentially a random process, and the meeting doesn’t seem to remove that randomness. However, I will say a caveat to that caveat is that the applications that get discussed in the meeting have already gone through triage, so only the top 50% of applications based on their preliminary score even get discussed in the meeting. So, what we are talking about is, given that top 50% of proposals, after you’ve already excluded the ones that really have no chance of being funded initially, there really is a lot of randomness. But even more so, there’s already randomness such that the applications that have been weeded out, so to speak, and don’t get the opportunity to be discussed in the meeting might actually have a lot of merit. Had such an application been assigned to a different panel with different reviewers, it very well could have gone on to be discussed.

Julie Gould: So, what you’re saying really is that luck plays a very large role in whether or not your research gets funded.

Elizabeth Pier: Yes, that is what our results suggest.
Above a certain degree, if you have a relatively competitive application and there aren’t any major issues that would immediately disqualify it in the eyes of any kind of representative expert in the field, there’s a great deal of randomness and luck that we find in determining who does and does not get funding.

Julie Gould: So, what does that mean for all those people who are spending all these hours and hours and hours and hours and hours on getting their funding applications organised and sorted and written up? I mean my heart goes out to them...

Elizabeth Pier: Yes, my heart does as well. I mean, it’s one of the reasons I studied this for my dissertation, because it’s incredibly important for individual careers and also incredibly important for the progress of science, right? We want to make sure that the most deserving ideas are getting rewarded and funded and that it’s not just picking out of a hat. So, I mean there are a couple of pieces of silver lining. I think that we see evidence, especially as grants get resubmitted, that being responsive to reviewers’ critiques can play a strong role in conveying to reviewers improvement over time. And so there is something to be said, if you get rejected or you don’t get funded, for having some tenacity and resubmitting that application and doing everything you can to address the reviewers’ critiques and feedback. It can make a potential difference. I also think that it’s important for folks not to take it personally. As academics, we definitely are used to rejection and used to plenty of times when we think we have really great ideas and reviewers of manuscripts or of grant applications don’t seem to agree with us, so I would encourage people to take a little bit of solace in that it’s not necessarily a reflection of the quality of the ideas but more a feature of the process.

Julie Gould: How would you suggest then that the process is improved?
Elizabeth Pier: There should be some assessment of whether what some scholars have called a “modified lottery system” could work. So, the idea being that there’s some initial screening process that experts do conduct to make sure, like I said, they’re kind of weeding out any really problematic proposals – things that are just wildly out of left field in terms of being feasible to complete given the budget, or things like that. And then after that kind of initial screen, it really is just a random selection. And the reason I think that would be an improvement is because if the process is already random above a certain quality threshold, which our study suggests it is, we might as well save the money and the time involved to convene thousands of people and spend millions of dollars to have these meetings if the outcome is essentially the same as a random process.

Julie Gould: Now, we’ll touch on the idea of a lottery-style funding system later on in the series, but what we can say now is that change is going to be slow – it always is in academia. Is there anything that can be done in the meantime, before this lottery-style system or something completely different is created?

Elizabeth Pier: Starting to accept the fact that it’s not a completely objective process, that humans are fallible, they are subjective, and when you’re asking experts to make very complex judgements about the potential likelihood of success of a project, that’s a really difficult decision that’s going to bring in a lot of heuristics and biases that go into their decision making.

Julie Gould: My final question to Elizabeth was: what advice have you got for anyone who’s currently writing a grant proposal to the NIH?

Elizabeth Pier: One piece of advice, which is probably pretty obvious but I will say is backed by our findings, is that weaknesses are much more predictive of the score that reviewers will assign than strengths are. So, what that means is: minimise as many weaknesses as you can.
Julie Gould: So, after all of that, I’m intrigued. What do you think? How would you feel about a more lottery-style funding system? Please send in your thoughts to the Nature Careers team, which you can do via Twitter, Facebook and LinkedIn. And over the next two episodes, I’ll be speaking to different experts on how to minimise the number of weaknesses within your funding application, in the hope – fingers crossed – that you’ll have a bit more success and a bit more luck. Now, that’s all for this section of our Working Scientist podcast. We now have a slot sponsored by and featuring the work of the European Research Council. Thanks for listening. I’m Julie Gould. (Theme music)

Jean-Pierre Bourguignon: So, my name is Jean-Pierre Bourguignon and my title is President of the European Research Council, which of course is supported by the European Union through the European Commission. I’m a French mathematician, I should say. I spent most of my career in the CNRS (Centre National de la Recherche Scientifique). My field was differential geometry, but I did a lot of work actually at the boundary of theoretical physics – general relativity and Dirac operators and these kinds of topics – but still always as a mathematician. The European Research Council is actually an interesting story. It was created in 2007, so it’s now 11 years of age, and it was a long process. Myself, the first time I heard about the possibility of having an ERC was 1995, and it was a long effort by the scientific community. Step after step, we had to convince people in the Commission and people in the European Council – namely the countries – that they should support such a project. Still, it has a lot of very specific characteristics; particularly, the power which has been given to its scientific council is considerable. It really was an innovation, and the council has the responsibility of deciding how to spend the money and how to do the evaluation.
This is unique in the setting of the European Commission, that a group of 22 scientists are given such a responsibility. And of course as President of the European Research Council I have some very specific responsibilities, which are to confirm the list of people who are awarded grants and really guarantee the quality of the work done. The mission of the ERC was really to make Europe more attractive, to be a place where science can develop in the most ambitious way, and to push the ambition, particularly of young people, upward – that is, to make them independent early enough and to take their vision on board.

You know, we are at the stage of giving 1,100 research grants this year, which is of course a very significant amount of money. The budget is now really over €2 billion per year, and we are covering all fields of science – that is, physical sciences and engineering, including maths, computer science and so on, life sciences, and social sciences and humanities.

If you want to know what you are doing, you need to talk and meet and discuss with the people you are funding, and so I do travel a lot, particularly in Europe, to meet the people we call our grantees – the people who get the grants from the ERC. This part of my job is really extremely worthwhile and extremely rewarding, because the selection process is a very tough one – the typical success rate at this moment is 13%. It means that people have all proposed very ambitious projects – that is a condition to be successful at the ERC. High risk, again: we need to encourage the panels who are selecting people to really accept taking risks. And that’s one thing I hear regularly from grantees, telling me, ‘I submitted a very similar project to my national agency but I was not funded; it was considered too risky. Then I submitted to the ERC, and the ERC funded me.’ So it makes a big difference.
Another component which is very important in our strategy is the structure of grant categories which has been put in place by the European Research Council. We have three categories at the moment for the individual grants – the starting grants, consolidator grants and advanced grants – and eligibility is timed to the PhD. So, starting grants are for 2 to 7 years after the PhD, consolidator grants 7 to 12 years, and for advanced grants there is no time condition – they are simply for people who are already established. In doing that, it means that in the end we are dedicating typically two thirds of our budget to the younger people – people who are typically below 40 years of age. Very often people come to believe that if you are not from one of the leading research institutions in Europe, then you have no chance. This is not the case. The institution which is your host institution is not part of the evaluation; the key for the evaluation is the project. You have to show that you have thought of what kind of resources will be needed, and you describe them, but it is not the institution as such that is evaluated. It means in particular that, in terms of the support we give, part of it could also be buying expensive equipment if you need it and it’s not available in your institution. So, we consider the project not just as helping the people, but as helping the people to set up the environment which will make it possible to get the project through. This is one misconception that people sometimes have – the feeling that if they are coming from a smaller place, they have no chance. The number of institutions the ERC has been signing with is close to 800 now, so it’s quite a significant number of institutions based in Europe, and of course some of the leading ones got more grants than others, but definitely even small institutions have been very successful at the ERC.
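The eligibility windows Bourguignon describes can be sketched as a simple rule. This is only an illustration of the timings as stated above – the real ERC rules allow extensions for career breaks and other exceptions, and the function name is my own:

```python
def erc_grant_category(years_since_phd: float) -> str:
    """Map time since PhD to the ERC individual grant category
    described above (simplified; real eligibility rules include
    extensions and exceptions)."""
    if years_since_phd < 2:
        return "not yet eligible"
    if years_since_phd <= 7:
        return "Starting Grant"
    if years_since_phd <= 12:
        return "Consolidator Grant"
    # Advanced Grants carry no time condition; they are aimed at
    # researchers who are already established.
    return "Advanced Grant"
```

A researcher 5 years past the PhD would fall in the starting-grant window, one 10 years past in the consolidator window, under these simplified rules.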
One of the key things that the ERC is doing is empowering researchers. This is something very, very important for us, and a very good example is one of the specific characteristics of the ERC programme which is called portability. The host institution is not part of the selection criteria – it is just there to make sure that there is a legal body able to receive the contract – and this gives a lot of power to the researchers. One of the typical powers is the portability I mentioned, which means that the researcher can change the host institution if he or she feels that it is not giving them the proper treatment, or for other personal reasons. This is the whole philosophy behind it – we really want the researchers in the driver’s seat, at the level of the council but also at the level of how they run their contract. Of course there is an institution behind it, because you want to be sure that there is a legal basis for this, but we really want the researchers to be able to do their research in the best possible conditions. The map I have in front of me, which is a map of the world, has on top of it one of our mottos, which is ‘Open to the world.’ One of the conditions to be funded is that you have to spend at least 50% of your time in Europe, but you can be from any country, and we want to be sure that Europe is the leader in tackling some of the most challenging scientific problems. At the very beginning, we could see that the percentage of women who were applying to the ERC was less than the percentage of women in the scientific community, and we felt this was definitely not adequate. Also, for the ones who applied, the success rate was definitely lower than the success rate for men. Through very sustained efforts, and by identifying the issue from the very beginning, I think we have made very significant progress. So, first of all, the percentage of women applying to the ERC has been steadily growing.
We are now basically at the level where the percentage of women applying to the ERC is very similar to the percentage of women in the age group of the different calls we have. So, from that point of view, I think we really achieved something, which means that there is not some kind of resistance or reluctance of women to apply – so this is one step. And then, very importantly – and I think the two are linked – in recent years, at the ERC, women have been on average more successful than men. It’s a very slight difference, but when we started, the situation was the opposite. We are very pleased that all the efforts we made, particularly to tackle implicit bias and various other things, have been more or less successful. I have been visiting many, many countries in Europe, in particular countries in the "EU13" – the ones which joined the European Union most recently, most of them located in the eastern part of Europe – because I feel you need to understand the real situation people are exposed to in the various countries, and it is very important to realise that the situation can be quite different from one country to the next. It has to do with teaching load, it has to do with the power structures in the institutes, and it has to do, of course, with the support which is available to people. So for me it was very, very important to meet the researchers, because that is for me the key point, and also to meet the authorities in these countries and to understand in which environment they are operating, because I think that is the very best way.
We, at the level of the European Research Council, have also introduced some help, in particular by encouraging the various countries – it could also be regions – to support researchers in typically underrepresented regions (it could be EU13 countries) by giving them the possibility of spending some time, with the support of their country or region, in ERC teams. That way they see what it takes to submit a proposal and can test their ideas with other people, so that they are better prepared, personally and not just intellectually, to submit a proposal in good conditions, because they have seen what a difference it makes and what kind of effort you have to put in if you want to be successful. Originally posted on Nature Careers - 04 January 2019 - https://www.nature.com/articles/d41586-019-00016-0
- Sponsored Content Article
Working Scientist podcast: How to beat research funding's boom and bust cycle
Julie Gould and Michael Teitelbaum discuss the highs and lows of funding cycles and how to survive them as an early career researcher. In the penultimate episode of this six-part series on grants and funding, Julie Gould asks how early career researchers can develop their careers in the face of funding's "boom and bust" cycle and the short-termism it engenders. Governments are swayed by political uncertainty and technological developments, argues Michael Teitelbaum, author of Falling Behind? Boom, Bust, and the Global Race for Scientific Talent. In the US, for example, space research funding dramatically increased after the Soviet Union launched the Sputnik 1 satellite in 1957, ending after the 1969 moon landing. Similar booms followed in the 1970s, 80s, and 90s, says Teitelbaum, a Wertheim Fellow in the Labor and Worklife Program at Harvard Law School and senior advisor to the Alfred P. Sloan Foundation in New York. But he argues that they are unsustainable and can have a negative impact on the careers of junior scientists and their research. Will Brexit trigger a funding downturn, and if so, for how long? Watch this space, says Teitelbaum. Sponsored content: European Research Council (ERC) Retired Portuguese Navy captain Joaquim Alves Gaspar, a principal investigator at the Centre for the History of Science and Technology, University of Lisbon, leads the European Research Council project MEDEA-CHART, dedicated to the study of medieval and early modern nautical charts. He describes his career and the support he has received from the ERC. TRANSCRIPT Julie Gould and Michael Teitelbaum discuss the highs and lows of funding cycles and how to survive them as an early career researcher. Julie Gould Hello, I’m Julie Gould and this is Working Scientist, a Nature Careers podcast. Welcome to the fifth and penultimate episode of our series on funding.
In the previous episode, we looked at a recent major upheaval in the UK science funding environment, with the creation of UK Research and Innovation. This time, we’re looking at some of the processes that determine how funding decisions are made and have been made in the past, and what impact these decisions can have on careers in scientific research. But before we go on, don’t forget that at the end of this Working Scientist podcast, we’ve got a ten-minute sponsored slot from the European Research Council. Right, so funding – how do governments decide where to put their money? Professor Michael Teitelbaum, a demographer at the Labor and Worklife Program at Harvard Law School, has studied how funding has been allocated in the US since the world wars. He has found that funding comes in cycles, which he calls "alarm/boom/bust" cycles, and I asked Michael to give us a quick, simple introduction to what these cycles are. Michael Teitelbaum Government funding for basic research often runs in cycles. Politicians and governments decide that there needs to be more funding for basic research, and they often will raise the funding quite rapidly to show a significant effect, but then are unable to sustain that rate of increase. Sometimes the funding even declines subsequently. So, you get a cycle of boom followed by bust, over a period of perhaps a decade. My conclusion is that this is quite unhealthy for basic research, which is a quintessentially long-term kind of activity, involving long study periods to become fully professional, followed by long careers in basic research. If the funding increases sharply and then doesn’t continue to increase, or declines, that is very destabilising both for basic research itself and for career prospects in basic research. Julie Gould And why do you think governments react in such a way, by putting quite considerable sums of money towards whatever basic research they’re aiming to fund?
Michael Teitelbaum It’s not universal, but it’s common that governments are convinced by industry or by academic institutions that they have been funding basic research insufficiently, and they tend to over-respond to that kind of representation by increasing funding at levels that cannot be sustained over the longer term. Julie Gould Why would you say that these cycles are destructive towards the careers of researchers? Michael Teitelbaum Well, the problem is that basic research and careers in basic research are fundamentally long-term propositions, and this kind of funding, which lasts for a period of years and then disappears, is destabilising to a system that requires many years of graduate and advanced study and research to become a professional in basic research. Research projects take many years to develop – you can’t really achieve a great deal in basic research in only a few years – and if you study for 8-10 years or more to become a research scientist, you might find yourself, with these short cycles of funding, finishing your studies just in time to face a very poor career situation in those fields. Julie Gould In his book Falling Behind? Boom, Bust, and the Global Race for Scientific Talent, Michael explored some of these "alarm/boom/bust" cycles in the US from the past century. One of the examples he uses in the book is the shock of the Soviet Union’s successful launch of the first satellite, Sputnik 1, in 1957. Michael Teitelbaum This led to what I would consider to be a near political panic among leaders of the US government, especially people such as Lyndon Johnson, who was then majority leader in the US Senate, and led to an enormous increase in funding for space and rocketry aimed at catching up with the Soviet Union in space. That cycle ended with the success of John F Kennedy’s promise to land humans on the Moon and return them to Earth safely by the end of the 1960s.
When that spectacular goal was achieved, the political system tended to lose interest in the massive funding for the space programme, and there was a bust. The third cycle, in the 1980s, was stimulated by then President Reagan’s so-called Strategic Defense Initiative – critics called it the Star Wars Initiative – which led to massive, but only short-term, funding for that initiative. And then the final two cycles that I identify in the book were different in the sense that they weren’t military – they weren’t strategic in that sense. The first was the internet: the boom resulting from the internet becoming a commercial activity rather than a research or academic activity, and the expansion of the internet and other kinds of booms in the 1990s. Again, that was in the private sector, not in the government sector. And finally, overlapping that, was a decision by the US Congress and the presidential leadership of both parties to double funding for the National Institutes of Health over a five-year period. A massive increase for five years, averaging about 14% per year, that was then followed by flat funding for subsequent years. Julie Gould So, what cycle are we in at the moment? Michael Teitelbaum One of the characteristics of a cycle like this is that you don’t know it’s a cycle until it finishes, so we can’t be sure at this point that we’re in an ‘alarm/boom/bust’ cycle. We could just be in an alarm and boom cycle without a bust to follow – we will have to come back and talk in five years to see if there is a bust that ensues at the end. But the current boom situation is in information technology, in social media – in fields that are largely created by industry, and particularly by firms in Silicon Valley and in the Seattle area, led by Intel and Microsoft in particular. In terms of their lobbying, they argue they cannot find the skilled personnel they need to remain competitive internationally – that there’s a shortage of skilled personnel in these fields.
It’s not a new claim. It’s been a claim that was common in all of these other booms and busts over the previous half-century. But their goal is not to encourage a funding boom from the federal government for their fields because they are in the commercial sector and they’re profit-seeking firms. What they’re looking for – and they’ve been successful in their lobbying efforts – is large-scale access to temporary workers coming from low-wage countries, largely via visas with hot names like H1B and L1 and so on. They’ve been quite successful with getting these short-term, temporary workers – large numbers of them in the hundreds of thousands – claiming that otherwise they would not be able to continue to be competitive internationally. And then there’s also parallel lobbying from higher education groups. Their goals are indeed to increase research grant funding because it’s a very substantial source of revenue for them, but also to continue to have easy access to large numbers of international graduate students who pay full tuition. Julie Gould How can early career researchers keep track of these cycles and see and feel what’s happening and learn to navigate them? Michael Teitelbaum I think the key words would be pay attention and be flexible. If you’re an early career researcher or aspiring to be a researcher in one of these fields, you need to keep track of what we are discussing here in terms of increased funding from government sources or decreased funding, increased numbers of temporary visas or decreased numbers of temporary visas. All of these things will have some impact over time on your personal experience. So, you need to pay attention, for example, to the trajectories of key science funding agencies. I would say a way to do that is to pay attention to reports from credible publications that do report in an objective way on what is happening in the politics, if you will, of funding and of temporary visas. 
You would have to pay attention to the budget requests of key agencies and assess whether those requests, if they are responded to positively, are likely to be sustainable over the longer term, or likely to be short-term pulses of funding, which would be destabilising. And then, those who are already doing research and are funded by government agencies need to be cautious in responding to requests for proposals that seem to be short-term pulses of funding or boom-type funding. They need to build a portfolio, I would say, of different funding sources, rather than depend on a particular source that seems to be flush with money at the moment but may not be in the future. In other words, it’s the same kind of advice that any investment advisor would give to a client – that they should diversify their commitments and thereby reduce their exposure to risk in the future. Julie Gould Speaking of the future, and the impact that political systems have on scientific funding, and thinking back to the previous episode with James Wilsdon on the UK scientific funding environment, I asked Michael what he thought might happen with Brexit – or not. Michael Teitelbaum If that were to happen – I know there’s a great deal of concern in the UK among academic institutions about whether they would still be able to access what has become quite a large amount of basic research funding from the European Union – I think that’s all up in the air now, so I don’t think we can make any forecasts or projections about what will happen, but it’s an issue that I think should be watched. If I were a young scientist pursuing a career in basic research in the UK, I would be paying a lot of attention to this. Julie Gould Okay, well let’s chat again in five years’ time. Michael Teitelbaum (Laughs) I don’t think we need five years for that one – that’s probably two years – but it’s not now, we can’t do it now. Julie Gould So, what does this all mean?
Well, the long and short of it is that we don’t know what’s going to happen in the future, but what I think we can say is that the funding environment at the moment is a difficult one to navigate, so the skills and tools you amass for writing grant proposals will be vital for survival in the scientific workforce. In the final episode of this series, we’ll hear more about some alternative ways of distributing scientific funding that may alleviate some of the pressures that researchers face in the current, very competitive climate. Now, that’s all for this section of our Working Scientist podcast. We now have a slot sponsored by and featuring the work of the European Research Council. Joaquim Alves Gaspar tells of his work in cartography and with the European Research Council project MEDEA-CHART. Thanks for listening. I’m Julie Gould. Joaquim Alves Gaspar My name is Joaquim Alves Gaspar. I was born in Lisbon, Portugal, 69 years ago. I joined the Portuguese Navy when I was 19, and I served for about 40 years. In 2006, that is 12 years ago, I started a PhD programme on the geometric analysis and numerical modelling of old nautical charts, which I completed in 2010. In my thesis, I proposed and tested a series of cartometric methods – that means geometrical methods of analysis and numerical modelling – aimed at a better understanding of how old charts were constructed and used at sea. As soon as I got the degree, I was invited to become a member of a research centre in the Faculty of Sciences at the University of Lisbon, where I am now and where I have been working for eight years, first as a postdoctoral researcher and now, after winning the grant, as a principal investigator. Most of what I know about the technical and scientific methods related to the history of nautical cartography, I learned from the Navy.
I am not only referring to the theoretical background, which people can study from books, but also to the actual experience of conducting a ship at sea and using nautical charts for the planning and the execution of navigation. It was this knowledge and this experience that gave me the capacity to fully understand old charts – not only as historical artefacts or images of the world, which is the traditional approach, but also, and mostly, as instruments to navigate. This is something that a traditional historian of cartography is not prepared to do. By looking into those charts with the eyes of a cartographer and of a navigator, and with the assistance of the analytical and modelling tools that I have developed, I could establish a meaningful connection between the methods of chart construction of all kinds, as described in the historical sources, and the practice of navigation. This development has opened new and promising lines of research. That is what my ERC project is about. I applied for and won a starting grant in panel SH6 – that is, the study of the human past. It was at that time the first ever Portuguese proposal to be accepted in that particular panel. It was the first ever grant awarded to a project on the history of cartography and also, as far as I know, no one else is using these kinds of techniques to study old maps. The total amount of the grant is about €1.2 million, to be applied over five years. The funding will mostly be used to pay the six grantees now working with us, to cover travel expenses and to buy some equipment.
We have a team of eight members: the PI (myself), a retired Navy officer; a senior researcher, a physicist who converted to the history of science and is now the head of the department of history and philosophy of science; a postdoctoral researcher who is also a physicist by education; three PhD students; a junior computer expert who is developing our information systems; and a project manager, who is a neuroscientist by education. Of these, only one of the PhD students is an historian by education. This tells us something about what I have called the multidisciplinary nature of my project. The general objective of the project, as stated in my proposal, is to solve a series of questions which have, shall I say, eluded historians of cartography for a very long time, pertaining to the birth, the technical evolution and the use of nautical charts during the Middle Ages and the early modern period. For example, we want to clarify when, how, why and where the first nautical charts were constructed. This is a very popular subject among the international community of historians of cartography. Not only have we been very successful in bringing many of them into the discussion, but significant progress has also been made in the last year. For example, there is now a consensus among us that the oldest nautical charts were constructed using navigational information collected by pilots at sea, and that certain distortions affecting the old charts were caused by the use of magnetic compasses to navigate, which, as you know, don’t point exactly to the geographical north. The difference is the so-called magnetic declination.
The novelty in my project is that we intend to provide good answers to those questions by using what we call a multidisciplinary approach, including novel techniques of geometrical analysis, numerical modelling, carbon-14 dating and multispectral analysis of the old parchments, which will complement, of course, the traditional methods of historical research. So far, one and a half years after the project started, the results are promising. Aim as high as possible, and don’t just give it a try – do it using everything you’ve got. Don’t be humble. ERC grants are intended to be given to the very best researchers proposing the best projects. If you are confident that you have an excellent idea – one that will make the panel members rise out of their chairs – and that you are the right person to make it work, then don’t be shy. Go for it. However, having made the decision to proceed to the next stage, you will now need a great deal of humility to be able to create the best possible proposal. The reason is that you will have to engage in an extremely competitive process with highly competent and motivated people. In other words, you will have to work hard and be professional. It took me a full year to write the proposal, despite my experience and background. Let me elaborate a bit on this. You know you have a wonderful idea, otherwise you wouldn’t have engaged in the process. The job now will be to organise each idea into a meaningful and viable project and, of course, to convince the evaluation panel that you are the best possible person to make it work. Don’t leave anything to fortune or chance, so that you won’t blame yourself for not taking all the variables into account. That’s all I have to advise. One of the unwritten goals of the project is to pass on the message.
I won’t live forever, and I want my methods and my techniques to be passed on and used by other people, and Portugal is the best place, because I also want to give a push to research on this subject in Portugal. Originally posted on Nature Careers - 01 February 2019 - https://www.nature.com/articles/d41586-019-00403-7
- Sponsored Content Article
Working Scientist podcast: The grant funding lottery and how to fix it
Julie Gould discusses some radical alternatives to the current grant funding system to help address bias and better support early career researchers. In the final episode of our six-part series on funding, Ferric Fang, a professor in the departments of laboratory medicine and microbiology at the University of Washington, Seattle, describes how a two-tier "modified lottery" could be a fairer process, with funding randomly prioritised among applications judged to have sufficient merit, and unsuccessful applications revised to re-enter the lottery. New Zealand's Health Research Council already operates a similar system, says Vernon Choy, the council's director of research investments and contracts. Its Explorer Grants panel does not discuss rankings but instead judges whether an application's proposals are viable and whether they meet an agreed definition of "transformative." These applications then go into a pool, and a random number generator is applied to allocate funding based on the budget available. Because applications are anonymised, Choy says there is no bias against a particular institution or research team, allowing young and inexperienced researchers to compete more fairly against senior colleagues. Johan Bollen, a professor at Indiana University's school of informatics, computing and engineering, describes how a Self Organising Funding Allocation system (SOFA) would work, removing the burden of writing grant applications. "What if we just give everybody a pot of money at the beginning of the year and then redistribute a certain percentage to others?" he asks. Paid content: European Research Council "We are open to the world," says European Research Council president Jean-Pierre Bourguignon. Its grantees straddle 80 nationalities and the organisation has signed collaboration agreements with 11 countries, including China, India, Brazil, Australia and Japan.
Helen Tremlett, who leads the pharmacoepidemiology in multiple sclerosis research group at the University of British Columbia, Canada, spent time in the lab of an ERC grantee at the Max Planck Institute in Munich, Germany. This experience, along with the publication of a 2011 paper in Nature looking at how the gut microbiome may be influential in triggering the animal model of MS, had career-changing consequences, leading her down a new research path. TRANSCRIPT Julie Gould discusses some radical alternatives to the current grant funding system to help address bias and better support early career researchers. Julie Gould: Hello, I’m Julie Gould and this is Working Scientist, a Nature Careers podcast. This is the final episode of our series on funding, but just a quick note: don’t forget that there’s also a final ten-minute sponsored slot at the end of this Working Scientist podcast from the European Research Council. Now, throughout this series, we’ve heard a lot about funding – what’s the best way to prepare for writing a grant, how to write that grant, how to make sure it gets read, how to prepare for an interview should you have one – and then we looked a little more broadly at the funding environment. Now, one of the things that I found really interesting, if we look back at the very first episode, is something that Elizabeth Pier said about what her research suggested. Elizabeth Pier: Given the top 50% of proposals, after you’ve already excluded the ones that really have no chance of being funded initially, there really is a lot of randomness. But even more so, there’s already randomness such that the applications that have been weeded out, so to speak, and don’t get the opportunity to be discussed in the meeting, might actually have a lot of merit. Had an application been assigned to a different panel with different reviewers, it very well could have gone on to be discussed.
Julie Gould: So, what you’re saying really is that luck plays a very large role in whether or not your research gets funded. Elizabeth Pier: Yes, that is what our results suggest. Julie Gould: And then add to that what Michael Teitelbaum mentioned in our fifth episode – that the NIH has experienced a period of flat funding for the last couple of decades, which has added stress to the system. Michael Teitelbaum: In the 1990s, there was a decision by the US Congress and the presidential leadership of both parties to double funding over a five-year period for the National Institutes of Health – a massive increase for five years, averaging about 14% per year, that then was followed by flat funding for subsequent years. Julie Gould: As Michael mentioned, it’s difficult to tell whether or not you’re in a boom/bust cycle when you’re actually in it, but this prolonged period of flat funding might not be part of a cycle at all. It might be a new norm. Ferric Fang: And I think for a long time, people thought this is going to be cyclical – things are good and then they’re bad and then they’re good again, and we just have to wait. But I think it’s gradually dawned on people that it’s not cyclical in any kind of an orderly way, and that it may be the new normal for scientific funding, where there’s a shortage of funding for the size of the workforce and there’s a problem with job opportunities for new trainees, and this is something that I think is belatedly being addressed. Julie Gould: So, that was Ferric Fang, a professor at the University of Washington in Seattle, and he, like many others, is concerned that the current funding system in the United States isn’t working. So, in a time when there’s inadequate funding for the size of the scientific workforce and researchers are spending increasing amounts of time applying for this funding, what is the best way of allocating not-enough money to more researchers than the system can support?
So, Ferric and a colleague of his, Arturo Casadevall, suggest that a modified lottery system, like the one Libby Pier suggested in the first episode of this series, could be the answer. Ferric Fang: And we came up with the idea of a two-tiered lottery system where initially there could be a review to divide grants into two hypothetical stacks – high-quality grants and then the others. The other grants could be sent back to be revised and hopefully improved, and many of them could come back and eventually enter the lottery. Then you would have the grants which are all judged to be of high enough quality to be supported, and you would see how much funding was available, randomly prioritise the grants, and fund accordingly. And you could introduce lots of nuances into the system, in terms of the number of grants that any given investigator could have in the lottery. Julie Gould: Now, as well as the benefits of reducing the amount of money and time spent on peer review, Ferric and Arturo argue that it could have wider implications for the entire funding environment. Ferric Fang: A school that had a large number of researchers could be reasonably certain, based on the laws of probability, that they would get a fairly predictable amount of funding based on the meritorious work that their researchers were doing. Even though there would be small fluctuations, I think because of the large numbers it would even out. Another thing you could do is go to policymakers and say: this is the number of meritorious proposals that our scientific enterprise is producing, and yet we’re only funding a small percentage of them, and this could be the basis for making more rational assessments of how much research funding should really be allocated in a budget. Julie Gould: I asked Ferric what he thought people might think of this modified lottery-style funding.
Ferric Fang: I think a lot of people's initial reaction would be that it would be leaving the future of the scientific enterprise to chance. But it's no more irrational than hedging your bets when you're investing economic resources for your future and trying to figure out how to build a diversified portfolio. We really want to make sure that our blind spots, in terms of our biases, aren't preventing us from funding ideas that could be truly transformative for society in the future.

Julie Gould: Now, actually, this modified lottery system does already exist. To fund innovative and transformative research, the Health Research Council of New Zealand set up its Explorer Grant, which operates as a modified lottery. Vernon Choy, director of Research Investments and Contracts at the Health Research Council, told me a little bit about their system and how it's working for them.

Vernon Choy: So, the way that it works is we do use a panel, but the panel does not discuss the ranking of the applications that come through to us. What they do is provide us with an opinion on whether the application is transformative, and we have a particular definition of transformative. So, they must decide whether the application is transformative, and they must also decide whether the research proposed is viable. Having reached the point where the panel agrees that an application is fundable and meets the requirements of the Explorer Grant guidelines, those applications go into a pool, or 'thunderball', and we use a random number generator to allocate the funds: the applications are ranked by their random numbers, and we fund according to that random rank to fit within the budget available for that particular round.

Julie Gould: And how has this particular mode of allocating funding been received by the scientists and health researchers in New Zealand?

Vernon Choy: Well, it's surprising.
At the time, we felt that this was going to be highly controversial, and in some respects it was and still is, and obviously there has been continued interest internationally in the Explorer Grant. But from our point of view, our researchers have accepted both the way that we allocate the funds and the way that we determine eligibility or fundability. We did a survey back in 2017 of people who had applied for the fund, to gauge their thoughts on the format, the allocation method and the processes overall, and basically we had quite good feedback from everybody. One of the things I haven't talked about is that the applications are anonymised, so that in determining whether an application is eligible there's no bias against any particular institution or any particular team of researchers. When the process was first announced, we had a huge number of applications, and one of the reasons, we were told, was that this was a fund that allowed young, inexperienced researchers to compete against senior researchers, because there was no bias towards the experienced researchers. The other thing that we've investigated is the gender balance in the applications, because of the anonymisation. I would like to say that there was no gender bias in these applications, but from our initial look at the numbers there is still a slight bias towards men. Not a huge one, only 3-4%, but it's still slightly different between men and women. It's difficult to say why that might be; potentially it could be the style of text and the way that people write. But apart from that, we're quite happy with the Explorer Grant so far, and I'm expecting that the funds we have available to allocate this way will increase.

Julie Gould: So, only time will tell whether or not this is really a great system, and maybe expanding it further will give people a better idea of how it will work across a larger research system.
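The allocation step that both Ferric Fang's two-tiered lottery and the Explorer Grant share, a panel triage followed by random ranking within a budget, can be sketched in a few lines of Python. This is only an illustration: the application names, costs and budget below are hypothetical, not taken from either scheme.

```python
import random

def lottery_allocate(applications, budget, seed=None):
    """Randomly prioritise panel-approved applications and fund within budget.

    `applications` is a list of dicts with keys:
      'name'     - identifier (anonymised in the HRC process)
      'fundable' - panel judgement: is the proposal of high enough quality?
      'cost'     - requested funding
    """
    rng = random.Random(seed)
    # Tier 1: keep only applications the panel judged fundable; in Fang's
    # scheme, the rest would be revised and could re-enter a later lottery.
    pool = [a for a in applications if a['fundable']]
    # Tier 2: random prioritisation (the HRC uses a random number generator).
    rng.shuffle(pool)
    funded, spent = [], 0
    for app in pool:
        if spent + app['cost'] <= budget:
            funded.append(app['name'])
            spent += app['cost']
    return funded, spent

# Hypothetical round: four applications, one judged not yet fundable.
apps = [
    {'name': 'A', 'fundable': True,  'cost': 100},
    {'name': 'B', 'fundable': False, 'cost': 80},   # sent back for revision
    {'name': 'C', 'fundable': True,  'cost': 150},
    {'name': 'D', 'fundable': True,  'cost': 120},
]
funded, spent = lottery_allocate(apps, budget=250, seed=1)
```

Whatever order the draw produces, the non-fundable application never receives money and the total awarded never exceeds the round's budget.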
But there are others who are taking different approaches, and one of these comes from Johan Bollen, a professor at Indiana University. He and his colleague Marten Scheffer, out of sheer frustration with the time-consuming and expensive funding system that's currently in place, thought: well, what if we just give everybody a pot of money at the beginning of the year, and then implement a rule that everybody has to redistribute a certain percentage of their money to other scientists? They've called it 'self-organised funding allocation', or SOFA for short. Here's Johan describing how it works.

Johan Bollen: Essentially, you're a young researcher, you've just been hired as an assistant professor, and at the end of the year you receive a fixed and unconditional amount of base funding in your research funding account at the university. You know that the re-donation fraction is 50%, which means that you can keep 50% of that, and the other 50% you have to donate to other researchers of your choosing. You log into a website that could be run by the National Science Foundation and you enter the names. There could even be a pull-down list. There could even be, I wouldn't call it a recommendation system, but an auto-completion system where you enter the names of the scientists that you would like to donate a fraction of that 50% to. When the system has determined that you have completed the list of names and the relative fractions, and that it adds up to the 50% of the money you have received, you hit submit and you're done. The next year you receive the same base amount, and perhaps funding from other scientists who saw you speak at a conference or read your paper and really liked the work that you do and would like to support it.
You add it all up; again you take 50% for your own research needs, and for the other 50% you log into the website, enter the names of the individuals and what percentage of the money you're supposed to donate you wish to give to each of them, hit submit, and you're done for another round.

Julie Gould: But how would you then decide who to give your money to? I mean, you want to get rid of the time-consuming grant-proposal writing, and yes, I know it can be a painful process, but then how does a person decide who to give their money to if they don't have all these grant proposals to read?

Johan Bollen: This question gets asked a lot: how do you know who to give your money to? The thing is that, as scientists, you're supposed to know who does the most exciting work in your area. That's how we write our papers. If you look at the bibliographies in our papers, our references and so on, they're essentially a testament to the obligation we have to stay abreast of developments in our area. You're not very good as a scientist if you don't know about the work that's happening in your research area. The same assumption holds if people have to make decisions about who to pass their money on to, and you can actually show mathematically that, under the right conditions, this process of money being passed from one person to the next could lead to a convergence of funding across the entire community that reflects the knowledge not just of one particular individual, but of all the individuals who participate in the system.

Julie Gould: What would stop people from just funding their colleagues, their collaborators or even their friends?

Johan Bollen: First of all, I don't know whether that's such a bad thing to begin with.
People do collaborate, and they don't just collaborate within institutions, they collaborate externally. But if you're really concerned about it, you could very easily enforce the exact same kind of conflict-of-interest rules that we have right now for the submission and review of proposals. For example, you could introduce a rule that you couldn't donate to people within your own institution, or that you couldn't donate to the same people more than two years in a row. You could even mandate that a given fraction of your money goes to underrepresented groups. So, there are a lot of social distortions that you could fix very easily by limiting, on the basis of very reasonable arguments, who the money can be donated to.

Julie Gould: And what about the early-career researchers, those who are just starting off in their career in science? How do they promote themselves in order to get some of the funding from other people?

Johan Bollen: Well, first of all, everybody receives the same amount of funding regardless of merit or how well known you are; everybody receives the same base amount, so all of those young researchers have the base amount to begin with. Then, of course, there's a challenge in getting your name out and convincing the community at large that you're doing good work. That involves going to conferences, giving presentations, getting in touch with your colleagues. These are the kinds of things that young researchers do anyway, but now, of course, it would be crucial to getting their name out. So, I think it would benefit the overwhelming majority of early-career researchers.

Julie Gould: Nobody really knows how the scientific funding system is going to organise itself over the coming years, but I would be really keen to hear your thoughts. What do you think of this concept of a self-organised funding system, or even the modified lottery system that is already in place in New Zealand?
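The redistribution rule that Johan Bollen outlines can be sketched as a toy simulation. The 50% re-donation fraction is the one he mentions; the researcher names, base amount and donation choices below are hypothetical, chosen only to show one round of the mechanism.

```python
BASE = 100_000   # equal, unconditional yearly base funding per researcher
KEEP = 0.5       # fraction kept for one's own research; the rest is re-donated

def sofa_round(received, shares):
    """Run one yearly round of self-organised funding allocation.

    `received` maps researcher -> donations received in the previous round.
    `shares` maps researcher -> {recipient: fraction of their donated half};
    each researcher's fractions must sum to 1.
    Returns (budgets kept this year, donations carried into next year).
    """
    budgets, next_received = {}, {r: 0.0 for r in received}
    for r, prev in received.items():
        income = BASE + prev              # base amount plus last round's donations
        budgets[r] = income * KEEP        # kept for this researcher's own work
        to_give = income * (1 - KEEP)     # must be passed on to others
        for recipient, frac in shares[r].items():
            next_received[recipient] += to_give * frac
    return budgets, next_received

# Three hypothetical researchers; Ana and Ben both choose to back Chloe.
received = {'Ana': 0.0, 'Ben': 0.0, 'Chloe': 0.0}
shares = {
    'Ana':   {'Chloe': 1.0},
    'Ben':   {'Chloe': 1.0},
    'Chloe': {'Ana': 0.5, 'Ben': 0.5},
}
budgets, received = sofa_round(received, shares)
```

After the first round, everyone keeps the same 50,000 for their own work, but Chloe carries 100,000 of donations into the next year because two colleagues backed her. Iterating `sofa_round` is where the convergence Bollen refers to would emerge; conflict-of-interest rules could be added by constraining the `shares` dictionaries.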
Or have you got any experiments or paradigm-shifting ideas of how the funding system could be changed? If you have, get in touch; we would really like to hear from you. Something else we'd like to hear from you about is what series you would like to have on the Working Scientist podcast. We've now finished our series on funding, but what else do you want to know about? Each series will have five or six episodes with a variety of experts on a particular topic, and we'd like your input into how to shape our future series. So, if you have any thoughts or burning desires about what you'd like to know more about, then get in touch with the Nature Careers team.

I want to give one final thank you to everybody who has contributed: Johan Bollen, Vernon Choy and Ferric Fang from this episode, as well as Michael Teitelbaum, James Wilsdon, Peter Gorsuch, Anne-Marie Coriat, Jernej Zupanc and Elizabeth Pier. Thank you again for contributing your thoughts and ideas to this series. And that is the end of this series of the Working Scientist podcast. But before you go, just a reminder that there is one last sponsored slot, by and featuring the work of the European Research Council. In this slot we hear from the president of the ERC, Jean-Pierre Bourguignon, and then from Professor Helen Tremlett of the University of British Columbia in Canada. Thanks for listening. I'm Julie Gould.

Jean-Pierre Bourguignon: My name is Jean-Pierre Bourguignon, and my function is to be the President of the European Research Council. I've been in this position for five years now, and I still have one year to go in my mandate. The search for my successor has started.
When you have reached such a level of success, the first priority is of course making sure that you are still in a good position to continue that success, and the main priority for 2019 will be to revisit basically every way we do the evaluation, because we know we have some challenges. For example, some of the panels have reached a size which means we have to think about organising things slightly differently, because to do a good job, evaluators cannot have too many applications; otherwise they cannot dedicate enough attention to them. So, we are really going to go through a very, very thorough check of all our evaluation systems, of course taking advantage of all the knowledge accumulated by the scientists who are members of our evaluation panels, but also really trying to take advantage of not being too frozen, too rigid or too persistent in the way we structure these things. I've said we cover all domains of science, but science is changing all the time, so you want to be sure that you adapt to new emerging fields quickly enough and bring on board all the right competent people. This is really for the immediate future, because that's a priority for 2019. We also want to announce the new way we want to do the evaluation early enough that the scientific community will be ready when it is put in place in 2021, and will have absorbed these changes, understood them and really adopted them, and in particular so that we will be able to continue to convince the very best scientists in the world to participate in the evaluation.

Well, first of all, 'open to the world' is one of our mottos. It means, of course, that we already have on board scientists of, I think, about 80 nationalities, so it's not just Europeans who are a part of it. But another part is for the ERC to interact with agencies in other countries of the world.
We have already signed agreements with 11 countries. For the moment, these agreements are of the type that researchers from these countries, funded by these agencies, can visit and spend time in some of the ERC teams.

Helen Tremlett: So, my name is Helen Tremlett. I'm a professor in the faculty of medicine and neurology at the University of British Columbia in Canada, and I'm Canada Research Chair in Neuroepidemiology and Multiple Sclerosis. I'm a British citizen and a Canadian citizen, and I've been here since 2001. I was part of a programme between the Canadian government and the ERC which enabled Canada Research Chair holders to spend time in the lab of someone who holds ERC funding. So, it was a great opportunity to bring together individuals who have complementary skills and can learn from each other and develop collaboration over the long term. And it was a wonderful opportunity. I was based at the Max Planck Institute on the edge of Munich, where they were focused on the gut microbiome in multiple sclerosis. It was so exciting. In 2011, and I can even remember the day, Nature published a paper looking at the animal model of multiple sclerosis and how the gut microbiome may be influential in triggering it. I had no idea that people were even thinking about this, and it led me down a whole new research path. Now I'm actually coordinating principal investigator on a study in which we're collecting stool samples from children with multiple sclerosis, and controls, across Canada and the US, so it was thrilling for me to spend time in the lab whose work had really pushed me onto this path. It was a lot of fun. There are no additional funds attached to it, but it meant that it was a formal opportunity and your salary continued as such, without a break. You didn't have to take it as sabbatical leave or anything like that.
And I was just there for two months, but it was a really great two months.

Jean-Pierre Bourguignon: During my time we have signed agreements with China, with India, with Brazil, with Australia, with Japan; these are, of course, countries which play a very critical role worldwide. I should also mention South Africa, with which we have also developed a very interesting collaboration. Still, we want to have more tools for the future. For the moment, the only tools we have are the ones I described, namely visits by scientists from these places to ERC teams. We hope that in the next framework programmes, some more agility will be given to the scientific council, including the possibility to also accompany researchers from our teams who want to visit abroad. One of the very simple principles of international collaboration is reciprocity: what is possible in one direction should be possible in the other. For the moment, as you heard, the only possibility is for people from these countries to come and visit Europe. We would also like to help and accompany researchers in Europe who hold ERC contracts when they want to visit researchers in other countries outside Europe.

Something we reintroduced very recently are the so-called Synergy calls. It's a different call from the other ones, which are really for individual principal investigators, as we name them. In the case of Synergy, it's really to encourage more ambitious, more global projects with two, three or four PIs (principal investigators). Of course, the idea is not to create a consortium. It's really the idea that people come up with a truly challenging scientific problem they want to address, and we call it Synergy because we want them to convince us that they are really the right group of people to tackle it. In particular, we see this as a very specific place where interdisciplinary work can be developed.
So, in a sense we wanted to create a space where people who need resources, skills, knowledge and expertise from different fields can come together to tackle a very well-identified problem and do it together. We have run only one such call so far, for the year 2018, and we have just published the results: 27 projects have been supported. I didn't mention the overall number of projects we have supported; we are typically at 9,000 projects overall, but the Synergy projects address very interesting new challenges. So, this is another dimension that the ERC and the scientific council want to tackle: to acknowledge the great importance of interdisciplinary work for the future development of research, and that people need to learn how to work together. But the way we do it is, again, under a very strict bottom-up philosophy. We just want people to come forward with ambitious projects and very challenging problems they want to tackle, and to try to convince the evaluators that they are the right people to do that and that they have assembled the people who can do it in the best possible way. So, this dimension of Synergy is also about making sure that Europe is the leader in tackling some of the most challenging scientific problems.

Originally posted on Nature Careers - 08 February 2019 - https://www.nature.com/articles/d41586-019-00525-y