Search results for: mapping methodologies
50 IoT Continuous Monitoring of Biochemical Oxygen Demand in Wastewater Effluent Quality: Machine Learning Algorithms
Authors: Sergio Celaschi, Henrique Canavarro de Alencar, Claudecir Biazoli
Abstract:
Effluent quality is of the highest priority for compliance with the permit limits of environmental protection agencies and ensures the protection of the local water system. Of the pollutants monitored, the biochemical oxygen demand (BOD) poses one of the greatest challenges. Delayed BOD5 results from the laboratory take 7 to 8 analysis days, hindering the wastewater treatment plant's (WWTP) ability to react to different situations and meet treatment goals. Reducing BOD turnaround time from days to hours is our quest. This work presents a solution based on a system of two BOD bioreactors associated with Digital Twin (DT) and Machine Learning (ML) methodologies via an Internet of Things (IoT) platform to monitor and control a WWTP and support decision making. A DT is a virtual and dynamic replica of a production process. It requires the ability to collect and store real-time sensor data related to the operating environment. Furthermore, it integrates and organizes the data on a digital platform and applies analytical models, allowing a deeper understanding of the real process in order to catch anomalies sooner. In our system for continuous monitoring of the BOD suppressed by the effluent treatment process, the DT algorithm analyzing the data applies ML to a parameterized chemical kinetic model. The continuous BOD monitoring system, capable of providing results in a fraction of the time required by BOD5 analysis, is composed of two thermally isolated batch bioreactors. Each bioreactor contains input/output access for wastewater samples (influent and effluent); hydraulic conduction tubes, pumps, and valves for the batch sample and dilution water; an air supply for dissolved oxygen (DO) saturation; a cooler/heater for sample thermal stability; an optical DO sensor based on fluorescence quenching; pH, ORP, temperature, and atmospheric pressure sensors; and a local PLC/CPU with a TCP/IP data transmission interface. The dynamic BOD monitoring range covers 2 mg/L < BOD < 2,000 mg/L. In addition to the BOD monitoring system, there are many other operational WWTP sensors. The CPU data is transmitted to and received from the digital platform, which in turn performs analyses at periodic intervals to feed the learning process. BOD bulletins and their credibility intervals are made available to web users at 12-hour intervals. The chemical kinetics ML algorithm is composed of a coupled system of four first-order ordinary differential equations for the molar masses of DO, the organic material present in the sample, biomass, and the products (CO₂ and H₂O) of the reaction. This system is solved numerically together with its initial conditions: DO (saturated) and initial products of the kinetic oxidation process, CO₂ = H₂O = 0. The initial values for organic matter and biomass are estimated by minimization of the mean square deviations. A real case of continuous monitoring of BOD wastewater effluent quality is being conducted by deploying an IoT application on a large wastewater purification system located in S. Paulo, Brazil.
Keywords: effluent treatment, biochemical oxygen demand, continuous monitoring, IoT, machine learning
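As an illustration of the kinetic-model fitting step described above, the sketch below solves a coupled system of four first-order ODEs (DO, organic matter, biomass, reaction products) and estimates the unknown initial organic matter and biomass by least squares against measured DO. The rate expressions, rate constants, and yield coefficients are illustrative assumptions, not the authors' calibrated model.

```python
# Minimal sketch of the DT/ML kinetic step: solve 4 coupled first-order ODEs
# (DO, organic matter S, biomass X, products P) and fit the unknown initial
# values S0, X0 by minimizing mean square deviations from measured DO.
# Rate laws and constants below are illustrative assumptions only.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

K1, KD, Y, ALPHA, BETA = 0.25, 0.05, 0.5, 1.0, 0.8   # assumed kinetic parameters

def rhs(t, y):
    do, s, x, p = y                      # dissolved oxygen, substrate, biomass, products
    r = K1 * s                           # first-order oxidation of organic matter
    return [-ALPHA * r,                  # DO consumed by oxidation
            -r,                          # organic matter depleted
            Y * r - KD * x,              # biomass growth minus decay
            BETA * r]                    # CO2 + H2O produced

def simulate(s0, x0, do_sat, t_eval):
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [do_sat, s0, x0, 0.0],
                    t_eval=t_eval, rtol=1e-8)
    return sol.y[0]                      # simulated DO trajectory

def fit_initial_state(t_meas, do_meas, do_sat):
    """Estimate S0 and X0 from the measured DO depletion curve."""
    resid = lambda theta: simulate(theta[0], theta[1], do_sat, t_meas) - do_meas
    out = least_squares(resid, x0=[50.0, 5.0], bounds=([0, 0], [2000, 500]))
    return out.x                         # (S0, X0) in mg/L

if __name__ == "__main__":
    t = np.linspace(0, 12, 25)                            # hours
    synthetic = simulate(120.0, 8.0, 8.5, t)              # pretend field data
    s0_hat, x0_hat = fit_initial_state(t, synthetic, 8.5)
    print(f"estimated S0={s0_hat:.1f} mg/L, X0={x0_hat:.1f} mg/L")
```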
Procedia PDF Downloads 72
49 Prevalence and Diagnostic Evaluation of Schistosomiasis in School-Going Children in Nelson Mandela Bay Municipality: Insights from Urinalysis and Point-of-Care Testing
Authors: Maryline Vere, Wilma ten Ham-Baloyi, Lucy Ochola, Opeoluwa Oyedele, Lindsey Beyleveld, Siphokazi Tili, Takafira Mduluza, Paula E. Melariri
Abstract:
Schistosomiasis, caused by Schistosoma (S.) haematobium and Schistosoma (S.) mansoni parasites poses a significant public health challenge in low-income regions. Diagnosis typically relies on identifying specific urine biomarkers such as haematuria, protein, and leukocytes for S. haematobium, while the Point-of-Care Circulating Cathodic Antigen (POC-CCA) assay is employed for detecting S. mansoni. Urinalysis and the POC-CCA assay are favoured for their rapid, non-invasive nature and cost-effectiveness. However, traditional diagnostic methods such as Kato-Katz and urine filtration lack sensitivity in low-transmission areas, which can lead to underreporting of cases and hinder effective disease control efforts. Therefore, in this study, urinalysis and the POC-CCA assay was utilised to diagnose schistosomiasis effectively among school-going children in Nelson Mandela Bay Municipality. This was a cross-sectional study with a total of 759 children, aged 5 to 14 years, who provided urine samples. Urinalysis was performed using urinary dipstick tests, which measure multiple parameters, including haematuria, protein, leukocytes, bilirubin, urobilinogen, ketones, pH, specific gravity and other biomarkers. Urinalysis was performed by dipping the strip into the urine sample and observing colour changes on specific reagent pads. The POC-CCA test was conducted by applying a drop of urine onto a cassette containing CCA-specific antibodies, and the presence of a visible test line indicated a positive result for S. mansoni infection. Descriptive statistics were used to summarize urine parameters, and Pearson correlation coefficients (r) were calculated to analyze associations among urine parameters using R software (version 4.3.1). Among the 759 children, the prevalence of S. haematobium using haematuria as a diagnostic marker was 33.6%. Additionally, leukocytes were detected in 21.3% of the samples, and protein was present in 15%. The prevalence of positive POC-CCA test results for S. mansoni was 3.7%. Urine parameters exhibited low to moderate associations, suggesting complex interrelationships. For instance, specific gravity and pH showed a negative correlation (r = -0.37), indicating that higher specific gravity was associated with lower pH. Weak correlations were observed between haematuria and pH (r = -0.10), bilirubin and ketones (r = 0.14), protein and bilirubin (r = 0.13), and urobilinogen and pH (r = 0.12). A mild positive correlation was found between leukocytes and blood (r = 0.23), reflecting some association between these inflammation markers. In conclusion, the study identified a significant prevalence of schistosomiasis among school-going children in Nelson Mandela Bay Municipality, with S. haematobium detected through haematuria and S. mansoni identified using the POC-CCA assay. The detection of leukocytes and protein in urine samples serves as critical biomarkers for schistosomiasis infections, reinforcing the presence of schistosomiasis in the study area when considered alongside haematuria. These urine parameters are indicative of inflammatory responses associated with schistosomiasis, underscoring the necessity for effective diagnostic methodologies. Such findings highlight the importance of comprehensive diagnostic assessments to accurately identify and monitor schistosomiasis prevalence and its associated health impacts. 
The significant burden of schistosomiasis in this population highlights the urgent need to develop targeted control interventions to effectively reduce its prevalence in the study area.
Keywords: schistosomiasis, urinalysis, haematuria, POC-CCA
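A compact sketch of the descriptive statistics reported above (prevalence from dipstick positivity and pairwise Pearson correlations among urine parameters). The analysis in the study was run in R 4.3.1, so the pandas version below, with its made-up column names, is only an illustrative equivalent.

```python
# Illustrative re-computation of prevalence and Pearson correlations (r)
# among dipstick parameters; file and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("urinalysis_results.csv")        # assumed file: one row per child

# Prevalence of S. haematobium using haematuria as the diagnostic marker
prev_haematuria = 100 * (df["haematuria"] > 0).mean()
prev_poc_cca = 100 * (df["poc_cca_positive"] == 1).mean()
print(f"haematuria prevalence: {prev_haematuria:.1f}%  POC-CCA: {prev_poc_cca:.1f}%")

# Pairwise Pearson correlation matrix for the semi-quantitative dipstick readings
params = ["haematuria", "protein", "leukocytes", "bilirubin",
          "urobilinogen", "ketones", "ph", "specific_gravity"]
print(df[params].corr(method="pearson").round(2))
```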
Procedia PDF Downloads 18
48 Lessons Learned through a Bicultural Approach to Tsunami Education in Aotearoa New Zealand
Authors: Lucy H. Kaiser, Kate Boersen
Abstract:
Kura Kaupapa Māori (kura) and bilingual schools are primary schools in Aotearoa/New Zealand which operate fully or partially under Māori custom and have curricula developed to include Te Reo Māori and Tikanga Māori (Māori language and cultural practices). These schools were established to support Māori children and their families through reinforcing cultural identity by enabling Māori language and culture to flourish in the field of education. Māori kaupapa (values), Mātauranga Māori (Māori knowledge) and Te Reo are crucial considerations for the development of educational resources developed for kura, bilingual and mainstream schools. The inclusion of hazard risk in education has become an important issue in New Zealand due to the vulnerability of communities to a plethora of different hazards. Māori have an extensive knowledge of their local area and the history of hazards which is often not appropriately recognised within mainstream hazard education resources. Researchers from the Joint Centre for Disaster Research, Massey University and East Coast LAB (Life at the Boundary) in Napier were funded to collaboratively develop a toolkit of tsunami risk reduction activities with schools located in Hawke’s Bay’s tsunami evacuation zones. A Māori-led bicultural approach to developing and running the education activities was taken, focusing on creating culturally and locally relevant materials for students and schools as well as giving students a proactive role in making their communities better prepared for a tsunami event. The community-based participatory research is Māori-centred, framed by qualitative and Kaupapa Maori research methodologies and utilizes a range of data collection methods including interviews, focus groups and surveys. Māori participants, stakeholders and the researchers collaborated through the duration of the project to ensure the programme would align with the wider school curricula and kaupapa values. The education programme applied a tuakana/teina, Māori teaching and learning approach in which high school aged students (tuakana) developed tsunami preparedness activities to run with primary school students (teina). At the end of the education programme, high school students were asked to reflect on their participation, what they had learned and what they had enjoyed during the activities. This paper draws on lessons learned throughout this research project. As an exemplar, retaining a bicultural and bilingual perspective resulted in a more inclusive project as there was variability across the students’ levels of confidence using Te Reo and Māori knowledge and cultural frameworks. Providing a range of different learning and experiential activities including waiata (Māori songs), pūrākau (traditional stories) and games was important to ensure students had the opportunity to participate and contribute using a range of different approaches that were appropriate to their individual learning needs. Inclusion of teachers in facilitation also proved beneficial in assisting classroom behavioral management. Lessons were framed by the tikanga and kawa (protocols) of the school to maintain cultural safety for the researchers and the students. Finally, the tuakana/teina component of the education activities became the crux of the programme, demonstrating a path for Rangatahi to support their whānau and communities through facilitating disaster preparedness, risk reduction and resilience.Keywords: school safety, indigenous, disaster preparedness, children, education, tsunami
Procedia PDF Downloads 120
47 Impact of Lack of Testing on Patient Recovery in the Early Phase of COVID-19: Narratively Collected Perspectives from a Remote Monitoring Program
Authors: Nicki Mohammadi, Emma Reford, Natalia Romano Spica, Laura Tabacof, Jenna Tosto-Mancuso, David Putrino, Christopher P. Kellner
Abstract:
Introductory Statement: The onset of the COVID-19 pandemic demanded an unprecedented need for the rapid development, dispersal, and application of infection testing. However, despite the impressive mobilization of resources, individuals were incredibly limited in their access to tests, particularly during the initial months of the pandemic (March-April 2020) in New York City (NYC). Access to COVID-19 testing is crucial in understanding patients’ illness experiences and integral to the development of COVID-19 standard-of-care protocols, especially in the context of overall access to healthcare resources. Succinct Description of basic methodologies: 18 Patients in a COVID-19 Remote Patient Monitoring Program (Precision Recovery within the Mount Sinai Health System) were interviewed regarding their experience with COVID-19 during the first wave (March-May 2020) of the COVID-19 pandemic in New York City. Patients were asked about their experiences navigating COVID-19 diagnoses, the health care system, and their recovery process. Transcribed interviews were analyzed for thematic codes, using grounded theory to guide the identification of emergent themes and codebook development through an iterative process. Data coding was performed using NVivo12. References for the domain “testing” were then extracted and analyzed for themes and statistical patterns. Clear Indication of Major Findings of the study: 100% of participants (18/18) referenced COVID-19 testing in their interviews, with a total of 79 references across the 18 transcripts (average: 4.4 references/interview; 2.7% interview coverage). 89% of participants (16/18) discussed the difficulty of access to testing, including denial of testing without high severity of symptoms, geographical distance to the testing site, and lack of testing resources at healthcare centers. Participants shared varying perspectives on how the lack of certainty regarding their COVID-19 status affected their course of recovery. One participant shared that because she never tested positive she was shielded from her anxiety and fear, given the death toll in NYC. Another group of participants shared that not having a concrete status to share with family, friends and professionals affected how seriously onlookers took their symptoms. Furthermore, the absence of a positive test barred some individuals from access to treatment programs and employment support. Concluding Statement: Lack of access to COVID-19 testing in the first wave of the pandemic in NYC was a prominent element of patients’ illness experience, particularly during their recovery phase. While for some the lack of concrete results was protective, most emphasized the invalidating effect this had on the perception of illness for both self and others. COVID-19 testing is now widely accessible; however, those who are unable to demonstrate a positive test result but who are still presumed to have had COVID-19 in the first wave must continue to adapt to and live with the effects of this gap in knowledge and care on their recovery. Future efforts are required to ensure that patients do not face barriers to care due to the lack of testing and are reassured regarding their access to healthcare. Affiliations- 1Department of Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, NY 2Abilities Research Center, Department of Rehabilitation and Human Performance, Icahn School of Medicine at Mount Sinai, New York, NYKeywords: accessibility, COVID-19, recovery, testing
Procedia PDF Downloads 193
46 Investigating Links in Achievement and Deprivation (ILiAD): A Case Study Approach to Community Differences
Authors: Ruth Leitch, Joanne Hughes
Abstract:
This paper presents the findings of a three-year government-funded study (ILiAD) that aimed to understand the reasons for differential educational achievement within and between socially and economically deprived areas in Northern Ireland. Previous international studies have concluded that there is a positive correlation between deprivation and underachievement. Our preliminary secondary data analysis suggested that the factors involved in educational achievement within multiple deprived areas may be more complex than this, with some areas of high multiple deprivation having high levels of student attainment, whereas other less deprived areas demonstrated much lower levels of student attainment, as measured by outcomes on high stakes national tests. The study proposed that no single explanation or disparate set of explanations could easily account for the linkage between levels of deprivation and patterns of educational achievement. Using a social capital perspective that centralizes the connections within and between individuals and social networks in a community as a valuable resource for educational achievement, the ILiAD study involved a multi-level case study analysis of seven community sites in Northern Ireland, selected on the basis of religious composition (housing areas are largely segregated by religious affiliation), measures of multiple deprivation and differentials in educational achievement. The case study approach involved three (interconnecting) levels of qualitative data collection and analysis - what we have termed Micro (or community/grassroots level) understandings, Meso (or school level) explanations and Macro (or policy/structural) factors. The analysis combines a statistical mapping of factors with qualitative, in-depth data interpretation which, together, allow for deeper understandings of the dynamics and contributory factors within and between the case study sites. Thematic analysis of the qualitative data reveals both cross-cutting factors (e.g. demographic shifts and loss of community, place of the school in the community, parental capacity) and analytic case studies of explanatory factors associated with each of the community sites also permit a comparative element. Issues arising from the qualitative analysis are classified either as drivers or inhibitors of educational achievement within and between communities. Key issues that are emerging as inhibitors/drivers to attainment include: the legacy of the community conflict in Northern Ireland, not least in terms of inter-generational stress, related with substance abuse and mental health issues; differing discourses on notions of ‘community’ and ‘achievement’ within/between community sites; inter-agency and intra-agency levels of collaboration and joined-up working; relationship between the home/school/community triad and; school leadership and school ethos. At this stage, the balance of these factors can be conceptualized in terms of bonding social capital (or lack of it) within families, within schools, within each community, within agencies and also bridging social capital between the home/school/community, between different communities and between key statutory and voluntary organisations. 
The presentation will outline the study rationale and methodology, present some cross-cutting findings, and use an illustrative case study of the findings from one community site to underscore the importance of attending to community differences when trying to engage in research to understand and improve educational attainment for all.
Keywords: educational achievement, multiple deprivation, community case studies, social capital
Procedia PDF Downloads 387
45 Mitigating Urban Flooding through Spatial Planning Interventions: A Case of Bhopal City
Authors: Rama Umesh Pandey, Jyoti Yadav
Abstract:
Flooding is one of the waterborne disasters that cause extensive destruction in urban areas. Developing countries are at a higher risk of such damage, and more than half of global flooding events take place in Asian countries, including India. Urban flooding is more of a human-induced disaster than a natural one: it is highly influenced by anthropogenic factors, besides meteorological and hydrological causes. Unplanned urbanization and poor management of cities enhance the impact manifold and cause huge losses of life and property in urban areas. It is an irony that urban areas face water scarcity in summer and flooding during the monsoon. This paper is an attempt to highlight the factors responsible for flooding in a city, especially from an urban planning perspective, and to suggest mitigating measures through spatial planning interventions. The analysis was done in two stages: first, to assess the impacts of previous flooding events, and second, to analyze the factors responsible for flooding at the macro and micro level in cities. Bhopal, a city in Central India with a population of nearly two million, was selected for the study. The city has been experiencing flooding during heavy monsoon rains. The factors responsible for urban flooding were identified through a literature review as well as various case studies from different cities across the world and India. The factors thus identified were analyzed for both macro- and micro-level influences. At the macro level, the previous flooding events that had caused huge destruction were analyzed and the most affected areas in Bhopal city were identified. Since the identified area falls within the catchment of a drain, the catchment area was delineated for the study. The factors analyzed were: rainfall pattern, to calculate the return period using Weibull’s formula; imperviousness, through mapping in ArcGIS; and runoff discharge, using the Rational method. The catchment was divided into micro-watersheds, and the micro-watershed with the maximum impervious surface was selected to analyze the coverage and effect of physical infrastructure such as storm water management, the sewerage system, and solid waste management practices. The area was further analyzed to assess the extent of violation of building byelaws and development control regulations and of encroachment over the natural water streams. The analysis revealed that the main issues have been: lack of a sewerage system; inadequate storm water drains; inefficient solid waste management in the study area; violation of building byelaws through extending building structures either onto the drain or onto the road; and encroachments by slum dwellers along or onto the drain, reducing its width and capacity. Other factors include faulty culvert design resulting in a backwater effect, and roads at a higher level than the plinth of houses, which leads to submersion of their ground floors. The study recommends spatial planning interventions for mitigating urban flooding and strategies for managing excess rainwater during the monsoon season. Recommendations have also been made for efficient land use management to mitigate waterlogging in areas vulnerable to flooding.
Keywords: mitigating strategies, spatial planning interventions, urban flooding, violation of development control regulations
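For the two formulas named in the abstract, a short sketch: the Weibull plotting-position formula T = (n + 1) / m for the return period of ranked annual rainfall, and the Rational method Q = C·i·A for peak runoff discharge. The rainfall series, runoff coefficient, rainfall intensity, and catchment area below are placeholder values, not the Bhopal data.

```python
# Return period by Weibull's plotting-position formula and peak runoff by the
# Rational method, as used in the macro-level analysis; numbers are placeholders.
import numpy as np

annual_max_rainfall = np.array([92, 110, 135, 87, 160, 142, 118, 99, 175, 128])  # mm, assumed

# Weibull: rank the series in descending order; T = (n + 1) / m for rank m
ranked = np.sort(annual_max_rainfall)[::-1]
n = len(ranked)
return_period = (n + 1) / np.arange(1, n + 1)
for depth, T in zip(ranked, return_period):
    print(f"rainfall {depth:5.0f} mm  ->  return period {T:4.1f} years")

# Rational method: Q [m3/s] = C * i [mm/h] * A [ha] / 360
C = 0.80        # runoff coefficient for a highly impervious micro-watershed (assumed)
i = 60.0        # design rainfall intensity in mm/h (assumed)
A = 250.0       # catchment area in hectares (assumed)
Q_peak = C * i * A / 360.0
print(f"peak runoff discharge: {Q_peak:.1f} m3/s")
```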
Procedia PDF Downloads 328
44 Empowering and Educating Young People Against Cybercrime by Playing: The Rayuela Method
Authors: Jose L. Diego, Antonio Berlanga, Gregorio López, Diana López
Abstract:
The Rayuela method is a success story, as it is part of a project selected by the European Commission to face the challenge launched by itself for achieving a better understanding of human factors, as well as social and organisational aspects that are able to solve issues in fighting against crime. Rayuela's method specifically focuses on the drivers of cyber criminality, including approaches to prevent, investigate, and mitigate cybercriminal behavior. As the internet has become an integral part of young people’s lives, they are the key target of the Rayuela method because they (as a victim or as a perpetrator) are the most vulnerable link of the chain. Considering the increased time spent online and the control of their internet usage and the low level of awareness of cyber threats and their potential impact, it is understandable the proliferation of incidents due to human mistakes. 51% of Europeans feel not well informed about cyber threats, and 86% believe that the risk of becoming a victim of cybercrime is rapidly increasing. On the other hand, Law enforcement has noted that more and more young people are increasingly committing cybercrimes. This is an international problem that has considerable cost implications; it is estimated that crimes in cyberspace will cost the global economy $445B annually. Understanding all these phenomena drives to the necessity of a shift in focus from sanctions to deterrence and prevention. As a research project, Rayuela aims to bring together law enforcement agencies (LEAs), sociologists, psychologists, anthropologists, legal experts, computer scientists, and engineers, to develop novel methodologies that allow better understanding the factors affecting online behavior related to new ways of cyber criminality, as well as promoting the potential of these young talents for cybersecurity and technologies. Rayuela’s main goal is to better understand the drivers and human factors affecting certain relevant ways of cyber criminality, as well as empower and educate young people in the benefits, risks, and threats intrinsically linked to the use of the Internet by playing, thus preventing and mitigating cybercriminal behavior. In order to reach that goal it´s necessary an interdisciplinary consortium (formed by 17 international partners) carries out researches and actions like Profiling and case studies of cybercriminals and victims, risk assessments, studies on Internet of Things and its vulnerabilities, development of a serious gaming environment, training activities, data analysis and interpretation using Artificial intelligence, testing and piloting, etc. For facilitating the real implementation of the Rayuela method, as a community policing strategy, is crucial to count on a Police Force with a solid background in trust-building and community policing in order to do the piloting, specifically with young people. In this sense, Valencia Local Police is a pioneer Police Force working with young people in conflict solving, through providing police mediation and peer mediation services and advice. As an example, it is an official mediation institution, so agreements signed by their police mediators have once signed by the parties, the value of a judicial decision.Keywords: fight against crime and insecurity, avert and prepare young people against aggression, ICT, serious gaming and artificial intelligence against cybercrime, conflict solving and mediation with young people
Procedia PDF Downloads 127
43 Deciphering Information Quality: Unraveling the Impact of Information Distortion in the UK Aerospace Supply Chains
Authors: Jing Jin
Abstract:
The incorporation of artificial intelligence (AI) and machine learning (ML) in aircraft manufacturing and aerospace supply chains leads to the generation of a substantial amount of data among various tiers of suppliers and OEMs. Identifying the high-quality information challenges decision-makers. The application of AI/ML models necessitates access to 'high-quality' information to yield desired outputs. However, the process of information sharing introduces complexities, including distortion through various communication channels and biases introduced by both human and AI entities. This phenomenon significantly influences the quality of information, impacting decision-makers engaged in configuring supply chain systems. Traditionally, distorted information is categorized as 'low-quality'; however, this study challenges this perception, positing that distorted information, contributing to stakeholder goals, can be deemed high-quality within supply chains. The main aim of this study is to identify and evaluate the dimensions of information quality crucial to the UK aerospace supply chain. Guided by a central research question, "What information quality dimensions are considered when defining information quality in the UK aerospace supply chain?" the study delves into the intricate dynamics of information quality in the aerospace industry. Additionally, the research explores the nuanced impact of information distortion on stakeholders' decision-making processes, addressing the question, "How does the information distortion phenomenon influence stakeholders’ decisions regarding information quality in the UK aerospace supply chain system?" This study employs deductive methodologies rooted in positivism, utilizing a cross-sectional approach and a mono-quantitative method -a questionnaire survey. Data is systematically collected from diverse tiers of supply chain stakeholders, encompassing end-customers, OEMs, Tier 0.5, Tier 1, and Tier 2 suppliers. Employing robust statistical data analysis methods, including mean values, mode values, standard deviation, one-way analysis of variance (ANOVA), and Pearson’s correlation analysis, the study interprets and extracts meaningful insights from the gathered data. Initial analyses challenge conventional notions, revealing that information distortion positively influences the definition of information quality, disrupting the established perception of distorted information as inherently low-quality. Further exploration through correlation analysis unveils the varied perspectives of different stakeholder tiers on the impact of information distortion on specific information quality dimensions. For instance, Tier 2 suppliers demonstrate strong positive correlations between information distortion and dimensions like access security, accuracy, interpretability, and timeliness. Conversely, Tier 1 suppliers emphasise strong negative influences on the security of accessing information and negligible impact on information timeliness. Tier 0.5 suppliers showcase very strong positive correlations with dimensions like conciseness and completeness, while OEMs exhibit limited interest in considering information distortion within the supply chain. Introducing social network analysis (SNA) provides a structural understanding of the relationships between information distortion and quality dimensions. The moderately high density of ‘information distortion-by-information quality’ underscores the interconnected nature of these factors. 
In conclusion, this study offers a nuanced exploration of information quality dimensions in the UK aerospace supply chain, highlighting the significance of individual perspectives across different tiers. The positive influence of information distortion challenges prevailing assumptions, fostering a deeper understanding of information's role in the Industry 4.0 landscape.
Keywords: information distortion, information quality, supply chain configuration, UK aerospace industry
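The tier-by-tier correlation and network-density steps described above could look like the sketch below: Pearson correlations between a distortion score and each information-quality dimension computed per supplier tier, followed by the density of a two-mode 'distortion-by-quality-dimension' graph. The column names and the threshold for drawing an edge are assumptions for illustration, not the study's instrument.

```python
# Illustrative sketch: per-tier Pearson correlations between information
# distortion and quality dimensions, plus density of the resulting
# 'distortion-by-dimension' network. Column names and threshold are hypothetical.
import pandas as pd
import networkx as nx
from scipy.stats import pearsonr

df = pd.read_csv("survey_responses.csv")   # one row per respondent, assumed layout
dimensions = ["access_security", "accuracy", "interpretability",
              "timeliness", "conciseness", "completeness"]

G = nx.Graph()
for tier, group in df.groupby("tier"):     # e.g. OEM, Tier 0.5, Tier 1, Tier 2
    for dim in dimensions:
        r, p = pearsonr(group["distortion_score"], group[dim])
        print(f"{tier:>8} | {dim:<16} r = {r:+.2f} (p = {p:.3f})")
        if abs(r) >= 0.3:                  # assumed cut-off for a meaningful tie
            G.add_edge(f"distortion[{tier}]", dim, weight=r)

# A moderately high density would indicate distortion touching most dimensions
print(f"network density: {nx.density(G):.2f}")
```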
Procedia PDF Downloads 63
42 Exploring Participatory Research Approaches in Agricultural Settings: Analyzing Pathways to Enhance Innovation in Production
Authors: Michele Paleologo, Marta Acampora, Serena Barello, Guendalina Graffigna
Abstract:
Introduction: In the face of increasing demands for higher agricultural productivity with minimal environmental impact, participatory research approaches emerge as promising means to promote innovation. However, the complexities and ambiguities surrounding these approaches in both theory and practice present challenges. This Scoping Review seeks to bridge these gaps by mapping participatory approaches in agricultural contexts, analyzing their characteristics, and identifying indicators of success. Methods: Following PRISMA guidelines, we conducted a systematic Scoping Review, searching Scopus and Web of Science databases. Our review encompassed 34 projects from diverse geographical regions and farming contexts. Thematic analysis was employed to explore the types of innovation promoted and the categories of participants involved. Results: The identified innovation types encompass technological advancements, sustainable farming practices, and market integration, forming 5 main themes: climate change, cultivar, irrigation, pest and herbicide, and technical improvement. These themes represent critical areas where participatory research drives innovation to address pressing agricultural challenges. Participants were categorized as citizens, experts, NGOs, private companies, and public bodies. Understanding their roles is vital for designing effective participatory initiatives that embrace diverse stakeholders. The review also highlighted 27 theoretical frameworks underpinning participatory projects. Clearer guidelines and reporting standards are crucial for facilitating the comparison and synthesis of findings across studies, thereby enhancing the robustness of future participatory endeavors. Furthermore, we identified three main categories of barriers and facilitators: pragmatic/behavioral, emotional/relational, and cognitive. These insights underscore the significance of participant engagement and collaborative decision-making for project success beyond theoretical considerations. Regarding participation, projects were classified as contributory (5 cases), where stakeholders contributed insights; collaborative (10 cases), with active co-designing of solutions; and co-created (19 cases), featuring deep stakeholder involvement from ideation to implementation, resulting in joint ownership of outcomes. Such diverse participation modes highlight the adaptability of participatory approaches to varying agricultural contexts. Discussion: In conclusion, this Scoping Review demonstrates the potential of participatory research in driving transformative changes in farmers' practices, fostering sustainability and innovation in agriculture. Understanding the diverse landscape of participatory approaches, theoretical frameworks, and participant engagement strategies is essential for designing effective and context-specific interventions. Collaborative efforts among researchers, practitioners, and stakeholders are pivotal in harnessing the full potential of participatory approaches and driving positive change in agricultural settings worldwide. The identified themes of innovation and participation modes provide valuable insights for future research and targeted interventions in agricultural innovation.Keywords: participatory research, co-creation, agricultural innovation, stakeholders' engagement
Procedia PDF Downloads 63
41 Understanding Natural Resources Governance in Canada: The Role of Institutions, Interests, and Ideas in Alberta's Oil Sands Policy
Authors: Justine Salam
Abstract:
As a federal state, Canada’s constitutional arrangements regarding the management of natural resources is unique because it gives complete ownership and control of natural resources to the provinces (subnational level). However, the province of Alberta—home to the third largest oil reserves in the world—lags behind comparable jurisdictions in levying royalties on oil corporations, especially oil sands royalties. While Albertans own the oil sands, scholars have argued that natural resource exploitation in Alberta benefits corporations and industry more than it does Albertans. This study provides a systematic understanding of the causal factors affecting royalties in Alberta to map dynamics of power and how they manifest themselves during policy-making. Mounting domestic and global public pressure led Alberta to review its oil sands royalties twice in less than a decade through public-commissioned Royalty Review Panels, first in 2007 and again in 2015. The Panels’ task was to research best practices and to provide policy recommendations to the Government through public consultations with Albertans, industry, non-governmental organizations, and First Nations peoples. Both times, the Panels recommended a relative increase to oil sands royalties. However, irrespective of the Reviews’ recommendations, neither the right-wing 2007 Progressive Conservative Party (PC) nor the left-wing 2015 New Democratic Party (NDP) government—both committed to increase oil sands royalties—increased royalty intake. Why did two consecutive political parties at opposite ends of the political spectrum fail to account for the recommendations put forward by the Panel? Through a qualitative case-study analysis, this study assesses domestic and global causal factors for Alberta’s inability to raise oil sands royalties significantly after the two Reviews through an institutions, interests, and ideas framework. Indeed, causal factors can be global (e.g. market and price fluctuation) or domestic (e.g. oil companies’ influence on the Alberta government). The institutions, interests, and ideas framework is at the intersection of public policy, comparative studies, and political economy literatures, and therefore draws multi-faceted insights into the analysis. To account for institutions, the study proposes to review international trade agreements documents such as the North American Free Trade Agreement (NAFTA) because they have embedded Alberta’s oil sands into American energy security policy and tied Canadian and Albertan oil policy in legal international nods. To account for interests, such as how the oil lobby or the environment lobby can penetrate governmental decision-making spheres, the study draws on the Oil Sands Oral History project, a database of interviews from government officials and oil industry leaders at a pivotal time in Alberta’s oil industry, 2011-2013. Finally, to account for ideas, such as how narratives of Canada as a global ‘energy superpower’ and the importance of ‘energy security’ have dominated and polarized public discourse, the study relies on content analysis of Alberta-based pro-industry newspapers to trace the prevalence of these narratives. By mapping systematically the nods and dynamics of power at play in Alberta, the study sheds light on the factors that influence royalty policy-making in one of the largest industries in Canada.Keywords: Alberta Canada, natural resources governance, oil sands, political economy
Procedia PDF Downloads 132
40 A Study of Seismic Design Approaches for Steel Sheet Piles: Hydrodynamic Pressures and Reduction Factors Using CFD and Dynamic Calculations
Authors: Helena Pera, Arcadi Sanmartin, Albert Falques, Rafael Rebolo, Xavier Ametller, Heiko Zillgen, Cecile Prum, Boris Even, Eric Kapornyai
Abstract:
Sheet pile systems can be an interesting solution when dealing with harbor or quay designs. However, current design methods lead to conservative approaches due to the lack of a specific basis of design. For instance, some design features still rely on pseudo-static approaches, although the problem is dynamic. Under this concern, the study particularly focuses on the definition of hydrodynamic water pressure and on the stability analysis of sheet pile systems under seismic loads. During a seismic event, seawater produces hydrodynamic pressures on structures. Currently, design methods introduce hydrodynamic forces by means of the Westergaard formulation and Eurocode recommendations. They apply a constant hydrodynamic pressure on the front sheet pile during the entire earthquake. As a result, the hydrodynamic load may represent 20% of the total forces produced on the sheet pile. Nonetheless, some studies question that approach. Hence, this study assesses the soil-structure-fluid interaction of sheet piles under seismic action in order to evaluate whether current design strategies overestimate hydrodynamic pressures. For that purpose, the study performs various simulations with Plaxis 2D, a well-known geotechnical software, and with CFD models, which treat fluid dynamic behaviours. Since neither Plaxis nor CFD can resolve a coupled soil-fluid problem, the investigation imposes sheet pile displacements from Plaxis as input data for the CFD model. The CFD model then provides hydrodynamic pressures under seismic action, which fit theoretical Westergaard pressures when these are calculated using the acceleration at each moment of the earthquake. Thus, hydrodynamic pressures fluctuate during the seismic action instead of remaining constant, as design recommendations propose. Additionally, these findings show that hydrodynamic pressure contributes only about 5% of the total load applied on the sheet pile due to its instantaneous nature. These results are in line with other studies that use added-mass methods for hydrodynamic pressures. Another important feature in sheet pile design is the assessment of the overall geotechnical stability. It relies on pseudo-static analysis, since the dynamic analysis cannot provide a safety calculation, and consequently the seismic action has to be estimated. One of the relevant factors is the selection of the seismic reduction factor. A large number of studies discuss its importance but also all its uncertainties. Moreover, current European standards do not propose a clear statement on that and recommend using a reduction factor equal to 1. This leads to conservative requirements when compared with more advanced methods. Under this situation, the study calibrates the seismic reduction factor by fitting results from pseudo-static to dynamic analyses. The investigation concludes that pseudo-static analyses could reduce the seismic action by 40-50%. These results are in line with some studies from Japanese and European working groups. In addition, it seems suitable to account for the flexibility of the sheet pile-soil system. Nevertheless, the calibrated reduction factor is subject to the particular conditions of each design case. Further research would contribute to specifying recommendations for selecting reduction factor values in the early stages of the design. In conclusion, sheet pile design still has room for improving its design methodologies and approaches.
Consequently, designs could propose better seismic solutions thanks to advanced methods, such as those based on the findings of this study.
Keywords: computational fluid dynamics, hydrodynamic pressures, pseudo-static analysis, quays, seismic design, steel sheet pile
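To make the hydrodynamic-pressure discussion concrete, the sketch below evaluates the Westergaard approximation p(z) = (7/8)·ρw·a(t)·sqrt(H·z) along the front sheet pile, once with a constant design acceleration (the constant-pressure assumption questioned above) and once with the instantaneous acceleration of a ground-motion record, so the pressure fluctuates in time. The acceleration history, water depth, and densities are placeholder values, not the project's data.

```python
# Westergaard hydrodynamic pressure on the front sheet pile:
#   p(z, t) = (7/8) * rho_w * a(t) * sqrt(H * z)
# evaluated with a constant pseudo-static acceleration vs. an instantaneous
# acceleration history, to show the fluctuation discussed in the abstract.
import numpy as np

RHO_W = 1025.0          # seawater density, kg/m3 (assumed)
G = 9.81                # gravity, m/s2
H = 12.0                # assumed water depth in front of the quay, m
z = np.linspace(0.0, H, 50)          # depth below the free surface, m

def westergaard_pressure(a_h):
    """Hydrodynamic pressure distribution (Pa) for horizontal acceleration a_h (m/s2)."""
    return 7.0 / 8.0 * RHO_W * a_h * np.sqrt(H * z)

def resultant_force(p):
    """Resultant hydrodynamic force per metre of wall (N/m) by trapezoidal integration."""
    return np.trapz(p, z)

# 1) constant design acceleration applied during the whole earthquake
a_design = 0.25 * G                              # assumed design PGA
F_constant = resultant_force(westergaard_pressure(a_design))

# 2) instantaneous acceleration history (placeholder sine-burst record)
t = np.linspace(0, 10, 2001)
a_t = 0.25 * G * np.sin(2 * np.pi * 1.2 * t) * np.exp(-0.2 * t)
F_t = np.array([resultant_force(westergaard_pressure(a)) for a in a_t])

print(f"constant-acceleration resultant: {F_constant/1e3:7.1f} kN/m")
print(f"peak time-varying resultant:     {F_t.max()/1e3:7.1f} kN/m")
print(f"mean time-varying resultant:     {np.abs(F_t).mean()/1e3:7.1f} kN/m")
```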
Procedia PDF Downloads 141
39 Microplastic Concentrations and Fluxes in Urban Compartments: A Systemic Approach at the Scale of the Paris Megacity
Authors: Rachid Dris, Robin Treilles, Max Beaurepaire, Minh Trang Nguyen, Sam Azimi, Vincent Rocher, Johnny Gasperi, Bruno Tassin
Abstract:
Microplastic sources and fluxes in urban catchments are only poorly studied. Most often, the approaches taken focus on a single source and only describe the contamination levels and types (shape, size, polymers). In order to gain improved knowledge of microplastic inputs at urban scales, estimating and comparing various fluxes is necessary. The Laboratoire Eau, Environnement et Systèmes Urbains (LEESU), the Laboratoire Eau Environnement (LEE) and the SIAAP (Service public de l’assainissement francilien) initiated several projects to investigate different urban sources and flows of microplastics. A systemic approach is undertaken at the scale of the Paris Megacity, and several compartments are considered, including atmospheric fallout, wastewater treatment plants, runoff and combined sewer overflows. These investigations are carried out within the Limnoplast and OPUR projects. Atmospheric fallout was sampled with a stainless-steel funnel during consecutive periods of 2 to 3 weeks. Both wet and dry periods were considered. Different treatment steps were sampled in 2 wastewater treatment plants of the SIAAP (Seine-Amont for activated sludge and Seine-Centre for biofiltration), including sludge samples. Microplastics were also investigated in combined sewer overflows as well as in stormwater at the outlet of a suburban catchment (Sucy-en-Brie, France) during four rain events. Samples are treated using hydrogen peroxide digestion (H₂O₂ 30 %) in order to reduce organic material. Microplastics are then extracted from the samples with a density separation step using NaI (d = 1.6 g.cm⁻³). Samples are filtered on metallic filters with a porosity of 14 µm between steps to separate them from the solutions (H₂O₂ and NaI). The last filtration was carried out on alumina filters. Infrared mapping analysis (using a micro-FTIR with an MCT detector) is performed on each alumina filter. The resulting maps are analyzed using the microplastic analysis software siMPle, developed by Aalborg University, Denmark, and the Alfred Wegener Institute, Germany. Blanks were systematically carried out to account for sample contamination. This presentation aims at synthesizing the data found in the various projects. In order to carry out a systemic approach and compare the various inputs, all the data were converted into annual microplastic fluxes (number of microplastics per year) and extrapolated to the Parisian agglomeration. PP, PE and alkyd are the most prevalent polymers found in stormwater samples. Rain intensity and microplastic concentrations did not show any clear correlation. Considering the runoff volumes and the impervious surface area of the studied catchment, a flux of 4×10⁷–9×10⁷ MP·yr⁻¹·ha⁻¹ was estimated. Samples from wastewater treatment plants and atmospheric fallout are currently being analyzed in order to finalize this assessment. The representativeness of such samplings and the uncertainties related to the extrapolations will be discussed, and gaps in knowledge will be identified. The data provided by such an approach will help to prioritize future research as well as policy efforts.
Keywords: microplastics, atmosphere, wastewater, urban runoff, Paris megacity, urban waters
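The flux extrapolation at the end of the abstract is essentially a concentration multiplied by a runoff volume and normalised by the impervious catchment area; a hedged back-of-the-envelope sketch is given below with placeholder values (the published estimate is 4×10⁷–9×10⁷ MP·yr⁻¹·ha⁻¹).

```python
# Back-of-the-envelope annual microplastic (MP) flux per impervious hectare:
#   flux = MP concentration in runoff * annual runoff volume / catchment area
# All numbers below are placeholders, not the measured Paris values.
ANNUAL_RAINFALL_MM = 650.0        # assumed annual rainfall depth
RUNOFF_COEFFICIENT = 0.9          # assumed for impervious surfaces
IMPERVIOUS_AREA_HA = 100.0        # assumed impervious area of the catchment
MP_PER_LITRE = 10.0               # assumed MP concentration in runoff (items/L)

# 1 mm of rain on 1 ha = 10,000 L; apply the runoff coefficient
annual_runoff_litres = ANNUAL_RAINFALL_MM * 10_000 * IMPERVIOUS_AREA_HA * RUNOFF_COEFFICIENT
annual_mp_load = MP_PER_LITRE * annual_runoff_litres          # items per year
flux_per_ha = annual_mp_load / IMPERVIOUS_AREA_HA             # items / yr / ha

print(f"annual runoff: {annual_runoff_litres:.2e} L")
print(f"MP flux: {flux_per_ha:.1e} MP per year per impervious hectare")
```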
Procedia PDF Downloads 180
38 Examining the Behavioral, Hygienic and Expectational Changes in Adolescents and Young Women during COVID-19 Quarantine in Colombia
Authors: Rocio Murad, Marcela Sanchez, Mariana Calderon Jaramillo, Danny Rivera, Angela Cifuentes, Daniela Roldán, Juan Carlos Rivillas
Abstract:
Women and girls have specific health needs, but during health pandemics such as COVID19 they are less likely to have access to quality essential health information, commodities and services, or insurance coverage for routine and catastrophic health expenses, especially in rural and marginalized communities. This is compounded by multiple or intersecting inequalities, such as ethnicity, socioeconomic status, disability, age, geographic location, and sexual orientation, among others. Despite concerted collective action, there is a lack of information on the situation of women, adolescents and youth, including gender inequalities exacerbated by the pandemic. Much more needs to be done to amplify the lived realities of women and adolescents in global and national advocacy and policy responses. The COVID 19 pandemic reflects the need for systematic advocacy policies based on the lived experiences of women and adolescents, underpinned by human rights. This research is part of the initiative of Profamilia Association (Solidarity Study), and its objective is twofold: i) to analyze the behavioral changes and immediate expectations of Colombians during the stage of relaxation of the confinement measures decreed by the national government; and ii) to identify the needs, experiences and resilient practices of adolescents and young women during the COVID-19 crisis in Colombia. Descriptive analysis of data collected by Profamilia through the Solidaridad study, an exploratory cross-sectional descriptive study that used subnational level data from a nonprobabilistic sample survey conducted to 1735 adults, between September 01 and 11, 2020. Interviews were conducted with key stakeholders about their experiences during COVID19, under three key axes: i) main challenges for adolescents and young women; ii) examples of what has worked well in responding to the challenge; and iii) how/what services are/should be provided during COVID-19 (and beyond) to address the challenge. Interviewees were selected based on prior mapping of social groups of interest. In total, 23 adolescents and young women participated in the interviews. The results show that people adopted behavioral changes such as wearing masks, avoiding people with symptoms, and reducing mobility, but there was also a doubling of concerns for many reasons, from effects on mental health, sexual health, and unattended reproductive health to the burden of care and working at home. The favorable perception that people had at the beginning of the quarantine about the response and actions of the national and local government to control Covid-19 decreased over the course of the quarantine. The challenges and needs of adolescents and young women were highlighted during the most restrictive measures to contain the COVID-19 pandemic, which resulted in disruptions to daily activities, education and work, as well as restrictions to mobility and social interaction. Concerns raised by participants included: impact on mental health and wellbeing due to disruption of daily life; limitations in access to formal and informal education; food insecurity; migration; loss of livelihoods; lack of access to health information and services; limitations to sexual and reproductive health and rights; insecurity problems; and problems in communication and treatment among household members.Keywords: COVID-19, changes in behavior, adolescents, women
Procedia PDF Downloads 107
37 Italian Speech Vowels Landmark Detection through the Legacy Tool 'xkl' with Integration of Combined CNNs and RNNs
Authors: Kaleem Kashif, Tayyaba Anam, Yizhi Wu
Abstract:
This paper introduces a methodology for advancing Italian speech vowels landmark detection within the distinctive feature-based speech recognition domain. Leveraging the legacy tool 'xkl' by integrating combined convolutional neural networks (CNNs) and recurrent neural networks (RNNs), the study presents a comprehensive enhancement to the 'xkl' legacy software. This integration incorporates re-assigned spectrogram methodologies, enabling meticulous acoustic analysis. Simultaneously, our proposed model, integrating combined CNNs and RNNs, demonstrates unprecedented precision and robustness in landmark detection. The augmentation of re-assigned spectrogram fusion within the 'xkl' software signifies a meticulous advancement, particularly enhancing precision related to vowel formant estimation. This augmentation catalyzes unparalleled accuracy in landmark detection, resulting in a substantial performance leap compared to conventional methods. The proposed model emerges as a state-of-the-art solution in the distinctive feature-based speech recognition systems domain. In the realm of deep learning, a synergistic integration of combined CNNs and RNNs is introduced, endowed with specialized temporal embeddings, harnessing self-attention mechanisms, and positional embeddings. The proposed model allows it to excel in capturing intricate dependencies within Italian speech vowels, rendering it highly adaptable and sophisticated in the distinctive feature domain. Furthermore, our advanced temporal modeling approach employs Bayesian temporal encoding, refining the measurement of inter-landmark intervals. Comparative analysis against state-of-the-art models reveals a substantial improvement in accuracy, highlighting the robustness and efficacy of the proposed methodology. Upon rigorous testing on a database (LaMIT) speech recorded in a silent room by four Italian native speakers, the landmark detector demonstrates exceptional performance, achieving a 95% true detection rate and a 10% false detection rate. A majority of missed landmarks were observed in proximity to reduced vowels. These promising results underscore the robust identifiability of landmarks within the speech waveform, establishing the feasibility of employing a landmark detector as a front end in a speech recognition system. The synergistic integration of re-assigned spectrogram fusion, CNNs, RNNs, and Bayesian temporal encoding not only signifies a significant advancement in Italian speech vowels landmark detection but also positions the proposed model as a leader in the field. The model offers distinct advantages, including unparalleled accuracy, adaptability, and sophistication, marking a milestone in the intersection of deep learning and distinctive feature-based speech recognition. This work contributes to the broader scientific community by presenting a methodologically rigorous framework for enhancing landmark detection accuracy in Italian speech vowels. The integration of cutting-edge techniques establishes a foundation for future advancements in speech signal processing, emphasizing the potential of the proposed model in practical applications across various domains requiring robust speech recognition systems.Keywords: landmark detection, acoustic analysis, convolutional neural network, recurrent neural network
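A minimal sketch of the combined CNN + RNN landmark detector described above: a small convolutional front end over spectrogram input feeding a bidirectional recurrent layer that emits a per-frame landmark score. Layer sizes, the use of a GRU, and the binary frame scoring are illustrative assumptions; the actual model in the study additionally uses self-attention, positional embeddings, and Bayesian temporal encoding, which are omitted here.

```python
# Illustrative CNN + RNN frame-level landmark detector for spectrogram input
# of shape (batch, 1, n_mels, n_frames). All hyperparameters are assumptions.
import torch
import torch.nn as nn

class CnnRnnLandmarkDetector(nn.Module):
    def __init__(self, n_mels: int = 80, hidden: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(                       # local time-frequency features
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(2, 1)),           # pool frequency only, keep frames
        )
        self.rnn = nn.GRU(input_size=32 * (n_mels // 2), hidden_size=hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)            # per-frame landmark logit

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        feats = self.cnn(spec)                          # (B, 32, n_mels/2, T)
        b, c, f, t = feats.shape
        feats = feats.permute(0, 3, 1, 2).reshape(b, t, c * f)   # (B, T, C*F)
        out, _ = self.rnn(feats)                        # temporal context per frame
        return self.head(out).squeeze(-1)               # (B, T) landmark scores

if __name__ == "__main__":
    model = CnnRnnLandmarkDetector()
    dummy = torch.randn(2, 1, 80, 200)                  # 2 utterances, 200 frames
    print(model(dummy).shape)                           # torch.Size([2, 200])
```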
Procedia PDF Downloads 62
36 Aquaporin-1 as a Differential Marker in Toxicant-Induced Lung Injury
Authors: Ekta Yadav, Sukanta Bhattacharya, Brijesh Yadav, Ariel Hus, Jagjit Yadav
Abstract:
Background and Significance: Respiratory exposure to toxicants (chemicals or particulates) causes disruption of lung homeostasis leading to lung toxicity/injury manifested as pulmonary inflammation, edema, and/or other effects depending on the type and extent of exposure. This emphasizes the need for investigating toxicant type-specific mechanisms to understand therapeutic targets. Aquaporins, aka water channels, are known to play a role in lung homeostasis. Particularly, the two major lung aquaporins AQP5 and AQP1 expressed in alveolar epithelial and vasculature endothelia respectively allow for movement of the fluid between the alveolar air space and the associated vasculature. In view of this, the current study is focused on understanding the regulation of lung aquaporins and other targets during inhalation exposure to toxic chemicals (Cigarette smoke chemicals) versus toxic particles (Carbon nanoparticles) or co-exposures to understand their relevance as markers of injury and intervention. Methodologies: C57BL/6 mice (5-7 weeks old) were used in this study following an approved protocol by the University of Cincinnati Institutional Animal Care and Use Committee (IACUC). The mice were exposed via oropharyngeal aspiration to multiwall carbon nanotube (MWCNT) particles suspension once (33 ugs/mouse) followed by housing for four weeks or to Cigarette smoke Extract (CSE) using a daily dose of 30µl/mouse for four weeks, or to co-exposure using the combined regime. Control groups received vehicles following the same dosing schedule. Lung toxicity/injury was assessed in terms of homeostasis changes in the lung tissue and lumen. Exposed lungs were analyzed for transcriptional expression of specific targets (AQPs, surfactant protein A, Mucin 5b) in relation to tissue homeostasis. Total RNA from lungs extracted using TRIreagent kit was analyzed using qRT-PCR based on gene-specific primers. Total protein in bronchoalveolar lavage (BAL) fluid was determined by the DC protein estimation kit (BioRad). GraphPad Prism 5.0 (La Jolla, CA, USA) was used for all analyses. Major findings: CNT exposure alone or as co-exposure with CSE increased the total protein content in the BAL fluid (lung lumen rinse), implying compromised membrane integrity and cellular infiltration in the lung alveoli. In contrast, CSE showed no significant effect. AQP1, required for water transport across membranes of endothelial cells in lungs, was significantly upregulated in CNT exposure but downregulated in CSE exposure and showed an intermediate level of expression for the co-exposure group. Both CNT and CSE exposures had significant downregulating effects on Muc5b, and SP-A expression and the co-exposure showed either no significant effect (Muc5b) or significant downregulating effect (SP-A), suggesting an increased propensity for infection in the exposed lungs. Conclusions: The current study based on the lung toxicity mouse model showed that both toxicant types, particles (CNT) versus chemicals (CSE), cause similar downregulation of lung innate defense targets (SP-A, Muc5b) and mostly a summative effect when presented as co-exposure. However, the two toxicant types show differential induction of aquaporin-1 coinciding with the corresponding differential damage to alveolar integrity (vascular permeability). Interestingly, this implies the potential of AQP1 as a differential marker of toxicant type-specific lung injury.Keywords: aquaporin, gene expression, lung injury, toxicant exposure
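The abstract reports qRT-PCR-based transcriptional expression of targets such as AQP1, AQP5, Muc5b, and SP-A but does not state the quantification scheme, so the sketch below assumes the common 2^-ΔΔCt relative-quantification approach, with a hypothetical housekeeping gene, file layout, and group labels, purely for illustration.

```python
# Hedged illustration only: relative expression by the 2^-ΔΔCt method,
# which the abstract does not explicitly specify. File layout, reference gene,
# and group names are assumptions.
import pandas as pd

df = pd.read_csv("qpcr_ct_values.csv")     # columns: group, gene, ct (assumed layout)
HOUSEKEEPING = "Gapdh"                     # hypothetical reference gene
CONTROL_GROUP = "vehicle"

mean_ct = df.groupby(["group", "gene"])["ct"].mean().unstack()
delta_ct = mean_ct.sub(mean_ct[HOUSEKEEPING], axis=0)          # ΔCt = Ct(target) - Ct(ref)
delta_delta_ct = delta_ct - delta_ct.loc[CONTROL_GROUP]        # ΔΔCt vs. vehicle controls
fold_change = 2.0 ** (-delta_delta_ct)                         # relative expression

print(fold_change[["Aqp1", "Aqp5", "Muc5b", "Sftpa1"]].round(2))
```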
Procedia PDF Downloads 183
35 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques
Authors: Stefan K. Behfar
Abstract:
The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. 
Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application. Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing
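As an illustration of the graph representation and probabilistic sampling steps described in the abstract, the following Python sketch builds a small transaction graph with networkx and draws a uniform random node sample; the transaction schema, the temporal-edge rule, and the sampling weights are simplifying assumptions, not the authors' implementation:

```python
# Minimal sketch (not the authors' implementation): represent Ethereum transactions
# as graph nodes with temporal edges, then draw a probabilistic sample of nodes.
# The transaction records below are synthetic placeholders.
import random
import networkx as nx

transactions = [
    {"hash": f"0x{i:064x}", "block": 17_000_000 + i // 3, "value_wei": 10**18 * i}
    for i in range(30)
]

G = nx.DiGraph()
for tx in transactions:
    G.add_node(tx["hash"], block=tx["block"], value_wei=tx["value_wei"])

# Temporal edges: connect each transaction to the next one in block order.
ordered = sorted(transactions, key=lambda t: t["block"])
for prev, curr in zip(ordered, ordered[1:]):
    G.add_edge(prev["hash"], curr["hash"])

# Uniform random node sampling as a simple stand-in for probabilistic sampling;
# a production system would weight the draw to preserve network statistics.
rng = random.Random(42)
sample_nodes = rng.sample(list(G.nodes), k=max(1, int(0.2 * G.number_of_nodes())))
subgraph = G.subgraph(sample_nodes)
print(f"Full graph: {G.number_of_nodes()} nodes, {G.number_of_edges()} edges")
print(f"Sampled subgraph: {subgraph.number_of_nodes()} nodes, {subgraph.number_of_edges()} edges")
```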
Procedia PDF Downloads 75
34 Cloud-Based Multiresolution Geodata Cube for Efficient Raster Data Visualization and Analysis
Authors: Lassi Lehto, Jaakko Kahkonen, Juha Oksanen, Tapani Sarjakoski
Abstract:
The use of raster-formatted data sets in geospatial analysis is increasing rapidly. At the same time, geographic data are being introduced into disciplines outside the traditional domain of geoinformatics, like climate change, intelligent transport, and immigration studies. These developments call for better methods to deliver raster geodata in an efficient and easy-to-use manner. Data cube technologies have traditionally been used in the geospatial domain for managing Earth Observation data sets that have strict requirements for effective handling of time series. The same approach and methodologies can also be applied in managing other types of geospatial data sets. A cloud service-based geodata cube, called GeoCubes Finland, has been developed to support online delivery and analysis of most important geospatial data sets with national coverage. The main target group of the service is the academic research institutes in the country. The most significant aspects of the GeoCubes data repository include the use of multiple resolution levels, cloud-optimized file structure, and a customized, flexible content access API. Input data sets are pre-processed while being ingested into the repository to bring them into a harmonized form in aspects like georeferencing, sampling resolutions, spatial subdivision, and value encoding. All the resolution levels are created using an appropriate generalization method, selected depending on the nature of the source data set. Multiple pre-processed resolutions enable new kinds of online analysis approaches to be introduced. Analysis processes based on interactive visual exploration can be effectively carried out, as the level of resolution most close to the visual scale can always be used. In the same way, statistical analysis can be carried out on resolution levels that best reflect the scale of the phenomenon being studied. Access times remain close to constant, independent of the scale applied in the application. The cloud service-based approach, applied in the GeoCubes Finland repository, enables analysis operations to be performed on the server platform, thus making high-performance computing facilities easily accessible. The developed GeoCubes API supports this kind of approach for online analysis. The use of cloud-optimized file structures in data storage enables the fast extraction of subareas. The access API allows for the use of vector-formatted administrative areas and user-defined polygons as definitions of subareas for data retrieval. Administrative areas of the country in four levels are available readily from the GeoCubes platform. In addition to direct delivery of raster data, the service also supports the so-called virtual file format, in which only a small text file is first downloaded. The text file contains links to the raster content on the service platform. The actual raster data is downloaded on demand, from the spatial area and resolution level required in each stage of the application. By the geodata cube approach, pre-harmonized geospatial data sets are made accessible to new categories of inexperienced users in an easy-to-use manner. At the same time, the multiresolution nature of the GeoCubes repository facilitates expert users to introduce new kinds of interactive online analysis operations.Keywords: cloud service, geodata cube, multiresolution, raster geodata
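The abstract does not document the GeoCubes API itself, but the core multiresolution idea can be sketched generically: pre-compute coarser resolution levels and answer a query from the level closest to the requested scale. The following Python/NumPy example uses a synthetic raster and illustrative resolutions:

```python
# Generic sketch of the multiresolution idea (not the GeoCubes Finland API):
# pre-compute coarser resolution levels by block averaging, then answer a
# query by picking the level closest to the requested ground resolution.
import numpy as np

def build_pyramid(raster, levels=4):
    """Return [full-res, 2x coarser, 4x coarser, ...] by 2x2 block averaging."""
    pyramid = [raster]
    for _ in range(levels - 1):
        r = pyramid[-1]
        h, w = (r.shape[0] // 2) * 2, (r.shape[1] // 2) * 2
        coarser = r[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(coarser)
    return pyramid

def read_subarea(pyramid, base_res_m, target_res_m, row_slice, col_slice):
    """Pick the pyramid level closest to target_res_m and cut the subarea."""
    level = min(range(len(pyramid)),
                key=lambda i: abs(base_res_m * 2**i - target_res_m))
    scale = 2 ** level
    return pyramid[level][row_slice.start // scale: row_slice.stop // scale,
                          col_slice.start // scale: col_slice.stop // scale]

# Synthetic 10 m resolution raster, queried at roughly 40 m resolution.
dem = np.random.default_rng(0).random((1024, 1024))
pyr = build_pyramid(dem, levels=4)
tile = read_subarea(pyr, base_res_m=10, target_res_m=40,
                    row_slice=slice(100, 300), col_slice=slice(200, 600))
print(tile.shape)   # (50, 100): the 40 m level is 4x coarser than the source
```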
Procedia PDF Downloads 133
33 Development of a Mixed-Reality Hands-Free Teleoperated Robotic Arm for Construction Applications
Authors: Damith Tennakoon, Mojgan Jadidi, Seyedreza Razavialavi
Abstract:
With recent advancements of automation in robotics, from self-driving cars to autonomous 4-legged quadrupeds, one industry that has been stagnant is the construction industry. The methodologies used in a modern-day construction site consist of arduous physical labor and the use of heavy machinery, which has not changed over the past few decades. The dangers of a modern-day construction site affect the health and safety of the workers due to performing tasks such as lifting and moving heavy objects and having to maintain unhealthy posture to complete repetitive tasks such as painting, installing drywall, and laying bricks. Further, training for heavy machinery is costly and requires a lot of time due to their complex control inputs. The main focus of this research is using immersive wearable technology and robotic arms to perform the complex and intricate skills of modern-day construction workers while alleviating the physical labor requirements to perform their day-to-day tasks. The methodology consists of mounting a stereo vision camera, the ZED Mini by Stereolabs, onto the end effector of an industrial grade robotic arm, streaming the video feed into the Virtual Reality (VR) Meta Quest 2 (Quest 2) head-mounted display (HMD). Due to the nature of stereo vision, and the similar field-of-views between the stereo camera and the Quest 2, human-vision can be replicated on the HMD. The main advantage this type of camera provides over a traditional monocular camera is it gives the user wearing the HMD a sense of the depth of the camera scene, specifically, a first-person view of the robotic arm’s end effector. Utilizing the built-in cameras of the Quest 2 HMD, open-source hand-tracking libraries from OpenXR can be implemented to track the user’s hands in real-time. A mixed-reality (XR) Unity application can be developed to localize the operator's physical hand motions with the end-effector of the robotic arm. Implementing gesture controls will enable the user to move the robotic arm and control its end-effector by moving the operator’s arm and providing gesture inputs from a distant location. Given that the end effector of the robotic arm is a gripper tool, gripping and opening the operator’s hand will translate to the gripper of the robot arm grabbing or releasing an object. This human-robot interaction approach provides many benefits within the construction industry. First, the operator’s safety will be increased substantially as they can be away from the site-location while still being able perform complex tasks such as moving heavy objects from place to place or performing repetitive tasks such as painting walls and laying bricks. The immersive interface enables precision robotic arm control and requires minimal training and knowledge of robotic arm manipulation, which lowers the cost for operator training. This human-robot interface can be extended to many applications, such as handling nuclear accident/waste cleanup, underwater repairs, deep space missions, and manufacturing and fabrication within factories. Further, the robotic arm can be mounted onto existing mobile robots to provide access to hazardous environments, including power plants, burning buildings, and high-altitude repair sites.Keywords: construction automation, human-robot interaction, hand-tracking, mixed reality
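A minimal sketch of the hand-to-robot mapping described above is given below; the HandPose input, the motion scale, and the pinch threshold are illustrative assumptions standing in for the authors' Unity/OpenXR hand-tracking pipeline and robot control interface:

```python
# Conceptual sketch of the hand-to-robot mapping (not the authors' Unity/OpenXR code).
# HandPose and the downstream robot command are hypothetical stand-ins for the
# HMD hand-tracking feed and the robot arm's control interface.
from dataclasses import dataclass

@dataclass
class HandPose:
    wrist_xyz: tuple          # wrist position in the operator's frame (metres)
    pinch_distance_m: float   # thumb-to-index fingertip distance

MOTION_SCALE = 0.5            # damp operator motion for finer end-effector control
PINCH_CLOSE_THRESHOLD = 0.02  # close gripper when fingertips are within 2 cm

def hand_to_robot(pose: HandPose, home_wrist_xyz, home_effector_xyz):
    """Map wrist displacement to an end-effector target and pinch to gripper state."""
    dx, dy, dz = (p - h for p, h in zip(pose.wrist_xyz, home_wrist_xyz))
    target = (home_effector_xyz[0] + MOTION_SCALE * dx,
              home_effector_xyz[1] + MOTION_SCALE * dy,
              home_effector_xyz[2] + MOTION_SCALE * dz)
    gripper_closed = pose.pinch_distance_m < PINCH_CLOSE_THRESHOLD
    return target, gripper_closed

# Example: operator moves the hand 20 cm forward and pinches.
pose = HandPose(wrist_xyz=(0.2, 0.0, 0.0), pinch_distance_m=0.01)
target, grip = hand_to_robot(pose, home_wrist_xyz=(0.0, 0.0, 0.0),
                             home_effector_xyz=(0.4, 0.0, 0.3))
print(target, "gripper closed" if grip else "gripper open")
```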
Procedia PDF Downloads 78
32 Transcending Boundaries: Integrating Urban Vibrancy with Contemporary Interior Design through Vivid Wall Pieces
Authors: B. C. Biermann
Abstract:
This in-depth exploration investigates the transformative integration of urban vibrancy into contemporary interior design through the strategic incorporation of vivid wall pieces. Bridging the gap between public dynamism and private tranquility, this study delves into the nuanced methodologies, creative processes, and profound impacts of this innovative approach. Drawing inspiration from street art's dynamic language and the timeless allure of natural beauty, these artworks serve as conduits, orchestrating a dialogue that challenges traditional boundaries and redefines the relationship between external chaos and internal sanctuaries. The fusion of urban vibrancy with contemporary interior design represents a paradigm shift, where the inherent dynamism of public spaces harmoniously converges with the curated tranquility of private environments. This paper aims to explore the underlying principles, creative processes, and transformative impacts of integrating vivid wall pieces as instruments for bringing the "outside in." Employing an innovative and meticulous methodology, street art elements are synthesized with the refined aesthetics of contemporary design. This delicate balance necessitates a nuanced understanding of both artistic realms, ensuring a synthesis that captures the essence of urban energy while seamlessly blending with the sophistication of modern interior design. The creative process involves a strategic selection of street art motifs, colors, and textures that resonate with the organic beauty found in natural landscapes, creating a symbiotic relationship between the grittiness of the streets and the elegance of interior spaces. This groundbreaking approach defies traditional boundaries by integrating dynamic street art into interior spaces, blurring the demarcation between external chaos and internal tranquility. Vivid wall pieces serve as dynamic focal points, transforming physical spaces and challenging conventional perceptions of where art belongs. This redefinition asserts that boundaries are fluid and meant to be transcended. Case studies illustrate the profound impact of integrating vivid wall pieces on the aesthetic appeal of interior spaces. Urban vibrancy revitalizes the atmosphere, infusing it with palpable energy that resonates with the vivacity of public spaces. The curated tranquility of private interiors coexists harmoniously with the dynamic visual language of street art, fostering a unique and evolving relationship between inhabitants and their living spaces. Emphasizing harmonious coexistence, the paper underscores the potential for a seamless dialogue between public urban spaces and private interiors. The integration of vivid wall pieces acts as a bridge rather than a dichotomy, merging the dynamism of street art with the curated elegance of contemporary design. This unique visual tapestry transcends traditional categorizations, fostering a symbiotic relationship between contrasting worlds. In conclusion, this paper posits that the integration of vivid wall pieces represents a transformative tool for contemporary interior design, challenging and redefining conventional boundaries. By strategically bringing the "outside in," this approach transforms interior spaces and heralds a paradigm shift in the relationship between urban aesthetics and contemporary living. 
The ongoing narrative between urban vibrancy and interior design creates spaces that reflect the dynamic and ever-evolving nature of the surrounding environment. Keywords: art integration, contemporary interior design, interior space transformation, vivid wall pieces
Procedia PDF Downloads 79
31 Sustainable Urban Regeneration: The New Vocabulary and the Timeless Grammar of the Urban Tissue
Authors: Ruth Shapira
Abstract:
Introduction: The rapid urbanization of the last century confronts planners, regulatory bodies, developers and, most of all, the public with seemingly unsolved conflicts regarding the values, capital, and wellbeing of the built and un-built urban space. The scale of the urban form and the rhythm of urban life have changed beyond control, and despite the ever-growing urban population, little significant progress has been made in the last two to three decades. The objective of this paper is to analyze some of these fundamental issues through the case study of a relatively small town in the center of Israel (Kiryat Ono, 36,000 inhabitants), to unfold the deep structure of qualities versus disruptors, to present some of the remedies we have developed to bridge the gap, and to suggest a practice that may bring about a sustainable new urban environment based on the timeless values of the past, an approach that can be generic for similar cases. Basic Methodologies: The object, the town of Kiryat Ono, is examined through a series of four action processes: de-composition, re-composition, the centering process and, finally, controlled structural disintegration. Each stage is based on facts, on analysis of previous multidisciplinary interventions on various layers, and on the inevitable reaction of the object, leading to conclusions grounded in innovative theoretical and practical methods that we have developed and believe are appropriate for the open-ended network, setting the rules by which contemporary urban society can cluster, thus creating a new urban vocabulary based on the old structure of times past. The Study: Kiryat Ono was founded 70 years ago as an agricultural settlement and rapidly turned into an urban entity. In spite of the massive intensification, the original DNA of the old small town remained deeply embedded, mostly in the quality of the public space and in the sense of clustered communities. In the past 20 years, the demand for housing has been addressed at the national level with master plans and urban regeneration policies that mostly encourage individual economic initiatives. Unfortunately, due to the obsolete existing planning platform, the present urban renewal is characterized by developer pressure, a dramatic change in building scale, and widespread disintegration of the existing urban and social tissue. Our office was commissioned to conceptualize two master plans for the two contradictory processes in Kiryat Ono's future: intensification and conservation. Following a comprehensive investigation into the deep structures and qualities of the existing town, we developed a new vocabulary of conservation terms, thus redefining the sense of place. The main challenge was to create master plans that offer a regulatory basis for the accelerated and sporadic development, providing for the public good and preserving the characteristics of the place, consisting of a toolbox of design guidelines able to reorganize space along the time axis in a sustainable way. In conclusion: the system of rules we have developed can generate endless possible patterns while ensuring that at each implementation fragment an event is created and a better place is revealed.
It takes time and perseverance, but it seems to be the way to provide a healthy and sustainable framework for the accelerated urbanization of our chaotic present. Keywords: sustainable urban design, intensification, emergent urban patterns, sustainable housing, compact urban neighborhoods, sustainable regeneration, restoration, complexity, uncertainty, need for change, implications of legislation on local planning
Procedia PDF Downloads 388
30 A Qualitative Exploration of the Sexual and Reproductive Health Practices of Adolescent Mothers from Indigenous Populations in Ratanak Kiri Province, Cambodia
Authors: Bridget J. Kenny, Elizabeth Hoban, Jo Williams
Abstract:
Adolescent pregnancy presents a significant public health challenge for Cambodia. Despite declines in the overall fertility rate, the adolescent fertility rate is increasing. Adolescent pregnancy is particularly problematic in the Northeast provinces of Ratanak Kiri and Mondul Kiri where 34 percent of girls aged between 15 and 19 have begun childbearing; this is almost three times Cambodia’s national average of 12 percent. Language, cultural and geographic barriers have restricted qualitative exploration of the sexual and reproductive health (SRH) challenges that face indigenous adolescents in Northeast Cambodia. The current study sought to address this gap by exploring the SRH practices of adolescent mothers from indigenous populations in Ratanak Kiri Province. Twenty-two adolescent mothers, aged between 15 and 19, were recruited from seven indigenous villages in Ratanak Kiri Province and asked to participate in a combined body mapping exercise and semi-structured interview. Participants were given a large piece of paper (59.4 x 84.1 cm) with the outline of a female body and asked to draw the female reproductive organs onto the ‘body map’. Participants were encouraged to explain what they had drawn with the purpose of evoking conversation about their reproductive bodies. Adolescent mothers were then invited to participate in a semi-structured interview to further expand on topics of SRH. The qualitative approach offered an excellent avenue to explore the unique SRH challenges that face indigenous adolescents in rural Cambodia. In particular, the use of visual data collection methods reduced the language and cultural barriers that have previously restricted or prevented qualitative exploration of this population group. Thematic analysis yielded six major themes: (1) understanding of the female reproductive body, (2) contraceptive knowledge, (3) contraceptive use, (4) barriers to contraceptive use, (5) sexual practices, (6) contact with healthcare facilities. Participants could name several modern contraceptive methods and knew where they could access family planning services. However, adolescent mothers explained that they gained this knowledge during antenatal care visits and consequently participants had limited SRH knowledge, including contraceptive awareness, at the time of sexual initiation. Fear of the perceived side effects of modern contraception, including infertility, provided an additional barrier to contraceptive use for indigenous adolescents. Participants did not cite cost or geographic isolation as barriers to accessing SRH services. Child marriage and early sexual initiation were also identified as important factors contributing to the high prevalence of adolescent pregnancy in this population group. The findings support the Ministry of Education, Youth and Sports' (MoEYS) recent introduction of SRH education into the primary and secondary school curriculum but suggest indigenous girls in rural Cambodia require additional sources of SRH information. Results indicate adolescent girls’ first point of contact with healthcare facilities occurs after they become pregnant. Promotion of an effective continuum of care by increasing access to healthcare services during the pre-pregnancy period is suggested as a means of providing adolescents girls with an additional avenue to acquire SRH information.Keywords: adolescent pregnancy, contraceptive use, family planning, sexual and reproductive health
Procedia PDF Downloads 111
29 Photosynthesis Metabolism Affects Yield Potentials in Jatropha curcas L.: A Transcriptomic and Physiological Data Analysis
Authors: Nisha Govender, Siju Senan, Zeti-Azura Hussein, Wickneswari Ratnam
Abstract:
Jatropha curcas, a well-described bioenergy crop, has been widely accepted as a future fuel source, especially in tropical regions. However, ideal planting material for large-scale plantations is still lacking, and breeding programmes for improved J. curcas varieties are rendered difficult by limited genetic diversity. Using combined transcriptome and physiological data, we investigated the molecular and physiological differences between high- and low-yielding Jatropha curcas to address plausible heritable variations underpinning these differences with regard to photosynthesis, a key metabolism affecting yield potential. A total of six individual Jatropha plants from four accessions described as high- and low-yielding planting materials were selected from Experimental Plot A, Universiti Kebangsaan Malaysia (UKM), Bangi. Inflorescences and shoots were collected for the transcriptome study. For the physiological study, individual plants (n=10) from the high- and low-yielding populations were screened for agronomic traits, chlorophyll content, and stomatal patterning. The J. curcas transcriptomes are available under BioProject PRJNA338924 and BioSamples SAMN05827448-65, respectively. Each transcriptome was subjected to functional annotation of the sequence datasets using the Blast2GO suite: BLASTing, mapping, annotation, statistical analysis, and visualization. Large-scale phenotyping of the number of fruits per plant (NFPP) and fruits per inflorescence (FPI) classified the high-yielding Jatropha accessions with an average NFPP of 60 and FPI > 10, whereas the low-yielding accessions yielded an average NFPP of 10 and FPI < 5. Next-generation sequencing revealed differentially expressed genes (DEGs) in the high-yielding Jatropha relative to the low-yielding plants, with distinct differences in transcript levels associated with photosynthesis metabolism. The DEG collection in the low-yielding population indicated comparable CAM photosynthetic metabolism and photorespiration, evident in a phosphoenolpyruvate phosphate translocator chloroplastic-like isoform with a 2.5-fold change (FC) and malate dehydrogenase (2.03 FC). Green leaves show the most pronounced photosynthetic activity in the plant body owing to their significant accumulation of chloroplasts; in most plants, the leaf is the dominant photosynthesizing organ. A large number of the DEGs in the high-yielding population were attributable to the chloroplast and chloroplast-associated events: STAY-GREEN chloroplastic, chlorophyllase-1-like (5.08 FC), beta-amylase (3.66 FC), chlorophyllase chloroplastic-like (3.1 FC), thiamine thiazole chloroplastic-like (2.8 FC), 1,4-alpha-glucan branching enzyme chloroplastic/amyloplastic (2.6 FC), photosynthetic NDH subunit (2.1 FC), and protochlorophyllide chloroplastic (2 FC). These results parallel a significant increase in chlorophyll a content in the high-yielding population. In addition to the chloroplast-associated transcript abundance, TOO MANY MOUTHS (TMM) at 2.9 FC, which codes for distant stomatal distribution and patterning, may explain the high concentration of CO2 in the high-yielding population. The results agree with the known role of TMM: clustered stomata cause back diffusion when pores are localized closely to one another. We conclude that the high-yielding Jatropha population corresponds to a collective function of C3 metabolism with a low degree of CAM photosynthetic fixation.
From the physiological descriptions, the high chlorophyll a content and the even distribution of stomata in the leaf contribute to the better photosynthetic efficiency of the high-yielding Jatropha compared to the low-yielding population. Keywords: chlorophyll, gene expression, genetic variation, stomata
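To make the reported expression differences concrete, the following pandas sketch tabulates the fold changes quoted in the abstract and applies a conventional two-fold threshold; the threshold and the table layout are assumptions for illustration, not the authors' analysis pipeline:

```python
# Minimal sketch: tabulate the fold changes quoted in the abstract and flag
# transcripts passing a common >=2-fold threshold (the threshold is a general
# convention, not necessarily the cut-off used by the authors).
import pandas as pd

degs = pd.DataFrame(
    [
        ("phosphoenolpyruvate phosphate translocator (chloroplastic-like)", "low-yield", 2.5),
        ("malate dehydrogenase", "low-yield", 2.03),
        ("chlorophyllase-1-like", "high-yield", 5.08),
        ("beta-amylase", "high-yield", 3.66),
        ("chlorophyllase (chloroplastic-like)", "high-yield", 3.1),
        ("thiamine thiazole (chloroplastic-like)", "high-yield", 2.8),
        ("1,4-alpha-glucan branching enzyme", "high-yield", 2.6),
        ("photosynthetic NDH subunit", "high-yield", 2.1),
        ("protochlorophyllide (chloroplastic)", "high-yield", 2.0),
        ("TOO MANY MOUTHS (TMM)", "high-yield", 2.9),
    ],
    columns=["transcript", "population", "fold_change"],
)

selected = degs[degs["fold_change"] >= 2.0].sort_values("fold_change", ascending=False)
print(selected.groupby("population")["transcript"].count())
print(selected.head())
```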
Procedia PDF Downloads 238
28 Utilizing Extended Reality in Disaster Risk Reduction Education: A Scoping Review
Authors: Stefano Scippo, Damiana Luzzi, Stefano Cuomo, Maria Ranieri
Abstract:
Background: In response to the rise in natural disasters linked to climate change, numerous studies on Disaster Risk Reduction Education (DRRE) have emerged since the '90s, mainly using a didactic transmission-based approach. Effective DRRE should align with an interactive, experiential, and participatory educational model, which can be costly and risky. A potential solution is using simulations facilitated by eXtended Reality (XR). Research Question: This study aims to conduct a scoping review to explore educational methodologies that use XR to enhance knowledge among teachers, students, and citizens about environmental risks, natural disasters (including climate-related ones), and their management. Method: A search string of 66 keywords was formulated, spanning three domains: 1) education and target audience, 2) environment and natural hazards, and 3) technologies. On June 21st, 2023, the search string was used across five databases: EBSCOhost, IEEE Xplore, PubMed, Scopus, and Web of Science. After deduplication and removing papers without abstracts, 2,152 abstracts (published between 2013 and 2023) were analyzed and 2,062 papers were excluded, followed by the exclusion of 56 papers after full-text scrutiny. Excluded studies focused on unrelated technologies, non-environmental risks, and lacked educational outcomes or accessible texts. Main Results: The 34 reviewed papers were analyzed for context, risk type, research methodology, learning objectives, XR technology use, outcomes, and educational affordances of XR. Notably, since 2016, there has been a rise in scientific publications, focusing mainly on seismic events (12 studies) and floods (9), with a significant contribution from Asia (18 publications), particularly Japan (7 studies). Methodologically, the studies were categorized into empirical (26) and non-empirical (8). Empirical studies involved user or expert validation of XR tools, while non-empirical studies included systematic reviews and theoretical proposals without experimental validation. Empirical studies were further classified into quantitative, qualitative, or mixed-method approaches. Six qualitative studies involved small groups of users or experts, while 20 quantitative or mixed-method studies used seven different research designs, with most (17) employing a quasi-experimental, one-group post-test design, focusing on XR technology usability over educational effectiveness. Non-experimental studies had methodological limitations, making their results hypothetical and in need of further empirical validation. Educationally, the learning objectives centered on knowledge and skills for surviving natural disaster emergencies. All studies recommended XR technologies for simulations or serious games but did not develop comprehensive educational frameworks around these tools. XR-based tools showed potential superiority over traditional methods in teaching risk and emergency management skills. However, conclusions were more valid in studies with experimental designs; otherwise, they remained hypothetical without empirical evidence. The educational affordances of XR, mainly user engagement, were confirmed by the studies. Authors’ Conclusions: The analyzed literature lacks specific educational frameworks for XR in DRRE, focusing mainly on survival knowledge and skills. 
There is a need to expand educational approaches to include uncertainty education, developing competencies that encompass knowledge, skills, and attitudes such as risk perception. Keywords: disaster risk reduction education, educational technologies, scoping review, XR technologies
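As an illustration of the three-domain search-string structure described above, the following Python sketch joins OR-groups with AND; the example terms are placeholders, since the actual 66 keywords are not listed in the abstract:

```python
# Illustrative sketch of composing a three-domain Boolean search string
# (the actual 66 keywords used in the review are not listed in the abstract;
# the terms below are placeholders for each domain).
domains = {
    "education_and_audience": ["education", "training", "students", "teachers"],
    "environment_and_hazards": ["natural disaster", "flood", "earthquake", "climate risk"],
    "technologies": ["extended reality", "virtual reality", "augmented reality", "XR"],
}

def or_group(terms):
    # Quote multi-word terms, then join the group with OR.
    return "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"

search_string = " AND ".join(or_group(terms) for terms in domains.values())
print(search_string)
# (education OR training OR ...) AND ("natural disaster" OR flood OR ...) AND ("extended reality" OR ...)
```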
Procedia PDF Downloads 22
27 Production of Insulin Analogue SCI-57 by Transient Expression in Nicotiana benthamiana
Authors: Adriana Muñoz-Talavera, Ana Rosa Rincón-Sánchez, Abraham Escobedo-Moratilla, María Cristina Islas-Carbajal, Miguel Ángel Gómez-Lim
Abstract:
The highest rates of diabetes incidence and prevalence worldwide will increase the number of diabetic patients requiring insulin or insulin analogues. Then, current production systems would not be sufficient to meet the future market demands. Therefore, developing efficient expression systems for insulin and insulin analogues are needed. In addition, insulin analogues with better pharmacokinetics and pharmacodynamics properties and without mitogenic potential will be required. SCI-57 (single chain insulin-57) is an insulin analogue having 10 times greater affinity to the insulin receptor, higher resistance to thermal degradation than insulin, native mitogenicity and biological effect. Plants as expression platforms have been used to produce recombinant proteins because of their advantages such as cost-effectiveness, posttranslational modifications, absence of human pathogens and high quality. Immunoglobulin production with a yield of 50% has been achieved by transient expression in Nicotiana benthamiana (Nb). The aim of this study is to produce SCI-57 by transient expression in Nb. Methodology: DNA sequence encoding SCI-57 was cloned in pICH31070. This construction was introduced into Agrobacterium tumefaciens by electroporation. The resulting strain was used to infiltrate leaves of Nb. In order to isolate SCI-57, leaves from transformed plants were incubated 3 hours with the extraction buffer therefore filtrated to remove solid material. The resultant protein solution was subjected to anion exchange chromatography on an FPLC system and ultrafiltration to purify SCI-57. Detection of SCI-57 was made by electrophoresis pattern (SDS-PAGE). Protein band was digested with trypsin and the peptides were analyzed by Liquid chromatography tandem-mass spectrometry (LC-MS/MS). A purified protein sample (20µM) was analyzed by ESI-Q-TOF-MS to obtain the ionization pattern and the exact molecular weight determination. Chromatography pattern and impurities detection were performed using RP-HPLC using recombinant insulin as standard. The identity of the SCI-57 was confirmed by anti-insulin ELISA. The total soluble protein concentration was quantified by Bradford assay. Results: The expression cassette was verified by restriction mapping (5393 bp fragment). The SDS-PAGE of crude leaf extract (CLE) of transformed plants, revealed a protein of about 6.4 kDa, non-present in CLE of untransformed plants. The LC-MS/MS results displayed one peptide with a high score that matches SCI-57 amino acid sequence in the sample, confirming the identity of SCI-57. From the purified SCI-57 sample (PSCI-57) the most intense charge state was 1069 m/z (+6) on the displayed ionization pattern corresponding to the molecular weight of SCI-57 (6412.6554 Da). The RP-HPLC of the PSCI-57 shows the presence of a peak with similar retention time (rt) and UV spectroscopic profile to the insulin standard (SCI-57 rt=12.96 and insulin rt=12.70 min). The collected SCI-57 peak had ELISA signal. The total protein amount in CLE from transformed plants was higher compared to untransformed plants. Conclusions: Our results suggest the feasibility to produce insulin analogue SCI-57 by transient expression in Nicotiana benthamiana. Further work is being undertaken to evaluate the biological activity by glucose uptake by insulin-sensitive and insulin-resistant murine and human cultured adipocytes.Keywords: insulin analogue, mass spectrometry, Nicotiana benthamiana, transient expression
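The reported charge state can be checked with the standard electrospray relation m/z = (M + z × 1.00728)/z, assuming protonation as the charging mechanism; with M = 6412.6554 Da and z = +6 this gives roughly 1069.8, consistent with the most intense peak reported near 1069 m/z. A short Python check:

```python
# Quick check of the reported ESI charge state, assuming protonation ([M + zH]^z+):
# m/z = (M + z * m_proton) / z
PROTON_MASS = 1.00728  # Da

def mz(molecular_weight_da, charge):
    return (molecular_weight_da + charge * PROTON_MASS) / charge

M_SCI57 = 6412.6554  # Da, molecular weight reported for SCI-57
for z in range(4, 8):
    print(f"z = +{z}: m/z = {mz(M_SCI57, z):.2f}")
# z = +6 gives ~1069.8, matching the most intense state reported at ~1069 m/z.
```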
Procedia PDF Downloads 348
26 Electronic Raman Scattering Calibration for Quantitative Surface-Enhanced Raman Spectroscopy and Improved Biostatistical Analysis
Authors: Wonil Nam, Xiang Ren, Inyoung Kim, Masoud Agah, Wei Zhou
Abstract:
Despite its ultrasensitive detection capability, surface-enhanced Raman spectroscopy (SERS) faces challenges as a quantitative biochemical analysis tool due to the significant dependence of local field intensity in hotspots on nanoscale geometric variations of plasmonic nanostructures. Therefore, despite enormous progress in plasmonic nanoengineering of high-performance SERS devices, it is still challenging to quantitatively correlate the measured SERS signals with the actual molecule concentrations at hotspots. A significant effort has been devoted to developing SERS calibration methods by introducing internal standards. It has been achieved by placing Raman tags at plasmonic hotspots. Raman tags undergo similar SERS enhancement at the same hotspots, and ratiometric SERS signals for analytes of interest can be generated with reduced dependence on geometrical variations. However, using Raman tags still faces challenges for real-world applications, including spatial competition between the analyte and tags in hotspots, spectral interference, laser-induced degradation/desorption due to plasmon-enhanced photochemical/photothermal effects. We show that electronic Raman scattering (ERS) signals from metallic nanostructures at hotspots can serve as the internal calibration standard to enable quantitative SERS analysis and improve biostatistical analysis. We perform SERS with Au-SiO₂ multilayered metal-insulator-metal nano laminated plasmonic nanostructures. Since the ERS signal is proportional to the volume density of electron-hole occupation in hotspots, the ERS signals exponentially increase when the wavenumber is approaching the zero value. By a long-pass filter, generally used in backscattered SERS configurations, to chop the ERS background continuum, we can observe an ERS pseudo-peak, IERS. Both ERS and SERS processes experience the |E|⁴ local enhancements during the excitation and inelastic scattering transitions. We calibrated IMRS of 10 μM Rhodamine 6G in solution by IERS. The results show that ERS calibration generates a new analytical value, ISERS/IERS, insensitive to variations from different hotspots and thus can quantitatively reflect the molecular concentration information. Given the calibration capability of ERS signals, we performed label-free SERS analysis of living biological systems using four different breast normal and cancer cell lines cultured on nano-laminated SERS devices. 2D Raman mapping over 100 μm × 100 μm, containing several cells, was conducted. The SERS spectra were subsequently analyzed by multivariate analysis using partial least square discriminant analysis. Remarkably, after ERS calibration, MCF-10A and MCF-7 cells are further separated while the two triple-negative breast cancer cells (MDA-MB-231 and HCC-1806) are more overlapped, in good agreement with the well-known cancer categorization regarding the degree of malignancy. To assess the strength of ERS calibration, we further carried out a drug efficacy study using MDA-MB-231 and different concentrations of anti-cancer drug paclitaxel (PTX). After ERS calibration, we can more clearly segregate the control/low-dosage groups (0 and 1.5 nM), the middle-dosage group (5 nM), and the group treated with half-maximal inhibitory concentration (IC50, 15 nM). Therefore, we envision that ERS calibrated SERS can find crucial opportunities in label-free molecular profiling of complicated biological systems.Keywords: cancer cell drug efficacy, plasmonics, surface-enhanced Raman spectroscopy (SERS), SERS calibration
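The calibration idea reduces to reporting the ratio I_SERS/I_ERS measured on the same spectrum so that hotspot-dependent enhancement cancels. The following Python sketch demonstrates this on a synthetic spectrum; the peak positions, background shape, and peak-picking rule are illustrative assumptions, not the authors' processing pipeline:

```python
# Conceptual sketch of the ERS-ratio calibration (not the authors' pipeline):
# take an analyte SERS peak intensity and divide by the ERS pseudo-peak intensity
# measured on the same spectrum, so hotspot-to-hotspot enhancement variations cancel.
import numpy as np

def peak_intensity(wavenumbers, intensities, center, half_width=10.0):
    """Maximum intensity within +/- half_width cm^-1 of a nominal peak position."""
    mask = np.abs(wavenumbers - center) <= half_width
    return float(intensities[mask].max())

def ers_calibrated_signal(wavenumbers, intensities,
                          analyte_peak_cm1=1510.0, ers_pseudo_peak_cm1=200.0):
    """Return I_SERS / I_ERS for one spectrum (peak positions are illustrative)."""
    i_sers = peak_intensity(wavenumbers, intensities, analyte_peak_cm1)
    i_ers = peak_intensity(wavenumbers, intensities, ers_pseudo_peak_cm1)
    return i_sers / i_ers

# Synthetic spectrum: an ERS-like low-wavenumber continuum plus one analyte band,
# scaled by a random "hotspot factor" that the ratio should largely remove.
rng = np.random.default_rng(1)
wn = np.linspace(150, 1800, 1651)
hotspot_factor = rng.uniform(0.5, 2.0)
spectrum = hotspot_factor * (np.exp(-wn / 300.0)                              # ERS background
                             + 0.3 * np.exp(-((wn - 1510.0) / 8.0) ** 2))     # analyte band
print(f"hotspot factor: {hotspot_factor:.2f}")
print(f"calibrated I_SERS/I_ERS: {ers_calibrated_signal(wn, spectrum):.3f}")
```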
Procedia PDF Downloads 135
25 Contribution of Research to Innovation Management in Traditional Fruit Production
Authors: Camille Aouinaït, Danilo Christen, Christoph Carlen
Abstract:
Introduction: Small and Medium-sized Enterprises (SMEs) face various challenges, such as pressure on environmental resources, the rise of downstream power, and trade liberalization. Remaining competitive by implementing innovations and engaging in collaborations could be a strategic response. In Switzerland, the Federal Institute for Research in Agriculture (Agroscope), the Federal Institutes of Technology (EPFL and ETHZ), cantonal universities, and Universities of Applied Sciences (UAS) can provide substantial inputs. UAS were established with the specific mission of matching labor-market and societal needs, and their research projects produce patents, publications, and improved networks of scientific expertise. The goal of this study is to measure the contribution of UAS and research organizations to innovation, and the impact of collaborations with non-academic partners, in Swiss traditional fruit production. Materials and methods: The European projects Traditional Food Network to improve the transfer of knowledge for innovation (TRAFOON) and Social Impact Assessment of Productive Interactions between science and society (SIAMPI) frame the present study. The former aims to fill the gap between the needs of traditional food-producing SMEs and the innovations implemented in earlier European projects; the latter developed a method to assess the impacts of scientific research. On one hand, interviews with market players were performed to make an inventory of the needs of Swiss SMEs producing apricots and berries. This participative method allowed the current needs to be matched with existing innovations from past European projects. Swiss stakeholders (e.g., producers, retailers, and an inter-branch organization for fruits and vegetables) rated the needs directly on a five-point Likert scale. To transfer the knowledge to SMEs, training workshops were organized separately for apricot and berry actors on specific topics. On the other hand, a social network map was drawn to characterize the links between actors, with a focus on the Swiss canton of Valais and the UAS Valais Wallis. The type and frequency of interactions among actors were identified through interviews. Preliminary results: A list of 369 SME needs grouped into 22 categories was produced from 37 completed questionnaires, and Swiss stakeholders rated 31 needs as very important. Training workshops on apricot focus on varietal innovations, storage, disease (bacterial blight), pests (Drosophila suzukii), sorting, and rootstocks, while entrepreneurship was targeted through trademark discussions in berry production. The UAS Valais Wallis collaborated on a few projects with Agroscope and with industry at European and national levels. Political and public bodies are involved in the central area of agricultural extension, which fosters close relationships between research and practice. Conclusions: The needs identified by Swiss stakeholders are becoming part of training workshops to incentivize innovation. The UAS Valais Wallis takes part in collaborative projects with the research community and market players that bring innovations helping SMEs in their contextual environment. A Strategic Research and Innovation Agenda will then be created to pursue research and address the issues faced by SMEs. Keywords: agriculture, innovation, knowledge transfer, university and research collaboration
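A simple way to aggregate the stakeholder ratings described above is sketched below in pandas; the ratings and the cut-off for "very important" are invented for illustration, since the questionnaire data are not reproduced in the abstract:

```python
# Illustrative sketch of aggregating stakeholder Likert ratings to rank SME needs
# (the actual questionnaire data are not reproduced in the abstract; the ratings
# below are synthetic and the 4.5 cut-off for "very important" is an assumption).
import pandas as pd

ratings = pd.DataFrame({
    "need": ["varietal innovation", "Drosophila suzukii control", "storage",
             "varietal innovation", "Drosophila suzukii control", "storage"],
    "stakeholder": ["producer A", "producer A", "producer A",
                    "retailer B", "retailer B", "retailer B"],
    "likert_1_to_5": [5, 5, 3, 4, 5, 4],
})

summary = (ratings.groupby("need")["likert_1_to_5"]
           .agg(["mean", "count"])
           .sort_values("mean", ascending=False))
summary["very_important"] = summary["mean"] >= 4.5
print(summary)
```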
Procedia PDF Downloads 393
24 The Path to Ruthium: Insights into the Creation of a New Element
Authors: Goodluck Akaoma Ordu
Abstract:
Ruthium (Rth) represents a theoretical superheavy element with an atomic number of 119, proposed within the context of advanced materials science and nuclear physics. The conceptualization of Rth involves theoretical frameworks that anticipate its atomic structure, including a hypothesized stable isotope, Rth-320, characterized by 119 protons and 201 neutrons. The synthesis of Ruthium (Rth) hinges on intricate nuclear fusion processes conducted in state-of-the-art particle accelerators, notably utilizing Calcium-48 (Ca-48) as a projectile nucleus and Einsteinium-253 (Es-253) as a target nucleus. These experiments aim to induce fusion reactions that yield Ruthium isotopes, such as Rth-301, accompanied by neutron emission. Theoretical predictions outline various physical and chemical properties attributed to Ruthium (Rth). It is envisaged to possess a high density, estimated at around 25 g/cm³, with melting and boiling points anticipated to be exceptionally high, approximately 4000 K and 6000 K, respectively. Chemical studies suggest potential oxidation states of +2, +3, and +4, indicating a versatile reactivity, particularly with halogens and chalcogens. The atomic structure of Ruthium (Rth) is postulated to feature an electron configuration of [Rn] 5f^14 6d^10 7s^2 7p^2, reflecting its position in the periodic table as a superheavy element. However, the creation and study of superheavy elements like Ruthium (Rth) pose significant challenges. These elements typically exhibit very short half-lives, posing difficulties in their stabilization and detection. Research efforts are focused on identifying the most stable isotopes of Ruthium (Rth) and developing advanced detection methodologies to confirm their existence and properties. Specialized detectors are essential in observing decay patterns unique to Ruthium (Rth), such as alpha decay or fission signatures, which serve as key indicators of its presence and characteristics. The potential applications of Ruthium (Rth) span across diverse technological domains, promising innovations in energy production, material strength enhancement, and sensor technology. Incorporating Ruthium (Rth) into advanced energy systems, such as the Arc Reactor concept, could potentially amplify energy output efficiencies. Similarly, integrating Ruthium (Rth) into structural materials, exemplified by projects like the NanoArc gauntlet, could bolster mechanical properties and resilience. Furthermore, Ruthium (Rth)--based sensors hold promise for achieving heightened sensitivity and performance in various sensing applications. Looking ahead, the study of Ruthium (Rth) represents a frontier in both fundamental science and applied research. It underscores the quest to expand the periodic table and explore the limits of atomic stability and reactivity. Future research directions aim to delve deeper into Ruthium (Rth)'s atomic properties under varying conditions, paving the way for innovations in nanotechnology, quantum materials, and beyond. The synthesis and characterization of Ruthium (Rth) stand as a testament to human ingenuity and technological advancement, pushing the boundaries of scientific understanding and engineering capabilities. In conclusion, Ruthium (Rth) embodies the intersection of theoretical speculation and experimental pursuit in the realm of superheavy elements. It symbolizes the relentless pursuit of scientific excellence and the potential for transformative technological breakthroughs. 
As research continues to unravel the mysteries of Ruthium (Rth), it holds the promise of reshaping materials science and opening new frontiers in technological innovation. Keywords: superheavy element, nuclear fusion, bombardment, particle accelerator, nuclear physics, particle physics
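The nucleon bookkeeping behind the proposed route can be checked with elementary arithmetic: projectile and target Z and A must sum to those of the compound nucleus (20 + 99 = 119, 48 + 253 = 301), and the neutron count of the hypothesized Rth-320 follows from N = A - Z. A trivial Python check:

```python
# Nucleon bookkeeping for the proposed synthesis route (purely an arithmetic check).
ca48 = {"Z": 20, "A": 48}     # Calcium-48 projectile
es253 = {"Z": 99, "A": 253}   # Einsteinium-253 target

compound = {"Z": ca48["Z"] + es253["Z"], "A": ca48["A"] + es253["A"]}
print(compound)                      # {'Z': 119, 'A': 301} -> Rth-301 compound nucleus

# Hypothesised isotope Rth-320: neutron number follows from N = A - Z.
rth320 = {"Z": 119, "A": 320}
print("neutrons in Rth-320:", rth320["A"] - rth320["Z"])   # 201
```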
Procedia PDF Downloads 35
23 Policies for Circular Bioeconomy in Portugal: Barriers and Constraints
Authors: Ana Fonseca, Ana Gouveia, Edgar Ramalho, Rita Henriques, Filipa Figueiredo, João Nunes
Abstract:
Due to persistent climate pressures, there is a need to find a resilient economic system that is regenerative in nature. The bioeconomy offers the possibility of replacing non-renewable and non-biodegradable materials derived from fossil fuels with renewable and biodegradable ones, while a circular economy aims at sustainable and resource-efficient operations. The term "Circular Bioeconomy", which can be summarized as all activities that transform biomass for use in various product streams, expresses the interaction between these two ideas. Portugal has a very favourable context for promoting a Circular Bioeconomy because of its variety of climates and ecosystems, availability of biologically based resources, location, and geomorphology, and there have recently been political and legislative efforts to develop the Portuguese Circular Bioeconomy. The Action Plan for a Sustainable Bioeconomy, approved in 2021, is composed of five axes of intervention, ranging from sustainable production and the use of regionally based biological resources to the development of a circular and sustainable bioindustry through research and innovation. However, as the statistics show, Portugal is still far from achieving circularity: according to Eurostat, Portugal has a circularity rate of 2.8%, the second lowest among the member states of the European Union. Several challenges contribute to this scenario, including sectoral heterogeneity and fragmentation, the prevalence of small producers, a lack of attractiveness for younger generations, and the absence of collaborative solutions among producers and along value chains. In the Portuguese industrial sector, there is a tendency towards complex bureaucratic processes, which leads to economic and financial obstacles and an unclear national strategy. Together with the limited number of incentives the country offers to those who intend to abandon the linear economic model, many entrepreneurs are hesitant to invest the capital needed to make their companies more circular. The absence of disaggregated, georeferenced, and reliable information on the actual availability of biological resources is also a major issue, and low bioeconomy literacy among many sectoral agents and in society in general directly affects production and final-consumption decisions. The WinBio project seeks to outline a strategic approach for managing weaknesses and opportunities in the technology transfer process, given the reality of the territory, through road mapping and national and international benchmarking. The work included the identification and analysis of agents in the interior region of Portugal and of natural endogenous resources, products, and processes associated with potential development. Specific flows of biological waste, possible value chains, and the potential for replacing critical raw materials with bio-based products were assessed, taking into consideration other countries with a mature bioeconomy. The study found that the food industry, agriculture, forestry, and fisheries generate huge waste streams, which in turn provide an opportunity for establishing local bio-industries powered by this biomass. The project identified biological resources with potential for replication and applicability in the Portuguese context.
The richness of natural resources and potential identified in the interior region of Portugal is a major key to developing the circular economy and the sustainability of the country. Keywords: circular bioeconomy, interior region of Portugal, regional development, public policy
Procedia PDF Downloads 91
22 The Distribution of Prevalent Supplemental Nutrition Assistance Program-Authorized Food Store Formats Differs by U.S. Region and Rurality: Implications for Food Access and Obesity Linkages
Authors: Bailey Houghtaling, Elena Serrano, Vivica Kraak, Samantha Harden, George Davis, Sarah Misyak
Abstract:
United States (U.S.) Department of Agriculture Supplemental Nutrition Assistance Program (SNAP) participants are low-income Americans receiving federal dollars for supplemental food and beverage purchases. Participants use a variety of (traditional/non-traditional) SNAP-authorized stores for household dietary purchases - also representing food access points for all Americans. Importantly consumers' food and beverage purchases from non-traditional store formats tend to be higher in saturated fats, added sugars, and sodium when compared to purchases from traditional (e.g., grocery/supermarket) formats. Overconsumption of energy-dense and low-nutrient food and beverage products contribute to high obesity rates and adverse health outcomes that differ in severity among urban/rural U.S. locations and high/low-income populations. Little is known about the SNAP-authorized food store format landscape nationally, regionally, or by urban-rural status, as traditional formats are currently used as the gold standard in food access research. This research utilized publicly available U.S. databases to fill this large literature gap and to provide insight into modes of food access for vulnerable U.S. populations: (1) SNAP Retailer Locator which provides a list of all authorized food stores in the U.S., and; (2) Rural-Urban Continuum Codes (RUCC) that categorize U.S. counties as urban (RUCC 1-3) or rural (RUCC 4-9). Frequencies were determined for the highest occurring food store formats nationally and within two regionally diverse U.S. states – Virginia in the east and California in the west. Store format codes were assigned (e.g., grocery, drug, convenience, mass merchandiser, supercenter, dollar, club, or other). RUCC was applied to investigate state-level differences in urbanity-rurality regarding prevalent food store formats and Chi Square test of independence was used to determine if food store format distributions significantly (p < 0.05) differed by region or rurality. The resulting research sample that represented highly prevalent SNAP-authorized food stores nationally included 41.25% of all SNAP stores in the U.S. (N=257,839), comprised primarily of convenience formats (31.94%) followed by dollar (25.58%), drug (19.24%), traditional (10.87%), supercenter (6.85%), mass merchandiser (1.62%), non-food store or restaurant (1.81%), and club formats (1.09%). Results also indicated that the distribution of prevalent SNAP-authorized formats significantly differed by state. California had a lower proportion of traditional (9.96%) and a higher proportion of drug (28.92%) formats than Virginia- 11.55% and 19.97%, respectively (p < 0.001). Virginia also had a higher proportion of dollar formats (26.11%) when compared to California (10.64%) (p < 0.001). Significant differences were also observed for rurality variables (p < 0.001). Prominently, rural Virginia had a significantly higher proportion of dollar formats (41.71%) when compared to urban Virginia (21.78%) and rural California (21.21%). Non-traditional SNAP-authorized formats are highly prevalent and significantly differ in distribution by U.S. region and rurality. The largest proportional difference was observed for dollar formats where the least nutritious consumer purchases are documented in the literature. Researchers/practitioners should investigate non-traditional food stores at the local level using these research findings and similar applied methodologies to determine how access to various store formats impact obesity prevalence. 
For example, dollar stores may be prime targets for interventions to enhance nutritious consumer purchases in rural Virginia, while targeting drug formats in California may be more appropriate. Keywords: food access, food store format, nutrition interventions, SNAP consumers
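The Chi-square test of independence mentioned above can be reproduced in outline as follows; the contingency counts are synthetic placeholders rather than the study's SNAP retailer data:

```python
# Sketch of the Chi-square test of independence between store format and rurality
# (the counts below are synthetic placeholders, not the study's SNAP retailer data).
import pandas as pd
from scipy.stats import chi2_contingency

counts = pd.DataFrame(
    {"urban": [2178, 1200, 900], "rural": [4171, 800, 400]},
    index=["dollar", "convenience", "traditional"],
)

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p_value:.3g}")
# A small p-value indicates the format distribution differs significantly by rurality.
```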
Procedia PDF Downloads 139
21 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation
Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong
Abstract:
Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. CT images are inherently prone to artefacts because their image formation process involves a large number of independent detectors that are assumed to yield consistent measurements. There are a number of different artefact types, including noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. It is therefore desirable to remove nuisance factors from the degraded image, leaving the fundamental intrinsic information that allows better interpretation of the anatomical and pathological characteristics. However, this is a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on a deep neural network framework in which denoising auto-encoders are stacked to build multiple layers. The denoising auto-encoder is a variant of the classical auto-encoder that takes input data and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction of the same size as the input data. The reconstruction error can be measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme using residual-driven dropout is determined based on the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with back-propagation. In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors, including artefacts, based on the classical Total Variation problem, which can be efficiently solved by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders together with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of PSNR, and the qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to image interpretation tasks in a variety of medical imaging applications. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001), supervised by the IITP (Institute for Information and Communications Technology Promotion). Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation
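A compact PyTorch sketch of one denoising auto-encoder stage with squared-error loss and SGD is given below; the Total Variation decomposition, the residual-driven dropout scheme, and the stacking of multiple layers are omitted, and random tensors stand in for CT patches, so this illustrates the building block rather than the authors' full method:

```python
# Compact sketch of one denoising auto-encoder stage trained with squared error
# and SGD (not the authors' full stacked network; the TV-based decomposition and
# residual-driven dropout are omitted, and random tensors stand in for CT data).
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, n_pixels=64 * 64, n_hidden=512):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_pixels, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_pixels)   # reconstruction, same size as input

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_step(model, optimizer, clean, noise_std=0.1):
    """Corrupt the input, reconstruct, and minimise the squared reconstruction error."""
    corrupted = clean + noise_std * torch.randn_like(clean)
    recon = model(corrupted)
    loss = nn.functional.mse_loss(recon, clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = DenoisingAutoencoder()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
clean_batch = torch.rand(16, 64 * 64)       # stand-in for flattened intrinsic CT patches
for step in range(5):
    print(f"step {step}: loss = {train_step(model, optimizer, clean_batch):.4f}")
```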
Procedia PDF Downloads 189