Search results for: total cycle time
52 Traditional Lifestyles of the 'Mbuti' Indigenous Communities and the Relationship with the Preservation of Natural Resources in the Landscape of the Okapi Wildlife Reserve in a Context of Socio-cultural Upheaval, Democratic Republic of Congo
Authors: Chales Mumbere Musavandalo, Lucie B. Mugherwa, Gloire Kayitoghera Mulondi, Naanson Bweya, Muyisa Musongora, Francis Lelo Nzuzi
Abstract:
The landscape of the Okapi Wildlife Reserve in the Democratic Republic of Congo harbors a large community of Mbuti indigenous peoples, often described as the guardians of nature. Living in and off the forest has long been a sustainable strategy for preserving natural resources. This strategy, seen as a form of eco-responsible citizenship, draws upon ethnobotanical knowledge passed down through generations. However, these indigenous communities are facing socio-cultural upheaval, which impacts their traditional way of life. This study aims to assess the relationship between the Mbuti indigenous people’s way of life and the preservation of the Okapi Wildlife Reserve. The study was conducted under the assumption that, despite socio-cultural upheavals, the forest and its resources remain central to the Mbuti way of life. The study was conducted in six encampments, three of which were located inside the forest and two in the anthropized zone. The methodological approach initially involved group interviews in six Mbuti encampments. The objective of these interviews was to determine how these people perceive the various services provided by the forest and the resources obtained from this habitat. A pebble-counting technique was adopted to adapt the weighting of services and resources to the understanding of these communities. Subsequently, the study carried out ethnobotanical surveys to identify the wood resources frequently used by these communities. As a third step, the survey was complemented by a transect inventory, 1000 m long and 25 m wide, to enhance understanding of the abundance of these resources around the camps. Two transects were installed in each camp to carry out this inventory. Traditionally, the Mbuti communities sustain their livelihood through hunting, fishing, gathering for self-consumption, and basketry. The Manniophyton fulvum-based net remains the main hunting tool. The primary forest and the swamp are two habitats from which these peoples derive the majority of their resources. However, with the arrival of the Bantu people, who introduced agriculture based on cocoa production, the Mbuti communities started providing services to the Bantu in the form of labor and field guarding. This cultural symbiosis between Mbuti and Bantu has also led to non-traditional practices, such as the use of hunting rifles instead of nets and fishing nets instead of creels. The socio-economic and ecological environment in which Mbuti communities live is changing rapidly, including the resources they depend on. When the time factor is incorporated into their perception of ecosystem services, only their future (p-value = 0.0121), the provision of wood for energy (p-value = 0.1976), and wood for construction (p-value = 0.2548) would remain closely associated with the forest. For other services, such as food supply, medicine, and hunting, adaptation to Bantu customs is conceivable. Additionally, the abundance of wood used by the Mbuti people was high around encampments located in intact forest and low around those in anthropized areas. The traditional way of life of the Mbuti communities is influenced by this cultural symbiosis, reflected in their habits and the availability of resources. The land tenure security of Mbuti areas is crucial to preserving their traditions and forest biodiversity. Conservation efforts in the Okapi Wildlife Reserve must consider this cultural dynamism and promote positive values for the flagship species.
The oversight of subsistence hunting is imperative to curtail the transition of these communities to poaching.
Keywords: traditional life, conservation, Indigenous people, cultural symbiosis, forest
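As a quick illustration of how the transect inventory described above can be turned into abundance figures, the sketch below converts stem counts into stems per hectare using the stated transect dimensions (1000 m × 25 m, two transects per camp). The camp names and counts are hypothetical; Python is used purely for illustration.

```python
# Illustrative sketch (not the study's data): converting transect counts of
# frequently used wood species into stems per hectare, assuming the
# 1000 m x 25 m transect dimensions and two transects per camp given above.

TRANSECT_LENGTH_M = 1000
TRANSECT_WIDTH_M = 25
TRANSECT_AREA_HA = (TRANSECT_LENGTH_M * TRANSECT_WIDTH_M) / 10_000  # 2.5 ha

def stems_per_hectare(stem_count: int, n_transects: int = 2) -> float:
    """Density of a species over the transects surveyed at one encampment."""
    return stem_count / (TRANSECT_AREA_HA * n_transects)

# Hypothetical counts for two encampments (names and numbers are made up).
counts = {"intact-forest camp": 180, "anthropized-zone camp": 35}
for camp, n in counts.items():
    print(f"{camp}: {stems_per_hectare(n):.1f} stems/ha")
```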
Procedia PDF Downloads 59
51 Improving Data Completeness and Timely Reporting: A Joint Collaborative Effort between Partners in Health and Ministry of Health in Remote Areas, Neno District, Malawi
Authors: Wiseman Emmanuel Nkhomah, Chiyembekezo Kachimanga, Moses Banda Aron, Julia Higgins, Manuel Mulwafu, Kondwani Mpinga, Mwayi Chunga, Grace Momba, Enock Ndarama, Dickson Sumphi, Atupere Phiri, Fabien Munyaneza
Abstract:
Background: Data is key to supporting health service delivery, as stakeholders, including NGOs, rely on it for effective service delivery, decision-making, and system strengthening. Several studies have generated debate on data quality from national health management information systems (HMIS) in sub-Saharan Africa. This limits the utilization of data in resource-limited settings, which already struggle to meet standards set by the World Health Organization (WHO). We aimed to evaluate data quality improvement of the Neno district HMIS over a 4-year period (2018 – 2021) following quarterly data reviews introduced in January 2020 by the district health management team and Partners In Health. Methods: An exploratory mixed-methods design was used to examine reporting rates, followed by in-depth interviews using Key Informant Interviews (KIIs) and Focus Group Discussions (FGDs). We used the WHO desk review module to assess the quality of HMIS data in the Neno district captured from 2018 to 2021. The metrics assessed included the completeness and timeliness of 34 reports. Completeness was measured as the percentage of non-missing reports. Timeliness was measured as the percentage of reports submitted within the expected reporting period. We computed t-tests and recorded p-values, summaries, and percentage changes using R and Excel 2016. We analyzed demographics for key informant interviews in Power BI. We developed themes from 7 FGDs and 11 KIIs using Dedoose software, from which we drew the perceptions of healthcare workers, interventions implemented, and improvement suggestions. The study was reviewed and approved by the Malawi National Health Science Research Committee (IRB: 22/02/2866). Results: Overall, the average reporting completeness rate was 83.4% (before) and 98.1% (after), while timeliness was 68.1% and 76.4%, respectively. Completeness of reports increased over time: 2018, 78.8%; 2019, 88%; 2020, 96.3%; and 2021, 99.9% (p < 0.004). The trend for timeliness was declining except in 2021, when it improved: 2018, 68.4%; 2019, 68.3%; 2020, 67.1%; and 2021, 81% (p < 0.279). Comparing 2021 reporting rates to the mean of the three preceding years, completeness increased from 88% to 99%, while timeliness increased from 68% to 81%. Sixty-five percent of reports consistently met the national standard of 90% or above for completeness, compared with only 24% for timeliness. Thirty-two percent of reports met the national standard. Only 9% improved on both completeness and timeliness; these were the cervical cancer, nutrition care support and treatment, and youth-friendly health services reports. Fifty percent of reports did not improve to the standard for timeliness, and only one did not for completeness. Factors associated with improvement included improved communication and reminders through internal channels, data quality assessments, checks, and reviews. Decentralizing data entry to the facility level was suggested to improve timeliness. Conclusion: Findings suggest that data quality in the district HMIS has improved following collaborative efforts. We recommend maintaining such initiatives to identify remaining quality gaps and sharing results publicly to support increased use of data. These results can inform the Ministry of Health and its partners about effective interventions and guide initiatives for improving data quality.
Keywords: data quality, data utilization, HMIS, collaboration, completeness, timeliness, decision-making
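A minimal sketch of how completeness and timeliness rates like those above can be computed and the before/after periods compared. The data frame layout, indicator coding, and the use of Welch's t-test are assumptions for illustration; the study itself used R and Excel.

```python
# Minimal sketch (assumed data layout, not the study's dataset): completeness
# and timeliness rates per year, plus a before/after comparison with a t-test.
import pandas as pd
from scipy import stats

# Hypothetical report tracker: one row per expected facility report.
df = pd.DataFrame({
    "year":      [2018, 2018, 2019, 2020, 2020, 2021, 2021, 2021],
    "submitted": [1, 0, 1, 1, 1, 1, 1, 1],       # 1 = report received
    "on_time":   [1, 0, 0, 1, 1, 1, 1, 1],       # 1 = received by the deadline
})

summary = df.groupby("year").agg(
    completeness=("submitted", "mean"),
    timeliness=("on_time", "mean"),
) * 100
print(summary)

# Compare the period before (2018-2019) and after (2020-2021) the quarterly
# data reviews using Welch's t-test on the per-report completeness indicator.
before = df[df.year < 2020]["submitted"]
after = df[df.year >= 2020]["submitted"]
t, p = stats.ttest_ind(after, before, equal_var=False)
print(f"completeness change: t = {t:.2f}, p = {p:.3f}")
```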
Procedia PDF Downloads 84
50 Enhancing Plant Throughput in Mineral Processing Through Multimodal Artificial Intelligence
Authors: Muhammad Bilal Shaikh
Abstract:
Mineral processing plants play a pivotal role in extracting valuable minerals from raw ores, contributing significantly to various industries. However, the optimization of plant throughput remains a complex challenge, necessitating innovative approaches for increased efficiency and productivity. This research paper investigates the application of Multimodal Artificial Intelligence (MAI) techniques to address this challenge, aiming to improve overall plant throughput in mineral processing operations. The integration of multimodal AI leverages a combination of diverse data sources, including sensor data, images, and textual information, to provide a holistic understanding of the complex processes involved in mineral extraction. The paper explores the synergies between various AI modalities, such as machine learning, computer vision, and natural language processing, to create a comprehensive and adaptive system for optimizing mineral processing plants. The primary focus of the research is on developing advanced predictive models that can accurately forecast various parameters affecting plant throughput. Utilizing historical process data, machine learning algorithms are trained to identify patterns, correlations, and dependencies within the intricate network of mineral processing operations. This enables real-time decision-making and process optimization, ultimately leading to enhanced plant throughput. Incorporating computer vision into the multimodal AI framework allows for the analysis of visual data from sensors and cameras positioned throughout the plant. This visual input aids in monitoring equipment conditions, identifying anomalies, and optimizing the flow of raw materials. The combination of machine learning and computer vision enables the creation of predictive maintenance strategies, reducing downtime and improving the overall reliability of mineral processing plants. Furthermore, the integration of natural language processing facilitates the extraction of valuable insights from unstructured textual data, such as maintenance logs, research papers, and operator reports. By understanding and analyzing this textual information, the multimodal AI system can identify trends, potential bottlenecks, and areas for improvement in plant operations. This comprehensive approach enables a more nuanced understanding of the factors influencing throughput and allows for targeted interventions. The research also explores the challenges associated with implementing multimodal AI in mineral processing plants, including data integration, model interpretability, and scalability. Addressing these challenges is crucial for the successful deployment of AI solutions in real-world industrial settings. To validate the effectiveness of the proposed multimodal AI framework, the research conducts case studies in collaboration with mineral processing plants. The results demonstrate tangible improvements in plant throughput, efficiency, and cost-effectiveness. The paper concludes with insights into the broader implications of implementing multimodal AI in mineral processing and its potential to revolutionize the industry by providing a robust, adaptive, and data-driven approach to optimizing plant operations. In summary, this research contributes to the evolving field of mineral processing by showcasing the transformative potential of multimodal artificial intelligence in enhancing plant throughput. 
The proposed framework offers a holistic solution that integrates machine learning, computer vision, and natural language processing to address the intricacies of mineral extraction processes, paving the way for a more efficient and sustainable future in the mineral processing industry.
Keywords: multimodal AI, computer vision, NLP, mineral processing, mining
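A hedged sketch of the late-fusion idea the abstract describes: tabular sensor features, an image embedding, and a text embedding are concatenated and fed to a regressor that predicts throughput. All feature names, dimensions, and the synthetic data are assumptions, not the paper's models or datasets.

```python
# Simple late-fusion sketch: sensor features + image embedding + text embedding
# concatenated into one feature matrix for a throughput regressor.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 200
sensor_feats = rng.normal(size=(n, 6))     # e.g. feed rate, mill power, pulp density
image_embed  = rng.normal(size=(n, 16))    # e.g. CNN embedding of camera frames
text_embed   = rng.normal(size=(n, 8))     # e.g. embedded operator log entries
throughput   = rng.normal(loc=500, scale=25, size=n)  # tonnes/hour (synthetic)

X = np.hstack([sensor_feats, image_embed, text_embed])  # late fusion by concatenation
model = GradientBoostingRegressor().fit(X[:150], throughput[:150])
# The data are random, so this score is not meaningful; the point is the wiring.
print("held-out R^2:", model.score(X[150:], throughput[150:]))
```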
Procedia PDF Downloads 68
49 Revolutionizing Oil Palm Replanting: Geospatial Terrace Design for High-precision Ground Implementation Compared to Conventional Methods
Authors: Nursuhaili Najwa Masrol, Nur Hafizah Mohammed, Nur Nadhirah Rusyda Rosnan, Vijaya Subramaniam, Sim Choon Cheak
Abstract:
Replanting in oil palm cultivation is vital to enable the introduction of planting materials and provides an opportunity to improve the road, drainage, terrace design, and planting density. Oil palm replanting is fundamentally necessary every 25 years. The adoption of a digital replanting blueprint is imperative, as it can assist the Malaysian oil palm industry in addressing challenges such as labour shortages and limited expertise related to replanting tasks. Effective replanting planning should commence at least 6 months prior to the actual replanting process. Therefore, this study will help to plan and design the replanting blueprint with high-precision translation on the ground. With the advancement of geospatial technology, it is now feasible to engage in thoroughly researched planning, which can help maximize the potential yield. A blueprint designed before replanting enhances management’s ability to optimize the planting program, address manpower issues, and even increase productivity. In terrace planting blueprints, geographic tools have been utilized to design the roads, drainage, terraces, and planting points based on the ARM standards. These designs are mapped with location information and undergo statistical analysis. The geospatial approach is essential in precision agriculture and ensures an accurate translation of the design to the ground by implementing high-accuracy technologies. In this study, geospatial and remote sensing technologies played a vital role. LiDAR data was employed to derive the Digital Elevation Model (DEM), enabling the precise selection of terraces, while ortho imagery was used for validation purposes. Throughout the design process, Geographical Information System (GIS) tools were extensively utilized. To assess the design’s reliability on the ground compared with the current conventional method, high-precision GPS instruments such as the EOS Arrow Gold and HIPER VR GNSS were used, both offering accuracy levels between 0.3 cm and 0.5 cm. A Nearest Distance Analysis was performed to compare the design with actual planting on the ground, as sketched after this abstract. The analysis revealed that it could not be applied to the roads due to discrepancies between the as-built roads and the blueprint design, which resulted in minimal variance. In contrast, the terraces closely adhered to the GPS markings, with the maximum deviation being less than 0.5 meters from the terraces actually constructed. Considering the required slope for terrace planting, which must be greater than 6 degrees, the study found that approximately 65% of the terracing was constructed at a 12-degree slope, while over 50% of the terracing was constructed at slopes exceeding the minimum requirement. Blueprint-based replanting offers promising strategies for optimizing land utilization in agriculture. This approach harnesses technology and meticulous planning to yield advantages, including increased efficiency, enhanced sustainability, and cost reduction. Practical implementation of this technique can lead to tangible and significant improvements in the agricultural sector. To boost efficiency further, future initiatives will require more sophisticated techniques and the incorporation of precision GPS devices for upcoming blueprint replanting projects, alongside strategic progression aimed at guaranteeing the precision of both the blueprint design stage and its subsequent implementation in the field.
Looking ahead, automating digital blueprints is necessary to reduce time, workforce, and costs in commercial production.
Keywords: replanting, geospatial, precision agriculture, blueprint
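A small sketch of the kind of nearest-distance check referenced above: GPS-surveyed as-built points are matched to their nearest blueprint points and the deviations reported. The coordinates are hypothetical projected values in metres; the study's own workflow used GIS tooling.

```python
# Illustrative nearest-distance comparison between blueprint planting points
# and as-built GPS points (hypothetical coordinates in a projected CRS, metres).
import numpy as np
from scipy.spatial import cKDTree

design_pts = np.array([[0.0, 0.0], [9.0, 0.0], [18.0, 0.0], [0.0, 9.0]])
asbuilt_pts = np.array([[0.3, -0.2], [9.4, 0.1], [17.8, 0.3], [0.2, 9.2]])

tree = cKDTree(design_pts)
dist, idx = tree.query(asbuilt_pts)       # nearest blueprint point per as-built point
print("per-point deviation (m):", np.round(dist, 2))
print("max deviation (m):", dist.max())   # the abstract reports < 0.5 m for terraces
```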
Procedia PDF Downloads 83
48 Ensemble Sampler For Infinite-Dimensional Inverse Problems
Authors: Jeremie Coullon, Robert J. Webber
Abstract:
We introduce a Markov chain Monte Carlo (MCMC) sampler for infinite-dimensional inverse problems. Our sampler is based on the affine invariant ensemble sampler, which uses interacting walkers to adapt to the covariance structure of the target distribution. We extend this ensemble sampler for the first time to infinite-dimensional function spaces, yielding a highly efficient gradient-free MCMC algorithm. Because our ensemble sampler does not require gradients or posterior covariance estimates, it is simple to implement and broadly applicable. In many Bayesian inverse problems, Markov chain Monte Carlo (MCMC) methods are needed to approximate distributions on infinite-dimensional function spaces, for example, in groundwater flow, medical imaging, and traffic flow. Yet designing efficient MCMC methods for function spaces has proved challenging. Recent gradient-based MCMC methods, preconditioned MCMC methods, and SMC methods have improved the computational efficiency of functional random walk. However, these samplers require gradients or posterior covariance estimates that may be challenging to obtain. Calculating gradients is difficult or impossible in many high-dimensional inverse problems involving a numerical integrator with a black-box code base. Additionally, accurately estimating posterior covariances can require a lengthy pilot run or adaptation period. These concerns raise the question: is there a functional sampler that outperforms functional random walk without requiring gradients or posterior covariance estimates? To address this question, we consider a gradient-free sampler that avoids explicit covariance estimation yet adapts naturally to the covariance structure of the sampled distribution. This sampler works by considering an ensemble of walkers and interpolating and extrapolating between walkers to make a proposal. This is called the affine invariant ensemble sampler (AIES), which is easy to tune, easy to parallelize, and efficient at sampling spaces of moderate dimensionality (less than 20). The main contribution of this work is to propose a functional ensemble sampler (FES) that combines functional random walk and AIES. To apply this sampler, we first calculate the Karhunen–Loève (KL) expansion for the Bayesian prior distribution, assumed to be Gaussian and trace-class. Then, we use AIES to sample the posterior distribution on the low-wavenumber KL components and use the functional random walk to sample the posterior distribution on the high-wavenumber KL components. Alternating between AIES and functional random walk updates, we obtain our functional ensemble sampler that is efficient and easy to use without requiring detailed knowledge of the target distribution. In past work, several authors have proposed splitting the Bayesian posterior into low-wavenumber and high-wavenumber components and then applying enhanced sampling to the low-wavenumber components. Yet compared to these other samplers, FES is unique in its simplicity and broad applicability. FES does not require any derivatives, and the need for derivative-free samplers has previously been emphasized. FES also eliminates the requirement for posterior covariance estimates. Lastly, FES is more efficient than other gradient-free samplers in our tests. In two numerical examples, we apply FES to challenging inverse problems that involve estimating a functional parameter and one or more scalar parameters.
We compare the performance of functional random walk, FES, and an alternative derivative-free sampler that explicitly estimates the posterior covariance matrix. We conclude that FES is the fastest available gradient-free sampler for these challenging and multimodal test problems.
Keywords: Bayesian inverse problems, Markov chain Monte Carlo, infinite-dimensional inverse problems, dimensionality reduction
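A compact sketch of the alternating scheme described above: affine-invariant stretch moves on the low-wavenumber KL coefficients combined with a prior-preserving pCN random walk on the high-wavenumber coefficients. The toy prior, forward map, and tuning constants are assumptions for illustration and are not the paper's test problems.

```python
# Sketch of the FES idea: AIES stretch moves on the low-wavenumber block,
# pCN updates on the high-wavenumber block. Everything here is a toy setup.
import numpy as np

rng = np.random.default_rng(1)

D, M = 50, 5                                 # total KL modes, "low-wavenumber" modes
lam = 1.0 / np.arange(1, D + 1) ** 2         # prior variances of the KL coefficients
y_obs, noise = 0.7, 0.1                      # toy scalar observation

def log_like(u):
    pred = np.sum(u[:10])                    # toy forward map
    return -0.5 * ((pred - y_obs) / noise) ** 2

def log_post(u):
    return log_like(u) - 0.5 * np.sum(u ** 2 / lam)

L = 10                                       # ensemble walkers
walkers = rng.normal(scale=np.sqrt(lam), size=(L, D))   # start from the prior
a, beta = 2.0, 0.2                           # AIES stretch scale, pCN step size

for it in range(2000):
    # AIES stretch move on the low-wavenumber block of each walker.
    for k in range(L):
        j = rng.choice([i for i in range(L) if i != k])
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a    # z ~ g(z) on [1/a, a]
        prop = walkers[k].copy()
        prop[:M] = walkers[j, :M] + z * (walkers[k, :M] - walkers[j, :M])
        log_acc = (M - 1) * np.log(z) + log_post(prop) - log_post(walkers[k])
        if np.log(rng.random()) < log_acc:
            walkers[k] = prop
    # pCN move on the high-wavenumber block (prior-preserving proposal).
    for k in range(L):
        prop = walkers[k].copy()
        xi = rng.normal(scale=np.sqrt(lam[M:]))
        prop[M:] = np.sqrt(1 - beta ** 2) * walkers[k, M:] + beta * xi
        if np.log(rng.random()) < log_like(prop) - log_like(walkers[k]):
            walkers[k] = prop

print("posterior mean of first KL coefficient:", walkers[:, 0].mean())
```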
Procedia PDF Downloads 154
47 Towards Better Integration: Qualitative Study on Perceptions of Russian-Speaking Immigrants in Australia
Authors: Oleg Shovkovyy
Abstract:
This research was conducted in response to one of the most pressing questions on the agenda of many public administration offices around the world: “What could be done for better integration and assimilation of immigrants into hosting communities?” In the author’s view, the answer could be suggested by immigrants themselves. They, often ‘bogged down in the past,’ snared by their own idols and demons, perceive things differently, which, in turn, may result in their inability to integrate smoothly into hosting communities. A brief literature review suggests that the perceptions of immigrants are largely neglected or unsought in current research on migrants, which is often based on opinion polls of members of hosting communities themselves or on superficial research data from various research organizations. Even those studies that include the voices of immigrants are unlikely to shed additional light on the problem, simply because certain things are not said out loud, especially to those in whose hands immigrants’ fate lies (the authorities). In this regard, this qualitative study, conducted by an insider to a few Russian-speaking communities, represents a unique opportunity for all stakeholders to look at the question of integration through the eyes of immigrants, from a different perspective, and thus makes the research findings especially valuable for a better understanding of the problem. The case study employed ethnographic methods of gathering data in which approximately 200 first- and second-generation Russian-speaking immigrants were closely observed by the Russian-speaking researcher in their usual settings, over eight months, and at different venues. A number of informal interviews were conducted with 27 key informants with whom the researcher managed to establish good rapport and who were keen enough to share their experiences voluntarily. Field notes were taken at 14 locations (study sites) within the Brisbane region of Queensland, Australia. Moreover, throughout this time, the researcher lived in the dwelling of one of the immigrants and was an active participant in the social life (worship, picnics, dinners, weekend schools, concerts, cultural events, social gatherings, etc.) of the observed communities, whose members, to a large extent, belong to various religious lines of the Russian and Protestant churches. It was found that the majority of immigrants had experienced some discrimination in matters of hiring, employment, and recognition of educational qualifications from home countries, and simply felt a sort of dislike from society in various everyday situations. Many noted the complete absence of, or very limited, state assistance in terms of employment, training, education, and housing. For instance, the Australian Government Department of Human Services not only does not stimulate job searching but, on the contrary, encourages the refusal of short-term work and employment. On the other hand, the free courses offered on adaptation and the English language proved to be ineffective and unpopular amongst immigrants. Many interviewees reported overstated requirements for English proficiency and local work experience even where these were not critical for the given task or job. Based on the results of long-term monitoring, the researcher also ventures to assert the negative and decelerating role of immigrants’ communities, particularly religious communities, in the processes of integration and assimilation.
The findings suggest that governments should either tighten current immigration policies or take a more proactive and responsible role in dealing with immigrant-related issues; for instance, by increasing assistance and support to all immigrants and, perhaps, paying more attention to and taking a stake in managing and organizing the life of immigrants’ communities rather than simply leaving it all to chance.
Keywords: Australia, immigration, integration, perceptions
Procedia PDF Downloads 220
46 Predicting Open Chromatin Regions in Cell-Free DNA Whole Genome Sequencing Data by Correlation Clustering
Authors: Fahimeh Palizban, Farshad Noravesh, Amir Hossein Saeidian, Mahya Mehrmohamadi
Abstract:
In the recent decade, the emergence of liquid biopsy has significantly improved cancer monitoring and detection. Dying cells, including those originating from tumors, shed their DNA into the blood and contribute to a pool of circulating fragments called cell-free DNA. Accordingly, identifying the tissue origin of these DNA fragments from the plasma can result in more accurate and faster disease diagnosis and precise treatment protocols. Open chromatin regions are important epigenetic features of DNA that reflect cell types of origin. Profiling these features by DNase-seq, ATAC-seq, and histone ChIP-seq provides insights into tissue-specific and disease-specific regulatory mechanisms. There have been several studies in the area of cancer liquid biopsy that integrate distinct genomic and epigenomic features for early cancer detection along with tissue-of-origin detection. However, multimodal analysis requires several types of experiments to cover the genomic and epigenomic aspects of a single sample, which leads to a huge amount of cost and time. To overcome these limitations, the idea of predicting OCRs from WGS is of particular importance. In this regard, we proposed a computational approach to target the prediction of open chromatin regions as an important epigenetic feature from cell-free DNA whole genome sequence data. To fulfill this objective, local sequencing depth is fed to our proposed algorithm, and the prediction of the most probable open chromatin regions from whole genome sequencing data can be carried out. Our method integrates a signal processing approach with sequencing depth data and includes count normalization, Discrete Fourier Transform conversion, graph construction, graph cut optimization by linear programming, and clustering. To validate the proposed method, we compared the output of the clustering (open chromatin region+, open chromatin region-) with previously validated open chromatin regions related to human blood samples in the ATAC-DB database. The percentage of overlap between predicted open chromatin regions and the experimentally validated regions obtained by ATAC-seq in ATAC-DB is greater than 67%, which indicates meaningful prediction. As is evident, OCRs are mostly located at the transcription start sites (TSS) of genes. In this regard, we compared the concordance between the predicted OCRs and the human gene TSS regions obtained from refTSS, which showed agreement of around 52.04% with all genes and ~78% with the housekeeping genes, respectively. Accurately detecting open chromatin regions from plasma cell-free DNA-seq data is a very challenging computational problem due to the existence of several confounding factors, such as technical and biological variations. Although this approach is in its infancy, there has already been an attempt to apply it, which led to a tool named OCRDetector, with some restrictions such as the need for high-depth cfDNA WGS data, prior information about the distribution of OCRs, and the consideration of multiple features. However, we implemented a graph signal clustering based on a single depth feature in an unsupervised learning manner, which resulted in faster performance and decent accuracy.
Overall, we tried to investigate the epigenomic pattern of a cell-free DNA sample from a new computational perspective that can be used along with other tools to investigate the genetic and epigenetic aspects of a single whole genome sequencing dataset for efficient liquid biopsy-related analysis.
Keywords: open chromatin regions, cancer, cell-free DNA, epigenomics, graph signal processing, correlation clustering
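A simplified sketch of the pipeline outlined above: normalise local sequencing depth, smooth it with a Discrete Fourier Transform low-pass filter, and split windows into putative OCR+/OCR- classes. For brevity the graph-cut optimization step is replaced here by a plain k-means split, and the coverage signal is synthetic.

```python
# Toy version of the depth-based OCR prediction pipeline: normalisation,
# DFT low-pass smoothing, then a two-class split of the smoothed signal.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
depth = rng.poisson(lam=30, size=2000).astype(float)   # toy per-window coverage
depth[500:540] *= 0.6                                  # simulated coverage dip

norm = depth / depth.mean()                            # count normalisation

# DFT low-pass: keep only the lowest-frequency components of the depth signal.
spec = np.fft.rfft(norm)
spec[50:] = 0
smooth = np.fft.irfft(spec, n=norm.size)

# Stand-in for the graph cut + correlation clustering: split windows into two
# classes and call the lower-coverage class the candidate OCRs.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(smooth.reshape(-1, 1))
ocr_label = labels[np.argmin(smooth)]
print("windows flagged as candidate OCRs:", int(np.sum(labels == ocr_label)))
```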
Procedia PDF Downloads 150
45 Artificial Intelligence Impact on the Australian Government Public Sector
Authors: Jessica Ho
Abstract:
AI has helped governments, businesses, and industries transform the way they do things. AI is used in automating tasks to improve decision-making and efficiency. AI is embedded in sensors and used in automation to help save time and eliminate human errors in repetitive tasks. Today, we see growth in AI that uses the collection of vast amounts of data to forecast with greater accuracy, inform decision-making, adapt to changing market conditions, and offer more personalised services based on consumer habits and preferences. Governments around the world share the opportunity to leverage these disruptive technologies to improve productivity while reducing costs. In addition, these intelligent solutions can also help streamline government processes to deliver more seamless and intuitive user experiences for employees and citizens. This is a critical challenge for the NSW Government, as we are unable to determine the risk brought by the unprecedented pace of adoption of AI solutions in government. Government agencies must ensure that their use of AI complies with relevant laws and regulatory requirements, including those related to data privacy and security. Furthermore, there will always be ethical concerns surrounding the use of AI, such as the potential for bias, intellectual property rights, and its impact on job security. Within NSW’s public sector, agencies are already testing AI for crowd control, infrastructure management, fraud compliance, public safety, transport, and police surveillance. Citizens are also attracted to the ease of use and accessibility of AI solutions that do not require specialised technical skills. This increased accessibility, however, comes with a higher risk and exposure to the health and safety of citizens. On the other side, public agencies struggle to keep up with this pace while minimising risks, and the low entry cost and open-source nature of generative AI have led to a rapid, organic increase in the development of AI-powered apps – “There is an AI for That” in Government. Other challenges include the fact that there appear to be no legislative provisions that expressly authorise the NSW Government to use AI to make decisions. On the global stage, there are too many actors in the regulatory space, and a sovereign response is needed to minimise multiplicity and regulatory burden. Therefore, traditional corporate risk and governance frameworks, and regulatory and legislative frameworks, will need to be evaluated against AI’s unique challenges, owing to its rapidly evolving nature, ethical considerations, and heightened regulatory scrutiny affecting the safety of consumers and increasing risks for Government. Creating an effective, efficient NSW Government governance regime, adapted to the range of different approaches to the application of AI, is not a mere matter of overcoming technical challenges. Technologies have a wide range of social effects on our surroundings and behaviours. There is compelling evidence to show that Australia's sustained social and economic advancement depends on AI's ability to spur economic growth, boost productivity, and address a wide range of societal and political issues. AI may also inflict significant damage. If such harm is not addressed, the public's confidence in this kind of innovation will be weakened. This paper suggests several AI regulatory approaches for consideration that are forward-looking and agile while simultaneously fostering innovation and human rights.
The anticipated outcome is to ensure that the NSW Government matches the rising levels of innovation in AI technologies with appropriate and balanced innovation in AI governance.
Keywords: artificial intelligence, machine learning, rules, governance, government
Procedia PDF Downloads 70
44 Upsouth: Digitally Empowering Rangatahi (Youth) and Whaanau (Families) to Build Skills in Critical and Creative Thinking to Achieve More Active Citizenship in Aotearoa New Zealand
Authors: Ayla Hoeta
Abstract:
In a post-colonial Aotearoa New Zealand, solutions by rangatahi (youth) for rangatahi are essential, as are civic participation and building economic agency in an increasingly tough economic climate. Upsouth was an online community crowdsourcing platform developed by The Southern Initiative, in collaboration with Itsnoon, that provided rangatahi and whānau (family) a safe space to share lived experience, thoughts, and ideas about local kaupapa (issues/topics) of importance to them. The target participants were Māori indigenous peoples and Pacifica groups, aged 14 - 21 years. In the Aotearoa New Zealand context, this participant group is not likely to engage in traditional consultation processes despite being an essential constituent in helping shape better local communities, whānau, and futures. The Upsouth platform was active for two years, from 2018 to 2019, during which it completed 42 callups with 4300+ participants. The web platform collated the ideas, voices, feedback, and content of users around a callup that had been commissioned by a sponsor, such as Auckland Council, Z Energy, or Auckland Transport. A callup may be about a pressing challenge in a community, such as climate change, a new housing development, homelessness, etc. Each callup was funded by the sponsor, with Upsouth's main point of difference being that participants were given koha (a money donation) through digital wallets for their ideas. Depending on the quality of what participants uploaded, the koha varied between small micropayments and larger payments. This encouraged participants to develop creative and critical thinking - upskilling for future-focussed jobs, enterprise, and democratic skills while earning pocket money at the same time. Upsouth enabled youth-led action and voice, and empowered rangatahi to be a part of a reciprocal and creative economy. Rangatahi were encouraged to express themselves culturally, creatively, freely, and in a way they were free to choose - for example, spoken word, song, dance, video, drawings, and/or poems. This challenges and changes what is considered acceptable as community engagement feedback by the local government. Many traditional engagement platforms are not as consultative, do not accept diverse types of feedback, nor incentivise this valuable expression of feedback. Upsouth was also empowering for rangatahi, since it allowed them the opportunity to express their opinions directly to the government. Upsouth gained national and international recognition for the way it engaged with youth: winning the Supreme Award and the Accessibility and Transparency Award at Auckland Council’s 2018 Engagement Awards, and becoming a finalist in the 2018 Digital Equity and Accessibility category of International Data Corporation’s Smart City Asia and Pacific Awards. This paper will fully contextualize the challenges of rangatahi and whānau civic engagement in Aotearoa New Zealand and then present a reflective case study of the Upsouth project, with examples from some of the callups. This is intended to form part of the Divided Cities 22 conference New Ground sub-theme as a critical reflection on a design intervention, which was conceived and implemented by the lead author to overcome the post-colonial divisions of Māori, Pacifica, and minority ethnic rangatahi in Aotearoa New Zealand.
Keywords: rangatahi, youth empowerment, civic engagement, enabling, relating, digital platform, participation
Procedia PDF Downloads 81
43 Microfluidic Plasmonic Device for the Sensitive Dual LSPR-Thermal Detection of the Cardiac Troponin Biomarker in Laminar Flow
Authors: Andreea Campu, Ilinica Muresan, Simona Cainap, Simion Astilean, Monica Focsan
Abstract:
Acute myocardial infarction (AMI) is the most severe cardiovascular disease and has threatened human lives for decades; thus, continuous interest is directed towards the detection of cardiac biomarkers such as cardiac troponin I (cTnI) in order to predict risk and, implicitly, fulfill the early diagnosis requirements in AMI settings. Microfluidics is a major technology involved in the development of efficient sensing devices with fast, real-time responses and on-site applicability. Microfluidic devices have gathered a lot of attention recently due to their advantageous features such as high sensitivity and specificity, miniaturization and portability, ease of use, low cost, facile fabrication, and reduced sample manipulation. The integration of gold nanoparticles into the structure of microfluidic sensors has led to the development of highly effective detection systems, considering the unique properties of the metallic nanostructures, specifically the Localized Surface Plasmon Resonance (LSPR), which makes them highly sensitive to their microenvironment. In this scientific context, we propose herein the implementation of a novel detection device, which successfully combines the efficiency of gold bipyramids (AuBPs) as signal transducers and thermal generators with the sample-driven advantages of microfluidic channels into a miniaturized, portable, low-cost, specific, and sensitive test for the dual LSPR-thermographic detection of cTnI. Specifically, AuBPs with a longitudinal LSPR response at 830 nm were chemically synthesized using the seed-mediated growth approach and characterized in terms of optical and morphological properties. Further, the colloidal AuBPs were deposited onto pre-treated silanized glass substrates; thus, a uniform nanoparticle coverage of the substrate was obtained and confirmed by extinction measurements showing a 43 nm blue-shift of the LSPR response as a consequence of the refractive index change. The as-obtained plasmonic substrate was then integrated into a microfluidic “Y”-shaped polydimethylsiloxane (PDMS) channel, fabricated using a laser cutter system. Both plasmonic and microfluidic elements were plasma treated in order to achieve a permanent bond. The as-developed microfluidic plasmonic chip was further coupled to an automated syringe pump system. The proposed biosensing protocol involves successive injection inside the microfluidic channel as follows: p-aminothiophenol and glutaraldehyde, to achieve a covalent bond between the metallic surface and the cTnI antibody; anti-cTnI, as a recognition element; and the target cTnI biomarker. The successful functionalization and capture of cTnI was monitored by LSPR detection; thus, after each step, a red-shift of the optical response was recorded. Furthermore, as an innovative detection technique, thermal determinations were made after each injection by exposing the microfluidic plasmonic chip to 785 nm laser excitation, considering that the AuBPs exhibit high light-to-heat conversion performances. By analysis of the thermographic images, thermal curves were obtained, showing a decrease in the thermal efficiency after the anti-cTnI-cTnI reaction was realized. Thus, we developed a microfluidic plasmonic chip able to operate as both an LSPR and a thermal sensor for the detection of the cardiac troponin I biomarker, thus contributing to the progress of diagnostic devices.
Keywords: gold nanobipyramids, microfluidic device, localized surface plasmon resonance detection, thermographic detection
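As a small illustration of the stepwise LSPR monitoring described above, the sketch below computes the peak wavelength of two extinction spectra and reports the shift between them. The spectra are synthetic Lorentzian curves; only the ~830 nm starting position is taken from the abstract.

```python
# Illustrative LSPR peak-shift computation between two extinction spectra
# (e.g., before and after a functionalisation step). Spectra are synthetic.
import numpy as np

wl = np.linspace(600, 1000, 801)                       # wavelength grid, nm

def lorentzian(wl, centre, width=60.0):
    return 1.0 / (1.0 + ((wl - centre) / width) ** 2)

before = lorentzian(wl, 830.0)                         # colloidal AuBPs (~830 nm)
after = lorentzian(wl, 787.0)                          # hypothetical post-deposition spectrum

def peak_wavelength(wl, spectrum, window=5):
    """Refine the peak position with a local parabolic fit around the maximum."""
    i = np.argmax(spectrum)
    sl = slice(max(i - window, 0), i + window + 1)
    a, b, _ = np.polyfit(wl[sl], spectrum[sl], 2)
    return -b / (2 * a)

shift = peak_wavelength(wl, after) - peak_wavelength(wl, before)
print(f"LSPR shift: {shift:+.1f} nm (negative = blue-shift)")
```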
Procedia PDF Downloads 129
42 Stabilizing Additively Manufactured Superalloys at High Temperatures
Authors: Keivan Davami, Michael Munther, Lloyd Hackel
Abstract:
The control of properties and material behavior by implementing thermal-mechanical processes is based on mechanical deformation and annealing according to a precise schedule that will produce a unique and stable combination of grain structure, dislocation substructure, texture, and dispersion of precipitated phases. The authors recently developed a thermal-mechanical technique to stabilize the microstructure of additively manufactured nickel-based superalloys even after exposure to high temperatures. However, the mechanism(s) that controls this stability is still under investigation. Laser peening (LP), also called laser shock peening (LSP), is a shock based (50 ns duration) post-processing technique used for extending performance levels and improving service life of critical components by developing deep levels of plastic deformation, thereby generating high density of dislocations and inducing compressive residual stresses in the surface and deep subsurface of components. These compressive residual stresses are usually accompanied with an increase in hardness and enhance the material’s resistance to surface-related failures such as creep, fatigue, contact damage, and stress corrosion cracking. While the LP process enhances the life span and durability of the material, the induced compressive residual stresses relax at high temperatures (>0.5Tm, where Tm is the absolute melting temperature), limiting the applicability of the technology. At temperatures above 0.5Tm, the compressive residual stresses relax, and yield strength begins to drop dramatically. The principal reason is the increasing rate of solid-state diffusion, which affects both the dislocations and the microstructural barriers. Dislocation configurations commonly recover by mechanisms such as climbing and recombining rapidly at high temperatures. Furthermore, precipitates coarsen, and grains grow; virtually all of the available microstructural barriers become ineffective. Our results indicate that by using “cyclic” treatments with sequential LP and annealing steps, the compressive stresses survive, and the microstructure is stable after exposure to temperatures exceeding 0.5Tm for a long period of time. When the laser peening process is combined with annealing, dislocations formed as a result of LP and precipitates formed during annealing have a complex interaction that provides further stability at high temperatures. From a scientific point of view, this research lays the groundwork for studying a variety of physical, materials science, and mechanical engineering concepts. This research could lead to metals operating at higher sustained temperatures enabling improved system efficiencies. The strengthening of metals by a variety of means (alloying, work hardening, and other processes) has been of interest for a wide range of applications. However, the mechanistic understanding of the often complex processes of interactions between dislocations with solute atoms and with precipitates during plastic deformation have largely remained scattered in the literature. In this research, the elucidation of the actual mechanisms involved in the novel cyclic LP/annealing processes as a scientific pursuit is investigated through parallel studies of dislocation theory and the implementation of advanced experimental tools. The results of this research help with the validation of a novel laser processing technique for high temperature applications.
This will greatly expand the applications of the laser peening technology originally devised only for temperatures lower than half of the melting temperature.
Keywords: laser shock peening, mechanical properties, indentation, high temperature stability
Procedia PDF Downloads 149
41 Research Project of National Interest (PRIN-PNRR) DIVAS: Developing Methods to Assess Tree Vitality after a Wildfire through Analyses of Cambium Sugar Metabolism
Authors: Claudia Cocozza, Niccolò Frassinelli, Enrico Marchi, Cristiano Foderi, Alessandro Bizzarri, Margherita Paladini, Maria Laura Traversi, Eleftherious Touloupakis, Alessio Giovannelli
Abstract:
The development of tools to quickly identify the fate of injured trees after stress is highly relevant when biodiversity restoration of damaged sites is based on nature-based solutions. In this context, an approach to assess irreversible physiological damages within trees could help to support planning management decisions of perturbed sites to restore biodiversity, for the safety of the environment and understanding functionality adjustments of the ecosystems. Tree vitality can be estimated by a series of physiological proxies like cambium activity, starch, and soluble sugars amount in C-sinks whilst the accumulation of ethanol within the cambial cells and phloem is considered an alert of cell death. However, their determination requires time-consuming laboratory protocols, which makes the approach unfeasible as a practical option in the field. The project aims to develop biosensors to assess the concentration of soluble sugars and ethanol in stem tissues. Soluble sugars and ethanol concentrations will be used to define injured trees to discriminate compromised and recovering trees in the forest directly. To reach this goal, we select study sites subjected to prescribed fires or recent wildfires as experimental set-ups. Indeed, in Mediterranean countries, forest fire is a recurrent event that must be considered as a central component of regional and global strategies in forest management and biodiversity restoration programs. A biosensor will be developed through a multistep process related to target analytes characterization, bioreceptor selection, and, finally, calibration/testing of the sensor. To validate biosensor signals, soluble sugars and ethanol will be quantified by HPLC and GC using synthetic media (in lab) and phloem sap (in field) whilst cambium vitality will be assessed by anatomical observations. On burnt trees, the stem growth will be monitored by dendrometers and/or estimated by tree ring analyses, whilst the tree response to past fire events will be assessed by isotopic discrimination. Moreover, the fire characterization and the visual assessment procedure will be used to assign burnt trees to a vitality class. At the end of the project, a well-defined procedure combining biosensor signal and visual assessment will be produced and applied to a study case. The project outcomes and the results obtained will be properly packaged to reach, engage and address the needs of the final users and widely shared with relevant stakeholders involved in the optimal use of biosensors and in the management of post-fire areas. This project was funded by National Recovery and Resilience Plan (NRRP), Mission 4, Component C2, Investment 1.1 - Call for tender No. 1409 of 14 September 2022 – ‘Progetti di Ricerca di Rilevante interesse Nazionale – PRIN’ of Italian Ministry of University and Research funded by the European Union – NextGenerationEU; Grant N° P2022Z5742, CUP B53D23023780001.
Keywords: phloem, scorched crown, conifers, prescribed burning, biosensors
Procedia PDF Downloads 16
40 Design and 3D-Printout of the Stack-Corrugate-Sheet Core Sandwiched Decks for the Bridging System
Authors: K. Kamal
Abstract:
Structural sandwich panels with cores of advanced composite laminates / honeycombs / PU foams are used in aerospace applications and are now also fabricated for use in some civil engineering applications. An all-advanced-composites Foot Over Bridge (FOB) system, designed and developed earlier for pedestrian traffic, may be cited as an example of one such application. During the development stage of this FOB, the profile of its decks was conceived as a single corrugated sheet core sandwiched between two flat Glass Fibre Reinforced Plastic (GFRP) laminates. Once successfully fabricated and used, these decks also proved suitable for forming other structures on assembly, such as temporary shelters. Such corrugated-sheet-core sandwiched panels were then also tried using conventional construction materials, but conventional methods of construction posed difficulties in achieving the required core profile monolithically within the sandwiched slabs, and the attempt was therefore abandoned. Such monolithic construction was, however, subsequently demonstrated by dispensing a building-material mix through a suitably designed multi-dispenser system attached to a 3D printer. That lab-level study, reported earlier, included the in-house fabrication of a 3D printer, ‘3DcMP’, and its functional operation, with some of the required sandwich core profiles 3D-printed to produce panel hardware. Once a number of these single-corrugated-sheet-core sandwich panels had been monolithically printed, they were subjected to load tests in an experimental setup, their structural behavior was studied analytically, and the results were subsequently correlated, as reported in the literature. To achieve greater depths and to create stronger sandwiched decks with better structural and mechanical behavior, a more complex core configuration, a stacked corrugated sheet core with a flat mid-plane, was considered the better sandwiched core. Such a profile is obtained simply by stacking two separately printed monolithic units of the single corrugated sheet core developed earlier, initially bonded together while maintaining different orientations. A sequential understanding of the structural behavior of such complex-profile-core sandwiched decks, with special emphasis on the effect of varying the corrugation orientation in each distinct tier of the core, obviously calls for an analytical study first. Rectangular, simply supported decks have therefore been considered for analysis adopting Advanced Composite Technology (ACT); some numerical results, along with fruitful findings, were obtained and are presented in this paper. From these numerical results, it has been observed that the flat mid-layer, which is itself created monolithically, in addition to eliminating the bonding process during development, offers more effective bending resistance in such decks subjected to a uniformly distributed load (UDL). This is understood to result from the presence of the required shear-resisting layer at the mid-depth of the core in this profile, unlike other bending elements.
As an addendum to the efforts covered above and published earlier, this unique stack-corrugated-sheet-core profile sandwiched structural deck, constructed monolithically with ease at the site itself, has been printed out from a 3D printer. Employing 3DcMP with some innovative building construction materials holds future promise for such research and development work, since several aspects of 3D printing in construction, such as reduced construction time, cost-effective solutions, and freedom in designing complex shapes, can now be widely realized by the modern construction industry.
Keywords: advanced composite technology (ACT), corrugated laminates, 3DcMP, foot over bridge (FOB), sandwiched deck units
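For a rough feel of how such sandwiched decks respond to a uniformly distributed load, the sketch below uses a classical sandwich beam-strip approximation (bending plus core shear compliance) rather than the ACT plate analysis used in the study; every material and geometric value is a placeholder.

```python
# Beam-strip approximation (not the study's ACT analysis): midspan deflection of
# a simply supported sandwich strip under a UDL, bending + core shear terms.
# All values below are placeholders for illustration only.
E_face = 20.0e9       # face laminate modulus, Pa (placeholder for GFRP)
G_core = 5.0e7        # effective core shear modulus, Pa (placeholder)
t_face = 0.010        # face thickness, m
d = 0.100             # distance between face centroids, m
b = 1.0               # strip width, m
L = 3.0               # span, m
q = 5.0e3             # uniformly distributed load, N/m

D = E_face * b * t_face * d ** 2 / 2.0      # flexural rigidity (thin-face assumption)
S = G_core * b * d                          # shear rigidity of the core

delta_bending = 5.0 * q * L ** 4 / (384.0 * D)
delta_shear = q * L ** 2 / (8.0 * S)
print(f"midspan deflection: {1e3 * (delta_bending + delta_shear):.1f} mm "
      f"(bending {1e3 * delta_bending:.1f} mm + shear {1e3 * delta_shear:.1f} mm)")
```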
Procedia PDF Downloads 172
39 The Impact of Supporting Productive Struggle in Learning Mathematics: A Quasi-Experimental Study in High School Algebra Classes
Authors: Sumeyra Karatas, Veysel Karatas, Reyhan Safak, Gamze Bulut-Ozturk, Ozgul Kartal
Abstract:
Productive struggle entails a student's cognitive exertion to comprehend mathematical concepts and uncover solutions not immediately apparent. The significance of productive struggle in learning mathematics is accentuated by influential educational theorists, emphasizing its necessity for learning mathematics with understanding. Consequently, supporting productive struggle in learning mathematics is recognized as a high-leverage and effective mathematics teaching practice. In this study, the investigation into the role of productive struggle in learning mathematics led to the development of a comprehensive rubric for productive struggle pedagogy through an exhaustive literature review. The rubric consists of eight primary criteria and 37 sub-criteria, providing a detailed description of teacher actions and pedagogical choices that foster students' productive struggles. These criteria encompass various pedagogical aspects, including task design, tool implementation, allowing time for struggle, posing questions, scaffolding, handling mistakes, acknowledging efforts, and facilitating discussion/feedback. Utilizing this rubric, a team of researchers and teachers designed eight 90-minute lesson plans, employing a productive struggle pedagogy, for a two-week unit on solving systems of linear equations. Simultaneously, another set of eight lesson plans on the same topic, featuring identical content and problems but employing a traditional lecture-and-practice model, was designed by the same team. The objective was to assess the impact of supporting productive struggle on students' mathematics learning, defined by the strands of mathematical proficiency. This quasi-experimental study compares the control group, which received traditional lecture-and-practice instruction, with the treatment group, which experienced the productive struggle pedagogy. Sixty-six 10th and 11th-grade students from two algebra classes, taught by the same teacher at a high school, underwent either the productive struggle pedagogy or the lecture-and-practice approach over eight 90-minute class sessions spanning two weeks. To measure students' learning, an assessment was created and validated by a team of researchers and teachers. It comprised seven open-response problems assessing the strands of mathematical proficiency: procedural and conceptual understanding, strategic competence, and adaptive reasoning on the topic. The test was administered at the beginning and end of the two weeks as pre- and post-test. Students' solutions underwent scoring using an established rubric, subjected to expert validation and an inter-rater reliability process involving multiple criteria for each problem based on their steps and procedures. An analysis of covariance (ANCOVA) was conducted to examine the differences between the control group, which received traditional pedagogy, and the treatment group, exposed to the productive struggle pedagogy, on the post-test scores while controlling for the pre-test. The results indicated a significant effect of treatment on post-test scores for procedural understanding (F(2, 63) = 10.47, p < .001), strategic competence (F(2, 63) = 9.92, p < .001), adaptive reasoning (F(2, 63) = 10.69, p < .001), and conceptual understanding (F(2, 63) = 10.06, p < .001), controlling for pre-test scores. This demonstrates the positive impact of supporting productive struggle in learning mathematics. In conclusion, the results revealed the significance of the role of productive struggle in learning mathematics.
The study further explored the practical application of productive struggle through the development of a comprehensive rubric describing the pedagogy of supporting productive struggle.
Keywords: effective mathematics teaching practice, high school algebra, learning mathematics, productive struggle
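A minimal sketch of the ANCOVA reported above: the post-test score is modelled on group membership with the pre-test score as a covariate. The data below are synthetic and the effect size is an arbitrary assumption; only the analysis structure mirrors the study.

```python
# ANCOVA sketch (synthetic scores, not the study's data): post ~ group + pre.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 33
data = pd.DataFrame({
    "group": ["treatment"] * n + ["control"] * n,
    "pre": rng.normal(50, 10, 2 * n),
})
effect = np.where(data.group == "treatment", 8.0, 0.0)   # assumed treatment effect
data["post"] = 0.7 * data.pre + effect + rng.normal(0, 5, 2 * n)

model = smf.ols("post ~ C(group) + pre", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))                   # F and p for the group effect
```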
Procedia PDF Downloads 52
38 Parallel Opportunity for Water Conservation and Habitat Formation on Regulated Streams through Formation of Thermal Stratification in River Pools
Authors: Todd H. Buxton, Yong G. Lai
Abstract:
Temperature management in regulated rivers can involve significant expenditures of water to meet the cold-water requirements of species in summer. For this purpose, flows released from Lewiston Dam on the Trinity River in Northern California are 12.7 cms with temperatures around 11 °C in July through September to provide adult spring Chinook cold water to hold in deep pools and mature until spawning in fall. The releases are more than double the flow of, and about 10 °C colder than, the natural conditions before the dam was built. The high, cold releases provide springers the habitat they require but may suppress the stream food base and limit future populations of salmon by reducing juvenile fish size and survival to adults via the positive relationship between the two. Field and modeling research was undertaken to explore whether lowering summer releases from Lewiston Dam may promote thermal stratification in river pools so that both the cold-water needs of adult salmon and the warmer water requirements of other organisms in the stream biome may be met. For this investigation, a three-dimensional (3D) computational fluid dynamics (CFD) model was developed and validated with field measurements in two deep pools on the Trinity River. Modeling and field observations were then used to identify the flows and temperatures that may form and maintain thermal stratification under different meteorologic conditions. Under low flows, a pool was found to be well mixed and thermally homogenous until temperatures began to stratify shortly after sunrise. Stratification then strengthened through the day until shading from trees and mountains cooled the inlet flow and decayed the thermal gradient, which collapsed shortly before sunset and returned the pool to a well-mixed state. This diurnal process of stratification formation and destruction was closely predicted by the 3D CFD model. Both the model and field observations indicate that thermal stratification maintained the coldest temperatures of the day at ≥2 m depth in a pool and provided water that was around 8 °C warmer in the upper 2 m of the pool. Results further indicate that the stratified pool under low flows provided almost the same daily average temperatures as when flows were an order of magnitude higher and stratification was prevented, indicating significant water savings may be realized in regulated streams while also providing the diversity in water temperatures the ecosystem requires. With confidence in the 3D CFD model, the model is now being applied to a dozen pools in the Trinity River to understand how pool bathymetry influences thermal stratification under variable flows and diurnal temperature variations. This knowledge will be used to expand the results to 52 pools in a 64 km reach below Lewiston Dam that meet the depth criteria (≥2 m) for spring Chinook holding. From this, rating curves will be developed to relate discharge to the volume of pool habitat that provides springers the temperature (<15.6 °C daily average), velocity (0.15 to 0.4 m/s), and depths that accommodate the escapement target for spring Chinook (6,000 adults) under maximum fish densities measured in other streams (3.1 m³/fish) during the holding time of year (May through August).
Flow releases that meet these goals will be evaluated for water savings relative to the current flow regime and for their influence on indicator species, including the Foothill Yellow-Legged Frog, and on aspects of the stream biome that support salmon populations, including macroinvertebrate production and juvenile Chinook growth rates.
Keywords: 3D CFD modeling, flow regulation, thermal stratification, Chinook salmon, foothill yellow-legged frogs, water management
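As a rough, back-of-envelope illustration of the rating-curve idea described in the abstract above, the sketch below totals the pool volume that satisfies the stated holding criteria at one hypothetical discharge and compares it with the volume implied by the escapement target; the pool records are invented, and only the criteria themselves come from the abstract.

```python
# Hypothetical illustration of the holding-habitat arithmetic described above.
# Only the criteria (daily-average temperature < 15.6 C, velocity 0.15-0.4 m/s,
# depth >= 2 m, 3.1 m^3 per fish, 6,000 adults) come from the abstract; the
# pool records are invented.

ESCAPEMENT_TARGET = 6000            # adult spring Chinook
VOLUME_PER_FISH_M3 = 3.1            # m^3 per fish at maximum observed density
REQUIRED_VOLUME_M3 = ESCAPEMENT_TARGET * VOLUME_PER_FISH_M3   # 18,600 m^3

# Each hypothetical pool at one discharge: (volume meeting the >= 2 m depth
# criterion in m^3, daily-average temperature in C, mean velocity in m/s).
pools = [
    (2500.0, 14.8, 0.25),
    (1800.0, 15.9, 0.30),   # too warm -> excluded
    (3200.0, 14.1, 0.12),   # too slow -> excluded
    (2900.0, 15.2, 0.35),
]

def usable_volume(pool_records):
    """Sum the volume of pools meeting the temperature and velocity criteria."""
    return sum(vol for vol, temp_c, vel in pool_records
               if temp_c < 15.6 and 0.15 <= vel <= 0.4)

available = usable_volume(pools)
print(f"Required holding volume : {REQUIRED_VOLUME_M3:.0f} m^3")
print(f"Available at this release: {available:.0f} m^3")
print("Meets escapement target" if available >= REQUIRED_VOLUME_M3
      else "Shortfall -> evaluate a different release")
```

Repeating this tally across a range of release flows would yield the discharge-to-habitat rating curve described above.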
Procedia PDF Downloads 64
37 [Keynote Talk]: Bioactive Cyclic Dipeptides of Microbial Origin in Discovery of Cytokine Inhibitors
Authors: Sajeli A. Begum, Ameer Basha, Kirti Hira, Rukaiyya Khan
Abstract:
Cyclic dipeptides are simple diketopiperazine derivatives being investigated by several scientists for their biological effects, which include anticancer, antimicrobial, haematological, anticonvulsant, and immunomodulatory activities. They are potentially active microbial metabolites that have also been synthesized for development into drug candidates. Cultures of Pseudomonas species have earlier been reported to produce cyclic dipeptides, which act as quorum-sensing signals and aid bacterial–host colonization during infections, causing cell anti-proliferation and immunosuppression. Fluorescing Pseudomonas species have been identified to secrete lipid derivatives, peptides, pyrroles, phenazines, indoles, amino acids, pterins, pseudomonic acids, and some antibiotics. In the present work, results of an investigation of the cyclic dipeptide metabolites secreted into the culture broth of a Pseudomonas species as potent pro-inflammatory cytokine inhibitors are discussed. The bacterial strain was isolated from the rhizospheric soil of a groundnut crop and identified as Pseudomonas aeruginosa by its 16S rDNA sequence (GenBank Accession No. KT625586). Culture broth of this strain was prepared by inoculating into King's B broth and incubating at 30 °C for 7 days. The ethyl acetate extract of the culture broth was prepared and lyophilized to obtain a dry residue (EEPA). A lipopolysaccharide (LPS)-induced ELISA assay demonstrated the inhibition of tumor necrosis factor-alpha (TNF-α) secretion in the culture supernatant of RAW 264.7 cells by EEPA (IC50 38.8 μg/mL). The effect of oral administration of EEPA on the plasma TNF-α level in rats was tested using an ELISA kit. The LPS-mediated plasma TNF-α level was reduced to 45% with a 125 mg/kg dose of EEPA. Isolation of the chemical constituents of EEPA through column chromatography yielded ten cyclic dipeptides, which were characterized using nuclear magnetic resonance and mass spectrometric techniques. These cyclic dipeptides are biosynthesized in microorganisms by the multifunctional assembly of non-ribosomal peptide synthases and cyclic dipeptide synthase. Cyclo(Gly-L-Pro) was the most potent inhibitor of TNF-α production (IC50 4.5 μg/mL), followed by cyclo(trans-4-hydroxy-L-Pro-L-Phe) (IC50 14.2 μg/mL), and the effect was equal to that of the standard immunosuppressant drug prednisolone. Further, the effect was analyzed by determining the mRNA expression of TNF-α in LPS-stimulated RAW 264.7 macrophages using quantitative real-time reverse transcription polymerase chain reaction. EEPA and the isolated cyclic dipeptides demonstrated diminution of TNF-α mRNA expression levels in a dose-dependent manner under the tested conditions. They were also found to control the expression of other pro-inflammatory cytokines such as IL-1β and IL-6 when tested through their mRNA expression levels in LPS-stimulated RAW 264.7 macrophages. In addition, a significant inhibitory effect on nitric oxide production was found. Furthermore, all the compounds exhibited only weak toxicity toward LPS-induced RAW 264.7 cells. Thus, the outcome of the study disclosed the effectiveness of EEPA and the isolated cyclic dipeptides in down-regulating key cytokines involved in the pathophysiology of autoimmune diseases. In another study led by the investigators, microbial cyclic dipeptides were found to exhibit an excellent antimicrobial effect against Fusarium moniliforme, an important causative agent of sorghum grain mold disease.
Thus, cyclic dipeptides are emerging small-molecule drug candidates for various autoimmune diseases.
Keywords: cyclic dipeptides, cytokines, Fusarium moniliforme, Pseudomonas, TNF-alpha
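IC50 values such as those quoted above are conventionally obtained by fitting a dose-response model; the sketch below fits a four-parameter logistic curve to invented inhibition data with SciPy, purely to illustrate the calculation — the data points and starting values are assumptions, not results from the study.

```python
# Minimal sketch of how an IC50 such as those reported above is usually
# estimated: fit a four-parameter logistic (4PL) curve to dose-response data.
# The concentrations and responses here are invented for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """4PL model: response as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.5, 1, 2, 5, 10, 20, 50])            # ug/mL, hypothetical
tnf_release = np.array([95, 88, 72, 48, 30, 18, 10])   # % of LPS-only control

params, _ = curve_fit(four_pl, conc, tnf_release,
                      p0=[10, 100, 5, 1], maxfev=10000)
bottom, top, ic50, hill = params
print(f"Estimated IC50 ~ {ic50:.1f} ug/mL (hill slope {hill:.2f})")
```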
Procedia PDF Downloads 212
36 Sampling and Chemical Characterization of Particulate Matter in a Platinum Mine
Authors: Juergen Orasche, Vesta Kohlmeier, George C. Dragan, Gert Jakobi, Patricia Forbes, Ralf Zimmermann
Abstract:
Underground mining poses a difficult environment for both people and machines. At more than 1000 meters beneath the surface of the earth, ores and other mineral resources are still extracted by conventional and motorised mining. In addition to the hazards caused by blasting and stone-chipping, the working conditions are characterised by high temperatures of 35-40°C and high humidity at low air exchange rates. Separate ventilation shafts lead fresh air into a mine and others lead expended air back to the surface. This is essential for humans and machines working deep underground. Nevertheless, mines are widely ramified, so the air flow rate at the far end of a tunnel can be close to zero. In recent years, conventional mining has been supplemented by mining with heavy diesel machines. These very flat machines, called Load Haul Dump (LHD) vehicles, accelerate and ease work in areas favourable for heavy machinery. On the other hand, they emit unfiltered diesel exhaust, which constitutes an occupational hazard for the miners. Combined with low air exchange, high humidity, and inorganic dust from the mining, this leads to 'black smog' underground. This work focuses on the air quality in mines employing LHDs. We therefore performed personal sampling (samplers worn by miners during their work), stationary sampling, and aethalometer (MicroAeth MA200, AethLabs) measurements in a platinum mine at around 1000 meters below the earth's surface. We compared areas of high diesel exhaust emission with areas of conventional mining where no diesel machines were operated. For a better assessment of the health risks caused by air pollution, we applied a separated gas-/particle-sampling system whose first denuder section collects intermediate-volatility organic compounds (IVOCs). These multi-channel silicone rubber denuders are able to trap IVOCs while allowing particles ranging from 10 nm to 1 µm in diameter to be transmitted with an efficiency of nearly 100%. The second section is a quartz fibre filter collecting particles and adsorbed semi-volatile organic compounds (SVOCs). The third part is a graphitized carbon black adsorber collecting the SVOCs that evaporate from the filter. The compounds collected on these three sections were analyzed in our labs with different thermal desorption techniques coupled with gas chromatography and mass spectrometry (GC-MS). VOCs and IVOCs were measured with a Shimadzu Thermal Desorption Unit (TD20, Shimadzu, Japan) coupled to a GCMS-System QP 2010 Ultra with a quadrupole mass spectrometer (Shimadzu). The GC was equipped with a 30 m BP-20 wax column (0.25 mm ID, 0.25 µm film) from SGE (Australia). Filters were analyzed with in-situ derivatization thermal desorption gas chromatography time-of-flight mass spectrometry (IDTD-GC-TOF-MS). The IDTD unit is a modified GL Sciences Optic 3 system (GL Sciences, Netherlands). The results showed black carbon concentrations, measured with the portable aethalometers, of up to several mg per m³. The organic chemistry was dominated by very high concentrations of alkanes. Typical diesel engine exhaust markers, such as alkylated polycyclic aromatic hydrocarbons, were detected, as well as typical lubrication oil markers such as hopanes.
Keywords: diesel emission, personal sampling, aethalometer, mining
Procedia PDF Downloads 157
35 A Risk-Based Comprehensive Framework for the Assessment of the Security of Multi-Modal Transport Systems
Authors: Mireille Elhajj, Washington Ochieng, Deeph Chana
Abstract:
The challenges of the rapid growth in demand for transport have traditionally been seen within the context of the problems of congestion, air quality, climate change, safety, and affordability. However, there are increasing threats, including those related to crime such as cyber-attacks, that threaten the security of the transport of people and goods. To the best of the authors' knowledge, this paper presents, for the first time, a comprehensive framework for the assessment of the current and future security issues of multi-modal transport systems. The proposed approach is based on a structured framework starting with a detailed specification of the transport asset map (transport system architecture), followed by the identification of vulnerabilities. The asset map and vulnerabilities are used to identify the various approaches for exploitation of the vulnerabilities, leading to the creation of a set of threat scenarios. The threat scenarios are then transformed into risks and their categories, together with insights for their mitigation. The consideration of the mitigation space is holistic and includes the formulation of appropriate policies and tactics and/or technical interventions. The quality of the framework is ensured through a structured and logical process that identifies the stakeholders, reviews the relevant documents including policies and identifies gaps, incorporates targeted surveys to augment the reviews, and uses subject matter experts for validation. The approach to categorising security risks is an extension of the current methods that are typically employed. Specifically, the partitioning of risks into either physical or cyber categories is too limited for developing mitigation policies and tactics/interventions for transport systems, where an interplay between physical and cyber processes is very often the norm. This interplay is rapidly taking on increasing significance for security as emerging cyber-physical technologies shape the future of all transport modes. Examples include: Connected Autonomous Vehicles (CAVs) in road transport; the European Rail Traffic Management System (ERTMS) in rail transport; the Automatic Identification System (AIS) in maritime transport; advanced Communications, Navigation and Surveillance (CNS) technologies in air transport; and the Internet of Things (IoT). The framework adopts a risk categorisation scheme that considers risks as falling within the following threat→impact relationships: Physical→Physical, Cyber→Cyber, Cyber→Physical, and Physical→Cyber. Thus the framework enables a more complete risk picture to be developed for today's transport systems and, more importantly, is readily extendable to account for emerging trends in the sector that will define future transport systems. The framework facilitates the audit and retro-fitting of mitigations in current transport operations and the analysis of security management options for the next generation of transport, enabling strategic aspirations such as systems with security-by-design and the co-design of safety and security to be achieved. An initial application of the framework to transport systems has shown that intra-modal consideration of security measures is sub-optimal and that a holistic and multi-modal approach, which also addresses the intersections/transition points of such networks, is required, as their vulnerability is high. This is in line with traveler-centric transport service provision, widely accepted as the future of mobility services.
In summary, a risk-based framework is proposed for use by stakeholders to comprehensively and holistically assess the security of transport systems. It requires a detailed understanding of the transport architecture to enable a detailed vulnerability analysis to be undertaken, creates threat scenarios, and transforms them into risks, which form the basis for the formulation of interventions.
Keywords: mitigations, risk, transport, security, vulnerabilities
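A minimal sketch of the threat→impact categorisation described above is given below; the scenario records, the scoring scale, and the likelihood-times-severity risk score are illustrative placeholders rather than part of the published framework.

```python
# Minimal sketch of the threat->impact risk categorisation described above.
# Scenario entries and the scoring scale are placeholders for illustration.
from dataclasses import dataclass
from enum import Enum

class Domain(Enum):
    PHYSICAL = "physical"
    CYBER = "cyber"

@dataclass
class ThreatScenario:
    asset: str             # element of the transport asset map
    vulnerability: str
    threat_domain: Domain
    impact_domain: Domain
    likelihood: int        # 1 (rare) .. 5 (almost certain), placeholder scale
    severity: int          # 1 (minor) .. 5 (catastrophic), placeholder scale

    @property
    def category(self) -> str:
        return f"{self.threat_domain.value}->{self.impact_domain.value}"

    @property
    def risk_score(self) -> int:
        return self.likelihood * self.severity

scenarios = [
    ThreatScenario("ERTMS trackside unit", "unauthenticated telegram",
                   Domain.CYBER, Domain.PHYSICAL, 3, 5),
    ThreatScenario("interchange hub", "uncontrolled access to cabling",
                   Domain.PHYSICAL, Domain.CYBER, 2, 4),
]

for s in sorted(scenarios, key=lambda s: s.risk_score, reverse=True):
    print(f"{s.category:20s} {s.asset:25s} risk={s.risk_score}")
```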
Procedia PDF Downloads 165
34 Risks for Cyanobacteria Harmful Algal Blooms in Georgia Piedmont Waterbodies Due to Land Management and Climate Interactions
Authors: Sam Weber, Deepak Mishra, Susan Wilde, Elizabeth Kramer
Abstract:
The frequency and severity of cyanobacteria harmful algal blooms (CyanoHABs) have been increasing over time, with point and non-point source eutrophication and shifting climate paradigms being blamed as the primary culprits. Excessive nutrients, warm temperatures, quiescent water, and heavy and less regular rainfall create more conducive environments for CyanoHABs. CyanoHABs have the potential to produce a spectrum of toxins that cause gastrointestinal stress, organ failure, and even death in humans and animals. To promote enhanced, proactive CyanoHAB management, risk modeling using geospatial tools can act as a predictive mechanism to supplement current CyanoHAB monitoring, management, and mitigation efforts. The risk maps would empower water managers to focus their efforts on high-risk waterbodies in an attempt to prevent CyanoHABs before they occur and/or to more diligently observe those waterbodies. For this research, exploratory spatial data analysis techniques were used to identify the strongest predictors of CyanoHABs based on remote sensing-derived cyanobacteria cell density values for 771 waterbodies in the Georgia Piedmont and landscape characteristics of their watersheds. In-situ datasets for cyanobacteria cell density, nutrients, temperature, and rainfall patterns are not widely available, so free gridded geospatial datasets were used as proxy variables for assessing CyanoHAB risk. For example, the percent of a watershed that is agriculture was used as a proxy for nutrient loading, and the summer precipitation within a watershed was used as a proxy for water quiescence. Cyanobacteria cell density values were calculated using atmospherically corrected images from the European Space Agency's Sentinel-2A satellite and its multispectral instrument sensor at a 10-meter ground resolution. Seventeen explanatory variables were calculated for each watershed utilizing the multi-petabyte geospatial catalogs available within the Google Earth Engine cloud computing interface. The seventeen variables were then used in a multiple linear regression model, and the strongest predictors of cyanobacteria cell density were selected for the final regression model. The seventeen explanatory variables included land cover composition, winter and summer temperature and precipitation data, topographic derivatives, vegetation index anomalies, and soil characteristics. Watershed maximum summer temperature, percent agriculture, percent forest, percent impervious, and waterbody area emerged as the strongest predictors of cyanobacteria cell density, with an adjusted R-squared value of 0.31 and a p-value ~ 0. The final regression equation was used to create a normalized cyanobacteria cell density index, and a Jenks Natural Breaks classification was used to assign waterbodies designations of low, medium, or high risk. Of the 771 waterbodies, 24.38% were low risk, 37.35% were medium risk, and 38.26% were high risk. This study showed that freely available geospatial datasets representing summer maximum temperatures, nutrient loading associated with land use and land cover, and waterbody area have significant relationships with cyanobacteria cell density. This data analytics approach to CyanoHAB risk assessment corroborated the literature-established environmental triggers for CyanoHABs and presents a novel approach for CyanoHAB risk mapping in waterbodies across the greater southeastern United States.
Keywords: cyanobacteria, land use/land cover, remote sensing, risk mapping
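The modeling workflow summarised above (watershed predictors → multiple linear regression → normalized index → three risk classes) can be sketched as follows; random numbers stand in for the Sentinel-2 cell densities and watershed variables, and simple tercile breaks stand in for the Jenks Natural Breaks classification used in the study.

```python
# Sketch of the risk-mapping workflow described above: regress cyanobacteria
# cell density on watershed predictors, normalise the fitted values into an
# index, then bin waterbodies into low/medium/high risk.  All data are random
# placeholders; tercile breaks approximate the Jenks classification.
import numpy as np

rng = np.random.default_rng(0)
n = 771                                    # waterbodies, as in the abstract
predictors = ["max_summer_temp", "pct_agriculture", "pct_forest",
              "pct_impervious", "waterbody_area"]
X = rng.normal(size=(n, len(predictors)))  # placeholder watershed variables
beta_true = np.array([0.5, 0.4, -0.3, 0.3, 0.2])
y = X @ beta_true + rng.normal(scale=1.0, size=n)  # placeholder cell density

# Ordinary least squares fit (numpy only).
X1 = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
fitted = X1 @ coef

# Normalised risk index in [0, 1], then tercile breaks -> 3 classes.
index = (fitted - fitted.min()) / (fitted.max() - fitted.min())
breaks = np.quantile(index, [1 / 3, 2 / 3])
risk_class = np.digitize(index, breaks)    # 0 = low, 1 = medium, 2 = high

for label, k in zip(["low", "medium", "high"], range(3)):
    print(f"{label:6s}: {np.mean(risk_class == k):.1%} of waterbodies")
```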
Procedia PDF Downloads 211
33 Optical Coherence Tomography in Differentiation of Acute and Non-Healing Wounds
Authors: Ananya Barui, Provas Banerjee, Jyotirmoy Chatterjee
Abstract:
Application of optical technology in medicine and biology has a long track record. In this endeavor, OCT has attracted both engineers and biologists to work together in the field of photonics to establish a striking non-invasive imaging technology. In contrast to other in vivo imaging modalities such as Raman imaging, confocal imaging, and two-photon microscopy, which can image only up to a depth of 100-200 microns due to limitations in numerical aperture or scattering, OCT can achieve high-resolution imaging up to a few millimeters into tissue structures, depending on their refractive index at different anatomical locations. This tomographic system depends on the interference of two light waves in an interferometer to produce a depth profile of the specimen. In wound healing, frequent collection of biopsies for follow-up of the repair process could be avoided by such an imaging technique. Real-time skin OCT (the 'optical biopsy') can illuminate cutaneous tissue deeper and faster to acquire high-resolution cross-sectional images of its internal micro-structure. Swept-Source OCT (SS-OCT), a novel imaging technique, can generate high-speed depth profiles (~2 mm) of a wound at the laser sweeping rate, with micron-level resolution and an optimum coherence length of 5-6 mm. Normally, multi-layered skin tissue exhibits different optical properties along with variation in thickness, refractive index, and composition (i.e., keratin layer, water, fat, etc.) according to anatomical location. For instance, the stratum corneum, the uppermost and relatively dehydrated layer of the epidermis, reflects more light and produces a sharper demarcation line with the rest of the hydrated epidermal region. During wound healing or regeneration, the optical properties of cutaneous tissue are continuously altered with maturation of the wound bed. More mature and less hydrated tissue components reflect more light and become visible as brighter areas, in comparison to immature regions, which contain a higher amount of water or fat and appear as darker areas in the OCT image. Non-healing wounds exhibit prolonged inflammation and an inhibited nascent proliferative stage. The accumulation of necrotic tissue also prevents the repair of non-healing wounds. Due to its high resolution and potential to reflect the compositional aspects of tissues in terms of their optical properties, this tomographic method may facilitate differentiating non-healing and acute wounds in addition to clinical observations. Non-invasive OCT offers better insight into the specific biological status of tissue in health and pathological conditions, and OCT images can be associated with the histo-pathological 'gold standard'. This correlated SS-OCT and microscopic evaluation of the wound edges can provide information regarding progressive healing and maturation of the epithelial components. In the context of seeking an analogy between the two imaging modalities, their relative performance in imaging the healing bed was estimated to probe an alternative approach. The present study validated the utility of SS-OCT in revealing micro-anatomic structure in the healing bed with new information. Exploring the precise correspondence of OCT image features with histo-chemical findings related to the epithelial integrity of the regenerated tissue could have great implications. It could establish the 'optical biopsy' as a potent non-invasive diagnostic tool for cutaneous pathology.
Keywords: histo-pathology, non invasive imaging, OCT, wound healing
Procedia PDF Downloads 279
32 The Impact of Right to Repair Initiatives on Environmental and Financial Performance in European Consumer Electronics Firms: An Econometric Analysis
Authors: Daniel Stabler, Anne-Laure Mention, Henri Hakala, Ahmad Alaassar
Abstract:
In Europe, 2.2 billion tons of waste annually generate severe environmental damage and economic burdens, and negatively impact human health. A stark illustration of the problem is found within the consumer electronics industry, which reflects one of the most complex global waste streams. Of the 5.3 billion globally discarded mobile phones in 2022, only 17% were properly recycled. To address these pressing issues, Europe has made significant strides in developing waste management strategies, Circular Economy initiatives, and Right to Repair policies. These endeavors aim to make product repair and maintenance more accessible, extend product lifespans, reduce waste, and promote sustainable resource use. European countries have introduced Right to Repair policies, often in conjunction with extended producer responsibility legislation, repair subsidies, and consumer repair indices, to varying degrees of regulatory rigor. Changing societal trends emphasizing sustainability and environmental responsibility have driven consumer demand for more sustainable and repairable products, benefiting repair-focused consumer electronics businesses. In academic research, much of the literature in Management studies has examined the European Circular Economy and the Right to Repair from firm-level perspectives. These studies frequently employ a business-model lens, emphasizing innovation and strategy frameworks. However, this study takes an institutional perspective, aiming to understand the adoption of Circular Economy and repair-focused business models within the European consumer electronics market. The concepts of the Circular Economy and the Right to Repair align with institutionalism as they reflect evolving societal norms favoring sustainability and consumer empowerment. Regulatory institutions play a pivotal role in shaping and enforcing these concepts through legislation, influencing the behavior of businesses and individuals. Compliance and enforcement mechanisms are essential for their success, compelling actors to adopt sustainable practices and consider product life extension. Over time, these mechanisms create a path for more sustainable choices, underscoring the influence of institutions and societal values on behavior and decision-making. Institutionalism, particularly 'neo-institutionalism,' provides valuable insights into the factors driving the adoption of Circular and repair-focused business models. Neo-institutional pressures can manifest through coercive regulatory initiatives or normative standards shaped by socio-cultural trends. The Right to Repair movement has emerged as a prominent and influential idea within academic discourse and sustainable development initiatives. Therefore, understanding how macro-level societal shifts toward the Circular Economy and the Right to Repair trigger firm-level responses is imperative. This study aims to answer a crucial question about the impact of European Right to Repair initiatives had on the financial and environmental performance of European consumer electronics companies at the firm level. A quantitative and statistical research design will be employed. The study will encompass an extensive sample of consumer electronics firms in Northern and Western Europe, analyzing their financial and environmental performance in relation to the implementation of Right to Repair mechanisms. 
The study's findings are expected to provide valuable insights into the broader implications of Right to Repair and Circular Economy initiatives for the European consumer electronics industry.
Keywords: circular economy, right to repair, institutionalism, environmental management, european union
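Since the abstract above announces a quantitative firm-level design without specifying the estimator, one plausible specification is a two-way fixed-effects panel regression; the sketch below uses statsmodels on a synthetic panel, and all column names (firm_id, year, r2r_exposure, roa) are hypothetical, not the study's own variables or model.

```python
# One plausible specification for the firm-level analysis described above:
# a two-way fixed-effects panel regression relating a (hypothetical) financial
# performance outcome to exposure to Right to Repair measures.  The panel is
# synthetic; this is a sketch of the estimator, not the study's own model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
firms = [f"firm{i}" for i in range(30)]
years = list(range(2015, 2023))
rows = [(f, y, int(rng.integers(0, 2)), rng.normal())
        for f in firms for y in years]
df = pd.DataFrame(rows, columns=["firm_id", "year", "r2r_exposure", "noise"])
df["roa"] = 0.8 * df["r2r_exposure"] + df["noise"]     # synthetic outcome

# Firm and year fixed effects; standard errors clustered by firm.
model = smf.ols("roa ~ r2r_exposure + C(firm_id) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm_id"]})
print("effect estimate:", round(model.params["r2r_exposure"], 3),
      "s.e.:", round(model.bse["r2r_exposure"], 3))
```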
Procedia PDF Downloads 82
31 Harnessing the Power of Artificial Intelligence: Advancements and Ethical Considerations in Psychological and Behavioral Sciences
Authors: Nayer Mofidtabatabaei
Abstract:
Advancements in artificial intelligence (AI) have transformed various fields, including psychology and the behavioral sciences. This paper explores the diverse ways in which AI is applied to enhance research, diagnosis, therapy, and the understanding of human behavior and mental health. We discuss the potential benefits and challenges associated with AI in these fields, emphasizing the ethical considerations and the need for collaboration between AI researchers and psychological and behavioral science experts. Artificial intelligence has gained prominence in recent years, revolutionizing multiple industries, including healthcare, finance, and entertainment. One area where AI holds significant promise is the field of psychology and behavioral sciences. AI applications in this domain range from improving the accuracy of diagnosis and treatment to understanding complex human behavior patterns. This paper aims to provide an overview of the various AI applications in psychological and behavioral sciences, highlighting their potential impact, challenges, and ethical considerations. Mental health diagnosis: AI-driven tools, such as natural language processing and sentiment analysis, can analyze large datasets of text and speech to detect signs of mental health issues; for example, chatbots and virtual therapists can provide initial assessments and support to individuals suffering from anxiety or depression. Autism Spectrum Disorder (ASD) diagnosis: AI algorithms can assist in early ASD diagnosis by analyzing video and audio recordings of children's behavior; these tools help identify subtle behavioral markers, enabling earlier intervention and treatment. Personalized therapy: AI-based therapy platforms use personalized algorithms to adapt therapeutic interventions based on an individual's progress and needs; these platforms can provide continuous support and resources for patients, making therapy more accessible and effective. Virtual reality therapy: virtual reality (VR) combined with AI can create immersive therapeutic environments for treating phobias, PTSD, and social anxiety, and AI algorithms can adapt VR scenarios in real time to suit the patient's progress and comfort level. Data analysis: AI aids researchers in processing vast amounts of data, including survey responses, brain imaging, and genetic information. Privacy concerns: collecting and analyzing personal data for AI applications in psychology and behavioral sciences raises significant privacy concerns, and researchers must ensure the ethical use and protection of sensitive information. Bias and fairness: AI algorithms can inherit biases present in training data, potentially leading to biased assessments or recommendations, so efforts to mitigate bias and ensure fairness in AI applications are crucial. Transparency and accountability: AI-driven decisions in psychology and behavioral sciences should be transparent and subject to accountability, and patients and practitioners should understand how AI algorithms operate and make decisions. AI applications in psychological and behavioral sciences have the potential to transform the field by enhancing diagnosis, therapy, and research. However, these advancements come with ethical challenges that require careful consideration. Collaboration between AI researchers and psychological and behavioral science experts is essential to harness AI's full potential while upholding ethical standards and privacy protections.
The future of AI in psychology and behavioral sciences holds great promise, but it must be navigated with caution and responsibility.
Keywords: artificial intelligence, psychological sciences, behavioral sciences, diagnosis and therapy, ethical considerations
Procedia PDF Downloads 70
30 Catastrophic Health Expenditures: Evaluating the Effectiveness of Nepal's National Health Insurance Program Using Propensity Score Matching and Doubly Robust Methodology
Authors: Simrin Kafle, Ulrika Enemark
Abstract:
Catastrophic health expenditure (CHE) is a critical issue in low- and middle-income countries like Nepal, exacerbating financial hardship among vulnerable households. This study assesses the effectiveness of Nepal’s National Health Insurance Program (NHIP), launched in 2015, to reduce out-of-pocket (OOP) healthcare costs and mitigate CHE. Conducted in Pokhara Metropolitan City, the study used an analytical cross-sectional design, sampling 1276 households through a two-stage random sampling method. Data was collected via face-to-face interviews between May and October 2023. The analysis was conducted using SPSS version 29, incorporating propensity score matching to minimize biases and create comparable groups of enrolled and non-enrolled households in the NHIP. PSM helped reduce confounding effects by matching households with similar baseline characteristics. Additionally, a doubly robust methodology was employed, combining propensity score adjustment with regression modeling to enhance the reliability of the results. This comprehensive approach ensured a more accurate estimation of the impact of NHIP enrollment on CHE. Among the 1276 samples, 534 households (41.8%) were enrolled in NHIP. Of them, 84.3% of households renewed their insurance card, though some cited long waiting times, lack of medications, and complex procedures as barriers to renewal. Approximately 57.3% of households reported known diseases before enrollment, with 49.8% attending routine health check-ups in the past year. The primary motivation for enrollment was encouragement from insurance employees (50.2%). The data indicates that 12.5% of enrolled households experienced CHE versus 7.5% among non-enrolled. Enrollment into NHIP does not contribute to lower CHE (AOR: 1.98, 95% CI: 1.21-3.24). Key factors associated with increased CHE risk were presence of non-communicable diseases (NCDs) (AOR: 3.94, 95% CI: 2.10-7.39), acute illnesses/injuries (AOR: 6.70, 95% CI: 3.97-11.30), larger household size (AOR: 3.09, 95% CI: 1.81-5.28), and households below the poverty line (AOR: 5.82, 95% CI: 3.05-11.09). Other factors such as gender, education level, caste/ethnicity, presence of elderly members, and under-five children also showed varying associations with CHE, though not all were statistically significant. The study concludes that enrollment in the NHIP does not significantly reduce the risk of CHE. The reason for this could be inadequate coverage, where high-cost medicines, treatments, and transportation costs are not fully included in the insurance package, leading to significant out-of-pocket expenses. We also considered the long waiting time, lack of medicines, and complex procedures for the utilization of NHIP benefits, which might result in the underuse of covered services. Finally, gaps in enrollment and retention might leave certain households vulnerable to CHE despite the existence of NHIP. Key factors contributing to increased CHE include NCDs, acute illnesses, larger household sizes, and poverty. To improve the program’s effectiveness, it is recommended that NHIP benefits and coverage be expanded to better protect against high healthcare costs. Additionally, simplifying the renewal process, addressing long waiting times, and enhancing the availability of services could improve member satisfaction and retention. 
Targeted financial protection measures should be implemented for high-risk groups, and efforts should be made to increase awareness and encourage routine health check-ups to prevent severe health issues that contribute to CHE.
Keywords: catastrophic health expenditure, effectiveness, national health insurance program, Nepal
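The propensity-score plus doubly robust strategy described above can be illustrated with an augmented inverse-probability-weighting (AIPW) estimator; the sketch below uses synthetic household data and scikit-learn, and is not the study's actual SPSS workflow or variable set.

```python
# Illustration of the doubly robust idea described above: combine a
# propensity-score model for NHIP enrollment with outcome models for
# catastrophic health expenditure (CHE) via the AIPW estimator.
# Data are synthetic; variable names are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1276                                   # households, as in the abstract
X = rng.normal(size=(n, 4))                # household covariates (placeholder)
enroll = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))            # NHIP enrollment
che = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * X[:, 1] + 0.3 * enroll))))

# Propensity-score model and outcome models for enrolled / non-enrolled.
ps = LogisticRegression(max_iter=1000).fit(X, enroll).predict_proba(X)[:, 1]
m1 = LogisticRegression(max_iter=1000).fit(X[enroll == 1], che[enroll == 1])
m0 = LogisticRegression(max_iter=1000).fit(X[enroll == 0], che[enroll == 0])
mu1, mu0 = m1.predict_proba(X)[:, 1], m0.predict_proba(X)[:, 1]

# AIPW estimate of the average effect of enrollment on the probability of CHE.
aipw = np.mean(enroll * (che - mu1) / ps + mu1) \
     - np.mean((1 - enroll) * (che - mu0) / (1 - ps) + mu0)
print(f"Doubly robust risk difference (enrolled - not enrolled): {aipw:+.3f}")
```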
Procedia PDF Downloads 25
29 “Divorced Women are Like Second-Hand Clothes” - Hate Language in Media Discourse (Using the Example of Electronic Media Platforms)
Authors: Sopio Totibadze
Abstract:
Although the legal framework of Georgia reflects the main principles of gender equality and is in line with the international situation (UNDP, 2018), Georgia remains a male-dominated society. This means that men prevail in many areas of social, economic, and political life, which frequently gives women a subordinate status in society and the family (UN women). According to the latest study, “violence against women and girls in Georgia is also recognized as a public problem, and it is necessary to focus on it” (UN women). Moreover, the Public Defender's report on the protection of human rights in Georgia (2019) reveals that “in the last five years, 151 women were killed in Georgia due to gender and family violence”. Sadly, these statistics have increased significantly since that time. The issue was acutely reflected in the document published by the Organization for Security and Cooperation in Europe, “Gender Hate Crime” (March 10, 2021). “Unfortunately, the rates of femicide ..... are still high in the country, and distrust of law enforcement agencies often makes such cases invisible, which requires special attention from the state.” More precisely, the cited document considers that there are frequent cases of crimes based on gender-based oppression in Georgia, which pose a threat not only to women but also to people of any gender whose desires and aspirations do not correspond to the gender norms and roles prevailing in society. According to the study, this type of crime has a “significant and lasting impact on the victim(s) and also undermines the safety and cohesion of society and gender equality”. It is well-known that language is often used as a tool for gender oppression (Rusieshvili-Cartledge and Dolidze, 2021; Totibadze, 2021). Therefore, feminist and gender studies in linguistics ultimately serve to represent the problem, reflect on it, and propose ways to solve it. Together with technical advancement in communication, a new form of discrimination has arisen- hate language against women in electronic media discourse. Due to the nature of social media and the internet, messages containing hate language can spread in seconds and reach millions of people. However, only a few know about the detrimental effects they may have on the addressee and society. This paper aims to analyse the hateful comments directed at women on various media platforms to determine (1) the linguistic strategies used while attacking women and (2) the reasons why women may fall victim to this type of hate language. The data have been collected over six months, and overall, 500 comments will be examined for the paper. Qualitative and quantitative analysis was chosen for the methodology of the study. The comments posted on various media platforms, including social media posts, articles, or pictures, have been selected manually due to several reasons, the most important being the problem of identifying hate speech as it can disguise itself in different ways- humour, memes, etc. The comments on the articles, posts, pictures, and videos selected for sociolinguistic analysis depict a woman, a taboo topic, or a scandalous event centred on a woman that triggered a lot of hatred and hate language towards the person to whom the post/article was dedicated. The study has revealed that a woman can become a victim of hatred directed at them if they do something considered to be a deviation from a societal norm, namely, get a divorce, be sexually active, be vocal about feministic values, and talk about taboos. 
Interestingly, those who use hate language are not only men trying to “normalize” prejudiced patriarchal values but also women, who are equally active in bringing down a “strong” woman. The paper also aims to raise awareness of the hate language directed at women, as being knowledgeable about the issue at hand is the first step to tackling it.
Keywords: femicide, hate language, media discourse, sociolinguistics
Procedia PDF Downloads 83
28 Measuring the Biomechanical Effects of Worker Skill Level and Joystick Crane Speed on Forestry Harvesting Performance Using a Simulator
Authors: Victoria L. Chester, Usha Kuruganti
Abstract:
The forest industry is a major economic sector of Canada and also one of the most dangerous industries for workers. The use of mechanized mobile forestry harvesting machines has successfully reduced the incidence of injuries in forest workers related to manual labor. However, these machines have also created additional concerns, including a steep machine-operation learning curve, an increased length of the workday, repetitive strain injury, cognitive load, physical and mental fatigue, and increased postural loads due to sitting in a confined space. It is critical to obtain objective performance data so that employers can develop appropriate work practices for this industry; however, ergonomic field studies of this industry are lacking, mainly due to the difficulties in obtaining comprehensive data while operators are cutting trees in the woods. The purpose of this study was to establish a measurement and experimental protocol to examine the effects of worker skill level and movement training speed (joystick crane speed) on harvesting performance using a forestry simulator. A custom wrist angle measurement device was developed as part of the study to monitor Euler angles during operation of the simulator. The device consisted of two accelerometers, a Bluetooth module, three 3 V coin cells, a microcontroller, a voltage regulator, and application software. Harvesting performance and crane data were provided by the simulator software and included tree-to-frame collisions, crane-to-tree collisions, boom tip distance, number of trees cut, etc. A pilot study of three operators with various skill levels was conducted to identify factors that distinguish highly skilled operators from novice or intermediate operators. Variables such as reaction time, math skill, past work experience, training movement speed (e.g., joystick control speed), harvesting experience level, muscle activity, and wrist biomechanics were measured and analyzed. A 10-channel wireless surface EMG system was used to monitor the amplitude and mean frequency of 10 upper extremity muscles before and after performance on the forestry harvesting simulator. The results of the pilot study showed inconsistent changes in median frequency pre- and post-operation, but there was an increase in the activity of the flexor carpi radialis, anterior deltoid, and upper trapezius of both arms. The wrist sensor results indicated that wrist supination and pronation occurred more than flexion and extension, with radial-ulnar rotation demonstrating the least movement. Overall, wrist angular motion increased as the crane speed increased from slow to fast. Further data collection is needed and will help industry partners determine the factors that separate operator skill levels, identify optimal training speeds, and determine the length of training required to bring new operators to an efficient skill level. In addition to effective employment training programs, the results of this work will be used for selective employee recruitment strategies to improve employee retention after training. Further, improved training procedures and knowledge of the physical and mental demands on workers will lead to highly trained and efficient personnel, reduced risk of injury, and optimal work protocols.
Keywords: EMG, forestry, human factors, wrist biomechanics
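Two of the calculations implied above — wrist tilt angles from an accelerometer reading and the median frequency of an EMG epoch — are sketched below on synthetic signals; the formulas are the standard textbook ones, and the code is not the study's device firmware or analysis pipeline.

```python
# Sketch of two standard calculations relevant to the setup described above:
# (1) static tilt angles from a 3-axis accelerometer sample, as a wrist-angle
# device might report, and (2) the median frequency of an EMG epoch from its
# power spectrum.  Signals are synthetic and purely illustrative.
import numpy as np
from scipy.signal import welch

def tilt_angles(ax, ay, az):
    """Pitch and roll (degrees) from one accelerometer reading at rest."""
    pitch = np.degrees(np.arctan2(-ax, np.hypot(ay, az)))
    roll = np.degrees(np.arctan2(ay, az))
    return pitch, roll

def median_frequency(emg, fs):
    """Frequency that splits the EMG power spectrum into two equal halves."""
    f, pxx = welch(emg, fs=fs, nperseg=min(1024, len(emg)))
    cum = np.cumsum(pxx)
    return f[np.searchsorted(cum, cum[-1] / 2.0)]

# Synthetic examples.
print(tilt_angles(0.10, -0.05, 0.99))                  # wrist near neutral
fs = 1000.0                                            # Hz, typical surface EMG
t = np.arange(0, 2.0, 1 / fs)
emg = np.random.default_rng(3).normal(size=t.size) * np.sin(2 * np.pi * 80 * t)
print(f"median frequency ~ {median_frequency(emg, fs):.1f} Hz")
```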
Procedia PDF Downloads 146
27 Integrating Experiential Real-World Learning in Undergraduate Degrees: Maximizing Benefits and Overcoming Challenges
Authors: Anne E. Goodenough
Abstract:
One of the most important roles of higher education professionals is to ensure that graduates have excellent employment prospects. This means providing students with the skills necessary to be immediately effective in the workplace. Increasingly, universities are seeking to achieve this by moving from lecture-based and campus-delivered curricula to more varied delivery, which takes students out of their academic comfort zone and allows them to engage with, and be challenged by, real world issues. One popular approach is integration of problem-based learning (PBL) projects into curricula. However, although the potential benefits of PBL are considerable, it can be difficult to devise projects that are meaningful, such that they can be regarded as mere ‘hoop jumping’ exercises. This study examines three-way partnerships between academics, students, and external link organizations. It studied the experiences of all partners involved in different collaborative projects to identify how benefits can be maximized and challenges overcome. Focal collaborations included: (1) development of real-world modules with novel assessment whereby the organization became the ‘client’ for student consultancy work; (2) frameworks where students collected/analyzed data for link organizations in research methods modules; (3) placement-based internships and dissertations; (4) immersive fieldwork projects in novel locations; and (5) students working as partners on staff-led research with link organizations. Focus groups, questionnaires and semi-structured interviews were used to identify opportunities and barriers, while quantitative analysis of students’ grades was used to determine academic effectiveness. Common challenges identified by academics were finding suitable link organizations and devising projects that simultaneously provided education opportunities and tangible benefits. There was no ‘one size fits all’ formula for success, but careful planning and ensuring clarity of roles/responsibilities were vital. Students were very positive about collaboration projects. They identified benefits to confidence, time-keeping and communication, as well as conveying their enthusiasm when their work was of benefit to the wider community. They frequently highlighted employability opportunities that collaborative projects opened up and analysis of grades demonstrated the potential for such projects to increase attainment. Organizations generally recognized the value of project outputs, but often required considerable assistance to put the right scaffolding in place to ensure projects worked. Benefits were maximized by ensuring projects were well-designed, innovative, and challenging. Co-publication of projects in peer-reviewed journals sometimes gave additional benefits for all involved, being especially beneficial for student curriculum vitae. PBL and student projects are by no means new pedagogic approaches: the novelty here came from creating meaningful three-way partnerships between academics, students, and link organizations at all undergraduate levels. Such collaborations can allow students to make a genuine contribution to knowledge, answer real questions, solve actual problems, all while providing tangible benefits to organizations. Because projects are actually needed, students tend to engage with learning at a deep level. This enhances student experience, increases attainment, encourages development of subject-specific and transferable skills, and promotes networking opportunities. 
Such projects frequently rely upon students and staff working collaboratively, thereby also acting to break down the traditional teacher/learner division that is typically unhelpful in developing students as advanced learners.
Keywords: higher education, employability, link organizations, innovative teaching and learning methods, interactions between enterprise and education, student experience
Procedia PDF Downloads 183
26 Leveraging Digital Transformation Initiatives and Artificial Intelligence to Optimize Readiness and Simulate Mission Performance across the Fleet
Authors: Justin Woulfe
Abstract:
Siloed logistics and supply chain management systems throughout the Department of Defense (DOD) has led to disparate approaches to modeling and simulation (M&S), a lack of understanding of how one system impacts the whole, and issues with “optimal” solutions that are good for one organization but have dramatic negative impacts on another. Many different systems have evolved to try to understand and account for uncertainty and try to reduce the consequences of the unknown. As the DoD undertakes expansive digital transformation initiatives, there is an opportunity to fuse and leverage traditionally disparate data into a centrally hosted source of truth. With a streamlined process incorporating machine learning (ML) and artificial intelligence (AI), advanced M&S will enable informed decisions guiding program success via optimized operational readiness and improved mission success. One of the current challenges is to leverage the terabytes of data generated by monitored systems to provide actionable information for all levels of users. The implementation of a cloud-based application analyzing data transactions, learning and predicting future states from current and past states in real-time, and communicating those anticipated states is an appropriate solution for the purposes of reduced latency and improved confidence in decisions. Decisions made from an ML and AI application combined with advanced optimization algorithms will improve the mission success and performance of systems, which will improve the overall cost and effectiveness of any program. The Systecon team constructs and employs model-based simulations, cutting across traditional silos of data, aggregating maintenance, and supply data, incorporating sensor information, and applying optimization and simulation methods to an as-maintained digital twin with the ability to aggregate results across a system’s lifecycle and across logical and operational groupings of systems. This coupling of data throughout the enterprise enables tactical, operational, and strategic decision support, detachable and deployable logistics services, and configuration-based automated distribution of digital technical and product data to enhance supply and logistics operations. As a complete solution, this approach significantly reduces program risk by allowing flexible configuration of data, data relationships, business process workflows, and early test and evaluation, especially budget trade-off analyses. A true capability to tie resources (dollars) to weapon system readiness in alignment with the real-world scenarios a warfighter may experience has been an objective yet to be realized to date. By developing and solidifying an organic capability to directly relate dollars to readiness and to inform the digital twin, the decision-maker is now empowered through valuable insight and traceability. This type of educated decision-making provides an advantage over the adversaries who struggle with maintaining system readiness at an affordable cost. The M&S capability developed allows program managers to independently evaluate system design and support decisions by quantifying their impact on operational availability and operations and support cost resulting in the ability to simultaneously optimize readiness and cost. This will allow the stakeholders to make data-driven decisions when trading cost and readiness throughout the life of the program. 
Finally, sponsors are able to validate product deliverables efficiently and with much higher accuracy than in previous years.
Keywords: artificial intelligence, digital transformation, machine learning, predictive analytics
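The cost-versus-readiness trade described above can be illustrated with the textbook operational availability relationship Ao = MTBM / (MTBM + MDT), where adding spares shortens the mean logistics delay at increasing cost; every number in the sketch below is an invented placeholder, not output of the Systecon tooling.

```python
# Toy illustration of trading support cost against readiness, in the spirit of
# the analysis described above: operational availability Ao = MTBM/(MTBM+MDT),
# where more spares shorten the mean logistics delay but raise cost.
# All numbers are invented; this is not the program's actual model.

MTBM_HOURS = 400.0         # mean time between maintenance (hypothetical)
REPAIR_HOURS = 8.0         # hands-on repair time (hypothetical)
UNIT_SPARE_COST = 50_000   # cost per spare line item (hypothetical)

def mean_delay(spares: int) -> float:
    """Hypothetical logistics delay (hours) that shrinks as spares are added."""
    return 120.0 / (1 + spares)

def operational_availability(spares: int) -> float:
    mdt = REPAIR_HOURS + mean_delay(spares)        # mean downtime per event
    return MTBM_HOURS / (MTBM_HOURS + mdt)

for spares in range(0, 9, 2):
    ao = operational_availability(spares)
    cost = spares * UNIT_SPARE_COST
    print(f"spares={spares}  cost=${cost:>7,}  Ao={ao:.3f}")
```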
Procedia PDF Downloads 160
25 Identification Strategies for Unknown Victims from Mass Disasters and Unknown Perpetrators from Violent Crime or Terrorist Attacks
Authors: Michael Josef Schwerer
Abstract:
Background: The identification of unknown victims from mass disasters, violent crimes, or terrorist attacks is frequently facilitated through information from missing persons lists, portrait photos, old or recent pictures showing unique characteristics of a person such as scars or tattoos, or simply reference samples from blood relatives for DNA analysis. In contrast, the identification or at least the characterization of an unknown perpetrator from criminal or terrorist actions remains challenging, particularly in the absence of material or data for comparison, such as fingerprints, which had been previously stored in criminal records. In scenarios that result in high levels of destruction of the perpetrator’s corpse, for instance, blast or fire events, the chance for a positive identification using standard techniques is further impaired. Objectives: This study shows the forensic genetic procedures in the Legal Medicine Service of the German Air Force for the identification of unknown individuals, including such cases in which reference samples are not available. Scenarios requiring such efforts predominantly involve aircraft crash investigations, which are routinely carried out by the German Air Force Centre of Aerospace Medicine as one of the Institution’s essential missions. Further, casework by military police or military intelligence is supported based on administrative cooperation. In the talk, data from study projects, as well as examples from real casework, will be demonstrated and discussed with the audience. Methods: Forensic genetic identification in our laboratories involves the analysis of Short Tandem Repeats and Single Nucleotide Polymorphisms in nuclear DNA along with mitochondrial DNA haplotyping. Extended DNA analysis involves phenotypic markers for skin, hair, and eye color together with the investigation of a person’s biogeographic ancestry. Assessment of the biological age of an individual employs CpG-island methylation analysis using bisulfite-converted DNA. Forensic Investigative Genealogy assessment allows the detection of an unknown person’s blood relatives in reference databases. Technically, end-point-PCR, real-time PCR, capillary electrophoresis, pyrosequencing as well as next generation sequencing using flow-cell-based and chip-based systems are used. Results and Discussion: Optimization of DNA extraction from various sources, including difficult matrixes like formalin-fixed, paraffin-embedded tissues, degraded specimens from decomposed bodies or from decedents exposed to blast or fire events, provides soil for successful PCR amplification and subsequent genetic profiling. For cases with extremely low yields of extracted DNA, whole genome preamplification protocols are successfully used, particularly regarding genetic phenotyping. Improved primer design for CpG-methylation analysis, together with validated sampling strategies for the analyzed substrates from, e.g., lymphocyte-rich organs, allows successful biological age estimation even in bodies with highly degraded tissue material. Conclusions: Successful identification of unknown individuals or at least their phenotypic characterization using pigmentation markers together with age-informative methylation profiles, possibly supplemented by family tree search employing Forensic Investigative Genealogy, can be provided in specialized laboratories. 
However, standard laboratory procedures must be adapted to work with difficult and highly degraded sample materials.
Keywords: identification, forensic genetics, phenotypic markers, CpG methylation, biological age estimation, forensic investigative genealogy
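CpG-methylation age estimation of the kind referred to above usually takes the form of a regression of chronological age on methylation (beta) values at age-informative sites; the sketch below trains such a model on synthetic data and is not the laboratory's validated assay.

```python
# Sketch of the usual form of CpG-methylation age estimation referred to above:
# regress chronological age on methylation levels (beta values) at a handful of
# age-informative CpG sites.  Training data are synthetic and illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n_donors, n_cpgs = 120, 5
age = rng.uniform(18, 80, n_donors)
# Synthetic beta values that drift with age plus measurement noise.
betas = 0.3 + 0.005 * age[:, None] + rng.normal(scale=0.03, size=(n_donors, n_cpgs))

model = LinearRegression().fit(betas, age)
unknown = np.array([[0.52, 0.49, 0.55, 0.50, 0.53]])   # bisulfite-derived betas
print(f"Predicted biological age: {model.predict(unknown)[0]:.1f} years")
```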
Procedia PDF Downloads 51
24 Renewable Energy Utilization for Future Sustainability: An Approach to Roof-Mounted Photovoltaic Array Systems and Domestic Rooftop Rainwater Harvesting System Implementation in Himachal Pradesh, India
Authors: Rajkumar Ghosh, Ananya Mukhopadhyay
Abstract:
This scientific paper presents a thorough investigation into the integration of roof-mounted photovoltaic (PV) array systems and home rooftop rainwater collection systems in a remote community in Himachal Pradesh, India, with the goal of optimum utilization of natural resources for attaining sustainable living conditions by 2030. The study looks into the technical feasibility, environmental benefits, and socioeconomic impacts of this integrated method, emphasizing its ability to address energy and water concerns in remote rural regions. This comprehensive method not only provides a sustainable source of electricity but also ensures a steady supply of clean water, promoting resilience and improving the quality of life for the village's residents. This research highlights the potential of such integrated systems in supporting sustainable conditions in rural areas through a combination of technical feasibility studies, economic analysis, and community interaction. Himachal Pradesh has 20,690 villages and 1.48 million homes (a 23.79% annual growth rate); if all residential buildings in the state had roof-mounted photovoltaic arrays to capture solar energy for power generation, the energy produced would power homes and lessen dependency on traditional fossil fuels. The same residential buildings can also host domestic rooftop rainwater collection systems. Rainwater runoff from rooftops is collected and stored in tanks for a number of residential purposes, such as drinking, cooking, and irrigation. The gathered rainwater supplements the region's limited groundwater resources, easing the strain on local wells and aquifers. Although Himachal Pradesh is a power state, the PV arrays reduce village reliance on grid power and diesel generators by providing a steady source of electricity. Rooftop rainwater harvesting not only increases the residential water supply but also lessens the burden on local groundwater resources. This helps to replenish groundwater and offers a more sustainable water supply for the community, and households save money by utilizing renewable energy and harvested rainwater. Furthermore, lower fossil fuel consumption reduces greenhouse gas emissions, which helps to mitigate the effects of climate change. The integrated strategy of installing grid-connected rooftop photovoltaic arrays and home rooftop rainwater collection systems in rural communities of Himachal Pradesh demonstrates a feasible model for sustainable development. According to the 'Swaran Jayanti Energy Policy of Himachal Pradesh', the state has planned 10 GW of solar power in rooftop mode. The Government of India provides a 40% subsidy on solar panels of 1-3 kW and a subsidy of Rs 6,000 per kW per year to encourage domestic consumers in Himachal Pradesh. This effort addresses energy and water concerns, improves economic well-being, and helps to conserve the environment. Such integrated systems can serve as a model for sustainable development in rural areas not only in Himachal Pradesh but also in other parts of the world where resource scarcity is a major concern. The long-term performance and scalability of such integrated systems should be the focus of future study. Efforts should also be made to replicate this approach in other rural areas and examine its socioeconomic and environmental implications over time.
Keywords: renewable energy, photovoltaic arrays, rainwater harvesting, sustainability, rural development, Himachal Pradesh, India
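The household-level sizing arithmetic behind the two systems discussed above can be illustrated as follows: annual PV yield ≈ capacity × peak sun hours × performance ratio × 365, and harvestable rainwater ≈ roof area × annual rainfall × runoff coefficient. Every input value in the sketch is an assumed, typical-order figure rather than survey data from the paper.

```python
# Back-of-envelope sizing arithmetic for one household, illustrating the two
# systems discussed above.  Every number here is an illustrative assumption
# (typical-order values), not data reported in the paper.

# Rooftop PV: annual energy ~ capacity x peak-sun-hours x performance ratio.
pv_capacity_kw = 3.0          # upper end of the subsidised 1-3 kW range
peak_sun_hours = 4.5          # h/day, assumed for a Himachal site
performance_ratio = 0.75      # losses from temperature, wiring, soiling
annual_pv_kwh = pv_capacity_kw * peak_sun_hours * performance_ratio * 365

# Rooftop rainwater: harvest ~ roof area x annual rainfall x runoff coefficient.
roof_area_m2 = 80.0           # assumed roof footprint
annual_rainfall_m = 1.2       # assumed annual rainfall (1200 mm)
runoff_coefficient = 0.8      # losses to evaporation, first flush, overflow
annual_harvest_litres = roof_area_m2 * annual_rainfall_m * runoff_coefficient * 1000

print(f"Indicative PV generation  : {annual_pv_kwh:,.0f} kWh/year")
print(f"Indicative rainwater yield: {annual_harvest_litres:,.0f} litres/year")
```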
Procedia PDF Downloads 100
23 Synthesis of Chitosan/Silver Nanocomposites: Antibacterial Properties and Tissue Regeneration for Thermal Burn Injury
Authors: B.L. España-Sánchez, E. Luna-Hernández, R.A. Mauricio-Sánchez, M.E. Cruz-Soto, F. Padilla-Vaca, R. Muñoz, L. Granados-López, L.R. Ovalle-Flores, J.L. Menchaca-Arredondo, G. Luna-Bárcenas
Abstract:
Treatment of burn injuries has been considered an important clinical problem due to fluid control and the presence of microorganisms during the healing process. Conventional treatment includes antiseptic techniques, topical medication, and surgical removal of damaged skin to avoid bacterial growth. In order to accelerate this process, different alternatives for tissue regeneration have been explored, including artificial skin, polymers, hydrogels, and hybrid materials. Key requirements include a nonreactive organic polymer with high biocompatibility and skin adherence that avoids bacterial infections. Chitin-derived biopolymers such as chitosan (CS) have been used in skin regeneration following third-degree burns. The biological interest of CS is associated with the improvement of tissue cell stimulation, biocompatibility, and antibacterial properties. In particular, the antimicrobial properties of CS can be significantly increased when it is blended with nanostructured materials. Silver-based nanocomposites have gained attention in medicine due to their high antibacterial activity against pathogens, related to their high surface area/volume ratio at nanomolar concentrations. Silver nanocomposites can be blended or synthesized with chitin-derived biopolymers in order to obtain a biodegradable/antimicrobial hybrid with improved physico-mechanical properties. In this study, nanocomposites based on chitosan/silver nanoparticles (CS/nAg) were synthesized by the in situ chemical reduction method, improving their antibacterial properties against pathogenic bacteria and enhancing the healing process in thermal burn injuries produced in an animal model. CS/nAg was prepared in solution by the chemical reduction method, using AgNO₃ as the precursor. CS was dissolved in acetic acid and mixed with different molar concentrations of AgNO₃: 0.01, 0.025, 0.05, and 0.1 M. Solutions were stirred at 95°C for 20 hours to promote nAg formation. CS/nAg solutions were placed in Petri dishes and dried to obtain films. Structural analyses by means of UV-Vis and TEM confirmed the synthesis of silver nanoparticles (nAg) with an average size of 7.5 nm and spherical morphology. FTIR analyses showed complex formation through the interaction of hydroxyl and amine groups with the metallic nanoparticles, and surface chemical analysis (XPS) showed a low concentration of Ag⁰/Ag⁺ species. Surface topography analyses by means of AFM showed that hydrated CS forms a mesh with an average diameter of 10 µm. Antibacterial activity against S. aureus and P. aeruginosa was improved in all evaluated conditions of nAg loading and interaction time. CS/nAg nanocomposite films did not show Ag⁰/Ag⁺ release into saline buffer or rat serum after exposure for 7 days. The healing process was significantly enhanced by the presence of CS/nAg nanocomposites, which induced the production of myofibroblasts, collagen remodeling, blood vessel neoformation, and epidermis regeneration after 7 days of injury treatment, as shown by histological and immunohistochemistry assays. The present work suggests that hydrated CS/nAg nanocomposites can form a mesh, improving bacterial penetration and contact with the embedded nAg and producing complete growth inhibition after 1.5 hours. Furthermore, CS/nAg nanocomposites improve tissue regeneration in thermal burn injuries induced in rats.
The synthesis of antibacterial, non-toxic, and biocompatible nanocomposites is an important goal in tissue engineering and health care applications.
Keywords: antibacterial, chitosan, healing process, nanocomposites, silver
Procedia PDF Downloads 287