Search results for: infinite feature selection
64 Contactless Heart Rate Measurement System based on FMCW Radar and LSTM for Automotive Applications
Authors: Asma Omri, Iheb Sifaoui, Sofiane Sayahi, Hichem Besbes
Abstract:
Future vehicle systems demand advanced capabilities, notably in-cabin life detection and driver monitoring systems, with a particular emphasis on drowsiness detection. To meet these requirements, several techniques employ artificial intelligence methods based on real-time vital sign measurements. In parallel, Frequency-Modulated Continuous-Wave (FMCW) radar technology has garnered considerable attention in the domains of healthcare and biomedical engineering for non-invasive vital sign monitoring. FMCW radar offers a multitude of advantages, including its non-intrusive nature, continuous monitoring capacity, and its ability to penetrate through clothing. In this paper, we propose a system utilizing the AWR6843AOP radar from Texas Instruments (TI) to extract precise vital sign information. The radar allows us to estimate ballistocardiogram (BCG) signals, which capture the mechanical movements of the body, particularly the ballistic forces generated by heartbeats and respiration. These signals are rich sources of information about the cardiac cycle, rendering them suitable for heart rate estimation. The process begins with real-time subject positioning, followed by clutter removal, computation of Doppler phase differences, and the use of various filtering methods to accurately capture subtle physiological movements. To address the challenges associated with FMCW radar-based vital sign monitoring, including motion artifacts due to subjects' movement or radar micro-vibrations, Long Short-Term Memory (LSTM) networks are implemented. The adaptability of LSTM networks to different heart rate patterns and their ability to handle real-time data make them suitable for continuous monitoring applications. Several crucial steps were taken, including feature extraction (involving amplitude, time intervals, and signal morphology), sequence modeling, heart rate estimation through the analysis of detected cardiac cycles and their temporal relationships, and performance evaluation using metrics such as Root Mean Square Error (RMSE) and correlation with reference heart rate measurements. For dataset construction and LSTM training, a comprehensive data collection system was established, integrating the AWR6843AOP radar, a heart rate belt, and a smartwatch for ground truth measurements. Rigorous synchronization of these devices ensured data accuracy. Twenty participants engaged in various scenarios, encompassing indoor and real-world conditions within a moving vehicle equipped with the radar system. Both static and dynamic subject conditions were considered. Heart rate estimation through LSTM outperforms traditional signal processing techniques that rely on filtering, Fast Fourier Transform (FFT), and thresholding, delivering an average accuracy of approximately 91% with an RMSE of 1.01 beats per minute (bpm). In conclusion, this paper underscores the promising potential of FMCW radar technology integrated with artificial intelligence algorithms in the context of automotive applications. This innovation not only enhances road safety but also paves the way for integration into the automotive ecosystem to improve driver well-being and overall vehicular safety.
Keywords: ballistocardiogram, FMCW radar, vital sign monitoring, LSTM
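As an illustration of the sequence-modeling step described above, the sketch below shows a minimal LSTM regressor mapping windows of extracted BCG features to a heart rate in bpm, evaluated with RMSE. The window length, feature set, layer sizes, and synthetic data are assumptions for demonstration, not the authors' published configuration.

```python
# Minimal sketch of an LSTM heart-rate regressor (assumed configuration).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.metrics import RootMeanSquaredError

WINDOW = 200   # time steps of processed radar phase signal per window (assumed)
FEATURES = 3   # e.g., amplitude, inter-beat interval, morphology descriptor

model = Sequential([
    LSTM(64, input_shape=(WINDOW, FEATURES)),  # sequence model over BCG features
    Dense(32, activation="relu"),
    Dense(1),                                  # predicted heart rate in bpm
])
model.compile(optimizer="adam", loss="mse", metrics=[RootMeanSquaredError()])

# Stand-in random data in place of the synchronized radar/heart-rate-belt dataset.
X = np.random.randn(32, WINDOW, FEATURES).astype("float32")
y = np.random.uniform(55, 100, size=(32, 1)).astype("float32")
model.fit(X, y, epochs=2, batch_size=8, verbose=0)
```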
Procedia PDF Downloads 72
63 The Regulation of the Cancer Epigenetic Landscape Lies in the Realm of the Long Non-coding RNAs
Authors: Ricardo Alberto Chiong Zevallos, Eduardo Moraes Rego Reis
Abstract:
Pancreatic adenocarcinoma (PDAC) patients have a less than 10% five-year survival rate, and PDAC has no defined diagnostic and prognostic biomarkers. Gemcitabine is the first-line drug in PDAC and several other cancers. Long non-coding RNAs (lncRNAs) contribute to tumorigenesis and are potential biomarkers for PDAC. Although lncRNAs are not translated into proteins, they have important functions: they can decoy or recruit proteins from the epigenetic machinery, act as microRNA sponges, participate in protein translocation through different cellular compartments, and even promote chemoresistance. The chromatin remodeling enzyme EZH2 is a histone methyltransferase that catalyzes the methylation of histone 3 at lysine 27, silencing local expression. EZH2 is ambivalent: it can also activate gene expression independently of its histone methyltransferase activity. EZH2 is overexpressed in several cancers and interacts with lncRNAs, being recruited to a specific locus, where it can activate an oncogene or silence a tumor suppressor. The misregulation of lncRNAs in cancer can therefore result in the differential recruitment of EZH2 and in a distinct epigenetic landscape, promoting chemoresistance. The relevance of the EZH2-lncRNA interaction to chemoresistant PDAC was assessed by real-time quantitative PCR (RT-qPCR) and RNA immunoprecipitation (RIP) experiments with naïve and gemcitabine-resistant PDAC cells. The expression of several lncRNAs and EZH2 gene targets was evaluated, contrasting naïve and resistant cells. Candidate genes were selected by bioinformatic analysis and literature curation. Indeed, the resistant cell line showed higher expression of chemoresistance-associated lncRNAs and protein-coding genes. RIP detected lncRNAs interacting with EZH2 at varying intensity levels in the cell lines. During RIP, the nuclear fraction of the cells was incubated with an antibody against EZH2 and with magnetic beads. The RNA precipitated with the beads-antibody-EZH2 complex was isolated and reverse transcribed. The presence of candidate lncRNAs was detected by RT-qPCR, and the enrichment was calculated relative to INPUT (a total lysate control sample collected before RIP). The enrichment levels varied across the several lncRNAs and cell lines. The EZH2-lncRNA interaction might be responsible for the regulation of chemoresistance-associated genes in multiple cancers. The relevance of the lncRNA-EZH2 interaction to PDAC was assessed by siRNA knockdown of a lncRNA, followed by analysis of EZH2 target expression by RT-qPCR. Chromatin immunoprecipitation (ChIP) of EZH2 and H3K27me3, followed by RT-qPCR with primers for EZH2 targets, also assesses the specificity of EZH2 recruitment by the lncRNA. This is the first report of the interaction of EZH2 with the lncRNAs HOTTIP and PVT1 in chemoresistant PDAC. HOTTIP and PVT1 have been described as promoting chemoresistance in several cancers, but the role of EZH2 has not been clarified. For the first time, the lncRNA LINC01133 was detected in a chemoresistant cancer. The interaction of EZH2 with LINC02577, LINC00920, LINC00941, and LINC01559 has never been reported in any context. The novel lncRNA-EZH2 interactions regulate chemoresistance-associated genes in PDAC and might be relevant to other cancers. Therapies targeting EZH2 alone were not successful, and a combinatorial approach also targeting the lncRNAs interacting with it might be key to overcoming chemoresistance in several cancers.
Keywords: epigenetics, chemoresistance, long non-coding RNAs, pancreatic cancer, histone modification
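For context on the enrichment calculation mentioned above, the snippet below shows the standard percent-input arithmetic commonly used for RIP-qPCR; the Ct values and the 10% input fraction are invented for demonstration and are not the study's data.

```python
# Illustrative percent-input calculation for RIP-qPCR enrichment (assumed values).
import math

def percent_input(ct_input, ct_ip, input_fraction=0.10):
    """Enrichment of a transcript in the IP relative to the INPUT sample."""
    # Correct the INPUT Ct for being only a fraction of the lysate.
    adjusted_input = ct_input - math.log2(1 / input_fraction)
    return 100 * 2 ** (adjusted_input - ct_ip)

print(round(percent_input(ct_input=22.1, ct_ip=26.8), 2))  # -> 0.38 (% of input)
```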
Procedia PDF Downloads 96
62 Predictive Analytics for Theory Building
Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim
Abstract:
Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) on a single person or unit. It applies empirical methods in statistics, operations research, and machine learning to predict future or otherwise unknown events or outcomes on a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate the hypothesized metabolic syndrome predictors (theory testing). A proposed theoretical model is formed with causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis. However, predictive analytics can perform vital roles in explanatory studies, i.e., scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how to use predictive analytics to support theory building (i.e., hypothesis generation). For this purpose, the study utilized a big data predictive analytics platform based on a co-occurrence graph. The co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where items in a basket are fully connected. A cluster is a collection of fully connected items, where the specific group of items has co-occurred in several rows of a data set. Clusters can be ranked using importance metrics such as node size (number of items), frequency, and surprise (observed frequency vs. expected), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge amounts of transactions can be represented and processed efficiently. For a demonstration, a total of 13,254 metabolic syndrome training observations were plugged into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors, for example, associated with sociodemographics, habits, and activities. Some are intentionally included to gain predictive analytics insights on variable selection, such as cancer examination, house type, and vaccination. The platform automatically generates plausible hypotheses (rules) without statistical modeling. The rules are then validated with an external testing dataset of 4,090 observations. Results, as a kind of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation. On the other hand, a set of rules (many estimated equations from a statistical perspective) in this study may imply heterogeneity in a population (i.e., different subpopulations with unique features are aggregated). The next step of theory development, theory testing, statistically tests whether a proposed theoretical model is a plausible explanation of the phenomenon of interest. If the generated hypotheses are tested statistically with several thousand observations, most of the variables will become significant as the p-values approach zero. Thus, theory validation needs statistical methods utilizing a part of the observations, such as bootstrap resampling with an appropriate sample size.
Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building
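To make the co-occurrence idea concrete, here is a minimal sketch that fully connects the items within each record and ranks item pairs by "surprise" (observed vs. expected co-occurrence under independence). The records and item names are invented stand-ins; the platform's actual internals are not described in the abstract.

```python
# Toy co-occurrence graph: items in a record are fully connected; pairs are
# ranked by surprise = observed co-occurrences / expected under independence.
from itertools import combinations
from collections import Counter

records = [  # invented stand-ins for the 31-predictor observations
    {"male", "smoker", "high_BMI"},
    {"male", "smoker", "sedentary"},
    {"female", "high_BMI", "sedentary"},
    {"male", "smoker", "high_BMI"},
]
n = len(records)
item_freq = Counter(item for rec in records for item in rec)
pair_freq = Counter(frozenset(p) for rec in records
                    for p in combinations(sorted(rec), 2))

for pair, observed in pair_freq.most_common():
    a, b = sorted(pair)
    expected = item_freq[a] * item_freq[b] / n  # expected count if independent
    print(f"{a} & {b}: observed={observed}, surprise={observed / expected:.2f}")
```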
Procedia PDF Downloads 276
61 Effectiveness of an Intervention to Increase Physics Students' STEM Self-Efficacy: Results of a Quasi-Experimental Study
Authors: Stephanie J. Sedberry, William J. Gerace, Ian D. Beatty, Michael J. Kane
Abstract:
Increasing the number of US university students who attain degrees in STEM and enter the STEM workforce is a national priority. Demographic groups vary in their rates of participation in STEM, and the US produces just 10% of the world's science and engineering degrees (2014 figures). To address these gaps, we have developed and tested a practical, 30-minute, single-session classroom-based intervention to improve students' self-efficacy and academic performance in university STEM courses. Self-efficacy is an internal psychosocial construct, relating to the social, emotional, and psychological aspects of student motivation and performance, that strongly correlates with academic success. A compelling body of research demonstrates that university students' self-efficacy beliefs are strongly related to their selection of STEM as a major, aspirations for STEM-related careers, and persistence in science. The development of an intervention to increase students' self-efficacy is motivated by research showing that short, social-psychological interventions in education can lead to large gains in student achievement. Our intervention addresses STEM self-efficacy via two strong, but previously separate, lines of research into attitudinal/affect variables that influence student success. The first is 'attributional retraining,' in which students learn to attribute their successes and failures to internal rather than external factors. The second is 'mindset' about fixed vs. growable intelligence, in which students learn that the brain remains plastic throughout life and that they can, with conscious effort and attention to thinking skills and strategies, become smarter. Extant interventions for both of these constructs have significantly increased academic performance in the classroom. We developed a 34-item questionnaire (Likert scale) to measure STEM self-efficacy, perceived academic control, and growth mindset in a university STEM context, and validated it with exploratory factor analysis, Rasch analysis, and multi-trait multi-method comparison to coded interviews. Four iterations of our 42-week research protocol were conducted across two academic years (2017-2018) at three different universities in North Carolina, USA (UNC-G, NC A&T SU, and NCSU) with varied student demographics. We utilized a quasi-experimental prospective multiple-group time series research design with both experimental and control groups, and we are employing linear modeling to estimate the impact of the intervention on self-efficacy, growth mindset, perceived academic control, and final course grades (performance measure). Preliminary results indicate statistically significant effects of treatment vs. control on self-efficacy, growth mindset, and perceived academic control. Analyses are ongoing and final results are pending. This intervention may have the potential to increase student success in the STEM classroom, and ownership of that success, encouraging persistence toward a STEM career. Additionally, we have learned a great deal about the complex components and dynamics of self-efficacy, their link to performance, and the ways they can be impacted to improve students' academic performance.
Keywords: academic performance, affect variables, growth mindset, intervention, perceived academic control, psycho-social variables, self-efficacy, STEM, university classrooms
Procedia PDF Downloads 127
60 Predicting Career Adaptability and Optimism among University Students in Turkey: The Role of Personal Growth Initiative and Socio-Demographic Variables
Authors: Yagmur Soylu, Emir Ozeren, Erol Esen, Digdem M. Siyez, Ozlem Belkis, Ezgi Burc, Gülce Demirgurz
Abstract:
The aim of the study is to determine the predictive power of personal growth initiative and socio-demographic variables (such as sex, grade, and working condition) on the career adaptability and optimism of bachelor students at Dokuz Eylul University in Turkey. According to career construction theory, career adaptability is viewed as a psychosocial construct, which refers to an individual's resources for dealing with current and expected tasks, transitions and traumas in their occupational roles. Career optimism is defined as expecting positive outcomes for one's future career development, or putting the emphasis on the positive aspects of events and feeling comfortable about the career planning process. Personal growth initiative (PGI) is defined as being proactive about one's personal development; personal growth itself is the active and intentional engagement in the process of personal growth. A study conducted on college students revealed that individuals with a high self-development orientation make more effort to discover the requirements of the profession and workspaces than individuals with low levels of personal development orientation. University life is a period in which social relations and academic activities gain importance, students make efforts to progress along their career paths, and the environment offers students opportunities for self-realization. For these reasons, personal growth initiative is potentially an important variable with a key role for an individual during the transition phase from university to working life. Based on the review of the literature, it is expected that an individual's personal growth initiative, sex, grade, and working condition would significantly predict one's career adaptability. In the relevant literature, there are relatively few studies available on the career adaptability and optimism of university students, and most of the existing studies have been carried out with limited respondents. In this study, the authors aim to conduct comprehensive research with a large representative sample of bachelor students at Dokuz Eylul University, Izmir, Turkey. To date, personal growth initiative and career development constructs have been predominantly discussed in western contexts where individualistic tendencies are likely to be seen. Thus, the examination of the same relationship within the context of Turkey, where collectivistic cultural characteristics are more observable, is expected to offer valuable insights and provide an important contribution to the literature. The participants in this study comprised 1,500 undergraduate students from thirteen faculties at Dokuz Eylul University. Stratified and random sampling methods were adopted for the selection of the participants. The Personal Growth Initiative Scale-II and the Career Futures Inventory were used as the major measurement tools. In the data analysis stage, several statistical analyses, including regression analysis, one-way ANOVA, and t-tests, will be conducted to reveal the relationships among the constructs under investigation. At the end of this project, we will be able to determine the levels of career adaptability and optimism of university students at varying degrees, creating fertile ground for intervention techniques that contribute to the emergence of a psychosocially healthier and more productive youth generation.
Keywords: career optimism, career adaptability, personal growth initiative, university students
Procedia PDF Downloads 421
59 Use of Artificial Intelligence and Two Object-Oriented Approaches (k-NN and SVM) for the Detection and Characterization of Wetlands in the Centre-Val de Loire Region, France
Authors: Bensaid A., Mostephaoui T., Nedjai R.
Abstract:
Nowadays, wetlands are the subject of contradictory debates opposing scientific, political and administrative meanings. Indeed, given their multiple services (drinking water, irrigation, hydrological regulation, mineral, plant and animal resources...), wetlands concentrate many socio-economic and biodiversity issues. In some regions, they can cover vast areas (>100 thousand ha) of the landscape, such as the Camargue area in the south of France, inside the Rhone delta. The high biological productivity of wetlands, the strong natural selection pressures and the diversity of aquatic environments have produced many species of plants and animals that are found nowhere else. These environments are tremendous carbon sinks and biodiversity reserves; depending on their age, composition and surrounding environmental conditions, wetlands play an important role in global climate projections. Covering more than 3% of the earth's surface, wetlands have experienced since the beginning of the 1990s a tremendous revival of interest, which has resulted in the multiplication of inventories, scientific studies and management experiments. The geographical and physical characteristics of the wetlands of the central region conceal a large number of natural habitats that harbour a great biological diversity. These wetlands are still influenced by human activities, especially agriculture, which affects their layout and functioning. In this perspective, decision-makers need to delimit spatial objects (natural habitats) in a certain way to be able to take action. Wetlands are no exception to this rule, even if it is a difficult exercise to delimit a type of environment whose main characteristic is often to occupy the transition between aquatic and terrestrial environments. However, it is possible to map wetlands with databases derived from the interpretation of photos and satellite images, such as the European database Corine Land Cover, which allows quantifying and characterizing the characteristic wetland types of each place. Scientific studies have shown limitations when using high spatial resolution images (SPOT, Landsat, ASTER) for the identification and characterization of small wetlands (about 1 hectare), which generally represent spatially complex features; the use of very high spatial resolution images (finer than 3 m) is therefore necessary to map both small and large areas. Moreover, the recent evolution of artificial intelligence (AI) and deep learning methods for satellite image processing has shown much better performance compared to traditional processing based only on pixel structures. Our research work is based on spectral and textural analysis of very high resolution images (SPOT and IRC orthoimages) using two object-oriented approaches: the k-nearest neighbour approach (k-NN) and the Support Vector Machine approach (SVM). The k-NN approach gave good results for the delineation of wetlands (wet marshes and moors, ponds, artificial wetlands, water body edges, mountain wetlands, river edges and brackish marshes), with a kappa index higher than 85%.
Keywords: land development, GIS, sand dunes, segmentation, remote sensing
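As a sketch of the object-oriented k-NN step, the following uses scikit-learn to classify image segments described by spectral/textural features and scores the result with a kappa index. The features, labels, and k value are illustrative assumptions, not the study's actual configuration.

```python
# Hedged sketch: k-NN classification of image segments into wetland types.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.random((300, 4))      # per-segment features, e.g., NIR mean, NDVI, texture, brightness
y = rng.integers(0, 3, 300)   # stand-in labels: 0=pond, 1=wet marsh, 2=non-wetland

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("kappa index:", cohen_kappa_score(y_te, knn.predict(X_te)))
```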
Procedia PDF Downloads 72
58 Advancements in Arthroscopic Surgery Techniques for Anterior Cruciate Ligament (ACL) Reconstruction
Authors: Islam Sherif, Ahmed Ashour, Ahmed Hassan, Hatem Osman
Abstract:
Anterior Cruciate Ligament (ACL) injuries are common among athletes and individuals participating in sports with sudden stops, pivots, and changes in direction. Arthroscopic surgery is the gold standard for ACL reconstruction, aiming to restore knee stability and function. Recent years have witnessed significant advancements in arthroscopic surgery techniques, graft materials, and technological innovations, revolutionizing the field of ACL reconstruction. This presentation delves into the latest advancements in arthroscopic surgery techniques for ACL reconstruction and their potential impact on patient outcomes. Traditionally, autografts from the patellar tendon, hamstring tendon, or quadriceps tendon have been commonly used for ACL reconstruction. However, recent studies have explored the use of allografts, synthetic scaffolds, and tissue-engineered grafts as viable alternatives. This abstract evaluates the benefits and potential drawbacks of each graft type, considering factors such as graft incorporation, strength, and risk of graft failure. Moreover, the application of augmented reality (AR) and virtual reality (VR) technologies in surgical planning and intraoperative navigation has gained traction. AR and VR platforms provide surgeons with detailed 3D anatomical reconstructions of the knee joint, enhancing preoperative visualization and aiding in graft tunnel placement during surgery. We discuss the integration of AR and VR in arthroscopic ACL reconstruction procedures, evaluating their accuracy, cost-effectiveness, and overall impact on surgical outcomes. Beyond graft selection and surgical navigation, patient-specific planning has gained attention in recent research. Advanced imaging techniques, such as MRI-based personalized planning, enable surgeons to tailor ACL reconstruction procedures to each patient's unique anatomy. By accounting for individual variations in the femoral and tibial insertion sites, this personalized approach aims to optimize graft placement and potentially improve postoperative knee kinematics and stability. Furthermore, rehabilitation and postoperative care play a crucial role in the success of ACL reconstruction. This abstract explores novel rehabilitation protocols, emphasizing early mobilization, neuromuscular training, and accelerated recovery strategies. Integrating technology, such as wearable sensors and mobile applications, into postoperative care can facilitate remote monitoring and timely intervention, contributing to enhanced rehabilitation outcomes. In conclusion, this presentation provides an overview of the cutting-edge advancements in arthroscopic surgery techniques for ACL reconstruction. By embracing innovative graft materials, augmented reality, patient-specific planning, and technology-driven rehabilitation, orthopedic surgeons and sports medicine specialists can achieve superior outcomes in ACL injury management. These developments hold great promise for improving the functional outcomes and long-term success rates of ACL reconstruction, benefitting athletes and patients alike.
Keywords: arthroscopic surgery, ACL, autograft, allograft, graft materials, ACL reconstruction, synthetic scaffolds, tissue-engineered graft, virtual reality, augmented reality, surgical planning, intra-operative navigation
Procedia PDF Downloads 92
57 Differential Expression Profile Analysis of DNA Repair Genes in Mycobacterium leprae by qPCR
Authors: Mukul Sharma, Madhusmita Das, Sundeep Chaitanya Vedithi
Abstract:
Leprosy is a chronic human disease caused by Mycobacterium leprae, which cannot be cultured in vitro. Though treatable with multidrug therapy (MDT), the bacterium has recently been reported to be resistant to multiple antibiotics. Targeting DNA replication and repair pathways can serve as the foundation for developing new anti-leprosy drugs. Due to the absence of an axenic culture medium for the propagation of M. leprae, studying cellular processes, especially those belonging to DNA repair pathways, is challenging. The genome of M. leprae harbors several protein-coding genes with no previously assigned function, known as 'hypothetical proteins'. Here, we report the identification and expression of known and hypothetical DNA repair genes, involved in base excision repair (BER), direct reversal repair (DR), and the SOS response, from a human skin biopsy and mouse footpads. Initially, a bioinformatics approach based on sequence similarity and the identification of known protein domains was employed to screen the hypothetical proteins in the genome of M. leprae that are potentially related to DNA repair mechanisms. Before testing on clinical samples, pure stocks of bacterial reference DNA of M. leprae (NHDP63 strain) were used to construct standard curves to validate the qPCR experiments and identify their lower detection limit. Primers were designed to amplify the respective transcripts, and PCR products of the predicted size were obtained. Later, excisional skin biopsies of newly diagnosed untreated, treated, and drug-resistant leprosy cases from SIHR & LC Hospital, Vellore, India, were taken for the extraction of RNA. To determine the presence of the predicted transcripts, cDNA was generated from M. leprae mRNA isolated from clinically confirmed leprosy skin biopsy specimens across all the study groups. Melting curve analysis was performed to determine the integrity of the amplification and to rule out primer-dimer formation. The Ct values obtained from qPCR were fitted to the standard curve to determine transcript copy number. The same procedure was applied to M. leprae extracted from footpads of nude mice carrying drug-sensitive and drug-resistant strains. 16S rRNA was used as a positive control. For all 16 genes involved in BER, DR, and SOS, a differential expression pattern was observed in terms of Ct values when mouse samples were compared to human samples; this was because of the different host and its immune response. However, no drastic variation in gene expression levels was observed in human samples except for the nth gene. The higher expression of the nth gene could be due to mutations associated with sequence diversity and drug resistance, which suggests an important role in the repair mechanism that remains to be explored. In both human and mouse samples, the SOS system genes lexA and recA and the BER genes alkB and ogt were expressed efficiently to deal with possible DNA damage. Together, the results of the present study suggest that DNA repair genes are constitutively expressed and may provide a reference for molecular diagnosis, therapeutic target selection, determination of treatment, and prognostic judgment in M. leprae pathogenesis.
Keywords: DNA repair, human biopsy, hypothetical proteins, mouse footpads, Mycobacterium leprae, qPCR
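For reference, the copy-number step described above (fitting Ct values to a standard curve) reduces to simple arithmetic once the curve's slope and intercept are known; the values below are invented for demonstration, not the study's fitted curve.

```python
# Illustrative transcript copy-number estimate from a qPCR standard curve.
def copies_from_ct(ct, slope=-3.32, intercept=38.0):
    """Standard curve: Ct = slope * log10(copies) + intercept (assumed fit)."""
    return 10 ** ((ct - intercept) / slope)

# A slope of -3.32 corresponds to roughly 100% amplification efficiency.
efficiency = 10 ** (-1 / -3.32) - 1
print(f"{copies_from_ct(25.0):,.0f} copies; efficiency {efficiency:.2f}")
```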
Procedia PDF Downloads 103
56 An Analysis of Economical Drivers and Technical Challenges for Large-Scale Biohydrogen Deployment
Authors: Rouzbeh Jafari, Joe Nava
Abstract:
This study includes learnings from engineering practice normally performed on large-scale biohydrogen processes. If scale-up is done properly, biohydrogen can be a reliable pathway for biowaste valorization. Most studies on biohydrogen process development have used model feedstock to investigate process key performance indicators (KPIs). This study does not intend to compare different technologies with model feedstock; rather, it reports economic drivers and technical challenges that help in developing a road map for expanding biohydrogen economy deployment in Canada. BBA is a consulting firm responsible for the design of hydrogen production projects. Through executing these projects, work has been performed to identify, register, and mitigate technical drawbacks of large-scale hydrogen production. Those learnings have been applied in this study to the biohydrogen process. Using data collected through a comprehensive literature review, a base case was considered as a reference, and several case studies were performed. Critical parameters of the process were identified, and through common engineering practice (process design, simulation, cost estimation, and life cycle assessment) the impact of these parameters on the commercialization risk matrix and class 5 cost estimates was reported. The process considered in this study is dark fermentation of food waste and woody biomass. To propose a reliable road map for developing a sustainable biohydrogen production process, the impact of critical parameters on the end-to-end process was studied. These parameters were 1) feedstock composition, 2) feedstock pre-treatment, 3) unit operation selection, and 4) the multi-product concept. A couple of emerging technologies were also assessed, such as photo-fermentation, integrated dark fermentation, and the use of ultrasound and microwaves to break down the feedstock's complex matrix and increase overall hydrogen yield. To properly report the impact of each parameter, KPIs were identified as 1) hydrogen yield, 2) energy consumption, 3) secondary waste generated, 4) CO2 footprint, 5) product profile, 6) $/kg-H2, and 7) environmental impact. The feedstock is the main parameter defining the economic viability of biohydrogen production. Through parametric studies, it was found that biohydrogen production favors feedstock with higher carbohydrate content. The feedstock composition was varied by increasing one critical element (such as carbohydrate) and monitoring the evolution of the KPIs. Different cases were studied with diverse feedstocks, such as energy crops, wastewater sludge, and lignocellulosic waste. The base case process was applied to obtain reference KPI values, and modifications such as pre-treatment and feedstock mix-and-match were implemented to investigate KPI changes. The complexity of the feedstock is the main bottleneck in the successful commercial deployment of the biohydrogen process as a reliable pathway for waste valorization. Hydrogen yield, reaction kinetics, and the performance of key unit operations are highly impacted as feedstock composition fluctuates during the lifetime of the process or from one case to another. In such cases, the multi-product concept becomes more reliable: the process is not designed to produce only one target product, such as biohydrogen, but two or more products (biohydrogen and biomethane or biochemicals). This new approach is being investigated by the BBA team, and the results will be shared in another scientific contribution.
Keywords: biohydrogen, process scale-up, economic evaluation, commercialization uncertainties, hydrogen economy
Procedia PDF Downloads 110
55 An Integrated Water Resources Management Approach to Evaluate Effects of Transportation Projects in Urbanized Territories
Authors: Berna Çalışkan
Abstract:
Integrated water management is a collaborative approach to planning that brings together the institutions that influence all elements of the water cycle: waterways, watershed characteristics, wetlands, ponds, lakes, floodplain areas, and stream channel structure. It encourages collaboration where it will be beneficial and links water planning with other planning processes that contribute to improving sustainable urban development and liveability. Hydraulic considerations can influence the selection of a highway corridor and the alternate routes within the corridor, as well as works such as widening a roadway, replacing a culvert, or repairing a bridge. Because of this, the type and amount of data needed for planning studies can vary widely depending on such elements as environmental considerations, class of the proposed highway, state of land use development, and individual site conditions. The extraction of drainage networks provides helpful preliminary drainage data from the digital elevation model (DEM). A case study was carried out in the study area using the Arc Hydro extension within ArcGIS, which provides the means for processing and presenting a spatially-referenced stream model. The study area's flow routing, stream levels, segmentation, and drainage point processing can be obtained using the DEM as the 'Input surface raster'. These processes integrate the fields of hydrologic and engineering research and environmental modeling in a multi-disciplinary program designed to provide decision makers with a science-based understanding of, and innovative tools for, the development of an interdisciplinary and multi-level approach. This research helps manage transport project planning and construction phases by analyzing surficial water flow, high-level streams, and wetland sites across transportation infrastructure planning, implementation, maintenance, monitoring, and long-term evaluation, to better face the challenges associated with effective management of areas with low, medium, and high levels of impact. Transport projects are frequently perceived as critical to the 'success' of major urban, metropolitan, regional and/or national development because of their potential to effect significant socio-economic and territorial change. In this context, sustaining and developing economic and social activities depend on sufficient water resources management. The results of our research provide a workflow for building a stream network and show how a suitability map can be classified according to stream levels. Transportation projects can be established, developed, and delivered effectively by selecting the best locations, reducing construction and maintenance costs, and identifying cost-effective solutions for drainage, landslide, and flood control. According to the model findings, field studies should be conducted to fill gaps and check for errors. In future research, this study can be extended to determine and prevent possible damage to sensitive areas and vulnerable zones, supported by field investigations.
Keywords: water resources management, hydro tool, water protection, transportation
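The DEM-to-stream-network workflow mentioned above follows the standard fill, flow direction, flow accumulation, and stream ordering sequence; a hypothetical arcpy sketch is shown below (Arc Hydro wraps equivalent geoprocessing steps). The file paths and the accumulation threshold are assumptions for illustration.

```python
# Hypothetical arcpy sketch of DEM-based stream extraction (assumed paths/threshold).
import arcpy
from arcpy.sa import Fill, FlowDirection, FlowAccumulation, Con, StreamOrder

arcpy.CheckOutExtension("Spatial")

dem = Fill("study_area_dem.tif")          # remove sinks before routing
flow_dir = FlowDirection(dem)             # D8 flow routing
flow_acc = FlowAccumulation(flow_dir)     # contributing cells per cell
streams = Con(flow_acc > 1000, 1)         # keep cells draining >1000 cells (assumed)
levels = StreamOrder(streams, flow_dir)   # Strahler stream levels
levels.save("stream_levels.tif")
```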
Procedia PDF Downloads 56
54 PsyVBot: Chatbot for Accurate Depression Diagnosis using Long Short-Term Memory and NLP
Authors: Thaveesha Dheerasekera, Dileeka Sandamali Alwis
Abstract:
The escalating prevalence of mental health issues, such as depression and suicidal ideation, is a matter of significant global concern. A variety of factors, such as life events, social isolation, and preexisting physiological or psychological health conditions, can instigate or exacerbate these conditions. Traditional approaches to diagnosing depression require a considerable amount of time and the involvement of adept practitioners. This underscores the necessity for automated systems capable of promptly detecting and diagnosing symptoms of depression. The PsyVBot system employs natural language processing and machine learning methodologies, including the NLTK toolkit for dataset preprocessing and a Long Short-Term Memory (LSTM) model. PsyVBot diagnoses depression with a 94% accuracy rate through the analysis of user input. Consequently, this resource proves to be efficacious for individuals, particularly those enrolled in academic institutions, who may encounter challenges pertaining to their psychological well-being. PsyVBot employs an LSTM model that comprises three layers: an embedding layer, an LSTM layer, and a dense layer. This layering facilitates a precise examination of linguistic patterns associated with depression, enabling PsyVBot to assess an individual's level of depression through the identification of linguistic and contextual cues. This is achieved via a rigorous training regimen using a dataset sourced from the subreddit r/SuicideWatch; the diversity of this dataset supports precise and sensitive identification of symptoms linked with depression. Beyond its diagnostic capabilities, PsyVBot enhances the user experience through audio outputs, enabling more engaging and interactive sessions. The platform offers individuals the opportunity to conveniently assess mental health challenges through a confidential and user-friendly interface. In the development of PsyVBot, user confidentiality and ethical principles are of paramount significance: diligent efforts are undertaken to adhere to ethical standards, safeguarding the confidentiality and security of user information. Moreover, the chatbot fosters a supportive and compassionate atmosphere that promotes psychological welfare. In brief, PsyVBot is an automated conversational agent that utilizes an LSTM model to assess the level of depression from the input provided by the user. The demonstrated accuracy rate of 94% is a promising indication of the potential efficacy of natural language processing and machine learning techniques in tackling mental health challenges, and the use of the Reddit dataset and the Natural Language Toolkit (NLTK) for preprocessing further supports PsyVBot's reliability. PsyVBot represents a pioneering and user-centric solution that furnishes an easily accessible and confidential medium for seeking assistance, offered as a modality to tackle the pervasive issues of depression and suicidal ideation.
Keywords: chatbot, depression diagnosis, LSTM model, natural language processing
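The three-layer architecture named above (embedding, LSTM, dense) can be sketched as follows; the vocabulary size, sequence length, and unit counts are illustrative assumptions rather than PsyVBot's actual hyperparameters.

```python
# Minimal sketch of an embedding -> LSTM -> dense text classifier (assumed sizes).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

VOCAB, MAXLEN = 10_000, 120   # tokenized (e.g., with NLTK) and padded inputs

model = Sequential([
    Embedding(VOCAB, 64),            # token ids -> dense vectors
    LSTM(128),                       # sequence context over the post
    Dense(1, activation="sigmoid"),  # probability of depressive language
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.build(input_shape=(None, MAXLEN))
model.summary()
```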
Procedia PDF Downloads 69
53 Educational Knowledge Transfer in Indigenous Mexican Areas Using Cloud Computing
Authors: L. R. Valencia Pérez, J. M. Peña Aguilar, A. Lamadrid Álvarez, A. Pastrana Palma, H. F. Valencia Pérez, M. Vivanco Vargas
Abstract:
This work proposes a cooperative-competitive ('coopetitive') approach that allows coordinated work among the Secretary of Public Education (SEP), the Autonomous University of Querétaro (UAQ), and government funds from the National Council for Science and Technology (CONACYT) or other international organizations, to develop an overall knowledge transfer strategy with e-learning over the cloud. Experts in junior high and high school education, working in multidisciplinary teams, perform analysis, evaluation, design, production, validation, and knowledge transfer at large scale using a cloud computing platform, allowing teachers and students to have all the information required to ensure nationally homologated knowledge of topics such as mathematics, statistics, chemistry, history, ethics, and civics. This work will start with a pilot test in Spanish and, initially, in two regional indigenous languages, Otomí and Náhuatl. Otomí has more than 285,000 speakers in Querétaro and Mexico's central region. Náhuatl is the most widely spoken indigenous language in Mexico, with more than 1,550,000 speakers. Phase one of the project takes into account negotiations with indigenous tribes from different regions and the information and communication technologies needed to deliver the knowledge to the indigenous schools in their native language. The methodology includes the following main milestones: identification of the indigenous areas where Otomí and Náhuatl are spoken; research with the SEP on the location of existing indigenous schools; analysis and inventory of current school conditions; negotiation with tribe chiefs; analysis of the technological communication requirements to reach the indigenous communities; identification and inventory of local teachers' technology knowledge; selection of a pilot topic; analysis of current student competence under the traditional education system; identification of local translators; design of the e-learning platform; design of the multimedia resources and storage strategy for cloud computing; translation of the topic into both languages; training of indigenous teachers; pilot test; course release; project follow-up; analysis of student requirements for the new technological platform; and definition of a new and improved proposal with greater reach in topics and regions. The importance of phase one of the project is manifold: it includes the proposal of a working technological scheme focused on cultural impact in Mexico, so that indigenous tribes can improve their knowledge of new forms of crop improvement, home storage technologies, proven home remedies for common diseases, and ways of preparing foods containing major nutrients; disclose the strengths and weaknesses of each region; communicate through cloud computing platforms offering regional products; and open communication spaces for inter-indigenous cultural exchange.
Keywords: Mexican indigenous tribes, education, knowledge transfer, cloud computing, Otomí, Náhuatl, language
Procedia PDF Downloads 405
52 Association between Polygenic Risk of Alzheimer's Dementia, Brain MRI and Cognition in UK Biobank
Authors: Rachana Tank, Donald. M. Lyall, Kristin Flegal, Joey Ward, Jonathan Cavanagh
Abstract:
Alzheimer's Research UK estimates that by 2050, 2 million individuals will be living with late-onset Alzheimer's disease (LOAD). However, individuals experience considerable cognitive deficits and brain pathology over decades before reaching clinically diagnosable LOAD, and studies have utilised genome-wide association studies (GWAS) and polygenic risk (PGR) scores to identify high-risk individuals and potential pathways. This investigation aims to determine whether high genetic risk of LOAD is associated with worse brain MRI measures and cognitive performance in healthy older adults within the UK Biobank cohort. Previous studies investigating associations of PGR for LOAD with measures of MRI or cognitive functioning have focused on specific aspects of hippocampal structure, in relatively small sample sizes and with poor control for confounders such as smoking. Both the sample size of this study and the discovery GWAS sample are, to our knowledge, larger than in previous studies. Genetic interactions between the loci showing the largest effects in GWAS have not been extensively studied, and it is known that APOE e4 poses the largest genetic risk of LOAD, with potential gene-gene and gene-environment interactions of e4; for this reason, we also analyse genetic interactions of PGR with the APOE e4 genotype. We hypothesise that high genetic loading, based on a polygenic risk score of 21 SNPs for LOAD, is associated with worse brain MRI and cognitive outcomes in healthy individuals within the UK Biobank cohort. Summary statistics from the Kunkle et al. GWAS meta-analysis (cases: n=30,344; controls: n=52,427) will be used to create polygenic risk scores based on 21 SNPs, and analyses will be carried out in N=37,000 participants in the UK Biobank. This will be the largest study to date investigating PGR of LOAD in relation to MRI. MRI outcome measures include white matter tracts and structural volumes. Cognitive function measures include reaction time, pairs matching, trail making, digit symbol substitution, and prospective memory. The interaction of the APOE e4 alleles and PGR will be analysed by including APOE status as an interaction term, coded as 0, 1, or 2 e4 alleles. Models will be partially adjusted for age, BMI, sex, genotyping chip, smoking, depression, and social deprivation. Preliminary results suggest that the PGR score for LOAD is associated with decreased hippocampal volumes, including the hippocampal body (standardised beta = -0.04, P = 0.022) and tail (standardised beta = -0.037, P = 0.030), but not the hippocampal head. There were also associations of genetic risk with decreased cognitive performance, including fluid intelligence (standardised beta = -0.08, P<0.01) and slower reaction time (standardised beta = 2.04, P<0.01). No genetic interactions were found between APOE e4 dose and PGR score for MRI or cognitive measures. The generalisability of these results is limited by selection bias within the UK Biobank, as participants are less likely to be obese, smoke, or be socioeconomically deprived, and have fewer self-reported health conditions, compared to the general population. The lack of a unified approach or standardised method for calculating genetic risk scores may also be a limitation of these analyses. Further discussion and results are pending.
Keywords: Alzheimer's dementia, cognition, polygenic risk, MRI
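For readers unfamiliar with PGR scores, the construction is a weighted sum of risk-allele dosages across the selected SNPs, with weights taken from the discovery GWAS effect sizes. The sketch below uses invented SNP names, betas, and dosages, not Kunkle et al.'s values.

```python
# Illustrative polygenic risk score: sum of (effect size x risk-allele dosage).
betas = {"rs0001": 0.12, "rs0002": -0.08, "rs0003": 0.05}  # hypothetical GWAS log-odds
dosages = {"rs0001": 2, "rs0002": 1, "rs0003": 0}          # 0/1/2 risk alleles carried

prs = sum(betas[snp] * dosages[snp] for snp in betas)
print(f"PRS = {prs:.3f}")  # -> PRS = 0.160
```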
Procedia PDF Downloads 113
51 Probability Modeling and Genetic Algorithms in Small Wind Turbine Design Optimization: Mentored Interdisciplinary Undergraduate Research at LaGuardia Community College
Authors: Marina Nechayeva, Malgorzata Marciniak, Vladimir Przhebelskiy, A. Dragutan, S. Lamichhane, S. Oikawa
Abstract:
This presentation is a progress report on a faculty-student research collaboration at CUNY LaGuardia Community College (LaGCC) aimed at designing a small horizontal axis wind turbine optimized for the wind patterns on the roof of our campus. Our project combines statistical and engineering research. Our wind modeling protocol is based upon a recent wind study by a faculty-student research group at MIT, and some of our blade design methods are adopted from a senior engineering project at CUNY City College. Our use of genetic algorithms has been inspired by the work on small wind turbine design by David Wood. We combine these diverse approaches in our interdisciplinary project in a way that has not been done before and improve upon certain techniques used by our predecessors. We employ several estimation methods to determine the best-fitting parametric probability distribution model for the local wind speed data obtained by correlating short-term on-site measurements with a long-term time series at the nearby airport. The model serves as a foundation for engineering research that focuses on adapting and implementing genetic algorithms (GAs) for engineering optimization of the wind turbine design using Blade Element Momentum Theory. GAs are used to create new airfoils with desirable aerodynamic specifications. Small-scale models of the best-performing designs are 3D printed and tested in the wind tunnel to verify the accuracy of relevant calculations. Genetic algorithms are applied to selected airfoils to determine the blade design (radial chord and pitch distribution) that would optimize the coefficient-of-power profile of the turbine. Our approach improves upon traditional blade design methods in that it lets us dispense with the assumptions necessary to simplify the system of Blade Element Momentum Theory equations, thus resulting in more accurate aerodynamic performance calculations. Furthermore, it enables us to design blades optimized for a whole range of wind speeds rather than a single value. Lastly, we improve upon known GA-based methods in that our algorithms are constructed to work with XFoil-generated airfoil data, which enables us to optimize blades using our own high-glide-ratio airfoil designs, without having to rely upon available empirical data from existing airfoils, such as the NACA series. Beyond its immediate goal, this ongoing project serves as a training and selection platform for the CUNY Research Scholars Program (CRSP) through its annual Aerodynamics and Wind Energy Research Seminar (AWERS), an undergraduate summer research boot camp designed to introduce prospective researchers to the relevant theoretical background and methodology, get them up to speed with the current state of our research, and test their abilities and commitment to the program. Furthermore, several aspects of the research (e.g., writing code for 3D printing of airfoils) are adapted in the form of classroom research activities to enhance Calculus sequence instruction at LaGCC.
Keywords: engineering design optimization, genetic algorithms, horizontal axis wind turbine, wind modeling
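To illustrate the GA mechanics described above (selection, crossover, and mutation over chord and pitch distributions), here is a toy sketch. The fitness function is a smoothness placeholder standing in for a Blade Element Momentum Theory evaluation of the power coefficient, and all sizes are invented.

```python
# Toy GA over a blade encoded as N_SECTIONS chord values + N_SECTIONS pitch values.
import random

N_SECTIONS, POP, GENS = 8, 30, 40

def fitness(blade):
    # Placeholder: rewards smooth taper and small pitch; a real version
    # would evaluate the BEMT power coefficient instead.
    chords, pitches = blade[:N_SECTIONS], blade[N_SECTIONS:]
    return -sum((chords[i] - chords[i + 1]) ** 2 for i in range(N_SECTIONS - 1)) \
           - 0.01 * sum(p * p for p in pitches)

def random_blade():
    return [random.uniform(0.05, 0.3) for _ in range(N_SECTIONS)] + \
           [random.uniform(-5.0, 20.0) for _ in range(N_SECTIONS)]

pop = [random_blade() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 2]                 # elitist selection: keep top half
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, 2 * N_SECTIONS)
        child = a[:cut] + b[cut:]             # one-point crossover
        i = random.randrange(2 * N_SECTIONS)
        child[i] *= random.uniform(0.9, 1.1)  # small multiplicative mutation
        children.append(child)
    pop = parents + children

print("best fitness:", round(max(map(fitness, pop)), 4))
```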
Procedia PDF Downloads 231
50 Describing Cognitive Decline in Alzheimer's Disease via a Picture Description Writing Task
Authors: Marielle Leijten, Catherine Meulemans, Sven De Maeyer, Luuk Van Waes
Abstract:
For the diagnosis of Alzheimer's disease (AD), a large variety of neuropsychological tests are available. In some of these tests, linguistic processing - both oral and written - is an important factor. Language disturbances might serve as a strong indicator for an underlying neurodegenerative disorder like AD. However, the current diagnostic instruments for language assessment mainly focus on product measures, such as text length or number of errors, ignoring the importance of the process that leads to written or spoken language production. In this study, it is our aim to describe and test differences between cognitively healthy and impaired elderly on the basis of a selection of writing process variables (inter- and intrapersonal characteristics). These process variables are mainly related to pause times, because the number, length, and location of pauses have proven to be an important indicator of the cognitive complexity of a process. Method: Participants enrolled in our research were chosen on the basis of a number of basic criteria necessary to collect reliable writing process data. Furthermore, we opted to match the thirteen cognitively impaired patients (8 MCI and 5 AD) with thirteen cognitively healthy elderly. At the start of the experiment, participants were each given a number of tests, such as the Mini-Mental State Examination (MMSE), the Geriatric Depression Scale (GDS), the forward and backward digit span, and the Edinburgh Handedness Inventory (EHI). Also, a questionnaire was used to collect socio-demographic information (age, gender, education) about the subjects, as well as more details on their level of computer literacy. The tests and questionnaire were followed by two typing tasks and two picture description tasks. For the typing tasks, participants had to copy (type) characters, words and sentences from a screen, whereas the picture description tasks each consisted of an image they had to describe in a few sentences. Both the typing and the picture description tasks were logged with Inputlog, a keystroke logging tool that allows us to log and time-stamp keystroke activity to reconstruct and describe text production processes. The main rationale behind keystroke logging is that writing fluency and flow reveal traces of the underlying cognitive processes. This explains the analytical focus on pause (length, number, distribution, location, etc.) and revision (number, type, operation, embeddedness, location, etc.) characteristics. As in speech, pause times are seen as indexical of cognitive effort. Results: Preliminary analysis already showed some promising results concerning pause times before, within, and after words. For all variables, mixed effects models were used that included participants as a random effect and MMSE scores, GDS scores, and word categories (such as determiners and nouns) as fixed effects. For pause times before and after words, cognitively impaired patients paused longer than healthy elderly. These variables did not show an interaction effect between group (cognitively impaired or healthy elderly) and word category. However, pause times within words did show an interaction effect, which indicates that pause times within certain word categories differ significantly between patients and healthy elderly.
Keywords: Alzheimer's disease, keystroke logging, matching, writing process
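As a concrete illustration of the pause analysis, the sketch below extracts inter-keystroke pauses from a toy log and labels their location relative to word boundaries. The log format and the 200 ms fluency threshold are assumptions for demonstration, not Inputlog's actual output schema.

```python
# Toy pause extraction: pauses are gaps above a threshold, located relative to words.
log = [(0.00, "T"), (0.18, "h"), (0.31, "e"), (1.52, " "),
       (1.90, "c"), (2.05, "a"), (2.21, "t")]   # (timestamp in s, key)

THRESHOLD = 0.2  # gaps shorter than this are treated as fluent typing (assumed)
pauses = []
for (t0, k0), (t1, k1) in zip(log, log[1:]):
    gap = t1 - t0
    if gap < THRESHOLD:
        continue
    location = ("before word" if k0 == " "
                else "after word" if k1 == " "
                else "within word")
    pauses.append((round(gap, 2), location))

print(pauses)  # -> [(1.21, 'after word'), (0.38, 'before word')]
```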
Procedia PDF Downloads 366
49 Including All Citizens Pathway (IACP): Transforming Post-Secondary Education Using Inclusion and Accessibility as Foundation
Authors: Fiona Whittington-Walsh
Abstract:
Including All Citizens Pathway (IACP) addresses the systems-wide discrimination that students with disabilities experience throughout the education system. IACP offers a wide institutional support structure so that all students, including students with intellectual/developmental disabilities, are included and can succeed. The entire process, from admissions and course selection through course instruction and graduation, is designed to address systemic discrimination while supporting learners and faculty. The inclusive and accessible pedagogical model that is the foundation of IACP opens the doors of post-secondary education by making existing academic courses environments where all students can participate and succeed. IACP is about transforming teaching, not modifying or adapting the curriculum or the essential knowledge and skill sets that are required learning outcomes. Universal Design for Learning (UDL) principles are applied to instructional teaching strategies such as lectures, presentations, and assessment tools. Created in 2016 as a research pilot, IACP is one of the first fully inclusive, for-credit post-secondary options available. The pilot received numerous external and internal grants to support its investigation and assessment of the teaching strategies and techniques that support student learning of essential knowledge and skill sets. The pilot goals included: (1) providing a successful pilot as a model of inclusive and accessible pedagogy; (2) creating a teacher's guide to assist other instructors in transforming their teaching to reach a wide range of learners; (3) identifying policy barriers located within the educational system; and (4) providing leadership and encouraging innovative and inclusive pedagogical practices. The pilot was a success, and in 2020 the first cohort of students graduated with an exit credential that pre-exists IACP and consists of ten academic courses. The university has committed to continuing IACP and has developed a sustainable model. Each new academic year, a new cohort of IACP students starts their post-secondary educational journey, while two additional instructors are mentored in the pedagogy. The pedagogical foundation of IACP has far-reaching potential, including, but not limited to, programs that offer services for international students whose first language is not English, as well as influencing pedagogical reform in secondary and post-secondary education. IACP also supports universities in satisfying educational standards that are or will be included in accessibility/disability legislation. This session will present information about IACP, share examples of systems transformation, hear from students and instructors, and provide participatory experiential activities that demonstrate the transformative techniques. We will draw from the experiences of a recent course that explored research documenting the lived experiences of students with disabilities in post-secondary institutes in B.C. (Whittington-Walsh). Students created theatrical scenes out of the data and presented them using the Forum Theatre method. Forum Theatre was used to create conversations, challenge stereotypes, and build connections between ableism, disability justice, Indigeneity, and social policy.
Keywords: disability justice, inclusive education, pedagogical transformation, systems transformation
Procedia PDF Downloads 848 Design Thinking and Project-Based Learning: Opportunities, Challenges, and Possibilities
Authors: Shoba Rathilal
Abstract:
High unemployment rates and a shortage of experienced and qualified employees appear to be a paradox that currently plagues most countries worldwide. In a developing country like South Africa, the rate of unemployment is reported to be approximately 35%, the highest recorded globally. At the same time, a countrywide deficit in experienced and qualified potential employees is reported in South Africa, causing fierce rivalry among firms. Employers report that graduates are very rarely able to meet the demands of the job, as there are gaps in their knowledge, conceptual understanding, and the other 21st-century competencies, attributes, and dispositions required to successfully negotiate the multiple responsibilities of employees in organizations. In addition, the rates of unemployment and the suitability of graduates appear to be skewed by race and social class, the continued effects of a legacy of inequitable educational access. Higher education in the current technologically advanced and dynamic world needs to serve as an agent of transformation, aspiring to develop graduates who are creative, flexible, critical, and possessed of entrepreneurial acumen. This demands a re-envisioning of the selection, sequencing, and pacing of learning, teaching, and assessment in higher education curricula and pedagogy. At a particular higher education institution in South Africa, Design Thinking (DT) and Project-Based Learning (PBL) are being adopted as two approaches that aim to enhance the student experience through the provision of a “distinctive education” that brings together disciplinary knowledge, professional engagement, technology, innovation, and entrepreneurship. Using these methodologies requires students to solve real-world applied problems using various forms of knowledge and to find innovative solutions that can result in new products and services. The intention is to promote the development of skills for self-directed learning, facilitate the development of self-awareness, and contribute to students being active partners in the application and production of knowledge. These approaches emphasize active and collaborative learning, teamwork, conflict resolution, and problem-solving through effective integration of theory and practice. In principle, both these approaches are extremely impactful. However, at the institution in this study, the implementation of PBL and DT was not as “smooth” as anticipated. This presentation reports on an analysis of the implementation of these two approaches within higher education curricula at a particular university in South Africa. The study adopts a qualitative case study design. Data were generated through surveys, evaluation feedback at workshops, and content analysis of project reports, and were analyzed using document analysis and content and thematic analysis. Initial analysis shows that the forces constraining the implementation of PBL and DT range from staff and student capacity to engage with DT and PBL to the contextual realities of higher education institutions, administrative processes, and resources. At the same time, the implementation of DT and PBL was enabled through the allocation of strategic funding and capacity development workshops. These factors, however, could not achieve maximum impact. In addition, the presentation will include recommendations on how DT and PBL could be adapted for differing contexts.Keywords: design thinking, project based learning, innovative higher education pedagogy, student and staff capacity development
Procedia PDF Downloads 7747 SWOT Analysis on the Prospects of Carob Use in Human Nutrition: Crete, Greece
Authors: Georgios A. Fragkiadakis, Antonia Psaroudaki, Theodora Mouratidou, Eirini Sfakianaki
Abstract:
Research: Within the project "Actions for the optimal utilization of the potential of carob in the Region of Crete", which is financed and supervised by the Region in collaboration with Crete University and the Hellenic Mediterranean University, a SWOT (strengths, weaknesses, opportunities, threats) survey was carried out to evaluate the prospects of carob in human nutrition in Crete. Results and conclusions: 1). Strengths: A local production of carob for human consumption exists, based on international and local product reports. The data on products in the market (over 100 brands of carob food) indicate a sufficiency of carob materials offered in Crete. The variety of carob food products retailed in Crete indicates a strong demand-production-consumption trend. There is a stable core of businesses that invest significantly (Creta Carob, Cretan Mills, etc.). The great majority of the relevant food stores (bakery, confectionery, etc.) offer carob products. The presence of carob products produced in Crete is strong on the internet (over 20 main professionally designed websites). The promotion of carob food products is based on their variety and on a few historical elements connected with the Cretan diet. 2). Weaknesses: International prices for carob seed affect the sector; the seed had an international price of €20 per kg in 2021-22, which fell to €8 in 2022, causing losses to carob traders. Local producers do not sort the carobs they deliver for processing, causing 30-40% losses of the product in the industry. Occasional high prices trigger the collection of degraded raw material; large losses may emerge due to the action of insects. There are many carob trees whose fruits are not collected, e.g. in Apokoronas, Chania. The nutritional and commercial value of wild carob fruits is very low. Carob tree production is recorded by the Greek statistical services under "other cultures", in combination with prickly pear, creating difficulties in retrieving data. The percentage of carob used for human nutrition, as opposed to animal feeding, is not known. The exact imports of carob are not closely monitored. There are no data on the recycling of carob by-products in Crete. 3). Opportunities: The development of a culture of respect for the carob trade may improve professional relations in the sector. Monitoring the carob market and connecting production with retailing and industry needs may allow better market stability. Raw material evaluation procedures may be implemented to maintain the carob value chain. The state agricultural services may be further involved in carob health protection. The education of farmers on carob cultivation/management can improve the quality of the product. The selection of local productive varieties may improve the sustainability of the culture. Connecting the consumption of carob with health-food products may create added value in the sector. The presence and extent of wild carob trees in Crete represent a potential target for grafting. 4). Threats: The annual fluctuation of carob yield challenges the programming of local food industry activities. Carob is also a forest species; there is a danger of crops being wrongly classified as forest areas where land ownership is unclear.Keywords: human nutrition, carob food, SWOT analysis, Crete, Greece
Procedia PDF Downloads 9246 Chain Networks on Internationalization of SMEs: Co-Opetition Strategies in Agrifood Sector
Authors: Emilio Galdeano-Gómez, Juan C. Pérez-Mesa, Laura Piedra-Muñoz, María C. García-Barranco, Jesús Hernández-Rubio
Abstract:
The situation in which firms engage in simultaneous cooperation and competition with each other is a phenomenon known as co-opetition. This scenario has received increasing attention in business economics and management analyses. In the domain of supply chain networks and for small and medium-sized enterprises (SMEs), these strategies are of greater relevance given the complex environment of globalization and competition in open markets. These firms face greater challenges regarding technology and access to specific resources due to their limited capabilities and limited market presence. Consequently, alliances and collaborations with both buyers and suppliers prove to be key elements in overcoming these constraints. However, rivalry and competition are also regarded as major factors in successful internationalization processes, as they drive firms to attain a greater degree of specialization and to improve efficiency, for example, enabling them to allocate scarce resources optimally and providing incentives for innovation and entrepreneurship. The present work aims to contribute to the literature on SMEs’ internationalization strategies. The sample consists of panel data on marketing firms from the Andalusian food sector, and a multivariate regression analysis is developed, measuring variables of co-opetition and international activity. The hierarchical regression equations method has been followed, resulting in three estimated models: the first excludes the variables indicative of channel type, while the latter two include the international retail chain and wholesaler variables. The findings show that the combination of several factors leads to a complex scenario of inter-organizational relationships of cooperation and competition. In supply chain management analyses, these relationships tend to be classified as either buyer-supplier (vertical level) or supplier-supplier (horizontal level) relationships. Several buyers and suppliers tend to participate in supply chain networks in which the form of governance (hierarchical and non-hierarchical) influences cooperation and competition strategies. For instance, due to their market power and/or their closeness to the end consumer, some buyers (e.g. large retailers in food markets) can exert an influence on the selection and interaction of several of their intermediate suppliers, thus endowing certain networks in the supply chain with greater stability. This hierarchical influence may in turn allow these suppliers to develop their capabilities (e.g. specialization) to a greater extent. On the other hand, for those suppliers outside these networks, this environment of hierarchy, characterized by a “hub firm” or “channel master”, may provide an incentive for developing their co-opetition relationships. The results show that the analyzed firms have experienced considerable growth in sales to new foreign markets, mainly in Europe, dealing with large retail chains and wholesalers as their main buyers. This supply industry is predominantly made up of numerous SMEs, which has implied a certain disadvantage when dealing with buyers, as negotiations have traditionally been held on an individual basis and in the face of high competition among suppliers. Over recent years, however, cooperation among these marketing firms has become more common, for example regarding R&D, promotion, and the scheduling of production and sales.Keywords: co-opetition networks, international supply chain, marketing agrifood firms, SMEs strategies
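For readers unfamiliar with the hierarchical regression equations method named above, the following minimal Python sketch illustrates the idea of estimating nested models and comparing their incremental explanatory power. All variable names (export_sales, cooperation_rd, retail_chain, wholesaler) and the synthetic data are illustrative assumptions, not the study's actual dataset or specification.

    # Minimal sketch of hierarchical (nested) regression on synthetic firm data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 120  # synthetic stand-in for the firm-level panel
    df = pd.DataFrame({
        "cooperation_rd": rng.uniform(0, 1, n),      # R&D cooperation intensity
        "competitor_count": rng.integers(1, 30, n),  # proxy for local competition
        "retail_chain": rng.integers(0, 2, n),       # sells via international retail chains
        "wholesaler": rng.integers(0, 2, n),         # sells via wholesalers
    })
    df["export_sales"] = (2.0 * df.cooperation_rd + 0.05 * df.competitor_count
                          + 0.8 * df.retail_chain + 0.4 * df.wholesaler
                          + rng.normal(0, 0.5, n))

    # Block 1 omits channel type; blocks 2 and 3 add the channel variables in turn.
    m1 = smf.ols("export_sales ~ cooperation_rd + competitor_count", df).fit()
    m2 = smf.ols("export_sales ~ cooperation_rd + competitor_count + retail_chain", df).fit()
    m3 = smf.ols("export_sales ~ cooperation_rd + competitor_count + retail_chain + wholesaler", df).fit()

    # The incremental R-squared of each block is the quantity of interest.
    for name, m in [("M1", m1), ("M2", m2), ("M3", m3)]:
        print(name, round(m.rsquared, 3))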
Procedia PDF Downloads 7945 Optimizing Solids Control and Cuttings Dewatering for Water-Powered Percussive Drilling in Mineral Exploration
Authors: S. J. Addinell, A. F. Grabsch, P. D. Fawell, B. Evans
Abstract:
The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled-tubing-based greenfields mineral exploration drilling system utilising down-hole water-powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. The system has shown superior rates of penetration in water-rich, hard rock formations at depths exceeding 500 metres. With fluid flow rates of up to 120 litres per minute at 200 bar operating pressure required to energise the bottom hole tooling, excessive quantities of high quality drilling fluid (water) would be needed for a prolonged drilling campaign. As a result, drilling fluid recovery and recycling has been identified as a necessary option to minimise costs and logistical effort. While the majority of the cuttings report as coarse particles, a significant fines fraction will typically also be present. To maximise tool life, the percussive bottom hole assembly requires high quality fluid with minimal solids loading, and any recycled fluid needs a solids cut point below 40 microns and a concentration of less than 400 ppm before it can be used to re-energise the system. This paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process shows a strong power law relationship for the particle size distributions. These data are critical in optimising solids control strategies and cuttings dewatering techniques. The optimisation of deployable solids control equipment is discussed, together with how the required centrate clarity was achieved in the presence of pyrite-rich metasediment cuttings. Key results were the successful pre-aggregation of fines through the selection and use of high molecular weight anionic polyacrylamide flocculants, and the techniques developed for optimal dosing prior to scroll decanter centrifugation, thus keeping sub-40-micron solids loading within prescribed limits. Experiments on maximising fines capture in the presence of thixotropic drilling fluid additives (e.g. xanthan gum and other biopolymers) are also discussed. As no core is produced during the drilling process, it is intended that the particle-laden returned drilling fluid be used for top-of-hole geochemical and mineralogical assessment. A discussion is therefore presented on the biasing and latency of cuttings representativeness introduced by dewatering techniques, as well as the resulting detrimental effects on depth fidelity and accuracy. Data pertaining to sample biasing of geochemical signatures due to particle size distributions are presented and show that, depending on the solids control and dewatering techniques used, these can have an unwanted influence on top-of-hole analysis. Strategies are proposed to overcome these effects, improving sample quality. Successful solids control and cuttings dewatering for water-powered percussive drilling is presented, contributing towards the successful advancement of coiled-tubing-based greenfields mineral exploration.Keywords: cuttings, dewatering, flocculation, percussive drilling, solids control
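As a concrete illustration of the power law relationship reported for the cuttings' particle size distributions, the short Python sketch below fits a Gates-Gaudin-Schuhmann-style model to hypothetical sieve data. The sieve sizes and cumulative percentages are invented for illustration and are not the project's measurements; the paper's own fitting procedure may differ.

    # Fit a power-law PSD (P = C * d^m, linear in log-log space) to toy sieve data.
    import numpy as np

    sizes_um = np.array([38, 75, 150, 300, 600, 1180])          # sieve apertures (microns)
    pct_passing = np.array([4.1, 9.8, 22.5, 47.0, 71.3, 93.6])  # cumulative % passing

    # Ordinary least squares on the logs recovers the power-law exponent.
    m, log_c = np.polyfit(np.log(sizes_um), np.log(pct_passing), 1)
    print(f"power-law exponent m = {m:.2f}")

    # Estimated fraction finer than the 40 micron cut point used for recycled fluid.
    pred_40 = np.exp(log_c) * 40 ** m
    print(f"predicted % passing 40 microns = {pred_40:.1f}")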
Procedia PDF Downloads 24844 The Perspective of British Politicians on English Identity: Qualitative Study of Parliamentary Debates, Blogs, and Interviews
Authors: Victoria Crynes
Abstract:
The question of England’s role in Britain is increasingly relevant due to the ongoing rise in citizens identifying as English. Furthermore, the Brexit Referendum was predominantly supported by constituents identifying as English. Few politicians appear to comprehend how Englishness is politically manifested. Politics and the media have depicted English identity as a negative and extremist problem - an inaccurate representation that ignores the breadth of English-identifying citizens. This environment prompts the question, 'How are British politicians addressing the modern English identity question?' Parliamentary debates, political blogs, and interviews are synthesized to establish a more coherent understanding of current political attitudes towards English identity, the perceived nature of English identity, and the political manifestation of English representation and governance. The parliamentary debates analyzed addressed the democratic structure of English governance through topics such as English votes for English laws, devolution, and the union. The blogs examined include party-based, multi-author style blogs and independently authored blogs by politicians, which provide a dynamic and up-to-date representation of party and politician viewpoints. Lastly, fourteen semi-structured interviews of British politicians provide a nuanced perspective on how politicians conceptualize Englishness. Interviewee selection was based on three criteria: (i) Members of Parliament (MPs) known for discussing English identity politics, (ii) MPs of strongly English-identifying constituencies, and (iii) MPs with minimal English identity affiliation. Analysis of the parliamentary debates reveals that the discussion of English representation has gained little momentum. Many politicians fail to comprehend who the English are and why they desire greater representation, and believe that increased recognition of the English would disrupt the unity of the UK. These debates highlight the disconnect of parliament from the disenfranchised English towns. A failure to recognize the legitimacy of English identity politics prevents solution-focused debates from occurring. Political blogs demonstrate cross-party recognition of growing English disenfranchisement. The dissatisfaction with British politics derives from multiple factors, including economic decline, shifting community structures, and the delay of Brexit. The left-behind communities have seen little response from Westminster, which is often contrasted with the devolved and louder voices of the other UK nations. Many blogs recognize the need for a political response to the English and lament the lack of party-level initiatives. In comparison, the interviews depict an array of local-level initiatives reconnecting MPs with community members. Local efforts include town trips to Westminster, multi-cultural cooking classes, and English language courses. These efforts begin to rebuild positive local narratives, promote engagement across community sectors, and acknowledge English voices. These interviewees called for large-scale political action. Meanwhile, several interviewees denied the salience of English identity; for them, the term held only extremist narratives. The multi-level analysis reveals continued uncertainty about Englishness within British politics, contrasted with politicians’ increased recognition of its salience. It is paramount that politicians increase discussions on English identity politics to avoid further alienating English citizens and to rebuild trust in the abilities of Westminster.Keywords: British politics, contemporary identity politics and its impacts, English identity, English nationalism, identity politics
Procedia PDF Downloads 11443 Fuzzy Multi-Objective Approach for Emergency Location Transportation Problem
Authors: Bidzina Matsaberidze, Anna Sikharulidze, Gia Sirbiladze, Bezhan Ghvaberidze
Abstract:
In the modern world, emergency management decision support systems are actively used by state organizations that deal with extreme and abnormal processes and must provide optimal and safe management of the supplies needed by civil and military facilities in geographical areas affected by disasters, earthquakes, fires and other accidents, weapons of mass destruction, terrorist attacks, etc. Obviously, these kinds of extreme events cause significant losses and damage to infrastructure. In such cases, the use of intelligent support technologies is very important for the quick and optimal location-transportation of emergency services in order to avoid new losses caused by these events. Timely servicing from emergency service centers to the affected disaster regions (the response phase) is a key task of the emergency management system. Scientific research in this field occupies an important place in decision-making problems. Our goal was to create an expert knowledge-based intelligent support system to serve as an assistant tool providing optimal solutions for the above-mentioned problem. The inputs to the mathematical model of the system are objective data as well as expert evaluations. The outputs of the system are solutions to the Fuzzy Multi-Objective Emergency Location-Transportation Problem (FMOELTP) for disaster regions. The development and testing of the intelligent support system were done on the example of an experimental disaster region (a geographical zone of Georgia) generated using simulation modeling. Four objectives are considered in our model. The first objective is to minimize the expectation of the total transportation duration of needed products. The second objective is to minimize the total selection unreliability index of the opened humanitarian aid distribution centers (HADCs). The third objective minimizes the number of agents needed to operate the opened HADCs. The fourth objective minimizes the non-covered demand over all demand points. Possibility chance constraints and objective constraints were constructed based on objective-subjective data. The FMOELTP was constructed in a static and fuzzy environment, since the decisions to be made are taken immediately after the disaster (within a few hours) with the information available at that moment. It is assumed that the requests for products are estimated by homeland security organizations, or their experts, based upon their experience and their evaluation of the disaster’s seriousness. Estimated transportation times take into account the routing access difficulty of the region and the infrastructure conditions. We propose an epsilon-constraint method for finding exact solutions to the problem. It is proved that this approach generates the exact Pareto front of the multi-objective location-transportation problem addressed. For large problem dimensions, the exact method may require long computing times; we therefore propose an approximate method that imposes a number of stopping criteria on the exact method. For large dimensions of the FMOELTP, an Estimation of Distribution Algorithm (EDA) approach is developed.Keywords: epsilon-constraint method, estimation of distribution algorithm, fuzzy multi-objective combinatorial programming problem, fuzzy multi-objective emergency location/transportation problem
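To make the epsilon-constraint idea concrete, the Python sketch below solves a toy two-objective location-transportation instance with PuLP: it minimizes total transportation duration while moving a second objective (non-covered demand) into a constraint bounded by epsilon, then sweeps epsilon to trace out Pareto-optimal trade-offs. The centers, demand points and all numbers are invented for illustration; the paper's four-objective fuzzy model is far richer.

    # Epsilon-constraint sweep on a toy bi-objective transportation problem.
    import pulp

    centers, demands = ["c1", "c2"], ["d1", "d2", "d3"]
    time = {("c1", "d1"): 2, ("c1", "d2"): 5, ("c1", "d3"): 4,
            ("c2", "d1"): 6, ("c2", "d2"): 1, ("c2", "d3"): 3}
    need = {"d1": 10, "d2": 15, "d3": 8}
    cap = {"c1": 20, "c2": 18}

    pareto = []
    for eps in [0, 3, 6]:  # allowed non-covered demand (the second objective)
        prob = pulp.LpProblem("eps_constraint", pulp.LpMinimize)
        x = pulp.LpVariable.dicts("x", (centers, demands), lowBound=0)
        short = pulp.LpVariable.dicts("short", demands, lowBound=0)
        # Objective 1: total transportation duration.
        prob += pulp.lpSum(time[c, d] * x[c][d] for c in centers for d in demands)
        # Objective 2 moved into a constraint: total shortage <= eps.
        prob += pulp.lpSum(short[d] for d in demands) <= eps
        for d in demands:
            prob += pulp.lpSum(x[c][d] for c in centers) + short[d] >= need[d]
        for c in centers:
            prob += pulp.lpSum(x[c][d] for d in demands) <= cap[c]
        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        pareto.append((eps, pulp.value(prob.objective)))
    print(pareto)  # sweeping eps traces an approximation of the Pareto front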
Procedia PDF Downloads 32142 A Copula-Based Approach for the Assessment of Severity of Illness and Probability of Mortality: An Exploratory Study Applied to Intensive Care Patients
Authors: Ainura Tursunalieva, Irene Hudson
Abstract:
Continuous improvement of both the quality and safety of health care is an important goal in Australia and internationally. The intensive care unit (ICU) receives patients with a wide variety and severity of illnesses. Accurately identifying patients at risk of developing complications or dying is crucial to increasing healthcare efficiency. Thus, it is essential for clinicians and researchers to have a robust framework capable of evaluating the risk profile of a patient, and ICU scoring systems provide such a framework. The Acute Physiology and Chronic Health Evaluation III (APACHE III) and the Simplified Acute Physiology Score II (SAPS II) are ICU scoring systems frequently used for assessing the severity of acute illness. These scoring systems collect multiple risk factors for each patient, including physiological measurements, and then render the assessment outcomes of the individual risk factors into a single numerical value; a higher score indicates a more severe patient condition. Furthermore, the Mortality Probability Model II (MPM II) uses logistic regression based on independent risk factors to predict a patient’s probability of mortality. An important overlooked limitation of SAPS II and MPM II is that they do not, to date, include interaction terms between a patient’s vital signs. This is a prominent oversight, as it is likely there is an interplay among vital signs: the co-existence of certain conditions may pose a greater health risk than when these conditions exist independently. One barrier to including such interaction terms in predictive models is the dimensionality issue, which makes variable selection difficult. We propose an innovative scoring system which takes into account the dependence structure among a patient’s vital signs, such as systolic and diastolic blood pressures, heart rate, pulse interval, and peripheral oxygen saturation. Copulas will capture the dependence among normally distributed and skewed variables, as some of the vital sign distributions are skewed. The estimated dependence parameter will then be incorporated into the traditional scoring systems to adjust the points allocated for the individual vital sign measurements. The same dependence parameter will also be used to create an alternative copula-based model for predicting a patient’s probability of mortality. The new copula-based approach will accommodate not only a patient’s trajectories of vital signs but also the joint dependence probabilities among the vital signs. We hypothesise that this approach will produce more stable assessments and lead to more time-efficient and accurate predictions. We will use two data sets: (1) 250 ICU patients admitted once to the Chui Regional Hospital (Kyrgyzstan) and (2) 37 ICU patients’ agitation-sedation profiles collected by the Hunter Medical Research Institute (Australia). Both the traditional scoring approach and our copula-based approach will be evaluated using the Brier score to indicate overall model performance, the concordance (or c) statistic to indicate discriminative ability (the area under the receiver operating characteristic (ROC) curve), and goodness-of-fit statistics for calibration. We will also report discrimination and calibration values and establish visualization of the copulas and high-dimensional regions of risk interrelating two or three vital signs in so-called higher-dimensional ROCs.Keywords: copula, intensive unit scoring system, ROC curves, vital sign dependence
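To illustrate how a copula can capture dependence between one roughly normal and one skewed vital sign, the sketch below estimates a Gaussian-copula correlation from synthetic data via rank pseudo-observations and normal scores. The data generation, variable names and the choice of a Gaussian copula are illustrative assumptions; the study's actual estimator and copula family may differ.

    # Gaussian-copula dependence between two synthetic vital signs.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 250
    systolic = rng.normal(120, 15, n)
    heart_rate = 0.4 * (systolic - 120) + rng.gamma(2, 5, n) + 60  # skewed margin

    # Probability-integral transform via ranks: margins become ~uniform(0, 1),
    # so skewness of the raw vital signs no longer matters.
    u = stats.rankdata(systolic) / (n + 1)
    v = stats.rankdata(heart_rate) / (n + 1)

    # Map to normal scores; their Pearson correlation is the Gaussian-copula rho.
    z1, z2 = stats.norm.ppf(u), stats.norm.ppf(v)
    rho = np.corrcoef(z1, z2)[0, 1]
    print(f"estimated copula dependence rho = {rho:.2f}")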
Procedia PDF Downloads 15241 Made on Land, Ends Up in the Water "I-Clare" Intelligent Remediation System for Removal of Harmful Contaminants in Water using Modified Reticulated Vitreous Carbon Foam
Authors: Sabina Żołędowska, Tadeusz Ossowski, Robert Bogdanowicz, Jacek Ryl, Paweł Rostkowski, Michał Kruczkowski, Michał Sobaszek, Zofia Cebula, Grzegorz Skowierzak, Paweł Jakóbczyk, Lilit Hovhannisyan, Paweł Ślepski, Iwona Kaczmarczyk, Mattia Pierpaoli, Bartłomiej Dec, Dawid Nidzworski
Abstract:
The circular economy of water presents a pressing environmental challenge for our society. Water contains various harmful substances, such as drugs, antibiotics, hormones, and dioxins, which can pose silent threats. Water pollution has severe consequences for aquatic ecosystems: it disrupts the balance of ecosystems by harming aquatic plants, animals, and microorganisms, and it poses significant risks to human health. Exposure to toxic chemicals through contaminated water can have long-term health effects, such as cancer, developmental disorders, and hormonal imbalances. However, effective remediation systems can be implemented to remove these contaminants using electrocatalytic processes, which offer an environmentally friendly alternative to other treatment methods; one of them is the innovative iCLARE system. The project's primary focus revolves around a few main topics: reactor design and construction, selection of a specific type of reticulated vitreous carbon (RVC) foam, analytical studies of harmful contaminant parameters, and AI implementation. This high-performance electrochemical reactor will be built based on a novel type of electrode material. The setup is characterized by high surface area development and satisfactory mechanical and electrochemical properties, designed for high electrocatalytic process efficiency. The consortium validated the electrode modification methods that are the basis of the iCLARE product and established procedures for the detection of chemicals: deposition of the metal oxides WO3 and V2O5, and deposition of boron-doped diamond/nanowall structures by a CVD process. The chosen electrodes (porous Ferroterm electrodes) were stress-tested for various conditions that might occur inside the iCLARE machine: corrosion, the long-term structure of the electrode surface during electrochemical processes, and energetic efficacy, using cyclic polarization, electrochemical impedance spectroscopy (before and after electrolysis), and dynamic electrochemical impedance spectroscopy (DEIS). This tool allows real-time monitoring of the changes at the electrode/electrolyte interphase. On the other hand, the toxicity of iCLARE chemicals and the products of electrolysis is evaluated before and after treatment using MARA examination (IBMM) and HPLC-MS-MS (NILU), giving information about the harmfulness of the electrode material and the efficiency of the iCLARE system in the disposal of pollutants. Implementation of the data into a system that uses artificial intelligence, and assessment of the possibility of practical application, is in progress (SensDx).Keywords: wastewater treatment, RVC, electrocatalysis, paracetamol
Procedia PDF Downloads 8840 Design and Fabrication of AI-Driven Kinetic Facades with Soft Robotics for Optimized Building Energy Performance
Authors: Mohammadreza Kashizadeh, Mohammadamin Hashemi
Abstract:
This paper explores a kinetic building facade designed for optimal energy capture and architectural expression. The system integrates photovoltaic panels with soft robotic actuators for precise solar tracking, resulting in enhanced electricity generation compared to static facades. The growing interest in dynamic building envelopes necessitates the exploration of such facade systems. Increased energy generation and the regulation of energy flow within buildings are potential benefits offered by integrating photovoltaic (PV) panels as kinetic elements. However, incorporating these technologies into mainstream architecture presents challenges due to the complexity of coordinating multiple systems. To address this, the design leverages soft robotic actuators, known for their compliance, resilience, and ease of integration. Additionally, the project investigates the potential for employing Large Language Models (LLMs) to streamline the design process. The research methodology involved design development, material selection, component fabrication, and system assembly. Grasshopper (GH) was employed within the digital design environment for parametric modeling and scripting logic, and an LLM was used experimentally to generate Python code for the creation of a random surface with user-defined parameters. Various techniques, including casting, three-dimensional (3D) printing, and laser cutting, were utilized to fabricate the physical components. A modular assembly approach was adopted to facilitate installation and maintenance. A case study focusing on the application of this facade system to an existing library building at the Polytechnic University of Milan is presented. The system is divided into sub-frames to optimize solar exposure while maintaining a visually appealing aesthetic. Preliminary structural analyses were conducted using Karamba3D to assess deflection behavior and axial loads within the cable net structure. Additionally, Finite Element (FE) simulations were performed in Abaqus to evaluate the mechanical response of the soft robotic actuators under pneumatic pressure. To validate the design, a physical prototype was created using a mold adapted to a 3D printer's limitations. Sil 15 casting silicone rubber was used for its flexibility and durability. The 3D-printed mold components were assembled, filled with the silicone mixture, and cured. After demolding, nodes and cables were 3D-printed and connected to form the structure, demonstrating the feasibility of the design. This work demonstrates the potential of soft robotics and Artificial Intelligence (AI) for advancements in sustainable building design and construction. The project successfully integrates these technologies to create a dynamic facade system that optimizes energy generation and architectural expression. While limitations exist, this approach paves the way for future advancements in energy-efficient facade design. Continued research efforts will focus on cost reduction, improved system performance, and broader applicability.Keywords: artificial intelligence, energy efficiency, kinetic photovoltaics, pneumatic control, soft robotics, sustainable building
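A hypothetical sketch of the kind of LLM-generated script described above follows: a random control-point surface with user-defined parameters. The function name, parameters, and the use of plain NumPy (rather than Grasshopper's Python environment) are assumptions for illustration; the project's actual generated code is not reproduced here.

    # Generate a random parametric control grid, ready to loft into a surface.
    import numpy as np

    def random_surface(nu=8, nv=6, span=10.0, amplitude=1.5, seed=42):
        """Return a (nu, nv, 3) grid of control points with random z-heights."""
        rng = np.random.default_rng(seed)
        u = np.linspace(0.0, span, nu)
        v = np.linspace(0.0, span, nv)
        uu, vv = np.meshgrid(u, v, indexing="ij")
        zz = amplitude * rng.standard_normal((nu, nv))  # random height field
        return np.stack([uu, vv, zz], axis=-1)

    pts = random_surface(nu=10, nv=8, amplitude=2.0)
    print(pts.shape)  # (10, 8, 3) control grid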
Procedia PDF Downloads 3239 Benchmarking of Petroleum Tanker Discharge Operations at a Nigerian Coastal Terminal and Jetty Facilitates Optimization of the Ship–Shore Interface
Authors: Bassey O. Bassey
Abstract:
Benchmarking has progressively become entrenched as a requisite activity for process improvement and enhanced service delivery at petroleum jetties and terminals, especially during tanker discharge operations at the ship-shore interface, as avoidable delays result in extra operating costs, non-productive time, high demurrage payments and, ultimately, product scarcity. The jetty and terminal in focus had been operational for 3 and 8 years respectively, with proper operational and logistic records maintained to evaluate their progress over time in order to plan and implement modifications and reviews of procedures for greater technical and economic efficiency. Regular and emergency staff meetings were held on a team, departmental and company-wide basis to progressively address the major challenges encountered during each operation. The process and outcome of the resultant collectively planned changes carried out within the past two years form the basis of this paper, which mirrors the initiatives effected to enhance operational and maintenance excellence at the affected facilities. Operational modifications included a second cargo receipt line designated for gasoline, product loss control at the jetty and shore ends, enhanced product recovery and quality control, and the revival of terminal-jetty backloading operations. Logistic improvements were the incorporation of an internal logistics firm and shipping agency, fast-tracking of discharge procedures for tankers, optimization of the tank vessel selection process, and third-party product receipt and throughput. Maintenance excellence was achieved through the construction of two new lay barges and refurbishment of the existing one; revamping of the existing booster pump and purchase of a modern one as reserve capacity; extension of Phase 1 of the jetty to accommodate two vessels and construction of Phase 2 for two more vessels; regular inspection, draining, drying and replacement of cargo hoses; a corrosion management program for all process facilities; and an improved, properly planned and documented maintenance culture. Safety, environmental and security compliance were enhanced by installing state-of-the-art firefighting facilities and equipment, constructing a seawater intake line as backup for the borehole at the terminal, remediating the shoreline and marine structures, deploying modern spill containment equipment, improving housekeeping and accident prevention practices, and installing high-technology security enhancements, among others. The end results observed over the past two years include improved tanker turnaround time, higher turnover on product sales, consistent product availability, greater indigenous human capacity utilisation by way of direct hires and contracts, as well as customer loyalty. The lessons learnt from this exercise would, therefore, serve as a model to be adapted by other operators of similar facilities, contractors, academics and consultants in a bid to deliver greater sustainability and profitability of operations at the ship-shore interface in this strategic industry.Keywords: benchmarking, optimisation, petroleum jetty, petroleum terminal
Procedia PDF Downloads 36638 Reviving Customs: Examining the Vernacular Habitus in Modern Marathi Film via the Tamasha Genre
Authors: Amar Ramesh Wayal
Abstract:
Marathi cinema, an integral part of India’s diverse film industry, has significantly evolved in its storytelling and aesthetics, with the Tamasha genre being central to this evolution. Tamasha, a traditional form of Marathi theatre, features vibrant dance and music, especially the rhythmic and often suggestive musical genre, lavani. Tamasha gained cinematic prominence with Anant Mane’s Sangtye Aika (1959), which brought the form to the silver screen and popularized it, and V. Shantaram’s Pinjra (1972), an iconic Tamasha drama. Despite this early success, Tamasha films declined in popularity until Natarang (2010) revitalized interest in the traditional form. This study examines the relevance and evolution of the Tamasha genre in Marathi cinema through films from different periods: Jabbar Patel’s Ek Hota Vidushak (1992), Ravi Jadhav’s Natarang (2010), and Sanjay Jadhav’s Tamasha Live (2022). The selection of these films is based on their significant roles in the evolution of Tamasha in Marathi cinema: Ek Hota Vidushak explores socio-political themes through Tamasha, Natarang depicts the struggles and emotional depth of Tamasha performers, and Tamasha Live integrates traditional Tamasha into modern cinema. By analysing films from different periods, this study highlights the genre’s reinterpretation and adaptation over time. The study employs a qualitative approach, utilizing textual analysis and cultural critique to examine the portrayal and evolution of Tamasha in the selected films. It aims to illuminate the complex relationship between tradition and modernity in Marathi cinema through Foucauldian discourse analysis and Pierre Bourdieu’s concept of “vernacular habitus,” which refers to local, indigenous cultural spaces that shape people’s perceptions and expressions. By analyzing these films, the study seeks to understand how traditional cultural forms are integrated into contemporary cinematic narratives. However, this method has limitations, such as subjectivity in interpretation and the need for extensive contextual knowledge. Qualitative research can be subject to researcher bias, affecting analysis and conclusions. To mitigate this, this study maintains rigorous reflexivity and transparency regarding the researcher’s positionality. Furthermore, findings from specific film analyses may not be universally applicable to all Tamasha films or broader Marathi cinema. To enhance the study’s robustness, future research could incorporate comparative or quantitative data to complement qualitative insights. Despite these challenges, qualitative research is crucial for exploring cultural artifacts and their significance within specific contexts. By triangulating qualitative findings with diverse perspectives and acknowledging limitations, this study aims to provide a nuanced understanding of how Tamasha cinema preserves and revitalizes Maharashtra’s folk traditions while adapting them to contemporary contexts. Analyzing films by Jabbar Patel, Ravi Jadhav, and Sanjay Jadhav shows how these filmmakers balance traditional aesthetics with modern storytelling, bridging historical continuity with contemporary relevance. This study offers insights into how indigenous traditions like Tamasha continue to shape and define cinematic narratives in Maharashtra.Keywords: Marathi cinema, Tamasha genre, vernacular habitus, discourse analysis, cultural evolution
Procedia PDF Downloads 3237 Stakeholder Engagement to Address Urban Health Systems Gaps for Migrants
Authors: A. Chandra, M. Arthur, L. Mize, A. Pomeroy-Stevens
Abstract:
Background: Lower- and middle-income countries (LMICs) in Asia face rapid urbanization, resulting in both economic opportunities (the urban advantage) and emerging health challenges. Urban health risks are magnified in informal settlements and include infectious disease outbreaks, inadequate access to health services, and poor air quality. Over the coming years, urban spaces in Asia will face accelerating public health risks related to migration, climate change, and environmental health. These challenges are complex and require multi-sectoral and multi-stakeholder solutions. The Building Health Cities (BHC) program is funded by the United States Agency for International Development (USAID) to work with smart city initiatives in the Asia region. BHC approaches urban health challenges by addressing policies, planning, and services through a health equity lens, with a particular focus on informal settlements and migrant communities. The program works to develop data-driven decision-making, build inclusivity through stakeholder engagement, and facilitate the uptake of appropriate technology. Methodology: The BHC program has partnered with the smart city initiatives of Indore in India, Makassar in Indonesia, and Da Nang in Vietnam. Implementing partners support municipalities in improving health delivery and equity using two key approaches: political economy analysis and participatory systems mapping. Political economy analyses evaluate barriers to collective action, including corruption, security, accountability, and incentives. Systems mapping evaluates community health challenges using a cross-sectoral approach, analyzing the impact of economic, environmental, transport, security, health system, and built environment factors. The mapping exercise draws on the experience and expertise of a diverse cohort of stakeholders, including government officials, municipal service providers, and civil society organizations. Results: Systems mapping and political economy analyses identified significant barriers to health care for migrant populations. In Makassar, migrants are unable to obtain the card that entitles them to subsidized health services. This finding is being used to engage with municipal governments to mitigate the barriers that limit migrant enrollment in the public social health insurance scheme. In Indore, the project identified poor drainage of storm- and wastewater in migrant settlements as a cause of poor health: unsafe and inadequate infrastructure placed residents of these settlements at risk of both waterborne diseases and injuries. The program also evaluated the capacity of urban primary health centers serving migrant communities, identifying challenges related to their hours of service and shortages of health workers. In Da Nang, the systems mapping process has only recently begun, with the formal partnership launched in December 2019. Conclusion: This paper explores lessons learned from BHC’s systems mapping, political economy analyses, and stakeholder engagement approaches, and shares progress related to the health of migrants in informal settlements. Case studies feature the barriers identified and the mitigating steps, including governance actions, taken by local stakeholders in partner cities. The paper includes an update on ongoing progress from Indore and Makassar and experience from the first six months of program implementation in Da Nang.Keywords: informal settlements, migration, stakeholder engagement mapping, urban health
Procedia PDF Downloads 11936 Analyzing Spatio-Structural Impediments in the Urban Trafficscape of Kolkata, India
Authors: Teesta Dey
Abstract:
Integrated transport development with proper traffic management leads to the sustainable growth of any urban sphere. Appropriate mass transport planning is essential for populous cities in third-world countries like India. The exponential growth of motor vehicles on an unplanned road network is now a common feature of major urban centres in India, and Kolkata, the third largest mega city in India, is no exception. The imbalance between demand and supply of unplanned transport services in this city is manifested in the high economic and environmental costs borne by the associated society. With the passage of time, the growth and extent of passenger demand for rapid urban transport have outstripped proper infrastructural planning, causing severe transport problems in the overall urban realm. Hence Kolkata stands out in the world as one of the most crisis-ridden metropolises. The urban transport crisis of this city involves severe traffic congestion, disparities in mass transport services across changing peripheral land uses, route overlapping, lowered travel speeds and the faulty implementation of governmental plans, mostly induced by the rapid growth of private vehicles on limited road space with a huge carbon footprint. Therefore the paper will critically analyze the extant road network pattern for improving regional connectivity and accessibility, assess the degree of congestion, identify the deviation from the demand-supply balance and finally evaluate the emerging alternate transport options promoted by the government. For this purpose, the linear, nodal and spatial transport network has been assessed based on selected indices, viz. Road Degree, Traffic Volume, Shimbel Index, Direct Bus Connectivity, Average Travel and Waiting Time Indices, Route Variety, Service Frequency, Bus Intensity, Concentration Analysis, Delay Rate, Quality of Traffic Transmission, Lane Length Duration Index and Modal Mix. A total of 20 Traffic Intersection Points (TIPs) have been selected for the measurement of nodal accessibility. Critical Congestion Zones (CCZs) are delineated based on one-km buffer zones around each TIP for congestion pattern analysis. A total of 480 bus routes are assessed to identify deficiencies in network planning. Apart from bus services, the combined effects of other mass and paratransit modes, comprising metro rail, auto, cab and ferry services, are also analyzed. Based on a systematic random sampling method, a total of 1,500 daily urban passengers’ perceptions were studied to check the ground realities. The outcome of this research identifies the spatial disparity among the 15 boroughs of the city, with severe route overlapping and congestion problems. Mass transport services based in North and Central Kolkata exceed the transport strength of south and peripheral Kolkata. Faulty infrastructural conditions, service inadequacy, economic loss and workers’ inefficiency are the dominant reasons behind the defective mass transport network plan. Hence there is an urgent need to revive the extant road-based mass transport system of this city through a holistic management approach: upgrading traffic infrastructure, designing new roads, improving cooperation among the different mass transport agencies, better coordinating transport and changing land-use policies, substantially increasing funding and, finally, raising general passenger awareness.Keywords: carbon footprint, critical congestion zones, direct bus connectivity, integrated transport development
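Of the indices listed above, the Shimbel Index has a particularly compact definition: for each node it is the sum of shortest-path distances to all other nodes, so lower totals mean better nodal accessibility. The Python sketch below computes it for a toy intersection graph with networkx; the four TIPs and their road distances are invented for illustration, not the Kolkata network.

    # Shimbel index per node on a toy weighted road network.
    import networkx as nx

    G = nx.Graph()
    G.add_weighted_edges_from([
        ("TIP1", "TIP2", 1.2), ("TIP2", "TIP3", 0.8),
        ("TIP1", "TIP4", 2.0), ("TIP3", "TIP4", 1.5),
    ])  # weights as road distances in km

    shimbel = {
        n: sum(nx.single_source_dijkstra_path_length(G, n, weight="weight").values())
        for n in G.nodes
    }
    for node, score in sorted(shimbel.items(), key=lambda kv: kv[1]):
        print(node, round(score, 1))  # lowest total = most accessible TIP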
Procedia PDF Downloads 27335 Black-Box-Optimization Approach for High Precision Multi-Axes Forward-Feed Design
Authors: Sebastian Kehne, Alexander Epple, Werner Herfs
Abstract:
A new method for the optimal selection of components for multi-axes forward-feed drive systems is proposed, in which the choice of motors, gear boxes and ball screw drives is optimized. Essential here is the synchronization of the electrical and mechanical frequency behavior of all axes, because even advanced controls (like H∞-controls) can only control a small part of the mechanical modes – namely only those of observable and controllable states whose values can be derived from the positions of external linear length measurement systems and/or rotary encoders on the motor or gear box shafts. Further problems are the unknown processing forces, like cutting forces in machine tools during normal operation, which make estimation and control via an observer even more difficult. To start with, the open source Modelica Feed Drive Library, which was developed at the Laboratory for Machine Tools and Production Engineering (WZL), is extended from single-axis to multi-axes design. It is capable of simulating the mechanical, electrical and thermal behavior of permanent magnet synchronous machines with inverters, different gear boxes and ball screw drives in a mechanical system. To keep the calculation time down, analytical equations are used for the field- and torque-producing equivalent circuit, the heat dissipation and the mechanical torque at the shaft. As a first step, a small machine tool with a working area of 635 x 315 x 420 mm is taken apart, and the mechanical transfer behavior is measured with an impulse hammer and acceleration sensors. From the frequency transfer functions, a mechanical finite element model is built up, which is reduced via substructure coupling to a mass-damper system that models the most important modes of the axes. The model is implemented with the Modelica Feed Drive Library and validated by further relative measurements between the machine table and the spindle holder with a piezo actor and acceleration sensors. In a next step, the choice of possible components from motor catalogues is limited by derived analytical formulas based on well-known metrics for the effective power and torque of the components. The simulation in Modelica is run with different permanent magnet synchronous motors, gear boxes and ball screw drives from different suppliers. To speed up the optimization, different black-box optimization methods (surrogate-based, gradient-based and evolutionary) are tested on the case. The chosen objective is to minimize the integral of the deviations when a step is given on the position controls of the different axes; small values indicate highly dynamic axes. In each iteration (the evaluation of one set of components) the control variables are adjusted automatically to keep the overshoot below 1%. It is found that the ordering of the components in the optimization problem has a deep impact on the speed of the black-box optimization. An approach for efficient black-box optimization for multi-axes design is presented in the last part. The authors would like to thank the German Research Foundation DFG for financial support of the project “Optimierung des mechatronischen Entwurfs von mehrachsigen Antriebssystemen (HE 5386/14-1 | 6954/4-1)” (English: Optimization of the Mechatronic Design of Multi-Axes Drive Systems).Keywords: ball screw drive design, discrete optimization, forward feed drives, gear box design, linear drives, machine tools, motor design, multi-axes design
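As a toy illustration of the discrete component-selection loop described above, the Python sketch below scores every combination from a tiny hypothetical catalog with a stand-in objective function. In the actual workflow the stand-in simulate() would be one Modelica simulation run (with the inner loop retuning control gains to hold overshoot below 1%), and a surrogate-based or evolutionary black-box optimizer would replace exhaustive enumeration once the catalogs grow large. All component names and values are invented.

    # Exhaustive scoring of a toy motor/gearbox/ball-screw catalog.
    import itertools
    import random

    motors = ["M1", "M2", "M3"]
    gearboxes = ["G1", "G2"]
    ballscrews = ["B1", "B2", "B3"]

    def simulate(motor, gearbox, ballscrew):
        """Stand-in for one Modelica run: returns the integral of position
        deviations after a step command for this component combination."""
        random.seed(hash((motor, gearbox, ballscrew)) & 0xFFFF)  # deterministic toy score
        return random.uniform(0.5, 5.0)

    scores = {combo: simulate(*combo)
              for combo in itertools.product(motors, gearboxes, ballscrews)}
    best = min(scores, key=scores.get)
    print("best combination:", best, "objective:", round(scores[best], 3))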
Procedia PDF Downloads 287