Search results for: Artificial Neural network
981 The Effects of Computer Game-Based Pedagogy on Graduate Students' Statistics Performance
Authors: Eva Laryea, Clement Yeboah
Abstract:
A pretest-posttest, within-subjects experimental design was employed to examine the effects of a computerized basic statistics learning game on achievement and statistics-related anxiety of students enrolled in an introductory graduate statistics course. Participants (N = 34) were graduate students in a variety of programs at a state-funded research university in the Southeast United States. We analyzed pretest-posttest differences using paired samples t-tests for achievement and for statistics anxiety. The results of the t-test for statistical knowledge were statistically significant, indicating significant mean gains in statistical knowledge as a function of the game-based intervention. Likewise, the results of the t-test for statistics-related anxiety were also statistically significant, indicating a decrease in anxiety from pretest to posttest. The implications of the present study are significant for both teachers and students. For teachers, using computer games developed by the researchers can help to create a more dynamic and engaging classroom environment, as well as improve student learning outcomes. For students, playing these educational games can help to develop important skills such as problem solving, critical thinking, and collaboration. Students can develop interest in the subject matter and spend quality time learning the course as they play the game, without realizing that they are learning a course presumed to be hard. The future directions of the present study are promising, as technology continues to advance and become more widely available. Some potential future developments include the integration of virtual and augmented reality into educational games, the use of machine learning and artificial intelligence to create personalized learning experiences, and the development of new and innovative game-based assessment tools. It is also important to consider the ethical implications of computer game-based pedagogy, such as the potential for games to perpetuate harmful stereotypes and biases. As the field continues to evolve, it will be crucial to address these issues and work towards creating inclusive and equitable learning experiences for all students. This study has the potential to revolutionize the way basic statistics graduate students learn and offers exciting opportunities for future development and research. It is an important area of inquiry for educators, researchers, and policymakers, and will continue to be a dynamic and rapidly evolving field for years to come. Keywords: pretest-posttest within subjects, experimental design, achievement, statistics-related anxiety
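The core analysis described above is a paired samples t-test on pre- and post-intervention scores. The following is a minimal sketch of that comparison; the score arrays are made-up placeholders for illustration, not the study's data.

```python
# Minimal sketch of the paired samples t-tests described above.
# The score arrays below are illustrative placeholders, not the study's data.
from scipy import stats

pretest_knowledge = [12, 15, 10, 14, 11, 13, 16, 9, 12, 14]
posttest_knowledge = [16, 18, 14, 17, 15, 16, 19, 13, 15, 18]

pretest_anxiety = [38, 42, 35, 40, 44, 37, 41, 39, 36, 43]
posttest_anxiety = [30, 35, 31, 33, 36, 29, 34, 32, 30, 35]

# Paired (within-subjects) t-test: each participant is compared to themselves.
t_knowledge, p_knowledge = stats.ttest_rel(posttest_knowledge, pretest_knowledge)
t_anxiety, p_anxiety = stats.ttest_rel(posttest_anxiety, pretest_anxiety)

print(f"Knowledge: t = {t_knowledge:.2f}, p = {p_knowledge:.4f}")
print(f"Anxiety:   t = {t_anxiety:.2f}, p = {p_anxiety:.4f}")
```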
Procedia PDF Downloads 59
980 Computational Investigation of V599 Mutations of BRAF Protein and Its Control over the Therapeutic Outcome under the Malignant Condition
Authors: Mayank, Navneet Kaur, Narinder Singh
Abstract:
The V599 mutations in the BRAF protein are extremely oncogenic, responsible for countless malignant conditions. Along with the wild type, V599E, V599D, and V599R are the important mutated variants of the BRAF protein. BRAF inhibitory anticancer agents are continuously being developed, and sorafenib is a BRAF inhibitor that is in clinical use. The crystal structures of sorafenib bound to the wild type and to the V599 mutant are known, showing a similar interaction pattern in both cases. The mutated 599th residue, in both cases, is also found not to interact directly with the co-crystallized sorafenib molecule. However, the IC50 value of sorafenib was found to be markedly different in the two cases, i.e., 22 nmol/L for the wild type and 38 nmol/L for the V599E protein. Molecular docking studies and MMGBSA binding energy results also revealed a significant difference in the binding pattern of sorafenib in the two cases. Therefore, to explore the role of the distinctively situated 599th residue, we have further conducted comprehensive computational studies. The molecular dynamics simulation, residue interaction network (RIN) analysis, and residue correlation study results revealed the influence of the 599th residue on the therapeutic outcome and the overall dynamics of the BRAF protein. Therefore, although the position of the 599th residue is quite distant from the ligand-binding cavity of BRAF, it still has exceptional control over the overall functional outcome of the protein. The insight obtained here may prove extremely important and guide us while designing ideal BRAF inhibitory anticancer molecules. Keywords: BRAF, oncogenic, sorafenib, computational studies
Procedia PDF Downloads 116
979 The Potential for Maritime Tourism: An African Perspective
Authors: Lynn C. Jonas
Abstract:
The African continent is rich in coastal history, heritage, and culture, presenting immense potential for the development of maritime tourism. Shipping and its related components are generally associated with the maritime industry, and tourism's link is to the various forms of nautical tourism. Activities may include cruising, yachting, visits to lighthouses, ports, harbors, and excursions to related sites of cultural, historical, or ecological significance. There have been hundreds of years of explorers leaving a string of shipwrecks along the various coastal areas of the continent in their pursuit of establishing trade routes between Europe, Africa, and the Far East. These shipwrecks present diving opportunities in artificial reefs and marine heritage to be explored in various ways in the maritime cultural zones. Along the South African coast, for example, six Portuguese shipwrecks highlight the Bartolomeu Dias legacy of exploration, and there are a number of warships in Tanzanian waters. Furthermore, decades of African countries being under colonial rule have left the continent with an intricate cultural heritage, enmeshed in European language and architecture and interlinked, in many instances, with the hard-fought independence of littoral states. There is potential for coastal trails to be developed to follow these historical events as, at one point in history, France had colonized 35 African states, and subsequently, 32 African states were colonized by Britain. Countries such as Cameroon still have the legacy of Francophone versus Anglophone as a result of this shift in colonizers. Further to the colonial history of the African continent, there is an uncomfortable heritage of the slave trade. To a certain extent, these coastal slave trade posts are being considered attractive to a niche tourism audience; however, there is potential for education and interpretive measures to grow this as a tourism product. Notwithstanding these potential opportunities, there are numerous challenges to consider, such as poor maritime infrastructure and maritime security concerns with issues such as piracy and transnational crimes, including weapons and migrant smuggling and drug and human trafficking. These and related maritime issues contribute to the concerns over the porous nature of African ocean gateways, adding to the security concerns for tourists. This theoretical paper will consider these trends and how they may contribute to the growth and development of maritime tourism on the African continent. African considerations of the growth potential of tourism in coastal and marine spaces are needed, particularly with a focus on embracing the continent's tumultuous past as part of its heritage. This has the potential to contribute to the creation of a sense of ownership of opportunities. Keywords: coastal trade routes, maritime tourism, shipwrecks, slave trade routes
Procedia PDF Downloads 22
978 Enhancement of Hardness Related Properties of Grey Cast Iron Powder Reinforced AA7075 Metal Matrix Composites Through T6 and T8 Heat Treatments
Authors: S. S. Sharma, P. R. Prabhu, K. Jagannath, Achutha Kini U., Gowri Shankar M. C.
Abstract:
In the present global scenario, aluminum alloys are gaining the attention of many innovators as competing structural materials for automotive and space applications. Compared to other challenging alloys, 7xxx series aluminum alloys in particular have been studied seriously because of their benefits such as moderate strength, better deforming characteristics, excellent resistance to chemical decay, and affordable cost. 7075 Al-alloys have been used in the transportation industry for the fabrication of several types of automobile parts, such as wheel covers, panels, and structures. It is expected that substitution of such aluminum alloys for steels will result in great improvements in energy economy, durability, and recyclability. However, it is necessary to improve the strength and the formability levels at low temperatures in aluminium alloys for still better applications. Aluminum-Zinc-Magnesium alloys, with or without other wetting agents, denoted as 7xxx series alloys, are medium-strength heat-treatable alloys. Cu, Mn, and Si are the other solute elements which contribute to the improvement in mechanical properties achievable by selecting and tailoring the suitable heat treatment process. On subjecting them to suitable treatments like age hardening or cold-deformation-assisted heat treatments, known as low temperature thermomechanical treatments (LTMT), the desired properties might be incorporated. T6 is the age hardening or precipitation hardening process with an artificial aging cycle, whereas T8 comprises an LTMT treatment aged artificially with X% cold deformation. When cold deformation is provided after solution treatment, there is an increase in hardness-related properties such as wear resistance, yield and ultimate strength, and toughness, at the expense of ductility. During precipitation hardening, both the hardness and the strength of the samples increase. A decreasing peak hardness value with increasing aging temperature is the well-known behavior of age-hardenable alloys. The peak hardness value increases further when room temperature deformation is combined with age hardening, known as thermomechanical treatment. Considering these aspects, it is intended to perform heat treatment and evaluate hardness, tensile strength, wear resistance, and the distribution pattern of reinforcement in the matrix. Increases in hardness of 2 to 2.5 times for age hardening and 3 to 3.5 times for LTMT treatments are reported as compared to the as-cast composite. Better distribution of reinforcements in the matrix, a nearly two-fold increase in strength levels, and up to a 5 times increase in wear resistance are also observed in the present study. Keywords: reinforcement, precipitation, thermomechanical, dislocation, strain hardening
Procedia PDF Downloads 312
977 High Throughput LC-MS/MS Studies on Sperm Proteome of Malnad Gidda (Bos Indicus) Cattle
Authors: Kerekoppa Puttaiah Bhatta Ramesha, Uday Kannegundla, Praseeda Mol, Lathika Gopalakrishnan, Jagish Kour Reen, Gourav Dey, Manish Kumar, Sakthivel Jeyakumar, Arumugam Kumaresan, Kiran Kumar M., Thottethodi Subrahmanya Keshava Prasad
Abstract:
Spermatozoa are highly specialized, transcriptionally and translationally inactive haploid male gametes. The understanding of the sperm proteome is indispensable to explore the mechanisms of sperm motility and fertility. Though there are a large number of human sperm proteomic studies, in-depth proteomic information on Bos indicus spermatozoa is not well established yet. Therefore, we illustrated the profile of the sperm proteome in indigenous cattle, Malnad Gidda (Bos indicus), using high-resolution mass spectrometry. In the current study, two semen ejaculates from 3 breeding bulls were collected employing the artificial vagina method. Using 45% Percoll purification, spermatozoa were isolated. Protein was extracted using a lysis buffer containing 2% Sodium Dodecyl Sulphate (SDS), and the protein concentration was estimated. Fifty micrograms of protein from each individual were pooled for further downstream processing. The pooled sample was fractionated using SDS-Polyacrylamide Gel Electrophoresis, followed by in-gel digestion. The peptides were subjected to C18 Stage Tip clean-up and analyzed in an Orbitrap Fusion Tribrid mass spectrometer interfaced with a Proxeon Easy-nano LC II system (Thermo Scientific, Bremen, Germany). We identified a total of 6773 peptides with 28426 peptide spectral matches, which belonged to 1081 proteins. Gene ontology analysis was carried out to determine the biological processes, molecular functions, and cellular components associated with the sperm proteins. The biological processes chiefly represented in our data are oxidation-reduction processes (5%), spermatogenesis (2.5%), and spermatid development (1.4%). The highlighted molecular functions are ATP and GTP binding (14%), and the prominent cellular components observed in our data were the nuclear membrane (1.5%), acrosomal vesicle (1.4%), and motile cilium (1.3%). Seventeen percent of the sperm proteins identified in this study were involved in metabolic pathways. To the best of our knowledge, this dataset represents the first total sperm proteome from the indigenous cattle breed Malnad Gidda. We believe that our preliminary findings could provide a strong base for the future understanding of bovine sperm proteomics. Keywords: Bos indicus, Malnad Gidda, mass spectrometry, spermatozoa
Procedia PDF Downloads 197
976 Dual-use UAVs in Armed Conflicts: Opportunities and Risks for Cyber and Electronic Warfare
Authors: Piret Pernik
Abstract:
Based on a strategic, operational, and technical analysis of the ongoing armed conflict in Ukraine, this paper will examine the opportunities and risks of using small commercial drones (dual-use unmanned aerial vehicles, UAV) for military purposes. The paper discusses the opportunities and risks in the information domain, encompassing both cyber and electromagnetic interference and attacks. The paper will draw conclusions on a possible strategic impact on battlefield outcomes in modern armed conflicts from the widespread use of dual-use UAVs. This article will contribute to filling the gap in the literature by examining cyberattacks and electromagnetic interference on the basis of empirical data. Today, more than one hundred states and non-state actors possess UAVs, ranging from low-cost commodity models, which are widely dual-use, available, and affordable to anyone, to high-cost combat UAVs (UCAV) with lethal kinetic strike capabilities, which can be enhanced with Artificial Intelligence (AI) and Machine Learning (ML). Dual-use UAVs have been used by various actors for intelligence, reconnaissance, surveillance, situational awareness, geolocation, and kinetic targeting. Thus, they function as force multipliers, enabling kinetic and electronic warfare attacks, and provide comparative and asymmetric operational and tactical advantages. Some go as far as to argue that automated (or semi-automated) systems can change the character of warfare, while others observe that the use of small drones has not changed the balance of power or battlefield outcomes. UAVs give considerable opportunities to commanders; for example, because they can be operated without GPS navigation, they are less vulnerable to and less dependent on satellite communications. They can be, and have been, used to conduct cyberattacks, electromagnetic interference, and kinetic attacks. However, they are highly vulnerable to those attacks themselves. So far, strategic studies, literature, and expert commentary have overlooked the cybersecurity and electronic interference dimensions of the use of dual-use UAVs. Studies that link technical analysis of opportunities and risks with strategic battlefield outcomes are missing. It is expected that dual-use commercial UAV proliferation in armed and hybrid conflicts will continue and accelerate in the future. Therefore, it is important to understand the specific opportunities and risks related to the crowdsourced use of dual-use UAVs, which can have kinetic effects. Technical countermeasures to protect UAVs differ depending on the type of UAV (small, midsize, large, stealth combat), and this paper will offer a unique analysis of small UAVs, both from the view of opportunities and of risks, for commanders and other actors in armed conflict. Keywords: dual-use technology, cyber attacks, electromagnetic warfare, case studies of cyberattacks in armed conflicts
Procedia PDF Downloads 103
975 The Minimum Patch Size Scale for Seagrass Canopy Restoration
Authors: Aina Barcelona, Carolyn Oldham, Jordi Colomer, Teresa Serra
Abstract:
The loss of seagrass meadows worldwide is being tackled by formulating coastal restoration strategies. Seagrass loss results in a network of vegetated patches which are barely interconnected, and consequently, the ecological services they provide may be highly compromised. Hence, there is a need to optimize coastal management efforts in order to implement successful restoration strategies, not only through modifying the architecture of the canopies but also by gathering together information on the hydrodynamic conditions of the seabeds. To obtain information on the hydrodynamics within the patches of vegetation, this study deals with the scale analysis of the minimum patch lengths on which management strategies can be effectively applied. To this aim, a set of laboratory experiments was conducted in a flume where the plant densities, patch lengths, and hydrodynamic conditions were varied to discern the vegetated patch lengths that can provide optimal ecosystem services for canopy development. Two possible patch behaviours based on turbulent kinetic energy (TKE) production were determined: one where plants do not interact with the flow and the other where plants interact with waves and produce TKE. Furthermore, this study determines the minimum patch lengths that can provide successful management restoration. A canopy will produce TKE depending on its density, the length of the vegetated patch, and the wave velocities. Therefore, a vegetated patch will produce plant-wave interaction under high wave velocities when it presents large lengths and high canopy densities. Keywords: seagrass, minimum patch size, turbulent kinetic energy, oscillatory flow
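For reference, turbulent kinetic energy is commonly estimated from velocity fluctuations about the mean flow. The sketch below shows that standard calculation on synthetic velocity records standing in for flume measurements; the measurement details are assumptions, not the authors' exact procedure.

```python
# Minimal sketch: estimating turbulent kinetic energy (TKE) from velocity records.
# The synthetic velocity series below stand in for flume measurements.
import numpy as np

rng = np.random.default_rng(0)
u = 0.15 + 0.02 * rng.standard_normal(2000)  # streamwise velocity (m/s)
v = 0.00 + 0.01 * rng.standard_normal(2000)  # lateral velocity (m/s)
w = 0.00 + 0.01 * rng.standard_normal(2000)  # vertical velocity (m/s)

# Velocity fluctuations are deviations from the time-averaged flow.
u_p, v_p, w_p = u - u.mean(), v - v.mean(), w - w.mean()

# Standard definition: TKE = 0.5 * (mean(u'^2) + mean(v'^2) + mean(w'^2))
tke = 0.5 * (np.mean(u_p**2) + np.mean(v_p**2) + np.mean(w_p**2))
print(f"TKE = {tke:.6f} m^2/s^2")
```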
Procedia PDF Downloads 197
974 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics
Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee
Abstract:
Recent advances in Generative Artificial Intelligence (GenAI), in particular Large Language Models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably so-called "hallucinations", i.e., the generation of outputs that are not grounded in the input data, which hinder adoption into production. A common practice to mitigate the hallucination problem is utilizing a Retrieval Augmented Generation (RAG) system to ground LLMs' responses in ground truth. RAG converts the grounding documents into embeddings, retrieves the relevant parts using the vector similarity between the user's query and the documents, then generates a response that is based not only on the model's pre-trained knowledge but also on the specific information from the retrieved documents. However, the RAG system is not suitable for tabular data and subsequent data analysis tasks due to multiple reasons, such as information loss, data format, and the retrieval mechanism. In this study, we have explored a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements, then convert them into executable segments of code. In the final step, it generates the complete response from the output of the executed code. When deployed as a beta version on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results, as it was able to serve market insight and data visualization needs with high accuracy and extensive coverage by abstracting the complexities for real-estate agents and developers from non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real estate professionals, aiding in market understanding and enhancement without the need for programming skills. The implication extends beyond immediate analytics, paving the way for a new era in the real estate industry characterized by efficiency and advanced data utilization. Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, data sense, PropertyGuru
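A minimal sketch of a planning-and-execution plus code-generation loop of the kind described above is shown below. The prompts and the `call_llm` helper are illustrative placeholders for whatever LLM API is used; this is a generic sketch, not the DataSense implementation.

```python
# Minimal sketch of a planning-and-execution + code-generation loop of the kind
# described above. `call_llm` is a placeholder for an actual LLM API client;
# the prompts and data source are illustrative assumptions, not the DataSense system.
import io
import contextlib

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text completion."""
    raise NotImplementedError("Wire this to your LLM provider of choice.")

def plan(task: str) -> list[str]:
    # Step 1: the planner agent decomposes the analytical task into sub-tasks.
    response = call_llm(f"Break this analysis task into numbered sub-tasks:\n{task}")
    return [line.strip() for line in response.splitlines() if line.strip()]

def generate_code(sub_task: str, schema: str) -> str:
    # Step 2: the code-generation agent turns a sub-task into executable pandas code.
    return call_llm(
        f"Table schema:\n{schema}\nWrite Python (pandas) code, ending with `result`, "
        f"that performs: {sub_task}"
    )

def execute(code: str, env: dict) -> str:
    # Step 3: run the generated code and capture its printed output / `result`.
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, env)  # in production this should be sandboxed
    return buffer.getvalue() or repr(env.get("result"))

def answer(task: str, schema: str, env: dict) -> str:
    observations = [execute(generate_code(s, schema), env) for s in plan(task)]
    # Step 4: the final response is composed from the outputs of the executed code.
    return call_llm(f"Task: {task}\nSub-task outputs:\n" + "\n".join(observations))
```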
Procedia PDF Downloads 88
973 A Multi Objective Reliable Location-Inventory Capacitated Disruption Facility Problem with Penalty Cost Solved with Efficient Metaheuristic Algorithms
Authors: Elham Taghizadeh, Mostafa Abedzadeh, Mostafa Setak
Abstract:
In a logistics network, it is expected that opened facilities work continuously for a long time horizon without any failure; but in real-world problems, facilities may face disruptions. This paper studies a reliable joint inventory location problem to optimize the cost of facility locations, customers' assignment, and inventory management decisions when facilities face failure risks and may stop working. In our model, we assume that when a facility is out of work, its customers may be reassigned to other operational facilities; otherwise, they must endure high penalty costs associated with losing service. To define the model closer to real-world problems, the model is proposed based on the p-median problem, and the facilities are considered to have limited capacities. We define a new binary variable (Z_is) for showing that customers are not assigned to any facility. Our problem involves a bi-objective model: the first objective minimizes the sum of facility construction costs and expected inventory holding costs, and the second objective minimizes the maximum expected customer costs under normal and failure scenarios. For solving this model, the NSGA-II and MOSS algorithms have been applied to find the Pareto-archive solutions. Also, Response Surface Methodology (RSM) is applied for optimizing the NSGA-II algorithm parameters. We compare the performance of the two algorithms with three metrics, and the results show that NSGA-II is more suitable for our model. Keywords: joint inventory-location problem, facility location, NSGAII, MOSS
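As an illustration of the bi-objective structure described above, the sketch below evaluates the two objective functions for one candidate solution (which facilities to open and how customers are assigned). The cost terms and numbers are simplified placeholders, not the paper's exact formulation.

```python
# Minimal sketch of evaluating the two objectives described above for one candidate
# solution (which facilities to open, and customer assignments). The cost terms and
# data are simplified placeholders, not the paper's exact formulation.
import numpy as np

construction_cost = np.array([100.0, 120.0, 90.0])      # per candidate facility
holding_cost = np.array([10.0, 12.0, 8.0])              # expected inventory holding cost
assign_cost = np.array([[4.0, 9.0, 7.0],                # customer-to-facility service cost
                        [6.0, 3.0, 8.0],
                        [5.0, 7.0, 2.0]])
penalty = 50.0                                          # cost of leaving a customer unserved

def objectives(open_mask, assignment):
    """open_mask[j]=1 if facility j is opened; assignment[i]=j, or -1 if unassigned (Z_is=1)."""
    # Objective 1: facility construction + expected inventory holding costs.
    f1 = float(np.sum((construction_cost + holding_cost) * open_mask))
    # Objective 2: worst (maximum) expected customer cost, with a penalty if unassigned.
    customer_costs = [
        penalty if j == -1 else assign_cost[i, j] for i, j in enumerate(assignment)
    ]
    f2 = max(customer_costs)
    return f1, f2

print(objectives(open_mask=np.array([1, 0, 1]), assignment=[0, -1, 2]))
```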
Procedia PDF Downloads 527
972 Measurement of Fatty Acid Changes in Post-Mortem Belowground Carcass (Sus-scrofa) Decomposition: A Semi-Quantitative Methodology for Determining the Post-Mortem Interval
Authors: Nada R. Abuknesha, John P. Morgan, Andrew J. Searle
Abstract:
Information regarding post-mortem interval (PMI) in criminal investigations is vital to establish a time frame when reconstructing events. PMI is defined as the time period that has elapsed between the occurrence of death and the discovery of the corpse. Adipocere, commonly referred to as 'grave-wax', is formed when post-mortem adipose tissue is converted into a solid material that is heavily comprised of fatty acids. Adipocere is of interest to forensic anthropologists, as its formation is able to slow down the decomposition process. Therefore, analysing the changes in the patterns of fatty acids during the early decomposition process may make it possible to estimate the period of burial, and hence the PMI. The current study concerned the investigation of the fatty acid composition and patterns in buried pig fat tissue. This was in an attempt to determine whether particular patterns of fatty acid composition can be shown to be associated with the duration of the burial, and hence may be used to estimate PMI. Adipose tissue from the abdominal region of domestic pigs (Sus-scrofa) was used to model the human decomposition process. A 17 x 20 cm piece of pork belly was buried in a shallow artificial grave, and weekly samples (n=3) from the buried pig fat tissue were collected over an 11-week period. Marker fatty acids, namely palmitic (C16:0), oleic (C18:1n-9), and linoleic (C18:2n-6) acid, were extracted from the buried pig fat tissue and analysed as fatty acid methyl esters using a gas chromatography system. Levels of the marker fatty acids were quantified from their respective standards. The concentrations of C16:0 (69.2 mg/mL) and C18:1n-9 (44.3 mg/mL) from time zero exhibited significant fluctuations during the burial period. Levels rose (116 and 60.2 mg/mL, respectively) and fell starting from the second week to reach 19.3 and 18.3 mg/mL, respectively, at week 6. Levels showed another increase at week 9 (66.3 and 44.1 mg/mL, respectively) followed by a gradual decrease at week 10 (20.4 and 18.5 mg/mL, respectively). A sharp increase was observed in the final week (131.2 and 61.1 mg/mL, respectively). Conversely, the levels of C18:2n-6 remained more or less constant throughout the study. In addition to fluctuations in the concentrations, several new fatty acids appeared in the latter weeks. Other fatty acids which were detectable in the time zero sample were lost in the latter weeks. There are several probable opportunities to utilise fatty acid analysis as a basic technique for approximating PMI: the quantification of marker fatty acids and the detection of selected fatty acids that either disappear or appear during the burial period. This pilot study indicates that this may be a potential semi-quantitative methodology for determining the PMI. Ideally, the analysis of particular fatty acid patterns in the early stages of decomposition could be an additional tool to the already available techniques or methods, improving the overall process of estimating the PMI of a corpse. Keywords: adipocere, fatty acids, gas chromatography, post-mortem interval
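Quantification of the marker fatty acids against their standards can be sketched as a simple calibration-curve calculation. The approach and numbers below are illustrative assumptions for one marker, not the study's actual calibration data.

```python
# Minimal sketch of quantifying a marker fatty acid (e.g., C16:0) from GC peak areas
# using a linear calibration curve built from standards. All numbers are illustrative.
import numpy as np

# Calibration standards: known concentrations (mg/mL) and measured peak areas.
std_conc = np.array([10.0, 25.0, 50.0, 100.0, 150.0])
std_area = np.array([1.1e4, 2.7e4, 5.5e4, 1.1e5, 1.6e5])

# Fit area = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_area, 1)

def quantify(peak_area: float) -> float:
    """Convert a sample peak area back to a concentration in mg/mL."""
    return (peak_area - intercept) / slope

# Example: a hypothetical week-2 sample peak area for C16:0.
print(f"C16:0 ~ {quantify(1.25e5):.1f} mg/mL")
```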
Procedia PDF Downloads 133
971 Servitization in Machine and Plant Engineering: Leveraging Generative AI for Effective Product Portfolio Management Amidst Disruptive Innovations
Authors: Till Gramberg
Abstract:
In the dynamic world of machine and plant engineering, stagnation in the growth of new product sales compels companies to reconsider their business models. The increasing shift toward service orientation, known as "servitization," along with challenges posed by digitalization and sustainability, necessitates an adaptation of product portfolio management (PPM). Against this backdrop, this study investigates the current challenges and requirements of PPM in this industrial context and develops a framework for the application of generative artificial intelligence (AI) to enhance agility and efficiency in PPM processes. The research approach of this study is based on a mixed-method design. Initially, qualitative interviews with industry experts were conducted to gain a deep understanding of the specific challenges and requirements in PPM. These interviews were analyzed using the Gioia method, painting a detailed picture of the existing issues and needs within the sector. This was complemented by a quantitative online survey. The combination of qualitative and quantitative research enabled a comprehensive understanding of the current challenges in the practical application of machine and plant engineering PPM. Based on these insights, a specific framework for the application of generative AI in PPM was developed. This framework aims to assist companies in implementing faster and more agile processes, systematically integrating dynamic requirements from trends such as digitalization and sustainability into their PPM process. Utilizing generative AI technologies, companies can more quickly identify and respond to trends and market changes, allowing for a more efficient and targeted adaptation of the product portfolio. The study emphasizes the importance of an agile and reactive approach to PPM in a rapidly changing environment. It demonstrates how generative AI can serve as a powerful tool to manage the complexity of a diversified and continually evolving product portfolio. The developed framework offers practical guidelines and strategies for companies to improve their PPM processes by leveraging the latest technological advancements while maintaining ecological and social responsibility. This paper significantly contributes to deepening the understanding of the application of generative AI in PPM and provides a framework for companies to manage their product portfolios more effectively and adapt to changing market conditions. The findings underscore the relevance of continuous adaptation and innovation in PPM strategies and demonstrate the potential of generative AI for proactive and future-oriented business management.Keywords: servitization, product portfolio management, generative AI, disruptive innovation, machine and plant engineering
Procedia PDF Downloads 83
970 Prediction of Live Birth in a Matched Cohort of Elective Single Embryo Transfers
Authors: Mohsen Bahrami, Banafsheh Nikmehr, Yueqiang Song, Anuradha Koduru, Ayse K. Vuruskan, Hongkun Lu, Tamer M. Yalcinkaya
Abstract:
In recent years, we have witnessed an explosion of studies aimed at using a combination of artificial intelligence (AI) and time-lapse imaging data on embryos to improve IVF outcomes. However, despite promising results, no study has used a matched cohort of transferred embryos which only differ in pregnancy outcome, i.e., embryos from a single clinic which are similar in parameters such as morphokinetic condition, patient age, and overall clinic and lab performance. Here, we used time-lapse data on embryos with known pregnancy outcomes to see if the rich spatiotemporal information embedded in this data would allow the prediction of the pregnancy outcome regardless of such critical parameters. Methodology—We did a retrospective analysis of time-lapse data from our IVF clinic, which utilizes the Embryoscope 100% of the time for embryo culture to the blastocyst stage, with known clinical outcomes, including live birth vs nonpregnant (embryos with spontaneous abortion outcomes were excluded). We used time-lapse data from 200 elective single transfer embryos randomly selected from January 2019 to June 2021. Our sample included 100 embryos in each group, with no significant difference in patient age (P=0.9550) or morphokinetic scores (P=0.4032). Data from all patients were combined to make a 4th-order tensor, and feature extraction was subsequently carried out by a tensor decomposition methodology. The features were then used in a machine learning classifier to classify the two groups. Major Findings—The performance of the model was evaluated using 100 random subsampling cross-validations (80% train, 20% test). The prediction accuracy, averaged across 100 permutations, exceeded 80%. We also did a random grouping analysis, in which labels (live birth, nonpregnant) were randomly assigned to embryos, which yielded 50% accuracy. Conclusion—The high accuracy in the main analysis and the low accuracy in the random grouping analysis suggest a consistent spatiotemporal pattern which is associated with pregnancy outcomes, regardless of patient age and embryo morphokinetic condition, and beyond already known parameters such as early cleavage or early blastulation. Despite the small sample size, this ongoing analysis is the first to show the potential of AI methods in capturing the complex morphokinetic changes embedded in embryo time-lapse data, which contribute to successful pregnancy outcomes, regardless of already known parameters. The results on a larger sample size, with complementary analysis on the prediction of other key outcomes such as euploidy and aneuploidy of embryos, will be presented at the meeting. Keywords: IVF, embryo, machine learning, time-lapse imaging data
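The evaluation scheme described above (100 random 80/20 subsampling splits) can be sketched as follows. The feature matrix stands in for the tensor-decomposition features extracted from the time-lapse data, and logistic regression is an assumed classifier rather than the authors' exact model.

```python
# Minimal sketch of the evaluation scheme described above: repeated random
# subsampling (80/20) cross-validation of a binary classifier. The features are
# random placeholders standing in for tensor-decomposition features of embryo videos,
# and logistic regression is an assumed classifier, not necessarily the authors' model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.standard_normal((200, 30))           # 200 embryos x 30 extracted features
y = np.array([1] * 100 + [0] * 100)          # 1 = live birth, 0 = nonpregnant

accuracies = []
for seed in range(100):                      # 100 random 80/20 splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed
    )
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    accuracies.append(clf.score(X_te, y_te))

print(f"mean accuracy over 100 splits: {np.mean(accuracies):.3f}")
```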
Procedia PDF Downloads 94
969 Nano-Filled Matrix Reinforced by Woven Carbon Fibers Used as a Sensor
Authors: K. Hamdi, Z. Aboura, W. Harizi, K. Khellil
Abstract:
Improving the electrical properties of organic matrix composites has been investigated in several studies. To extend the use of composites to more varied applications, one of the current barriers is their poor electrical conductivity. In the case of carbon fiber composites, the organic matrix is responsible for the insulating properties of the resulting composite. However, the properties of continuous carbon fiber nano-filled composites have been less investigated. This work aims to characterize the effect of carbon black nano-fillers on the properties of woven carbon fiber composites. First of all, SEM observations were performed to localize the nano-particles. They showed that the particles penetrated into the fiber zone (Figure 1). In fact, by reaching the fiber zone, the carbon black nano-fillers created network connectivity between fibers, which means an easy pathway for the current. This explains the observed improvement in the electrical conductivity of the composites upon adding carbon black. The measurement was performed with a four-point electrical circuit. It shows that the electrical conductivity of the 'neat' matrix composite increased from 80 S/cm to 150 S/cm by adding 9 wt% of carbon black, and to 250 S/cm by adding 17 wt% of the same nano-filler. Thanks to these results, the use of this composite as a strain gauge might be possible. In this way, the influence of a mechanical excitation (flexion, tensile) on the electrical properties of the composite can be studied by recording the variation of an electrical current passing through the material during mechanical testing. Three different configurations were tested, depending on the rate of carbon black used as nano-filler. These investigations could lead to the development of an auto-instrumented material. Keywords: carbon fibers composites, nano-fillers, strain-sensors, auto-instrumented
Procedia PDF Downloads 412
968 The Roman Fora in North Africa Towards a Supportive Protocol to the Decision for the Morphological Restitution
Authors: Dhouha Laribi Galalou, Najla Allani Bouhoula, Atef Hammouda
Abstract:
This research delves into the fundamental question of the morphological restitution of built archaeology in order to place it in its paradigmatic context and to seek answers to it. Indeed, the understanding of the object of the study, its analysis, and the methodology of solving the morphological problem posed, are manageable aspects only by means of a thoughtful strategy that draws on well-defined epistemological scaffolding. In this stream, the crisis of natural reasoning in archaeology has generated multiple changes in this field, ranging from the use of new tools to the integration of an archaeological information system where urbanization involves the interplay of several disciplines. The built archaeological topic is also an architectural and morphological object. It is also a set of articulated elementary data, the understanding of which is about to be approached from a logicist point of view. Morphological restitution is no exception to the rule, and the inter-exchange between the different disciplines uses the capacity of each to frame the reflection on the incomplete elements of a given architecture or on its different phases and multiple states of existence. The logicist sequence is furnished by the set of scattered or destroyed elements found, but also by what can be called a rule base which contains the set of rules for the architectural construction of the object. The knowledge base built from the archaeological literature also provides a reference that enters into the game of searching for forms and articulations. The choice of the Roman Forum in North Africa is justified by the great urban and architectural characteristics of this entity. The research on the forum involves both a fairly large knowledge base but also provides the researcher with material to study - from a morphological and architectural point of view - starting from the scale of the city down to the architectural detail. The experimentation of the knowledge deduced on the paradigmatic level, as well as the deduction of an analysis model, is then carried out on the basis of a well-defined context which contextualises the experimentation from the elaboration of the morphological information container attached to the rule base and the knowledge base. The use of logicist analysis and artificial intelligence has allowed us to first question the aspects already known in order to measure the credibility of our system, which remains above all a decision support tool for the morphological restitution of Roman Fora in North Africa. This paper presents a first experimentation of the model elaborated during this research, a model framed by a paradigmatic discussion and thus trying to position the research in relation to the existing paradigmatic and experimental knowledge on the issue.Keywords: classical reasoning, logicist reasoning, archaeology, architecture, roman forum, morphology, calculation
Procedia PDF Downloads 149
967 Critical Analysis of International Protections for Children from Sexual Abuse and Examination of Indian Legal Approach
Authors: Ankita Singh
Abstract:
Sex trafficking and child pornography are the kinds of borderless crimes which cannot be effectively prevented only through the laws and efforts of one country, because they require proper and smooth collaboration among countries. Eradication of international human trafficking syndicates, criminalisation of international cyber offenders, and an effective ban on child pornography are not possible without applying effective universal laws; hence, continuous collaboration of all countries is much needed to adopt and routinely update these universal laws. Congregation of countries on an international platform is necessary from time to time, where they can simultaneously adopt international agendas and create powerful universal laws to prevent sex trafficking and child pornography in this modern digital era. In the past, some international steps have been taken through The Convention on the Rights of the Child (CRC) and through The Optional Protocol to the Convention on the Rights of the Child on the Sale of Children, Child Prostitution, and Child Pornography, but in reality, these measures are quite weak and are not capable of effectively protecting children from sexual abuse in this modern and highly advanced digital era. The uncontrolled growth of artificial intelligence (AI) and its misuse, the lack of proper legal jurisdiction over foreign child abusers and difficulties in their extradition, and improper control over the international trade of digital child pornographic content are some prominent issues which can only be controlled through new, effective, and powerful universal laws. Due to a lack of effective international standards and a lack of proper collaboration among countries, Indian laws are also not capable of taking effective action against child abusers. This research will be conducted through both doctrinal and empirical methods. Various literary sources will be examined, and a questionnaire survey will be conducted to analyse the effectiveness of international standards and Indian laws against child pornography. Participants in this survey will be Indian university students. In this work, the existing international norms made for protecting children from sexual abuse will be critically analysed. It will explore why effective and strong collaboration between countries is required in modern times. It will be analysed whether the existing international steps are enough to protect children from being trafficked or subjected to pornography, and if these steps are not found to be sufficient, suggestions will be given on how international standards and protections can be made more effective and powerful in this digital era. The approach of India towards the existing international standards, the Indian laws to protect children from being subjected to pornography, and the contributions and capabilities of India in strengthening the international standards will also be analysed. Keywords: child pornography, prevention of children from sexual offences act, the optional protocol to the convention on the rights of the child on the sale of children, child prostitution and child pornography, the convention on the rights of the child
Procedia PDF Downloads 42
966 A Quadratic Model to Early Predict the Blastocyst Stage with a Time Lapse Incubator
Authors: Cecile Edel, Sandrine Giscard D'Estaing, Elsa Labrune, Jacqueline Lornage, Mehdi Benchaib
Abstract:
Introduction: The use of incubators equipped with time-lapse technology in Artificial Reproductive Technology (ART) allows continuous surveillance. With morphokinetic parameters, algorithms are available to predict the potential outcome of an embryo. However, the different proposed time-lapse algorithms do not take into account missing data, and thus some embryos cannot be classified. The aim of this work is to construct a predictive model even in the case of missing data. Materials and methods: Patients: A retrospective study was performed in the biology laboratory of reproduction at the hospital 'Femme Mère Enfant' (Lyon, France) between 1 May 2013 and 30 April 2015. Embryos (n = 557) obtained from couples (n = 108) were cultured in a time-lapse incubator (Embryoscope®, Vitrolife, Goteborg, Sweden). Time-lapse incubator: The morphokinetic parameters obtained during the first three days of embryo life were used to build the predictive model. Predictive model: A quadratic regression was performed between the number of cells and time: N = a·T² + b·T + c, where N is the number of cells at time T (in hours). The regression coefficients were calculated with Excel software (Microsoft, Redmond, WA, USA); a program in Visual Basic for Applications (VBA) (Microsoft) was written for this purpose. The quadratic equation was used to find a value that allows the prediction of blastocyst formation: the synthetize value. The area under the curve (AUC) obtained from the ROC curve was used to assess the performance of the regression coefficients and of the synthetize value. A cut-off value was calculated for each regression coefficient and for the synthetize value to obtain two groups in which the difference in blastocyst formation rate according to the cut-off value was maximal. The data were analyzed with SPSS (IBM, Chicago, IL, USA). Results: Among the 557 embryos, 79.7% reached the blastocyst stage. The synthetize value corresponds to the value calculated with a time value equal to 99, for which the highest AUC was obtained. The AUC was 0.648 (p < 0.001) for regression coefficient 'a', 0.363 (p < 0.001) for regression coefficient 'b', 0.633 (p < 0.001) for regression coefficient 'c', and 0.659 (p < 0.001) for the synthetize value. A cut-off value was calculated for each parameter, and the results are presented as follows: blastocyst formation rate under the cut-off value versus blastocyst formation rate above the cut-off value. For regression coefficient 'a', the optimum cut-off value was -1.14×10⁻³ (61.3% versus 84.3%, p < 0.001); 0.26 for regression coefficient 'b' (83.9% versus 63.1%, p < 0.001); -4.4 for regression coefficient 'c' (62.2% versus 83.1%, p < 0.001); and 8.89 for the synthetize value (58.6% versus 85.0%, p < 0.001). Conclusion: This quadratic regression allows the outcome of an embryo to be predicted even in the case of missing data. The three regression coefficients and the synthetize value could represent the identity card of an embryo. Regression coefficient 'a' represents the acceleration of cell division, and regression coefficient 'b' represents the speed of cell division. We could hypothesize that regression coefficient 'c' could represent the intrinsic potential of an embryo. This intrinsic potential could depend on the oocyte from which the embryo originates. These hypotheses should be confirmed by studies analyzing the relationship between the regression coefficients and ART parameters. Keywords: ART procedure, blastocyst formation, time-lapse incubator, quadratic model
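The quadratic model and the synthetize value described above can be fitted with standard least-squares tools. The sketch below is a minimal illustration on made-up cell-count annotations for a single embryo; only the evaluation time (T = 99 h) and the cut-off of 8.89 are taken from the results reported above.

```python
# Minimal sketch of the quadratic model N = a*T^2 + b*T + c fitted to one embryo's
# cell counts over the first three days, and of the "synthetize value" evaluated at
# T = 99 h as described above. The cell-count observations are illustrative only.
import numpy as np

# Observed (time in hours, number of cells) annotations; gaps/missing frames are fine,
# since the fit only uses the observations that are available.
T = np.array([0.0, 26.0, 38.0, 50.0, 62.0])
N = np.array([1, 2, 4, 8, 10])

# Least-squares fit of the quadratic: returns coefficients a, b, c.
a, b, c = np.polyfit(T, N, 2)

# The "synthetize value" is the model evaluated at T = 99 hours.
synthetize_value = a * 99**2 + b * 99 + c
print(f"a = {a:.5f}, b = {b:.3f}, c = {c:.2f}, synthetize value = {synthetize_value:.2f}")

# Classification rule from the reported results: compare against the cut-off of 8.89,
# above which the blastocyst formation rate was higher.
predicted_blastocyst = synthetize_value >= 8.89
print("predicted to reach blastocyst:", predicted_blastocyst)
```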
Procedia PDF Downloads 308
965 Exploration into Bio Inspired Computing Based on Spintronic Energy Efficiency Principles and Neuromorphic Speed Pathways
Authors: Anirudh Lahiri
Abstract:
Neuromorphic computing, inspired by the intricate operations of biological neural networks, offers a revolutionary approach to overcoming the limitations of traditional computing architectures. This research proposes the integration of spintronics with neuromorphic systems, aiming to enhance computational performance, scalability, and energy efficiency. Traditional computing systems, based on the Von Neumann architecture, struggle with scalability and efficiency due to the segregation of memory and processing functions. In contrast, the human brain exemplifies high efficiency and adaptability, processing vast amounts of information with minimal energy consumption. This project explores the use of spintronics, which utilizes the electron's spin rather than its charge, to create more energy-efficient computing systems. Spintronic devices, such as magnetic tunnel junctions (MTJs) manipulated through spin-transfer torque (STT) and spin-orbit torque (SOT), offer a promising pathway to reducing power consumption and enhancing the speed of data processing. The integration of these devices within a neuromorphic framework aims to replicate the efficiency and adaptability of biological systems. The research is structured into three phases: an exhaustive literature review to build a theoretical foundation, laboratory experiments to test and optimize the theoretical models, and iterative refinements based on experimental results to finalize the system. The initial phase focuses on understanding the current state of neuromorphic and spintronic technologies. The second phase involves practical experimentation with spintronic devices and the development of neuromorphic systems that mimic synaptic plasticity and other biological processes. The final phase focuses on refining the systems based on feedback from the testing phase and preparing the findings for publication. The expected contributions of this research are twofold. Firstly, it aims to significantly reduce the energy consumption of computational systems while maintaining or increasing processing speed, addressing a critical need in the field of computing. Secondly, it seeks to enhance the learning capabilities of neuromorphic systems, allowing them to adapt more dynamically to changing environmental inputs, thus better mimicking the human brain's functionality. The integration of spintronics with neuromorphic computing could revolutionize how computational systems are designed, making them more efficient, faster, and more adaptable. This research aligns with the ongoing pursuit of energy-efficient and scalable computing solutions, marking a significant step forward in the field of computational technology.Keywords: material science, biological engineering, mechanical engineering, neuromorphic computing, spintronics, energy efficiency, computational scalability, synaptic plasticity.
Procedia PDF Downloads 49
964 Modeling Breathable Particulate Matter Concentrations over Mexico City Retrieved from Landsat 8 Satellite Imagery
Authors: Rodrigo T. Sepulveda-Hirose, Ana B. Carrera-Aguilar, Magnolia G. Martinez-Rivera, Pablo de J. Angeles-Salto, Carlos Herrera-Ventosa
Abstract:
In order to diminish health risks, it is of major importance to monitor air quality. However, this process is accompanied by high costs in physical and human resources. In this context, this research is carried out with the main objective of developing a predictive model for concentrations of inhalable particles (PM10-2.5) using remote sensing. To develop the model, satellite images, mainly from Landsat 8, of Mexico City's Metropolitan Area were used. Using historical PM10 and PM2.5 measurements from the RAMA (Automatic Environmental Monitoring Network of Mexico City) and through the processing of the available satellite images, a preliminary model was generated in which it was possible to observe critical opportunity areas that will allow the generation of a robust model. Through the preliminary model applied to the scenes of Mexico City, three areas were identified that are of great interest due to the presumed high concentration of PM; the zones are those that present high plant density, bodies of water, and soil without constructions or vegetation. To date, work continues on this line to improve the preliminary model that has been proposed. In addition, a brief analysis was made of six models presented in articles developed in different parts of the world, in order to identify the optimal bands for the generation of a suitable model for Mexico City. It was found that infrared bands have been helpful for modeling in other cities, but the effectiveness that these bands could provide under the geographic and climatic conditions of Mexico City is still being evaluated. Keywords: air quality, modeling pollution, particulate matter, remote sensing
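The abstract does not state the final model form, so the sketch below only illustrates one simple possibility: a linear regression of PM10 on a few Landsat 8 band reflectances against ground-station measurements. The band selection and all numbers are synthetic placeholders.

```python
# Minimal, illustrative sketch of one way to relate Landsat 8 band reflectances to
# ground PM10 measurements (the abstract does not state the actual model form, so a
# simple linear regression is assumed here; the numbers are synthetic placeholders).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# Rows: pixels co-located with RAMA stations; columns: selected band reflectances.
bands = rng.uniform(0.05, 0.4, size=(60, 4))            # e.g., blue, red, NIR, SWIR1
pm10 = 40 + 120 * bands[:, 1] - 60 * bands[:, 2] + rng.normal(0, 5, 60)  # synthetic PM10

model = LinearRegression().fit(bands, pm10)
print("R^2 on the training points:", round(model.score(bands, pm10), 3))

# Predict PM10 for a new pixel's reflectances.
print("predicted PM10:", model.predict([[0.1, 0.2, 0.3, 0.15]])[0])
```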
Procedia PDF Downloads 157
963 The Impact of Quality Cost on Revenue Sharing in Supply Chain Management
Authors: Fayza M. Obied-Allah
Abstract:
Customers' needs, quality, and value creation while reducing costs through supply chain management provide challenges and opportunities for companies and researchers. In the light of these challenges, modern ideas must contribute to countering these challenges and exploiting opportunities. Perhaps this paper will be one of these contributions. This paper discusses the impact of the quality cost on revenue sharing as one of the most important incentives for configuring business networks. There is no doubt that costs directly affect the size of the income generated by a business network, so this paper investigates the impact of quality costs on business network revenue, and their impact on the decision to share revenue among the companies in the supply chain. This paper develops the quality cost approach to align with the modern era; the developed model includes five categories: besides the well-known four categories (namely prevention costs, appraisal costs, internal failure costs, and external failure costs), a new category has been developed in this research as a new vision of the relationship between quality costs and the innovations of industry. This new category is Recycle Cost. This paper is organized into six sections. Section I gives an overview of quality costs in the supply chain. Section II discusses revenue sharing between the parties in the supply chain. Section III investigates the impact of quality costs on the revenue sharing decision between partners in the supply chain. The fourth section includes a survey study and presents statistical results. Section V discusses the results and shows future opportunities for research. Finally, Section VI summarizes the theoretical and practical results of this paper. Keywords: quality cost, recycle cost, revenue sharing, supply chain management
Procedia PDF Downloads 449
962 Ways to Sustaining Self-Care of Thai Community Women to Achieve Future Healthy Aging
Authors: Manee Arpanantikul, Pennapa Unsanit, Dolrat Rujiwatthanakorn, Aporacha Lumdubwong
Abstract:
Continuously performing self-care based on the sufficiency economy philosophy throughout women's lives is not easy. However, there are different ways that women can use to carry out self-care activities regularly. Some women individually perform self-care while others perform self-care in groups. Little is known about ways of sustaining self-care of women based on the fundamental principles of Thai culture. The purpose of this study was to investigate ways of sustaining self-care based on the sufficiency economy philosophy among Thai middle-aged women living in the community in order to achieve future healthy aging. This study employed a qualitative research design. Twenty women who were willing to participate in this study were recruited. Data collection was conducted through in-depth interviews with tape recording, field notes, and observation. All interviews were transcribed verbatim, and the data were analyzed using content analysis. The findings showed ways of sustaining self-care of Thai community women to achieve future healthy aging consisting of 7 themes: 1) having determination, 2) having a model, 3) developing a leader, 4) carrying on performing activities, 5) setting up rules, 6) building a self-care culture, and 7) developing a self-care group/network. The findings of this study suggested that in order to achieve self-care sustainability, women should get to know themselves and have intention and belief, together with having the power of community and support. Therefore, performing self-care constantly will prevent disease and promote health in women's lives. Keywords: qualitative research, sufficiency economy philosophy, Thai middle-aged women, ways to sustaining self-care
Procedia PDF Downloads 376
961 An End-to-end Piping and Instrumentation Diagram Information Recognition System
Authors: Taekyong Lee, Joon-Young Kim, Jae-Min Cha
Abstract:
Piping and instrumentation diagram (P&ID) is an essential design drawing describing the interconnection of process equipment and the instrumentation installed to control the process. P&IDs are modified and managed throughout the whole life cycle of a process plant. For the ease of data transfer, P&IDs are generally handed over from a design company to an engineering company in portable document format (PDF), which is hard to modify. Therefore, engineering companies have to deploy a great deal of time and human resources only to manually convert P&ID images into a computer-aided design (CAD) file format. To reduce the inefficiency of the P&ID conversion, the various symbols and texts in P&ID images should be automatically recognized. However, recognizing information in P&ID images is not an easy task. A P&ID image usually contains hundreds of symbol and text objects. Most objects are quite small compared to the size of the whole image and are densely packed together. Traditional recognition methods based on geometrical features are not capable enough to recognize every element of a P&ID image. To overcome these difficulties, state-of-the-art deep learning models, RetinaNet and the connectionist text proposal network (CTPN), were used to build a system for recognizing symbols and texts in a P&ID image. Using the RetinaNet and CTPN models, carefully modified and tuned for the P&ID image dataset, the developed system recognizes texts, equipment symbols, piping symbols, and instrumentation symbols from an input P&ID image and saves the recognition results in a pre-defined extensible markup language (XML) format. In a test using a commercial P&ID image, the P&ID information recognition system correctly recognized 97% of the symbols and 81.4% of the texts. Keywords: object recognition system, P&ID, symbol recognition, text recognition
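As an illustration of the symbol-detection half of such a pipeline, the sketch below runs a torchvision RetinaNet over a P&ID raster image. The COCO-pretrained weights and the file name are stand-ins (the authors' P&ID-trained weights and class list are not reproduced here), and the CTPN text branch is omitted.

```python
# Minimal sketch of the symbol-detection half of the pipeline described above, using
# torchvision's RetinaNet. COCO-pretrained weights are used only as a stand-in for the
# authors' P&ID-trained weights, and the CTPN text branch is omitted here.
# Assumes torchvision >= 0.13 (for the `weights=` argument).
import torch
from torchvision.io import read_image, ImageReadMode
from torchvision.models.detection import retinanet_resnet50_fpn

model = retinanet_resnet50_fpn(weights="DEFAULT")
model.eval()

# Load a P&ID raster image as 3-channel RGB and scale it to [0, 1] floats.
image = read_image("pid_sheet.png", ImageReadMode.RGB).float() / 255.0

with torch.no_grad():
    detections = model([image])[0]  # dict with 'boxes', 'scores', 'labels'

# Keep confident detections; with P&ID-trained weights these would be symbol classes.
keep = detections["scores"] > 0.5
for box, label, score in zip(
    detections["boxes"][keep], detections["labels"][keep], detections["scores"][keep]
):
    print(f"class {int(label)}: score {score:.2f}, box {box.tolist()}")
```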
Procedia PDF Downloads 153
960 Cybersecurity Assessment of Decentralized Autonomous Organizations in Smart Cities
Authors: Claire Biasco, Thaier Hayajneh
Abstract:
A smart city is the integration of digital technologies in urban environments to enhance the quality of life. Smart cities capture real-time information from devices, sensors, and network data to analyze and improve city functions such as traffic analysis, public safety, and environmental impacts. Current smart cities face controversy due to their reliance on real-time data tracking and surveillance. Internet of Things (IoT) devices and blockchain technology are converging to reshape smart city infrastructure away from its centralized model. Connecting IoT data to blockchain applications would create a peer-to-peer, decentralized model. Furthermore, blockchain technology powers the ability for IoT device data to shift from the ownership and control of centralized entities to individuals or communities with Decentralized Autonomous Organizations (DAOs). In the context of smart cities, DAOs can govern cyber-physical systems to have a greater influence over how urban services are being provided. This paper will explore how the core components of a smart city now apply to DAOs. We will also analyze different definitions of DAOs to determine their most important aspects in relation to smart cities. Both categorizations will provide a solid foundation to conduct a cybersecurity assessment of DAOs in smart cities. It will identify the benefits and risks of adopting DAOs as they currently operate. The paper will then provide several mitigation methods to combat cybersecurity risks of DAO integrations. Finally, we will give several insights into what challenges will be faced by DAO and blockchain spaces in the coming years before achieving a higher level of maturity.Keywords: blockchain, IoT, smart city, DAO
Procedia PDF Downloads 123959 Quince Seed Mucilage (QSD)/ Multiwall Carbonano Tube Hybrid Hydrogels as Novel Controlled Drug Delivery Systems
Authors: Raouf Alizadeh, Kadijeh Hemmati
Abstract:
The aim of this study is to synthesize several series of hydrogels from the combination of a natural polymer (quince seed mucilage, QSD) and a synthetic copolymer containing methoxy poly(ethylene glycol)-polycaprolactone (mPEG-PCL) in the presence of different amounts of multi-walled carbon nanotubes (f-MWNT). Mono-epoxide-functionalized mPEG (mPEG-EP) was synthesized and reacted with sodium azide in the presence of NH4Cl to afford mPEG-N3(-OH). Ring-opening polymerization (ROP) of ε-caprolactone (CL) in the presence of mPEG-N3(-OH) as initiator and Sn(Oct)2 as catalyst then led to the preparation of mPEG-PCL-N3(-OH), which was grafted onto propargylated f-MWNT by the click reaction to obtain mPEG-PCL-f-MWNT(-OH). In the presence of mPEG-N3(-Br) and an NHS/DCC/QSD mixture, hybrid hydrogels were successfully synthesized. The copolymers and hydrogels were characterized using techniques such as scanning electron microscopy (SEM) and thermogravimetric analysis (TGA). The gel content of the hydrogels depended on the weight ratio of QSD:mPEG-PCL:f-MWNT. The swelling behavior of the prepared hydrogels was also studied under variation of pH, immersion time, and temperature. According to the results, the swelling behavior showed significant dependence on the gel content, pH, immersion time, and temperature. The highest swelling was observed at room temperature, at 60 min, and at pH 8. The loading and in-vitro release of quercetin as a model drug were investigated at pH 2.2 and 7.4, and the results showed that the release rate at pH 7.4 was faster than at pH 2.2. The total loading and release depended on the network structure of the hydrogels and were in the range of 65-91%. In addition, the cytotoxicity and release kinetics of the prepared hydrogels were also investigated.Keywords: antioxidant, drug delivery, Quince Seed Mucilage(QSD), swelling behavior
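The abstract reports that release kinetics were investigated but not how; the following is only a generic sketch of one common way such data are analyzed, fitting cumulative quercetin release to the Korsmeyer-Peppas model (Mt/M∞ = k·tⁿ) with SciPy. The data points, initial guesses, and interpretation thresholds are illustrative assumptions, not the study's measurements.

```python
# Hedged sketch: fit hypothetical cumulative-release data to the
# Korsmeyer-Peppas model  Mt/Minf = k * t**n  (typically applied for Mt/Minf <= ~0.6).
import numpy as np
from scipy.optimize import curve_fit

def korsmeyer_peppas(t, k, n):
    return k * np.power(t, n)

t_min = np.array([15, 30, 60, 120, 240, 360], dtype=float)   # time in minutes (illustrative)
release = np.array([0.08, 0.14, 0.22, 0.33, 0.48, 0.58])     # fraction released (illustrative)

(k, n), _ = curve_fit(korsmeyer_peppas, t_min, release, p0=(0.01, 0.5))
print(f"k = {k:.4f}, n = {n:.3f}")
# As a rough guide for cylindrical matrices: n near 0.45 suggests Fickian diffusion,
# while 0.45 < n < 0.89 suggests anomalous (non-Fickian) transport.
```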
Procedia PDF Downloads 321958 Development of Mobile Application for Internship Program Management Using the Concept of Model View Controller (MVC) Pattern
Authors: Shutchapol Chopvitayakun
Abstract:
Over the last five years in particular, mobile devices, mobile applications, and mobile users have grown significantly in number and capability through the deployment of wireless communication and cellular networks. They are being integrated with one another for multiple purposes and pervasive deployment in every business and non-business sector, such as education, medicine, travel, finance, real estate, and many more. The objective of this study was to develop a mobile application for senior (final-year) undergraduate students who enroll in the internship program at a tertiary school and practice onsite at real field sites, real organizations, and real workspaces. During the internship session, all students, as interns, are required to exercise, drill, and train onsite at specific locations on specific tasks, sometimes with additional assignments from their supervisors. Their workplaces include both private and government corporations and enterprises. This mobile application is developed under the schema of a transactional processing system that enables users to keep a daily work or practice log, monitor true working locations, and follow the daily tasks of each trainee. Moreover, it provides useful guidance from each intern's advisor in case of emergency. Finally, it can summarize all transactional data and then calculate each intern's cumulative internship hours from the field practice session.Keywords: internship, mobile application, Android OS, smart phone devices, mobile transactional processing system, guidance and monitoring, tertiary education, senior students, model view controller (MVC)
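The abstract names the MVC pattern but does not show its structure. The sketch below is a hypothetical simplification rather than the actual Android application: it separates the intern's daily practice log into a model (data and summarization), a view (presentation), and a controller (coordination), and shows how cumulative hours could be reported. All class and field names are invented for illustration.

```python
# Minimal MVC sketch for an internship daily-log feature (illustrative only;
# the real application targets Android, this is a language-agnostic outline).
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogEntry:                      # part of the Model: one day's practice record
    date: str
    location: str
    task: str
    hours: float

@dataclass
class InternshipModel:               # Model: holds and summarizes transactional data
    entries: List[LogEntry] = field(default_factory=list)

    def add_entry(self, entry: LogEntry) -> None:
        self.entries.append(entry)

    def cumulative_hours(self) -> float:
        return sum(e.hours for e in self.entries)

class InternshipView:                # View: presentation only
    def show_entry(self, entry: LogEntry) -> None:
        print(f"{entry.date} @ {entry.location}: {entry.task} ({entry.hours} h)")

    def show_total(self, total: float) -> None:
        print(f"Cumulative internship hours: {total}")

class InternshipController:          # Controller: mediates between user actions, model, view
    def __init__(self, model: InternshipModel, view: InternshipView):
        self.model, self.view = model, view

    def log_practice(self, date: str, location: str, task: str, hours: float) -> None:
        entry = LogEntry(date, location, task, hours)
        self.model.add_entry(entry)
        self.view.show_entry(entry)

    def report(self) -> None:
        self.view.show_total(self.model.cumulative_hours())

# Usage
controller = InternshipController(InternshipModel(), InternshipView())
controller.log_practice("2016-06-01", "Site A", "Daily onsite practice", 8.0)
controller.report()
```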
Procedia PDF Downloads 315957 Educational Fieldworks towards Urban Biodiversity Preservation: Case Study of Japanese Gardens Management of Kanazawa City, Japan
Authors: Aida Mammadova, Juan Pastor Ivars
Abstract:
Japanese gardens can be considered unique hubs for preserving urban biodiversity, as they provide habitat for a diverse network of living organisms, facilitate the movement of rare species around the urban landscape, and serve as a refuge for mosses and many endangered species. For centuries, Japanese gardens have been considered ecologically sustainable and well-organized ecosystems thanks to skilled maintenance and management. However, due to depopulation and ageing in Japanese society, gardens are increasingly abandoned, and there is an urgent need to increase awareness of the importance of Japanese gardens in preserving urban biodiversity. In this study, we conducted participatory educational field trips for 12 students to the five gardens protected by Kanazawa City and learned about the preservation activities conducted at the governmental, municipal, and local levels. After the courses, students found a strong linkage between the gardens and traditional culture. Kanazawa City has been famous for traditional craft making and tea ceremonies for more than 400 years, and it was noticed that the cultural diversity of the city is strongly supported by the biodiversity of the gardens; loss of the gardens would lead to the loss of traditional culture. Using an experiential approach during the fieldwork, students observed that the linkage between biological and cultural diversity strongly depends on human activity. Continuous management and maintenance of the gardens is a key contributing factor in the preservation of urban biodiversity. However, garden management is a very time- and capital-consuming process, and there is also a strong need to engage all levels of society in preserving urban biodiversity through participatory urbanism.Keywords: biodiversity, conservation, educational fieldwork, Japanese gardens
Procedia PDF Downloads 212956 The Neuropsychology of Obsessive Compulsion Disorder
Authors: Mia Bahar, Özlem Bozkurt
Abstract:
Obsessive-compulsive disorder (OCD) is a common, persistent, and long-lasting mental health condition in which a person experiences uncontrollable, recurrent thoughts (or "obsessions") and/or activities (or "compulsions") that they feel compelled to engage in repeatedly. Obsessive-compulsive disorder is both underdiagnosed and undertreated. It frequently manifests in a variety of medical settings and is persistent, expensive, and burdensome. Obsessive-compulsive neurosis was long believed to be a condition that offered valuable insight into the inner workings of the unconscious mind. Obsessive-compulsive disorder is now recognized as a prime example of a neuropsychiatric condition that is mediated by pathology in particular neural circuits and is susceptible to particular pharmacotherapeutic and psychotherapeutic treatments. OCD usually has two components, one cognitive and the other behavioral, although either can occur alone. Obsessions are repetitive and intrusive thoughts that invade consciousness and are incredibly hard to control or dismiss. People who have OCD often engage in rituals to reduce the anxiety associated with intrusive thoughts. Once the ritual is formed, the person may feel extreme relief and be free from anxiety until the thoughts, such as thoughts of contamination, intrude once again. This cycle is strengthened through negative reinforcement because the rituals allow the person to avoid anxiety. These thoughts are described as autogenous, meaning they appear to arise spontaneously. The unwelcome thoughts are related to actions through what is described as thought-action fusion: the thought becomes equated with an action, for example, the belief that if the ritual is not performed something bad might happen, so the person performs the ritual to escape the intrusive thought. In almost all cases of OCD, the person's life is severely disrupted by compulsions and obsessions. Studies estimate the prevalence of OCD at 1.1%, making it a challenging issue with high comorbidity with conditions such as depressive episodes, panic disorder, and specific phobias. Numerous CT investigations were the first to reveal brain anomalies in OCD, although the results were inconsistent. A few studies have focused on the orbitofrontal cortex (OFC), anterior cingulate gyrus (AC), and thalamus, structures also implicated in the pathophysiology of OCD by functional neuroimaging studies, but few have found consistent results. However, some studies have found abnormalities in the basal ganglia. There has also been discussion of whether OCD is genetic. OCD has been linked to families in studies of family aggregation, and findings from twin studies show that this relationship is partly influenced by genetic variables. Some research has shown that OCD is a heritable, polygenic condition that can result from de novo harmful mutations as well as common and rare variants. Numerous studies have also presented solid evidence in favor of a significant additive genetic component to OCD risk, with distinct OCD symptom dimensions showing both common and individual genetic risks.Keywords: compulsions, obsessions, neuropsychiatric, genetic
Procedia PDF Downloads 65955 Evaluating Daylight Performance in an Office Environment in Malaysia, Using Venetian Blind System: Case Study
Authors: Fatemeh Deldarabdolmaleki, Mohamad Fakri Zaky Bin Ja'afar
Abstract:
Having a daylit space together with an outside view results in a pleasant and productive environment for office employees. A daylit space is a space that utilizes daylight as the basic source of illumination to fulfill users' visual demands and minimize electric energy consumption. Malaysian weather is hot and humid all year round because of the country's location in the equatorial belt. However, because most commercial buildings in Malaysia are air-conditioned, large glass windows are normally installed in order to keep the physical and visual relation between inside and outside. As a result of the climatic situation and this trend, an ordinary office experiences large heat gains, glare, and occupant discomfort. Balancing occupant comfort and energy conservation in a tropical climate is a real challenge. This study concentrates on evaluating a venetian blind system using per-pixel analysis tools based on the metrics suggested in the literature. The workplace area in a private office room was selected as a case study. An eight-day measurement experiment was conducted to investigate the effect of different venetian blind angles in an office area under daylight conditions in Serdang, Malaysia. The study goal was to explore the daylight comfort of a commercially available venetian blind system, its daylight sufficiency and excess (8:00 AM to 5:00 PM), as well as glare. Recently developed software for analyzing high dynamic range images (HDRI, captured by a CCD camera), such as the Radiance-based Evalglare and hdrscope, helps to investigate luminance-based metrics. The main factors are illuminance and luminance levels, mean and maximum luminance, daylight glare probability (DGP), and the luminance ratio of the selected mask regions. The findings show that in most cases the morning session needs artificial lighting in order to achieve daylight comfort. However, in some conditions (e.g., 10° and 40° slat angles) the workplane illuminance level exceeds the maximum of 2000 lx in the second half of the day. Generally, mean window luminance shows a rising trend, and the most unpleasant cases occur after 2 P.M. Considering the luminance criteria rating, uncomfortable conditions occur in the afternoon session. Surprisingly, even in the no-blind condition, extreme window/task luminance ratios are not common. Regarding daylight glare probability, no DGP value higher than 0.35 was recorded in this experiment.Keywords: daylighting, energy simulation, office environment, Venetian blind
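Evalglare derives DGP from HDR captures; as a rough illustration of the quantity being reported, and not of the study's actual processing chain, the sketch below evaluates the Wienold-Christoffersen DGP equation for glare sources that are assumed to have already been extracted, where Ev is vertical eye illuminance (lx) and each source has a luminance Ls (cd/m²), solid angle ωs (sr), and Guth position index P. All input values are hypothetical.

```python
# Hedged sketch of the daylight glare probability (DGP) equation
# (Wienold & Christoffersen), assuming glare sources were already
# extracted from an HDR image (e.g., by Evalglare).
import math

def dgp(e_v: float, sources: list[tuple[float, float, float]]) -> float:
    """e_v: vertical eye illuminance [lx];
    sources: (luminance Ls [cd/m^2], solid angle omega_s [sr], position index P)."""
    glare_term = sum((ls ** 2 * omega) / (e_v ** 1.87 * p ** 2)
                     for ls, omega, p in sources)
    return 5.87e-5 * e_v + 9.18e-2 * math.log10(1.0 + glare_term) + 0.16

# Illustrative numbers only: one bright window patch seen from the workplane.
print(round(dgp(2500.0, [(4500.0, 0.02, 1.2)]), 3))  # values below ~0.35 read as low glare
```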
Procedia PDF Downloads 260954 Heat Vulnerability Index (HVI) Mapping in Extreme Heat Days Coupled with Air Pollution Using Principal Component Analysis (PCA) Technique: A Case Study of Amiens, France
Authors: Aiman Mazhar Qureshi, Ahmed Rachid
Abstract:
Extreme heat events are an emerging environmental health concern in dense urban areas due to anthropogenic activities. High spatial and temporal resolution heat maps are important for urban heat adaptation and mitigation, helping to indicate hotspots that require the attention of city planners. The Heat Vulnerability Index (HVI) is an important approach used by decision-makers and urban planners to identify heat-vulnerable communities and areas that require heat stress mitigation strategies. Amiens is a medium-sized French city where the average temperature has increased by +1°C since the year 2000. Extreme heat events were recorded in the month of July in the last three consecutive years, 2018, 2019, and 2020. Poor air quality, especially ground-level ozone, has been observed mainly during the same hot periods. In this study, we evaluated the HVI in Amiens during the extreme heat days recorded in those three years (2018, 2019, 2020). The Principal Component Analysis (PCA) technique was used for fine-scale vulnerability mapping. The main data considered to develop the HVI model are (a) socio-economic and demographic data; (b) air pollution; (c) land use and land cover; (d) elderly heat illness; (e) social vulnerability; and (f) remote sensing data (land surface temperature (LST), mean elevation, NDVI, and NDWI). The output maps identified the hot zones through comprehensive GIS analysis. The resultant map shows that high HVI exists in three typical areas: (1) areas where population density is high and vegetation cover is sparse; (2) artificial surfaces (built-up areas); and (3) industrial zones that release thermal energy and ground-level ozone. Areas with low HVI are located in natural landscapes such as rivers and grasslands. The study also illustrates system theory with a causal diagram, derived after data analysis, in which anthropogenic activities and air pollution correspond with extreme heat events in the city. Our suggested index can be a useful tool to guide urban planners, municipalities, decision-makers, and public health professionals in targeting areas at high risk of extreme heat and air pollution for future adaptation and mitigation measures.Keywords: heat vulnerability index, heat mapping, heat health-illness, remote sensing, urban heat mitigation
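As a sketch of the PCA step only (the indicator names, input file, and weighting scheme are assumptions, not the study's exact pipeline), the example below standardizes a set of vulnerability indicators per spatial unit, retains the leading principal components, and combines them, weighted by explained variance, into a single index that can then be joined back onto the GIS layer for mapping.

```python
# Hedged sketch: build a PCA-based heat vulnerability index from standardized
# indicators (hypothetical column names; one row per grid cell / census block).
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

indicators = ["pop_density", "pct_elderly", "ozone_level", "impervious_pct",
              "lst_celsius", "ndvi"]                      # illustrative indicator set
df = pd.read_csv("amiens_grid_indicators.csv")            # assumed input table

X = StandardScaler().fit_transform(df[indicators])
X[:, indicators.index("ndvi")] *= -1                      # vegetation lowers vulnerability

pca = PCA(n_components=0.8, svd_solver="full")            # keep ~80% of the variance
scores = pca.fit_transform(X)

# Weight each retained component by its explained-variance ratio and sum into one index
hvi_raw = scores @ pca.explained_variance_ratio_
df["HVI"] = (hvi_raw - hvi_raw.min()) / (hvi_raw.max() - hvi_raw.min())  # rescale to 0-1

df[["cell_id", "HVI"]].to_csv("amiens_hvi.csv", index=False)  # join to the GIS layer afterwards
```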
Procedia PDF Downloads 151953 An Exploration of Why Insider Fraud Is the Biggest Threat to Your Business
Authors: Claire Norman-Maillet
Abstract:
Insider fraud, otherwise known as occupational, employee, or internal fraud, is a financial crime threat. It is perpetrated by defrauding (or attempting to defraud) one's current, prospective, or past employer, where 'employee' covers anyone employed by the company, including board members and contractors. The Coronavirus pandemic has forced insider fraud into the spotlight, and it isn't dimming. Because the focus of most academics and practitioners has historically been on 'external fraud', insider fraud is often overlooked or not considered to be a real threat. However, since COVID-19 changed the working world, pushing most of us into remote or hybrid working, employers cannot easily keep an eye on what their staff are doing, which has led to a reliance on trust and transparency. This, in turn, brings an increased risk of insider fraud perpetration. The objective of this paper is to explore why insider fraud is now the biggest threat to a business. To achieve the research objective, individuals within the financial crime sector (either practitioners or consultants) took part in semi-structured interviews with the researcher. The principal recruitment strategy for these individuals was via the researcher's LinkedIn network. The main findings suggest that insider fraud has been ignored and rejected as a threat to a business, owing to a reluctance to admit that a colleague may perpetrate it. A positive of the Coronavirus pandemic is that it has forced insider fraud into a more prominent position, giving it more importance on a business's agenda and risk register. Despite insider fraud always having been a possibility (and therefore a risk) within any business, it is very rare that a business has given it the attention it requires until now, if at all. The research concludes that insider fraud needs to be prioritised by all businesses, even ahead of external fraud. The research also provides advice on how a business can add new controls or enhance existing ones to mitigate the risk.Keywords: insider fraud, occupational fraud, COVID-19, COVID, coronavirus, pandemic, internal fraud, financial crime, economic crime
Procedia PDF Downloads 67952 Systematic Review of Digital Interventions to Reduce the Carbon Footprint of Primary Care
Authors: Anastasia Constantinou, Panayiotis Laouris, Stephen Morris
Abstract:
Background: Climate change has been reported as one of the worst threats to healthcare. The healthcare sector is a significant contributor to greenhouse gas emissions, with primary care responsible for 23% of the NHS's total carbon footprint. Digital interventions, primarily focusing on telemedicine, offer a route to change. This systematic review aims to quantify and characterize the carbon footprint savings associated with the implementation of digital interventions in the primary care setting. Methods: A systematic review of published literature was conducted according to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. The MEDLINE, PubMed, and Scopus databases as well as Google Scholar were searched using key terms relating to "carbon footprint," "environmental impact," "sustainability," "green care," "primary care," and "general practice," with citation tracking used to identify additional articles. Data were extracted and analyzed in Microsoft Excel. Results: Eight studies were identified, conducted in four different countries between 2010 and 2023. Four studies used interventions to address primary care services, three studies focused on the interface between primary and specialist care, and one study addressed both. Digital interventions included the use of mobile applications, online portals, access to electronic medical records, electronic referrals, electronic prescribing, video consultations, and the use of autonomous artificial intelligence. Only one study carried out a complete life cycle assessment to determine the carbon footprint of the intervention. It estimated that digital interventions reduced the carbon footprint by 5.1 kg CO₂/visit at the primary care level and by 13.4 kg CO₂/visit at the interface with specialist care. When assessing the relationship between travel distance saved and savings in emissions, we identified a strong correlation, suggesting that most of the carbon footprint reduction is attributable to reduced travel. However, two studies also reported environmental savings associated with reduced use of paper. Patient savings in the form of reduced fuel cost and reduced travel time were also identified. Conclusion: All studies identified significant reductions in carbon footprint following the implementation of digital interventions. In the future, controlled, prospective studies incorporating complete life cycle assessments and accounting for double-consulting effects, use of additional resources, technical failures, quality of care, and cost-effectiveness are needed to fully appreciate the sustainability benefits of these interventions.Keywords: carbon footprint, environmental impact, primary care, sustainable healthcare
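To make the travel-to-emissions relationship concrete, here is a back-of-the-envelope sketch; the per-kilometre car emission factor and the distances are illustrative assumptions, not figures taken from the reviewed studies. Avoided emissions are simply the round-trip distance no longer driven multiplied by an emission factor, which is why savings correlate so strongly with travel distance; at this assumed factor, a 30 km round trip corresponds to roughly 5 kg CO₂e, the same order of magnitude as the per-visit saving reported above.

```python
# Toy estimate of CO2e avoided by replacing in-person visits with remote ones.
# The 0.17 kg CO2e/km car factor and the distances are illustrative assumptions.
CAR_EMISSION_FACTOR_KG_PER_KM = 0.17

def avoided_emissions_kg(round_trip_km: float, visits_avoided: int) -> float:
    """Emissions avoided = distance no longer driven x per-km emission factor."""
    return round_trip_km * visits_avoided * CAR_EMISSION_FACTOR_KG_PER_KM

print(avoided_emissions_kg(round_trip_km=30.0, visits_avoided=1))     # ~5.1 kg CO2e per visit
print(avoided_emissions_kg(round_trip_km=30.0, visits_avoided=1000))  # ~5100 kg CO2e per 1000 visits
```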
Procedia PDF Downloads 63