Search results for: ethical sensitivity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2682


792 Infrared Lightbox and iPhone App for Improving Detection Limit of Phosphate Detecting Dip Strips

Authors: H. Heidari-Bafroui, B. Ribeiro, A. Charbaji, C. Anagnostopoulos, M. Faghri

Abstract:

In this paper, we report the development of a portable and inexpensive infrared lightbox for improving the detection limits of paper-based phosphate devices. Commercial paper-based devices utilize the molybdenum blue protocol to detect phosphate in the environment. Although these devices are easy to use and have a long shelf life, their main deficiency is their low sensitivity based on the qualitative results obtained via a color chart. To improve the results, we constructed a compact infrared lightbox that communicates wirelessly with a smartphone. The system measures the absorbance of radiation for the molybdenum blue reaction in the infrared region of the spectrum. It consists of a lightbox illuminated by four infrared light-emitting diodes, an infrared digital camera, a Raspberry Pi microcontroller, a mini-router, and an iPhone to control the microcontroller. An iPhone application was also developed to analyze images captured by the infrared camera in order to quantify phosphate concentrations. Additionally, the app connects to an online data center to present a highly scalable worldwide system for tracking and analyzing field measurements. In this study, the detection limits for two popular commercial devices were improved by a factor of 4 for the Quantofix devices (from 1.3 ppm using visible light to 300 ppb using infrared illumination) and a factor of 6 for the Indigo units (from 9.2 ppm to 1.4 ppm) with repeatability of less than or equal to 1.2% relative standard deviation (RSD). The system also provides more granular concentration information compared to the discrete color chart used by commercial devices and it can be easily adapted for use in other applications.
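The image-based quantification step described above can be sketched as follows. This is a minimal illustration, not the authors' app code: the Beer-Lambert absorbance estimate uses a blank strip as the reference intensity, and the calibration slope is an assumed value for demonstration.

```python
import numpy as np

def absorbance(sample_region, blank_region):
    """Beer-Lambert absorbance estimated from mean pixel intensities,
    using a blank (no-analyte) strip region as the reference I0."""
    return -np.log10(np.mean(sample_region) / np.mean(blank_region))

def concentration(a, slope, intercept=0.0):
    """Map absorbance to concentration via a linear calibration curve."""
    return (a - intercept) / slope

# hypothetical 8-bit IR image patches: the analyte darkens the strip
blank = np.full((10, 10), 200.0)
sample = np.full((10, 10), 120.0)

a = absorbance(sample, blank)
c = concentration(a, slope=0.15)  # slope is an assumed calibration value
```

In practice the app would average over the reaction zone of the strip, and the calibration curve would be fitted from standards measured in the same lightbox.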

Keywords: infrared lightbox, paper-based device, phosphate detection, smartphone colorimetric analyzer

Procedia PDF Downloads 123
791 Development of a Sensitive Electrochemical Sensor Based on Carbon Dots and Graphitic Carbon Nitride for the Detection of 2-Chlorophenol and Arsenic

Authors: Theo H. G. Moundzounga

Abstract:

Arsenic and 2-chlorophenol are priority pollutants that pose serious health threats to humans and ecology. An electrochemical sensor, based on graphitic carbon nitride (g-C₃N₄) and carbon dots (CDs), was fabricated and used for the determination of arsenic and 2-chlorophenol. The g-C₃N₄/CDs nanocomposite was prepared via a microwave irradiation heating method and was drop-dried on the surface of a glassy carbon electrode (GCE). Transmission electron microscopy (TEM), X-ray diffraction (XRD), photoluminescence (PL), Fourier transform infrared spectroscopy (FTIR), and UV-Vis diffuse reflectance spectroscopy (UV-Vis DRS) were used to characterize the structure and morphology of the nanocomposite. Electrochemical characterization was done by electrochemical impedance spectroscopy (EIS) and cyclic voltammetry (CV). The electrochemical behaviors of arsenic and 2-chlorophenol on different electrodes (GCE, CDs/GCE, and g-C₃N₄/CDs/GCE) were investigated by differential pulse voltammetry (DPV). The results demonstrated that the g-C₃N₄/CDs/GCE significantly enhanced the oxidation peak current of both analytes. The detection sensitivity for both analytes was greatly improved, suggesting that this new modified electrode has great potential for the determination of trace levels of arsenic and 2-chlorophenol. Experimental conditions affecting the electrochemical response of arsenic and 2-chlorophenol were studied; the oxidation peak currents displayed a good linear relationship to concentration for 2-chlorophenol (R²=0.948, n=5) and arsenic (R²=0.9524, n=5), with a linear range from 0.5 to 2.5 μM for both analytes and detection limits of 2.15 μM and 0.39 μM, respectively. The modified electrode was used to determine arsenic and 2-chlorophenol in spiked tap and effluent water samples by the standard addition method, and the results were satisfactory. According to these measurements, the new modified electrode is a promising chemical sensor for the determination of other phenols as well.
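The linear calibration and detection-limit estimation behind figures like those above can be illustrated with a short sketch. The peak currents below are hypothetical, and the 3.3·σ/slope rule is a common IUPAC-style estimate rather than the exact procedure used by the author:

```python
import numpy as np

# hypothetical DPV peak currents (µA) at five concentrations (µM),
# mimicking the 0.5-2.5 µM linear range reported above
conc = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
current = np.array([1.9, 4.1, 5.8, 8.2, 9.9])

slope, intercept = np.polyfit(conc, current, 1)  # least-squares line
pred = slope * conc + intercept
resid_ss = np.sum((current - pred) ** 2)
residual_sd = np.sqrt(resid_ss / (len(conc) - 2))  # sd about the regression

# IUPAC-style estimate: LOD = 3.3 * sd / slope
lod = 3.3 * residual_sd / slope
r_squared = 1 - resid_ss / np.sum((current - current.mean()) ** 2)
```

The same fit yields the R² values quoted in the abstract; the quality of the fit directly controls the achievable detection limit.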

Keywords: electrochemistry, electrode, limit of detection, sensor

Procedia PDF Downloads 145
790 A Vehicle Detection and Speed Measurement Algorithm Based on Magnetic Sensors

Authors: Panagiotis Gkekas, Christos Sougles, Dionysios Kehagias, Dimitrios Tzovaras

Abstract:

Cooperative intelligent transport systems (C-ITS) can greatly improve safety and efficiency in road transport by enabling communication not only between vehicles themselves but also between vehicles and infrastructure. For that reason, traffic surveillance systems on the road are of great importance. This paper focuses on the development of an on-road unit comprising several magnetic sensors for real-time vehicle detection, movement direction determination, and speed measurement. Magnetic sensors can sense and measure changes in the earth’s magnetic field. Vehicles are composed of many parts with ferromagnetic properties. Depending on the sensors’ sensitivity, changes in the earth’s magnetic field caused by passing vehicles can be detected and analyzed in order to extract information on the properties of moving vehicles. In this paper, we present a prototype algorithm for real-time, high-accuracy vehicle detection and speed measurement, which can be implemented as a portable, low-cost solution that is non-invasive to existing infrastructure, with the potential to replace existing high-cost implementations. The paper describes the algorithm and presents results from its preliminary lab testing under close-to-real conditions. Acknowledgments: Work presented in this paper was co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation (call RESEARCH–CREATE–INNOVATE) under contract no. Τ1EDK-03081 (project ODOS2020).
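One common way such a unit estimates speed is from the time offset between the magnetic signatures at two sensors a known distance apart. The sketch below is a generic illustration of that idea, not the authors' algorithm: the sensor readings, spacing, and detection threshold are all invented values.

```python
import numpy as np

def detect_passing(signal, baseline, threshold, fs):
    """Return the time (s) at which the magnetic signature first exceeds
    the detection threshold above the calibrated baseline, or None."""
    idx = np.flatnonzero(np.abs(signal - baseline) > threshold)
    return idx[0] / fs if idx.size else None

# hypothetical readings from two sensors placed 2.0 m apart (fs = 1 kHz)
fs, d = 1000, 2.0
t = np.arange(0, 1.0, 1 / fs)
baseline = 50.0
sig_a = baseline + 8.0 * np.exp(-((t - 0.30) / 0.02) ** 2)  # bump at 0.30 s
sig_b = baseline + 8.0 * np.exp(-((t - 0.38) / 0.02) ** 2)  # bump at 0.38 s

t_a = detect_passing(sig_a, baseline, threshold=4.0, fs=fs)
t_b = detect_passing(sig_b, baseline, threshold=4.0, fs=fs)
speed = d / (t_b - t_a)  # m/s; the sign of (t_b - t_a) gives direction
```

Real signatures are noisier and vehicle-dependent, which is why sensor sensitivity and threshold choice matter so much in practice.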

Keywords: magnetic sensors, vehicle detection, speed measurement, traffic surveillance system

Procedia PDF Downloads 123
789 Study on the Impact of Power Fluctuation, Hydrogen Utilization, and Fuel Cell Stack Orientation on the Performance Sensitivity of PEM Fuel Cell

Authors: Majid Ali, Xinfang Jin, Victor Eniola, Henning Hoene

Abstract:

The performance of proton exchange membrane (PEM) fuel cells is sensitive to several factors, including power fluctuations, hydrogen utilization, and the orientation of the fuel cell stack. In this study, we investigate the impact of these factors on the performance of a PEM fuel cell. We start by analyzing the power fluctuations that are typical of renewable energy systems and their effects on the performance of a 50 W fuel cell. Next, we examine the hydrogen utilization rate (0-1000 mL/min) and its impact on the cell's efficiency and durability. Finally, we investigate the orientation of the fuel cell stack (three different positions), which can significantly affect the cell's lifetime and overall performance. Our analysis is based on experimental results, which are further validated by comparison with simulations and manufacturer data. Our results indicate that power fluctuations can cause significant variations in the fuel cell's voltage and current, leading to a reduction in its performance. Moreover, we show that increasing the hydrogen utilization rate beyond a certain threshold can lead to a decrease in the fuel cell's efficiency. Finally, our analysis demonstrates that the orientation of the fuel cell stack can affect its performance and lifetime due to the non-uniform distribution of reactants and products. In summary, our study highlights the importance of considering power fluctuations, hydrogen utilization, and stack orientation in designing and optimizing PEM fuel cell systems. The findings can be useful for researchers and engineers working on the development of fuel cell systems for various applications, including transportation, stationary power generation, and portable devices.
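The voltage/current variations under fluctuating load can be pictured with a textbook polarization-curve model (activation, ohmic, and concentration losses). This is a generic sketch, not the authors' model; every parameter value below is assumed and unrelated to their 50 W stack.

```python
import numpy as np

def cell_voltage(i, e0=1.0, b=0.05, r=0.2, i_lim=1.5):
    """Single-cell voltage (V) vs current density (A/cm^2) from a textbook
    polarization model: activation (Tafel), ohmic, and concentration losses."""
    i = np.asarray(i, dtype=float)
    activation = b * np.log(i / 1e-3)          # Tafel term vs exchange current
    ohmic = r * i                              # membrane/contact resistance
    concentration = -0.05 * np.log(1 - i / i_lim)  # mass-transport limit
    return e0 - activation - ohmic - concentration

i = np.linspace(0.01, 1.4, 50)
v = cell_voltage(i)
power = v * i  # W/cm^2; a fluctuating demand moves the operating point along this curve
```

A power fluctuation shifts the operating point along this monotonically falling curve, which is why demand spikes show up directly as voltage sags.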

Keywords: fuel cell, proton exchange membrane, renewable energy, power fluctuation, experimental

Procedia PDF Downloads 135
788 Maternal and Neonatal Outcomes in Women Undergoing Bariatric Surgery: A Systematic Review and Meta-Analysis

Authors: Nicolas Galazis, Nikolina Docheva, Constantinos Simillis, Kypros Nicolaides

Abstract:

Background: Obese women are at increased risk of many pregnancy complications, and bariatric surgery (BS) before pregnancy has been shown to improve some of these. Objectives: To review the current literature and quantitatively assess the obstetric and neonatal outcomes in pregnant women who have undergone BS. Search Strategy: The MEDLINE, EMBASE and Cochrane databases were searched using relevant keywords to identify studies that reported on pregnancy outcomes after BS. Selection Criteria: Pregnancy outcomes were compared, firstly, between women after BS and obese or BMI-matched women with no BS, and secondly, between women after BS and the same or different women before BS. Only observational studies were included. Data Collection and Analysis: Two investigators independently collected data on study characteristics and outcome measures of interest. These were analysed using the random effects model. Heterogeneity was assessed, and sensitivity analysis was performed to account for publication bias. Main Results: The entry criteria were fulfilled by 17 non-randomised cohort or case-control studies, including seven with high methodological quality scores. In the BS group, compared to controls, there was a lower incidence of preeclampsia (OR 0.45, 95% CI 0.25-0.80; p=0.007), GDM (OR 0.47, 95% CI 0.40-0.56; p<0.001) and large neonates (OR 0.46, 95% CI 0.34-0.62; p<0.001), and a higher incidence of small neonates (OR 1.93, 95% CI 1.52-2.44; p<0.001), preterm birth (OR 1.31, 95% CI 1.08-1.58; p=0.006), admission for neonatal intensive care (OR 1.33, 95% CI 1.02-1.72; p=0.03) and maternal anaemia (OR 3.41, 95% CI 1.56-7.44; p=0.002). Conclusions: BS as a whole improves some pregnancy outcomes. Laparoscopic adjustable gastric banding does not appear to increase the rate of small neonates seen with other BS procedures. Obese women of childbearing age undergoing BS need to be aware of these outcomes.
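The random-effects pooling used for odds ratios like those above can be sketched with the standard inverse-variance approach and the DerSimonian-Laird estimate of between-study variance. The study-level log odds ratios and standard errors below are hypothetical, not the paper's data.

```python
import numpy as np

def pool_log_or(log_or, se):
    """Random-effects pooling of study log odds ratios
    (DerSimonian-Laird estimate of between-study variance tau^2)."""
    w = 1.0 / se**2                        # fixed-effect weights
    fixed = np.sum(w * log_or) / np.sum(w)
    q = np.sum(w * (log_or - fixed) ** 2)  # Cochran's Q heterogeneity statistic
    df = len(log_or) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)          # truncated at zero
    w_star = 1.0 / (se**2 + tau2)          # random-effects weights
    return np.exp(np.sum(w_star * log_or) / np.sum(w_star))

# hypothetical ORs and standard errors from three studies of GDM after BS
pooled_or = pool_log_or(np.log(np.array([0.40, 0.55, 0.48])),
                        np.array([0.20, 0.15, 0.25]))
```

When Q does not exceed its degrees of freedom (little heterogeneity), tau² is truncated to zero and the random-effects estimate coincides with the fixed-effect one, as it does for this toy input.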

Keywords: bariatric surgery, pregnancy, preeclampsia, gestational diabetes, birth weight

Procedia PDF Downloads 407
787 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things

Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker

Abstract:

Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect changes in the stochastic process as quickly as possible, with a tolerable false alarm rate. However, sensors may differ in accuracy and sensitivity range, and their performance decays over time. As a result, the big time-series data collected by the sensors will contain uncertainties and are sometimes conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict, in order to achieve fast change detection and effectively deal with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we apply the method to estimate the minimum number of sensors that need to be combined, so that computational efficiency can be improved. A cumulative sum (CUSUM) test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
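The CUSUM stage of the framework can be sketched in isolation. This example uses a plain Gaussian log-likelihood-ratio CUSUM (Page's test) on a deterministic mean shift; the evidence-combination and pignistic-probability steps described above are omitted, and all parameters are illustrative.

```python
import numpy as np

def cusum_gaussian(x, mu0, mu1, sigma, h):
    """Page's CUSUM on the log-likelihood ratio of post- vs pre-change
    Gaussian models; returns the first alarm index, or None."""
    g = 0.0
    for n, xn in enumerate(x):
        # LLR of N(mu1, sigma) vs N(mu0, sigma) for one sample
        llr = (mu1 - mu0) / sigma**2 * (xn - (mu0 + mu1) / 2.0)
        g = max(0.0, g + llr)   # reset at zero keeps detection delay short
        if g > h:
            return n
    return None

# deterministic mean shift from 0 to 1 at sample 50
x = np.concatenate([np.zeros(50), np.ones(50)])
alarm = cusum_gaussian(x, mu0=0.0, mu1=1.0, sigma=1.0, h=5.0)
# alarm fires 11 samples after the change: drift is +0.5 per sample
```

The threshold h trades detection delay against false alarm rate, which is exactly the quickest-detection trade-off the abstract refers to.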

Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data

Procedia PDF Downloads 335
786 The Use of Metformin in Treatment of Polycystic Ovary Syndrome (PCOS) and Glucose Control in Pregnant Women with Gestational Diabetes Mellitus (GDM) at Tripoli Medical Center

Authors: Ebtisam A. Benomran, Abdurrauf M. Gusbi, Malak S. Elazarg, M. Sultan, Layla M. Kafu, Arwa M. Matoug, Esra E. Benamara

Abstract:

Normal pregnancy is associated with metabolic changes leading to decreased insulin sensitivity and reduced glucose tolerance; however, 3-5% of pregnant women proceed to develop gestational diabetes mellitus (GDM). Researchers have studied the use of metformin in many fields, including the benefit-to-risk balance of using metformin during pregnancy and the risk of fetotoxicity. In this study, we examined the use of metformin for glucose control in pregnant women with GDM and evaluated the safety of its use during the first trimester of pregnancy. A group of pregnant patients with GDM from the first trimester of pregnancy participated in this trial: non-smoking women aged 20-45 years with no family history of congenital malformation and no liver disease, who demonstrated good compliance at more than one visit over several months until delivery and were placed on metformin. Our study showed that all pregnant women in the studied group who used metformin 500 mg daily delivered healthy babies. A meta-analysis by the Motherisk program showed no increase in the incidence of malformations with the use of metformin during the first trimester of pregnancy. One hundred outpatients aged 20-40 years also participated in a survey on the general knowledge and awareness of diabetic patients regarding their illness and medication. The survey revealed that 90% of doctors do not give patients full information about their illness and the use of metformin during pregnancy, and about 65% of patients did not know about the hospital nutritionist or the appropriate diet for controlling diabetes. Respondents also recommended courses on first aid and rapid diagnosis, along with adherence to written procedures for dealing with such cases.

Keywords: gestational diabetes, malformations, metformin, pregnancy

Procedia PDF Downloads 493
785 Epilepsy Seizure Prediction by Effective Connectivity Estimation Using Granger Causality and Directed Transfer Function Analysis of Multi-Channel Electroencephalogram

Authors: Mona Hejazi, Ali Motie Nasrabadi

Abstract:

Epilepsy is a persistent neurological disorder that affects more than 50 million people worldwide. Hence, there is a need for an efficient prediction model for making a correct diagnosis of epileptic seizures and accurately predicting their type. In this study, we consider how Effective Connectivity (EC) patterns obtained from intracranial electroencephalographic (EEG) recordings reveal information about the dynamics of the epileptic brain and can be used to predict imminent seizures, enabling patients (and caregivers) to take appropriate precautions. We use this feature because we believe that effective connectivity begins to change near seizure onset, so seizures can be predicted accordingly. Results are reported on the standard Freiburg EEG dataset, which contains data from 21 patients suffering from medically intractable focal epilepsy. Six channels of EEG from each patient are considered, and effective connectivity is estimated using the Directed Transfer Function (DTF) and Granger Causality (GC) methods. We concentrate on the standard deviation of effective connectivity over time, and feature changes in five brain frequency sub-bands (alpha, beta, theta, delta, and gamma) are compared. The performance obtained by the proposed scheme in predicting seizures is as follows: the average prediction time is 50 minutes before seizure onset, the maximum sensitivity is approximately 80%, and the false positive rate is 0.33 FP/h. The DTF method is more suitable for predicting epileptic seizures, and in general the best results are obtained in the gamma and beta sub-bands. This research is significantly helpful for clinical applications, especially the development of online portable devices.
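A Granger-causality estimate of the kind used here can be sketched by comparing the residual variances of nested autoregressive models: if adding channel x's past improves prediction of channel y, x "Granger-causes" y. The data below are synthetic, and the paper's actual frequency-domain DTF computation is more involved.

```python
import numpy as np

def granger_gain(x, y, p=2):
    """Log ratio of residual sums of squares for predicting y from its own
    past (restricted) vs. its own past plus x's past (full), lag order p.
    Values clearly above zero indicate x Granger-causes y."""
    n = len(y)
    def lags(v):  # columns v_{t-1}, ..., v_{t-p} aligned with target v[p:]
        return np.column_stack([v[p - j: n - j] for j in range(1, p + 1)])
    target = y[p:]
    def rss(design):
        design = np.column_stack([np.ones(len(target)), design])
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.sum((target - design @ beta) ** 2)
    return np.log(rss(lags(y)) / rss(np.hstack([lags(y), lags(x)])))

# synthetic coupled signals: x drives y with a one-sample delay
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = np.zeros(500)
y[1:] = 0.8 * x[:-1] + rng.normal(scale=0.1, size=499)

gc_xy = granger_gain(x, y)  # large: x's past predicts y
gc_yx = granger_gain(y, x)  # near zero: y's past does not predict x
```

The asymmetry between the two directions is what makes effective connectivity a directed measure, unlike correlation.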

Keywords: effective connectivity, Granger causality, directed transfer function, epilepsy seizure prediction, EEG

Procedia PDF Downloads 469
784 Selection of Qualitative Research Strategy for Bullying and Harassment in Sport

Authors: J. Vveinhardt, V. B. Fominiene, L. Jeseviciute-Ufartiene

Abstract:

Relevance of Research: Qualitative research is still regarded as highly subjective and not sufficiently scientific to achieve objective research results. However, it is agreed that a qualitative study allows revealing the hidden motives of research participants, creating new theories, and highlighting the problem field. Enough research has been done to reveal these aspects of qualitative inquiry. However, each research area has its own specificity, and sport is unique due to the image of its participants, who are understood as strong and invincible. A sport participant might therefore find it difficult to recognize himself as a victim in the context of bullying and harassment. Accordingly, the researcher faces a dilemma in getting a victim in sport to speak at all. Thus, the ethical aspects of qualitative research become relevant. The many different fields of sport also make it difficult to determine the sample size of a study. The corresponding problem of this research is therefore which qualitative research strategies are the most suitable for revealing the phenomenon of bullying and harassment in sport, and why. Object of research: qualitative research strategy for bullying and harassment in sport. Purpose of the research: to analyze strategies of qualitative research and select one suitable for bullying and harassment in sport. Methods of research: analysis of scientific literature on the application of qualitative research to bullying and harassment. Research Results: Four main strategies are applied in qualitative research: inductive, deductive, retroductive, and abductive. Inductive and deductive strategies are commonly used in researching bullying and harassment in sport. The inductive strategy is applied as quantitative research in order to reveal and describe the prevalence of bullying and harassment in sport.
The deductive strategy is used through qualitative methods in order to explain the causes of bullying and harassment, and to predict the actions of participants in bullying and harassment in sport and the possible consequences of those actions. The most commonly used qualitative method for researching bullying and harassment in sport is the semi-structured interview, conducted either orally or in writing. However, these methods may restrict the openness of study participants, whether because of recording on a dictaphone or because written responses yield incomplete answers that cannot be refined with follow-up questions. Qualitative research is also increasingly shaped by technology-mediated data collection. For example, focus group research in a closed forum allows participants to interact freely with each other because the confidentiality of the selected participants is preserved, while the moderator can purposefully formulate and submit problem-solving questions to them. Hence, the application of intelligent technology in in-depth qualitative research can help discover new and specific information on bullying and harassment in sport. Acknowledgement: This research is funded by the European Social Fund according to the activity ‘Improvement of researchers’ qualification by implementing world-class R&D projects’ of Measure No. 09.3.3-LMT-K-712.

Keywords: bullying, focus group, harassment, narrative, sport, qualitative research

Procedia PDF Downloads 182
783 Lamb Waves Wireless Communication in Healthy Plates Using Coherent Demodulation

Authors: Rudy Bahouth, Farouk Benmeddour, Emmanuel Moulin, Jamal Assaad

Abstract:

Guided ultrasonic waves are used in Non-Destructive Testing (NDT) and Structural Health Monitoring (SHM) for inspection and damage detection. Recently, wireless data transmission using ultrasonic waves in solid metallic channels has gained popularity in industrial applications such as nuclear, aerospace, and smart vehicles. The idea is to find a good substitute for electromagnetic waves, since they are highly attenuated near metallic components due to Faraday shielding. The proposed solution is to use ultrasonic guided waves such as Lamb waves as an information carrier, owing to their ability to propagate over long distances. In addition, valuable information about the health of the structure can be extracted simultaneously. In this work, the reliable frequency bandwidth for communication is first extracted experimentally from dispersion curves. Then, an experimental platform for wireless communication using Lamb waves is described and built. The coherent demodulation algorithm used in telecommunications is then tested for Amplitude Shift Keying (ASK), On-Off Keying (OOK), and Binary Phase Shift Keying (BPSK) modulation techniques. Signal processing parameters such as threshold choice, number of cycles per bit, and bit rate are optimized. Experimental results are compared based on the average Bit Error Rate (BER). Results show high sensitivity to threshold selection for the ASK and OOK techniques, resulting in a bit rate decrease. The BPSK technique shows the highest stability and data rate among all tested modulation techniques.
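BPSK coherent demodulation of the kind tested above can be sketched as a correlation of each bit slot with the reference carrier. The carrier frequency, sampling rate, bit count, and noise level below are arbitrary illustrative values, not the parameters of the experimental platform.

```python
import numpy as np

rng = np.random.default_rng(42)
fs, fc, n_cycles = 48_000, 2_000, 10       # sample rate, carrier Hz, cycles per bit
spb = fs * n_cycles // fc                  # samples per bit (240)

bits = rng.integers(0, 2, 200)
t = np.arange(spb) / fs
carrier = np.sin(2 * np.pi * fc * t)

# BPSK: bit 0 -> phase 0 (+carrier), bit 1 -> phase pi (-carrier)
tx = np.concatenate([(1 - 2 * b) * carrier for b in bits])
rx = tx + rng.normal(scale=0.5, size=tx.size)   # additive channel noise

# coherent demodulation: correlate each bit slot with the reference carrier
corr = rx.reshape(-1, spb) @ carrier
decided = (corr < 0).astype(int)           # negative correlation -> bit 1
ber = np.mean(decided != bits)
```

Because the decision is a sign test on the correlation, BPSK has no amplitude threshold to tune, which matches the abstract's observation that it is less sensitive to threshold choice than ASK or OOK.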

Keywords: lamb waves communication, wireless communication, coherent demodulation, bit error rate

Procedia PDF Downloads 263
782 A Cost-Benefit Analysis of Routinely Performed Transthoracic Echocardiography in the Setting of Acute Ischemic Stroke

Authors: John Rothrock

Abstract:

Background: The role of transthoracic echocardiography (TTE) in the diagnosis and management of patients with acute ischemic stroke remains controversial. While many stroke subspecialists reserve TTE for selected patients, others consider the procedure obligatory for most or all acute stroke patients. This study was undertaken to assess the cost vs. benefit of 'routine' TTE. Methods: We examined a consecutive series of patients who were admitted to a single institution in 2019 for acute ischemic stroke and underwent TTE. We sought to determine the frequency with which the results of TTE led to a new diagnosis of cardioembolism, redirected therapeutic cerebrovascular management, and at least potentially influenced the short- or long-term clinical outcome. We recorded the direct cost associated with TTE. Results: There were 1076 patients in the study group, all of whom underwent TTE. TTE identified an unsuspected source of possible/probable cardioembolism in 62 patients (6%), confirmed an initially suspected source (primarily endocarditis) in an additional 13 (1%), and produced findings that stimulated subsequent testing diagnostic of possible/probable cardioembolism in 7 patients (<1%). TTE results potentially influenced the clinical outcome in a total of 48 patients (4%). With a total direct cost of $1.51 million, the mean cost per case wherein TTE results potentially influenced the clinical outcome in a positive manner was $31,375. Diagnostically and therapeutically, TTE was most beneficial in the 67 patients under the age of 55 who presented with 'cryptogenic' stroke, identifying patent foramen ovale (PFO) in 21 (31%); closure was performed in 19. Conclusions: The utility of TTE in the setting of acute ischemic stroke is modest, with its yield greatest in younger patients with cryptogenic stroke. Given the greater sensitivity of transesophageal echocardiography in detecting PFO and evaluating the aortic arch, TTE’s role in stroke diagnosis would appear to be limited.

Keywords: cardioembolic, cost-benefit, stroke, TTE

Procedia PDF Downloads 129
781 First Year Experience of International Students in Malaysian Universities

Authors: Nur Hidayah Iwani Mohd Kamal

Abstract:

Higher education institutions in Malaysia are challenged with a more socially and culturally diverse student population than ever before, especially with the increasing number of international students studying in Malaysia in recent years. The first year of university is a critical time in students’ lives. Students are not only developing intellectually; they are also establishing and maintaining personal relationships, developing an identity, making decisions about career and lifestyle, maintaining personal health and wellness, and developing an integrated philosophy of life. Higher education institutions work as diverse communities of learners to provide a supportive environment for first-year students, assisting them in their transition from high school to university. Although many universities are taking steps to improve the first-year experience for their new local and international students, these efforts must be organized and coordinated for the initiatives to be successful. The objectives of the study are to examine international students’ perceptions and interpretations of their first-year experiences in shaping and determining their attitudes toward study and the quality of their entire undergraduate academic career, and to identify an appropriate mechanism for addressing international students’ adjustment to the new environment in order to facilitate cross-cultural communication and create a coherent and meaningful first-year experience. A key construct in this study is that if universities wish to recruit and retain international students, it is their ethical responsibility to determine how they can best meet those students’ needs at the academic and social level and create a supportive ‘learning community’ as the foundation of their educational experience, hence facilitating cross-cultural communication and a coherent and meaningful first-year experience.
This study simultaneously focuses on the factors that influence a successful and satisfying transition to university life for first-year international students. The study employs mixed-method data collection involving semi-structured interviews, a questionnaire, classroom observation, and document analysis. It provides valuable insight into the struggles many international students face as they attempt to adjust not only to a new educational system but also to psychosocial and cultural challenges. It discusses some of the factors that affect international students during their first year at university in their quest to be academically successful. It concludes with recommendations on how Malaysian universities can provide these students with a good first-year experience, based on some of the best practices of universities around the world.

Keywords: first year experience, Malaysian universities, international students, education

Procedia PDF Downloads 288
780 Vulnerability of People to Climate Change: Influence of Methods and Computation Approaches on Assessment Outcomes

Authors: Adandé Belarmain Fandohan

Abstract:

Climate change has become a major concern globally, particularly in rural communities that have to find rapid coping solutions. Several vulnerability assessment approaches have been developed in recent decades. This comes with a higher risk that different methods will lead to different conclusions, making comparisons difficult and decision-making inconsistent across areas. The effect of methods and computational approaches on estimates of people’s vulnerability was assessed using data collected from the Gambia. Twenty-four indicators reflecting the vulnerability components (exposure, sensitivity, and adaptive capacity) were selected for this purpose. Data were collected through household surveys and key informant interviews. One hundred and fifteen respondents were surveyed across six communities and two administrative districts. Results were compared over three computational approaches: maximum value transformation normalization, z-score transformation normalization, and simple averaging. Regardless of the approach used, communities with high exposure to climate change and extreme events were the most vulnerable. Furthermore, vulnerability was strongly related to the socio-economic characteristics of farmers. The survey evidenced variability in vulnerability among communities and administrative districts. Comparing outputs across approaches, people in the study area were found to be highly vulnerable overall under the simple average and maximum value transformation approaches, whereas they were only moderately vulnerable under the z-score transformation approach. It is suggested that such approach-induced discrepancies be accounted for in international debates, and that assessment approaches be harmonized and standardized so that outputs are comparable across regions. This would also increase the relevance of decision-making for adaptation policies.
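The two normalization approaches compared above can be illustrated on toy data. The indicator values, the sign convention for adaptive capacity, and the aggregation by simple averaging are all invented for demonstration, not the study's actual index construction.

```python
import numpy as np

def max_value_norm(x):
    """Scale each indicator (column) to [0, 1] by its min-max range."""
    return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

def z_score_norm(x):
    """Standardize each indicator (column) to mean 0, sd 1."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

# hypothetical scores for 4 communities on 3 indicators
# (exposure, sensitivity, adaptive capacity)
raw = np.array([[0.9, 0.7, 0.2],
                [0.4, 0.5, 0.6],
                [0.8, 0.9, 0.3],
                [0.2, 0.3, 0.8]])

# adaptive capacity reduces vulnerability, so invert it before aggregating
signs = np.array([1.0, 1.0, -1.0])

vuln_max = (max_value_norm(raw) * signs).mean(axis=1)
vuln_z = (z_score_norm(raw) * signs).mean(axis=1)

rank_max = np.argsort(-vuln_max)  # most- to least-vulnerable community
rank_z = np.argsort(-vuln_z)
```

On this toy input the two normalizations agree on the ranking but not on absolute index values, which is exactly why category cut-offs such as "moderately" vs "highly" vulnerable can differ between methods.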

Keywords: maximum value transformation, simple averaging, vulnerability assessment, West Africa, z-score transformation

Procedia PDF Downloads 105
779 Time-Dependent Association between Recreational Cannabinoid Use and Memory Performance in Healthy Adults: A Neuroimaging Study of the Human Connectome Project

Authors: Kamyar Moradi

Abstract:

Background: There is mixed evidence regarding the association between recreational cannabinoid use and memory performance. One of the major reasons for this controversy is the set of use-related covariates that influence an individual's cognitive status. Adjusting for these confounding variables provides more accurate insight into the real effects of cannabinoid use on memory. In this study, we sought to investigate the association between recent recreational cannabinoid use and memory performance while correcting the model for other possible covariates, such as demographic characteristics and the duration and amount of cannabinoid use. Methods: Cannabinoid users were assigned to two groups based on the results of a THC urine drug screen (THC+ group: n = 110, THC- group: n = 410). The THC urine drug screen has high sensitivity and specificity for detecting cannabinoid use in the previous 3-4 weeks. The memory domain of the NIH Toolbox battery and brain MRI volumetric measures were compared between the groups while adjusting for confounding variables. Results: After Benjamini-Hochberg p-value correction, performance on all measured memory outcomes, including vocabulary comprehension, episodic memory, executive function/cognitive flexibility, processing speed, reading skill, working memory, and fluid cognition, was significantly weaker in the THC+ group (p < 0.05). The volumes of total gray matter and of the left supramarginal, right precuneus, right inferior/middle temporal, right hippocampus, left entorhinal, and right pars orbitalis regions were also significantly smaller in the THC+ group. Conclusions: This study provides evidence regarding the acute effect of recreational cannabis use on memory performance. Further studies are warranted to confirm the results.
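The Benjamini-Hochberg correction applied in the analysis controls the false discovery rate across the family of memory outcomes. A minimal implementation is sketched below; the seven p-values are hypothetical, not the study's results.

```python
import numpy as np

def benjamini_hochberg(p, alpha=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level alpha."""
    p = np.asarray(p, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # BH step-up: find the largest i with p_(i) <= alpha * i / m
    below = ranked <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True  # reject all hypotheses up to rank k
    return reject

# hypothetical raw p-values for seven memory outcomes
pvals = [0.001, 0.008, 0.012, 0.020, 0.030, 0.045, 0.200]
mask = benjamini_hochberg(pvals)
```

Note that the step-up rule can reject a hypothesis whose p-value exceeds its own per-rank threshold, as long as some later rank passes; this is what makes BH less conservative than Bonferroni.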

Keywords: brain MRI, cannabis, memory, recreational use, THC urine test

Procedia PDF Downloads 198
778 The Human Rights Code: Fundamental Rights as the Basis of Human-Robot Coexistence

Authors: Gergely G. Karacsony

Abstract:

Fundamental rights are the result of a thousand years' progress in legislation, adjudication, and legal practice. They serve as the framework for the peaceful cohabitation of people, protecting the individual from abuse by the government or violation by other people. Artificial intelligence, by contrast, is a development of the very recent past and one of the most important prospects for the future. Artificial intelligence is now capable of communicating and performing actions the same way as humans; such acts are sometimes impossible to tell apart from actions performed by flesh-and-blood people. In a world where human-robot interactions are more and more common, a new framework for peaceful cohabitation must be found. Artificial intelligence, being able to take part in almost any kind of interaction where personal presence is not necessary without being recognized as a non-human actor, is now able to break the law, violate people's rights, and disturb social peace in many other ways. Therefore, a code of peaceful coexistence must be found or created. We should consider whether human rights can serve as the code of ethical and rightful conduct in the new era of artificial intelligence and human coexistence. In this paper, we examine the applicability of fundamental rights to human-robot interactions as well as to actions of artificial intelligence performed without any human interaction. Robot ethics was a topic of discussion and debate in philosophy, ethics, computing, legal sciences, and science fiction writing long before the first functional artificial intelligence was introduced. Legal science and legislation have approached artificial intelligence from different angles, regulating different areas (e.g., data protection, telecommunications, copyright issues), but they are only chipping away at the mountain of legal issues concerning robotics. For a widely acceptable and permanent solution, a more general set of rules would be preferable to the detailed regulation of specific issues. We argue that human rights as recognized worldwide can be adapted to serve as a guideline and a common basis for the coexistence of robots and humans. This solution has many virtues: people do not need to adjust to a completely unknown set of standards, the system has proved itself able to withstand the trials of time, legislation is easier, and the actions of non-human entities are more easily adjudicated within their own framework. In this paper, we examine the system of fundamental rights (as defined in the most widely accepted sources, the 1966 UN International Covenants on Human Rights) and try to adapt each individual right to the actions of artificial intelligence actors; in each case, we examine the possible effects of such an approach on the legal system and on society, and finally we also examine its effect on the IT industry.

Keywords: human rights, robot ethics, artificial intelligence and law, human-robot interaction

Procedia PDF Downloads 244
777 Left Atrial Appendage Occlusion vs Oral Anticoagulants in Atrial Fibrillation and Coronary Stenting. The DESAFIO Registry

Authors: José Ramón López-Mínguez, Estrella Suárez-Corchuelo, Sergio López-Tejero, Luis Nombela-Franco, Xavier Freixa-Rofastes, Guillermo Bastos-Fernández, Xavier Millán-Álvarez, Raúl Moreno-Gómez, José Antonio Fernández-Díaz, Ignacio Amat-Santos, Tomás Benito-González, Fernando Alfonso-Manterola, Pablo Salinas-Sanguino, Pedro Cepas-Guillén, Dabit Arzamendi, Ignacio Cruz-González, Juan Manuel Nogales-Asensio

Abstract:

Background and objectives: The treatment of patients with non-valvular atrial fibrillation (NVAF) who need coronary stenting is challenging. The objective of the study was to determine whether left atrial appendage occlusion (LAAO) could be a feasible option and benefit these patients. To this end, we studied the impact of LAAO plus antiplatelet drugs vs oral anticoagulants (OAC) (including direct OAC) plus antiplatelet drugs on these patients' long-term outcomes. Methods: The results of 207 consecutive patients with NVAF who underwent coronary stenting were analyzed. A total of 146 patients were treated with OAC (75 with acenocoumarol, 71 with direct OAC) while 61 underwent LAAO. The median follow-up was 35 months. Patients also received antiplatelet therapy as prescribed by their cardiologist. The study received the proper ethical oversight. Results: Age (mean 75.7 years) and history of stroke were similar in both groups. However, the LAAO group had a more unfavorable baseline profile: more coronary artery disease, higher CHA2DS2-VASc and HAS-BLED scores, and more prior significant bleeding (BARC ≥ 2). The occurrence of major adverse events (death, stroke/transient ischemic events, major bleeding) and major cardiovascular events (cardiac death, stroke/transient ischemic attack, and myocardial infarction) was significantly higher in the OAC group compared to the LAAO group: 19.75% vs 9.06% (HR, 2.18; P = .008) and 6.37% vs 1.91% (HR, 3.34; P = .037), respectively. Conclusions: In patients with NVAF undergoing coronary stenting, LAAO plus antiplatelet therapy produced better long-term outcomes than treatment with OAC plus antiplatelet therapy, despite the unfavorable baseline characteristics of the LAAO group.

Keywords: stents, atrial fibrillation, anticoagulants, left atrial appendage occlusion

Procedia PDF Downloads 70
776 Prediction of Super-Response to Cardiac Resynchronisation Therapy

Authors: Vadim A. Kuznetsov, Anna M. Soldatova, Tatyana N. Enina, Elena A. Gorbatenko, Dmitrii V. Krinochkin

Abstract:

The aim of the study was to evaluate potential parameters associated with super-response to CRT. Methods: 60 CRT patients (mean age 54.3 ± 9.8 years; 80% men) with congestive heart failure (CHF) of NYHA functional class II-IV and left ventricular ejection fraction < 35% were enrolled. At baseline, 1 month, 3 months, and every 6 months after implantation, clinical, electrocardiographic, and echocardiographic parameters and NT-proBNP level were evaluated. According to the best decrease in left ventricular end-systolic volume (LVESV) (mean follow-up period 33.7 ± 15.1 months), patients were classified as super-responders (SR) (n=28; reduction in LVESV ≥ 30%) or non-SR (n=32; reduction in LVESV < 30%). Results: At baseline, the groups differed in age (58.1 ± 5.8 years in SR vs 50.8 ± 11.4 years in non-SR; p=0.003), gender (female gender 32.1% vs 9.4%, respectively; p=0.028), and width of the QRS complex (157.6 ± 40.6 ms in SR vs 137.6 ± 33.9 ms in non-SR; p=0.044). The percentage of LBBB was equal between groups (75% in SR vs 59.4% in non-SR; p=0.274). All parameters of mechanical dyssynchrony were higher in SR, but only the difference in left ventricular pre-ejection period (LVPEP) was statistically significant (153.0 ± 35.9 ms vs 129.3 ± 28.7 ms; p=0.032). NT-proBNP level was lower in SR (1581 ± 1369 pg/ml vs 3024 ± 2431 pg/ml; p=0.006). The survival rates were 100% in SR and 90.6% in non-SR (log-rank test P=0.002). Multiple logistic regression analysis showed that LVPEP (HR 1.024; 95% CI 1.004-1.044; P=0.017), baseline NT-proBNP level (HR 0.628; 95% CI 0.414-0.953; P=0.029), and age at baseline (HR 1.094; 95% CI 1.009-1.168; P=0.030) were independent predictors of CRT super-response. ROC curve analysis demonstrated a sensitivity of 71.9% and a specificity of 82.1% (AUC=0.827; p < 0.001) for this model in predicting super-response to CRT. Conclusion: Super-response to CRT is associated with better survival over the long term. Presence of LBBB was not associated with super-response. LVPEP, NT-proBNP level, and age at baseline can be used as independent predictors of CRT super-response.

Keywords: cardiac resynchronisation therapy, super-response, congestive heart failure, left bundle branch block

Procedia PDF Downloads 400
775 Admission C-Reactive Protein Serum Levels and In-Hospital Mortality in the Elderly Admitted to the Acute Geriatrics Department

Authors: Anjelika Kremer, Irina Nachimov, Dan Justo

Abstract:

Background: C-reactive protein (CRP) serum levels are commonly measured in hospitalized patients. The association between elevated admission CRP serum levels and in-hospital mortality has seldom been studied in the general population of elderly patients admitted to the acute Geriatrics department. Methods: A retrospective cross-sectional study was conducted at a tertiary medical center. Included were all elderly patients (age 65 years or more) admitted to a single acute Geriatrics department from the emergency room between April 2014 and January 2015. CRP serum levels were measured routinely in all patients within the first 24 hours of admission. A logistic regression analysis was used to study whether admission CRP serum levels were associated with in-hospital mortality independent of age, gender, functional status, and co-morbidities. Results: Overall, 498 elderly patients were included in the analysis: 306 (61.4%) female patients and 192 (38.6%) male patients. The mean age was 84.8±7.0 years (median: 85 years; IQR: 80-90 years). The mean admission CRP serum level was 43.2±67.1 mg/l (median: 13.1 mg/l; IQR: 2.8-51.7 mg/l). Overall, 33 (6.6%) elderly patients died during hospitalization. The logistic regression analysis showed that in-hospital mortality was independently associated with history of stroke (p < 0.0001), heart failure (p < 0.0001), and admission CRP serum levels (p < 0.0001), and to a lesser extent with age (p=0.042), collagen vascular disease (p=0.011), and recent venous thromboembolism (p=0.037). A receiver operating characteristic (ROC) curve showed that admission CRP serum levels predicted in-hospital mortality fairly well, with an area under the curve (AUC) of 0.694 (p < 0.0001). The cut-off value with maximal sensitivity and specificity was 19.7 mg/l. Conclusions: Admission CRP serum levels may be used to predict in-hospital mortality in the general population of elderly patients admitted to the acute Geriatrics department.
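The "cut-off value with maximal sensitivity and specificity" reported above is conventionally the ROC point maximizing Youden's J statistic (sensitivity + specificity - 1). A minimal sketch, using hypothetical CRP values rather than the study's data:

```python
def best_cutoff(values, labels):
    """Return the cutoff maximizing Youden's J = sensitivity + specificity - 1.
    labels: 1 = event (death), 0 = no event; assumes both classes occur."""
    best_c, best_j = None, -1.0
    for c in sorted(set(values)):  # each observed value is a candidate cutoff
        tp = sum(1 for v, y in zip(values, labels) if v >= c and y == 1)
        fn = sum(1 for v, y in zip(values, labels) if v < c and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < c and y == 0)
        fp = sum(1 for v, y in zip(values, labels) if v >= c and y == 0)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_c, best_j = c, j
    return best_c

# Hypothetical admission CRP values (mg/l) and in-hospital death flags:
crp = [2.8, 5.0, 13.1, 19.7, 51.7, 88.0]
died = [0, 0, 0, 1, 1, 1]
print(best_cutoff(crp, died))  # → 19.7
```

Scanning every observed value as a candidate cutoff is exactly how an empirical ROC curve is traced.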

Keywords: c-reactive protein, elderly, mortality, prediction

Procedia PDF Downloads 239
774 The Clinical Effectiveness of Off-The-Shelf Foot Orthoses on the Dynamics of Gait in Patients with Early Rheumatoid Arthritis

Authors: Vicki Cameron

Abstract:

Background: Rheumatoid arthritis (RA) typically affects the feet, and about 20% of patients present initially with foot and ankle symptoms. The use of custom moulded foot orthoses (FO) in the management of foot and ankle problems in RA is well documented in the literature. Off-the-shelf FO are thought to provide an effective alternative to custom moulded FO in patients with RA; however, they lack an evidence base. Objectives: To determine the effects of off-the-shelf FO on: 1. quality of life (QOL); 2. walking speed; 3. peak plantar pressure in the forefoot (PPPft). Methods: Thirty-five patients (six male and 29 female) participated in the study from 11/2006 to 07/2008. The age of the patients ranged from 26 to 80 years (mean 52.4 years; standard deviation [SD] 13.3 years). A repeated measures design was used, with patients presenting at baseline, three months, and six months. Patients were tested walking barefoot, shod, and shod with FO. The type of orthosis used was the Slimflex Plastic® (Algeos). The Leeds Foot Impact Scale (LFIS) was used to investigate QOL. The Vicon 612 motion analysis system was used to determine the effect of FO on walking speed. The F-scan walkway and in-shoe systems provided information on the effect on PPPft. Ethical approval was obtained in 07/2006. Data were analysed using SPSS version 15.0. Results/Discussion: The LFIS data were analysed with a repeated measures ANOVA. There was a significant improvement in the LFIS score with the use of the FO over the six months (p<0.01). A significant increase in walking speed with the orthoses was observed (p<0.01). Peak plantar pressure in the forefoot was reduced with the FO, as shown by a non-parametric Friedman's test (chi-square = 55.314, df=2, p<0.05). Conclusion: The results show that off-the-shelf FO are effective in managing foot problems in patients with RA. Patients reported an improved QOL with the orthoses, and further objective measurements were quantified to provide a rationale for this change. Patients demonstrated an increased walking speed, which has been shown to be associated with reduced pain. The FO decreased PPPft, the forefoot having been reported as a site of pain and ulceration in patients with RA. Salient clinical points: Off-the-shelf FO offer an effective alternative to custom moulded FO and can be dispensed chairside. This is crucial in the management of foot problems associated with RA, as early intervention is advocated due to the chronic and progressive nature of the disease.

Keywords: podiatry, rheumatoid arthritis, foot orthoses, gait analysis

Procedia PDF Downloads 260
773 Effect of Education Based on the Health Belief Model on Preventive Behaviors of Exposure to Secondhand Smoke among Women

Authors: Arezoo Fallahi

Abstract:

Introduction: Exposure to secondhand smoke is an important global health problem that threatens the health of people, especially children and women. The aim of this study was to determine the effect of education based on the Health Belief Model on preventive behaviors against exposure to secondhand smoke in women. Materials and Methods: This experimental study was performed in 2022 in Sanandaj, west of Iran. Seventy-four people were selected by simple random sampling and divided into an intervention group (37 people) and a control group (37 people). Data collection tools included demographic characteristics and a secondhand smoke exposure questionnaire based on the Health Belief Model. The training in the intervention group was conducted in three one-hour sessions in the comprehensive health service centers in the form of lectures, pamphlets, and group discussions. Data were analyzed using SPSS software version 21 and statistical tests such as correlation, paired t-test, and independent t-test. Results: The intervention and control groups were homogeneous before education; they were similar in terms of mean scores on the Health Belief Model constructs. After the educational intervention, however, the scores increased, including the mean perceived sensitivity score (from 17.62±2.86 to 19.75±1.23), perceived severity score (from 28.40±4.45 to 31.64±2), perceived benefits score (from 27.27±4.89 to 31.94±2.17), practice score (from 32.64±4.68 to 36.91±2.32), perceived barriers score (from 26.62±5.16 to 31.29±3.34), external cues to action (from 17.70±3.99 to 22.89±1.67), internal cues to action (from 16.59±2.95 to 18.75±1.03), and self-efficacy (from 19.83±3.99 to 23.37±1.43) (p < 0.05). Conclusion: The educational intervention designed based on the Health Belief Model was effective in promoting preventive behaviors against exposure to secondhand smoke in women.
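The paired t-test used above to compare pre/post intervention scores reduces to a short computation on the per-subject differences; an illustrative sketch with hypothetical score vectors, not the study's data:

```python
import math

def paired_t(before, after):
    """Paired t statistic for pre/post scores of the same subjects.
    Compare the result against the t distribution with df = n - 1."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    # Sample standard deviation of the differences (n - 1 denominator).
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean / (sd / math.sqrt(n))

print(paired_t([1, 2, 3, 4], [2, 3, 5, 5]))  # → 5.0
```

The same-subject pairing is what distinguishes this from the independent t-test the study used for between-group comparisons.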

Keywords: education, women, exposure to secondhand smoke, health belief model

Procedia PDF Downloads 73
772 Calpain-Mediated, Cisplatin-Induced Apoptosis in Breast Cancer Cells

Authors: Shadia Al-Bahlani, Khadija Al-Bulushi, Zuweina Al-Hadidi, Buthaina Al-Dhahl, Nadia Al-Abri

Abstract:

Breast cancer is the most common cancer in women worldwide. Triple-negative breast cancer (TNBC) is an aggressive type of breast cancer defined by the absence of estrogen (ER), progesterone (PR), and human epidermal growth factor (Her-2) receptors. The calpain system plays an important role in many cellular processes, including apoptosis, necrosis, cell signaling, and proliferation. However, the role of calpain in cisplatin (CDDP)-induced apoptosis in TNBC cells is not fully understood. Here, TNBC (MDA-MB231) cells were treated with different concentrations of CDDP (0, 20 & 40 µM), and calpain activation and apoptosis were measured by western blot and Hoechst stain, respectively. In addition, calpain was modulated by activation and/or inhibition, and the effect on CDDP-induced apoptosis was assessed by the same approaches. Our findings showed that, in MDA-MB231 cells, CDDP induced endoplasmic reticulum stress and thus calcium release, as indicated by increases in GRP78 and calmodulin protein expression, respectively, and subsequently activated calpain-mediated α-fodrin cleavage. It also induced apoptosis, as measured by Hoechst stain and caspase-12 cleavage. Calpain activation by both cyclopiazonic acid and thapsigargin showed a similar effect and enhanced the sensitivity of these cells to CDDP treatment. On the other hand, calpain inhibition by either specific siRNA and/or an exogenous inhibitor (calpeptin) had the opposite effect, attenuating calpain activation and thus CDDP-induced apoptosis in these cells. Altogether, these findings suggest that calpain activation plays an essential role in sensitizing TNBC cells to CDDP-induced apoptosis. This might lead to the discovery of novel treatments to overcome this aggressive type of breast cancer.

Keywords: calpain, cisplatin, apoptosis, breast cancer

Procedia PDF Downloads 345
771 Computer-Aided Diagnosis System Based on Multiple Quantitative Magnetic Resonance Imaging Features in the Classification of Brain Tumor

Authors: Chih Jou Hsiao, Chung Ming Lo, Li Chun Hsieh

Abstract:

Brain tumors are not among the cancers with a high incidence rate, but their high mortality rate and poor prognosis still make them a big concern. On clinical examination, the grading of brain tumors depends on pathological features. However, histopathological analysis has weak points that can cause misgrading. For example, interpretations can vary in the absence of a well-established definition. Furthermore, the heterogeneity of malignant tumors makes it a challenge to extract meaningful tissue under surgical biopsy. With the development of magnetic resonance imaging (MRI), tumor grading can be accomplished by a noninvasive procedure. To improve diagnostic accuracy further, this study proposed a computer-aided diagnosis (CAD) system based on MRI features to provide suggestions for tumor grading. Gliomas are the most common type of malignant brain tumor (about 70%). This study collected 34 glioblastomas (GBMs) and 73 lower-grade gliomas (LGGs) from The Cancer Imaging Archive. After defining the regions of interest in the MRI images, multiple quantitative morphological features, such as region perimeter, region area, compactness, the mean and standard deviation of the normalized radial length, and moment features, were extracted from the tumors for classification. In the results, two of five morphological features and three of four image moment features achieved p values of <0.001, and the remaining moment feature had a p value <0.05. The CAD system using the combination of all features achieved an accuracy of 83.18% in classifying the gliomas into LGG and GBM. The sensitivity is 70.59% and the specificity is 89.04%. The proposed system can serve as a second reader for radiologists on clinical examinations.
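The morphological features named above (compactness, normalized radial length) can be sketched for a polygonal tumor contour as follows. This is an illustrative reconstruction under standard definitions, not the authors' implementation:

```python
import math

def shape_features(points):
    """Morphological features for a closed tumor contour given as (x, y)
    vertices: compactness (1.0 for a circle) and the mean/SD of the
    normalized radial length (centroid distance scaled by its maximum)."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    # Normalized radial length: distance from centroid, divided by the max.
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    norm = [r / max(radii) for r in radii]
    mean_nrl = sum(norm) / n
    sd_nrl = math.sqrt(sum((r - mean_nrl) ** 2 for r in norm) / n)
    # Perimeter of the closed contour, and area via the shoelace formula.
    perim = sum(math.hypot(points[(i + 1) % n][0] - points[i][0],
                           points[(i + 1) % n][1] - points[i][1])
                for i in range(n))
    area = abs(sum(points[i][0] * points[(i + 1) % n][1]
                   - points[(i + 1) % n][0] * points[i][1]
                   for i in range(n))) / 2
    compactness = perim ** 2 / (4 * math.pi * area)
    return compactness, mean_nrl, sd_nrl
```

For a unit square, `shape_features([(0, 0), (1, 0), (1, 1), (0, 1)])` gives compactness 4/π ≈ 1.27 and an SD of the normalized radial length of 0, since all corners are equidistant from the centroid; irregular tumor boundaries push both numbers up.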

Keywords: brain tumor, computer-aided diagnosis, gliomas, magnetic resonance imaging

Procedia PDF Downloads 263
770 Investigation and Analysis of Residential Building Energy End-Use Profile in Hot and Humid Area with Reference to Zhuhai City in China

Authors: Qingqing Feng, S. Thomas Ng, Frank Xu

Abstract:

Energy consumption in the domestic sector has been increasing rapidly in China over recent years. Confronted with environmental challenges, the international community has made a concerted effort by setting the Paris Agreement, the Sustainable Development Goals, and the New Urban Agenda. It is therefore very important for China to put forward reasonable countermeasures to boost building energy conservation, which necessitates looking into actual residential energy end-use profiles and their influencing factors. In this study, questionnaire surveys were conducted in Zhuhai, a typical Chinese city in the hot-summer, warm-winter climate zone. The data solicited mainly include the occupancy schedule, building information, resident information, household energy uses, the type, quantity, and use patterns of appliances, and occupants' satisfaction. Over 200 valid samples were collected through face-to-face interviews. Descriptive analysis, clustering analysis, correlation analysis, and sensitivity analysis were then conducted on the dataset to understand the energy end-use profile. The findings identify: 1) several typical clusters of occupancy patterns and appliance utilization patterns; 2) the top three sensitive factors influencing energy consumption; and 3) the correlations between satisfaction and energy consumption. For China, with its many different climate zones, it is difficult to find a silver bullet for energy conservation. The aim of this paper is to provide a theoretical basis for multiple stakeholders, including policy makers, residents, and academic communities, to formulate reasonable energy-saving blueprints for hot and humid urban residential buildings in China.

Keywords: residential building, energy end-use profile, questionnaire survey, sustainability

Procedia PDF Downloads 131
769 Alcohol and Soda Consumption of University Students in Manila

Authors: Alexi Colleen F. Lim, Inna Felicia I. Agoncillo, Quenniejoy T. Dizon, Jennifer Joyce T. Eti, Carlota Aileen H. Monares, Neil Roy B. Rosales, Joshua F. Santillan, Alyssa Francesca D. S. Tanchuling, Josefina A. Tuazon, Mary Joan Therese C. Valera-Kourdache

Abstract:

The majority of the leading causes of mortality in the Philippines are NCDs, which are preventable through control of known risk factors such as smoking, obesity, physical inactivity, and alcohol. Sugar-sweetened beverages such as soda and energy drinks also contribute to NCD risk and are of particular concern for youth. This study provides baseline data on the beverage consumption of university students in Manila, with a focus on alcohol and soda. It further aims to identify factors affecting consumption. Specific objectives include: (1) to describe the beverage consumption practices of university students in Manila; and (2) to determine factors promoting excessive consumption of alcohol and soda, including demographic characteristics, attitude, and interpersonal and environmental variables. Methods: The study employed a correlational design with randomly selected students from two universities in Manila. Students 18 years or older who agreed to participate were included after ethical clearance was obtained. The study had two instruments: (1) the World Health Organization's Alcohol Use Disorders Identification Test (AUDIT), used with permission, to determine excessive alcohol consumption; and (2) a questionnaire to obtain information regarding soda and energy drink consumption. Results: Of the 400 students surveyed, 70% were female and 78.75% were 18-20 years old (mean=19.79; SD=3.76). Among them, 51.50% consumed alcohol, with 30.10% being excessive drinkers. Soda consumption was 91.50%, with 37.70% excessive consumers. For energy drinks, 36.75% consume them and only 4.76% drink excessively. Using logistic regression, students who were more likely to be excessive alcohol drinkers belonged to non-health courses (OR=2.21) and purchased alcohol from bars (OR=7.84). Less likely to drink excessively were students who do not drink due to stress (OR=0.05) and who drink when it is accessible (OR=0.02). Excessive soda consumption was less likely for female students (OR=0.28) and for those who drink when it is accessible (OR=0.14), do not drink soda during stressful situations (OR=0.19), and do not use soda as a hangover treatment (OR=0.15). Conclusion: Excessive alcohol consumption was greater among students in Manila (30.10%) than in the US (20%). Drinking alcohol with friends was not related to excessive consumption, but availability in bars was. It is expected that health sciences students are less likely to engage in excessive alcohol use, as they are more aware of its ill effects. The prevalence of soda consumption in Manila (91.50%) is markedly higher than the 24.5% reported in the US. These findings can inform schools in developing appropriate health education interventions and policies. For greater understanding of these behaviors and factors, further studies are recommended to explore knowledge and other factors that may promote excessive consumption.
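For readers unfamiliar with the odds ratios (OR) reported by the logistic regression above, the underlying quantity for a single binary factor reduces to a 2x2 table computation; the counts below are hypothetical, not the study's data:

```python
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Odds ratio from a 2x2 table: (a/b) / (c/d). A value such as
    OR = 2.21 means the odds of the outcome (e.g. excessive drinking)
    are about 2.2x higher in the exposed group."""
    return (exposed_cases / exposed_noncases) / (unexposed_cases / unexposed_noncases)

# Hypothetical counts: 30/100 non-health students vs 15/100 health
# students are excessive drinkers.
print(round(odds_ratio(30, 70, 15, 85), 2))  # → 2.43
```

In the study's multivariable model the ORs are additionally adjusted for the other covariates, which a raw 2x2 table cannot do.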

Keywords: alcohol consumption, beverage consumption, energy drinks consumption, soda consumption, university students

Procedia PDF Downloads 281
768 Digitalization and High Audit Fees: An Empirical Study Applied to US Firms

Authors: Arpine Maghakyan

Abstract:

The purpose of this paper is to study the relationship between the level of industry digitalization and audit fees, and especially the relationship between Big 4 auditor fees and the industry digitalization level. On the one hand, automation of business processes decreases internal control weaknesses and manual mistakes and increases work effectiveness and integration. On the other hand, it may cause serious misstatements, high business risks, or even bankruptcy, typically in the early stages of automation. Incomplete automation can bring high audit risk, especially if the auditor does not fully understand the client's business automation model. Higher audit risk will consequently cause higher audit fees. Higher audit fees for clients with a high automation level are more pronounced in Big 4 auditors' behavior. Using data on US firms from 2005-2015, we found that industry-level digitalization moderates the effect of auditor quality on audit fees. Moreover, the choice of a Big 4 or non-Big 4 auditor is correlated with the client's industry digitalization level. A Big 4 client with a higher digitalization level pays more than one with a lower digitalization level. In addition, a highly digitalized firm that has a Big 4 auditor pays a higher audit fee than a non-Big 4 client. We use audit fees and firm-specific variables from the Audit Analytics and Compustat databases. We analyze the collected data using fixed effects regression methods and Wald tests for sensitivity checks. We use firm fixed-effects regression models to determine the connections between technology use in business and audit fees. We control for firm size, complexity, inherent risk, profitability, and auditor quality. We chose the fixed effects model because it makes it possible to control for variables that have not been or cannot be measured.
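The firm fixed-effects estimation described above is equivalent to the "within" transformation: demeaning each variable inside each firm before running OLS, which sweeps out time-invariant, unmeasured firm traits. A single-regressor sketch with hypothetical data (the study controls for several covariates):

```python
def within_ols(y, x, firm):
    """One-regressor firm fixed-effects slope via the within transform:
    demean y and x inside each firm, then OLS through the origin."""
    groups = {}
    for yi, xi, f in zip(y, x, firm):
        groups.setdefault(f, []).append((yi, xi))
    num = den = 0.0
    for obs in groups.values():
        ym = sum(o[0] for o in obs) / len(obs)
        xm = sum(o[1] for o in obs) / len(obs)
        for yi, xi in obs:
            num += (xi - xm) * (yi - ym)
            den += (xi - xm) ** 2
    return num / den  # slope net of firm-specific intercepts

# Firm "A" has a large fixed effect, firm "B" none; the true slope is 2.
print(within_ols([12, 14, 2, 6], [1, 2, 1, 3], ["A", "A", "B", "B"]))  # → 2.0
```

A pooled OLS on the same data would be biased by the firm effect; demeaning removes it exactly, which is why the authors can control for unmeasured firm characteristics.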

Keywords: audit fees, auditor quality, digitalization, Big4

Procedia PDF Downloads 302
767 Data-Driven Strategies for Enhancing Food Security in Vulnerable Regions: A Multi-Dimensional Analysis of Crop Yield Predictions, Supply Chain Optimization, and Food Distribution Networks

Authors: Sulemana Ibrahim

Abstract:

Food security remains a paramount global challenge, with vulnerable regions grappling with issues of hunger and malnutrition. This study embarks on a comprehensive exploration of data-driven strategies aimed at ameliorating food security in such regions. Our research employs a multifaceted approach, integrating data analytics to predict crop yields, optimizing supply chains, and enhancing food distribution networks. The study unfolds as a multi-dimensional analysis, commencing with the development of robust machine learning models harnessing remote sensing data, historical crop yield records, and meteorological data to foresee crop yields. These predictive models, underpinned by convolutional and recurrent neural networks, furnish critical insights into anticipated harvests, empowering proactive measures to confront food insecurity. Subsequently, the research scrutinizes supply chain optimization to address food security challenges, capitalizing on linear programming and network optimization techniques. These strategies intend to mitigate loss and wastage while streamlining the distribution of agricultural produce from field to fork. In conjunction, the study investigates food distribution networks with a particular focus on network efficiency, accessibility, and equitable food resource allocation. Network analysis tools, complemented by data-driven simulation methodologies, unveil opportunities for augmenting the efficacy of these critical lifelines. This study also considers the ethical implications and privacy concerns associated with the extensive use of data in the realm of food security. The proposed methodology outlines guidelines for responsible data acquisition, storage, and usage. The ultimate aspiration of this research is to forge a nexus between data science and food security policy, bestowing actionable insights to mitigate the ordeal of food insecurity. 
The holistic approach, converging data-driven crop yield forecasts, optimized supply chains, and improved distribution networks, aspires to revitalize food security in the most vulnerable regions, elevating the quality of life for millions worldwide.

Keywords: data-driven strategies, crop yield prediction, supply chain optimization, food distribution networks

Procedia PDF Downloads 63
766 Seepage Analysis through Earth Dam Embankment: Case Study of Batu Dam

Authors: Larifah Mohd Sidik, Anuar Kasa

Abstract:

In recent years, demand for raw water has been increasing along with the growth of the economy and population. Hence, the construction and operation of dams is one of the solutions to water resources management problems. The stability of the embankment should be taken into consideration to evaluate the safety of the retained water. The safety of a dam is mostly assessed from numerous measurable components, for instance, seepage flow rate, pore water pressure, and deformation of the embankment. Seepage and slope stability are the primary and most important considerations in ascertaining the overall safety behavior of dams. This research study was conducted to evaluate the static-condition seepage and slope stability performance of Batu dam, which is located in the capital city of Kuala Lumpur. The numerical software GeoStudio 2012 was employed to analyse seepage using the finite element method (SEEP/W) and slope stability using the limit equilibrium method (SLOPE/W) for reservoir level operation cases including normal and flooded conditions. Results of the seepage analysis using SEEP/W were used as parent input for the SLOPE/W analysis. A sensitivity analysis on the hydraulic conductivity of the materials was performed and calibrated to minimize the relative error of the SEEP/W simulation, and a comparison between observed field data and predicted values was also carried out. In the seepage analysis, quantities such as leakage flow rate, pore water distribution, and the location of the phreatic line are determined using SEEP/W. The results show that the clay core effectively lowered the phreatic surface, and no piping failure is indicated. Hence, the total seepage flux was acceptable and within the permissible limit.
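At its core, the seepage flux that SEEP/W-style analyses compute rests on Darcy's law, q = kiA; a one-function sketch for the homogeneous, saturated, steady-state case (all values hypothetical):

```python
def darcy_flux(k, head_drop, path_length, area):
    """Darcy's law, q = k * i * A: volumetric seepage flux through a
    saturated medium. k: hydraulic conductivity (m/s), head_drop and
    path_length (m) give the hydraulic gradient i, area (m^2)."""
    i = head_drop / path_length  # hydraulic gradient (dimensionless)
    return k * i * area          # flux in m^3/s

# Hypothetical clay core: k = 1e-7 m/s, 20 m head drop over a 100 m
# seepage path, through a 50 m^2 section.
print(darcy_flux(1e-7, 20.0, 100.0, 50.0))  # → 1e-06 m^3/s
```

A finite element solver generalizes this by solving for the head field over a heterogeneous mesh, which is also where the phreatic line mentioned above is located.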

Keywords: earth dam, dam safety, seepage, slope stability, pore water pressure

Procedia PDF Downloads 222
765 Thermolysin Entrapment in a Gold Nanoparticles/Polymer Composite: Construction of an Efficient Biosensor for Ochratoxin A Detection

Authors: Fatma Dridi, Mouna Marrakchi, Mohammed Gargouri, Alvaro Garcia Cruz, Sergei V. Dzyadevych, Francis Vocanson, Joëlle Saulnier, Nicole Jaffrezic-Renault, Florence Lagarde

Abstract:

An original method has been successfully developed for the immobilization of thermolysin onto gold interdigitated electrodes for the detection of ochratoxin A (OTA) in olive oil samples. A mix of polyvinyl alcohol (PVA), polyethylenimine (PEI), and gold nanoparticles (AuNPs) was used. The sensor chips were cross-linked in a saturated glutaraldehyde (GA) vapor atmosphere in order to render the two polymers water-stable. The performance of the AuNPs/(PVA/PEI) modified electrode was compared to a traditional enzyme immobilization method using bovine serum albumin (BSA). Atomic force microscopy (AFM) experiments were employed to provide useful insight into the structure and morphology of the immobilized thermolysin composite membranes. The enzyme immobilization method influences the topography and texture of the deposited layer. Biosensor optimization and analytical characteristics were studied. Under optimal conditions, the AuNPs/(PVA/PEI) modified electrode showed a higher increase in sensitivity: an enhancement factor of 700 could be achieved, with a detection limit of 1 nM. The newly designed OTA biosensors showed long-term stability and good reproducibility. The relevance of the method was evaluated using commercial spiked olive oil samples. No pretreatment of the sample was needed for testing, and no matrix effect was observed. Recovery values were close to 100%, demonstrating the suitability of the proposed method for OTA screening in olive oil.
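Detection limits such as the 1 nM figure above are commonly estimated with the 3-sigma criterion, LOD = 3·SD(blank)/slope of the calibration curve. A sketch under that assumption (the paper may use a different criterion, and the numbers below are hypothetical):

```python
def detection_limit(blank_sd, slope):
    """3-sigma limit of detection: the analyte level whose signal rises
    three blank standard deviations above the baseline, assuming a
    linear calibration with the given slope (signal units per nM)."""
    return 3 * blank_sd / slope

# Hypothetical: blank noise SD of 0.002 signal units, calibration
# slope of 6.0 signal units per nM.
print(detection_limit(0.002, 6.0))  # → LOD of about 0.001 nM
```

Raising the calibration slope (the sensitivity enhancement reported above) directly lowers the LOD under this criterion, which is why the enhancement factor and the detection limit improve together.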

Keywords: thermolysin, ochratoxin A, polyvinyl alcohol, polyethylenimine, gold nanoparticles, olive oil

Procedia PDF Downloads 591
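The abstract reports a 1 nM detection limit but not how it was derived. A common convention (the IUPAC 3-sigma criterion) estimates the limit of detection as three times the standard deviation of blank measurements divided by the calibration slope; the sketch below illustrates that calculation with hypothetical blank responses and a hypothetical sensitivity, not the authors' actual data.

```python
import statistics

def detection_limit(blank_signals, slope):
    """IUPAC-style LOD estimate: 3 * sigma_blank / calibration slope."""
    sigma = statistics.stdev(blank_signals)  # sample standard deviation
    return 3.0 * sigma / slope

blank = [0.52, 0.49, 0.51, 0.50, 0.53]  # hypothetical blank responses (uS)
slope = 0.047  # hypothetical sensitivity, uS per nM of OTA
print(f"estimated LOD = {detection_limit(blank, slope):.2f} nM")
```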
764 The Impact of Coronal STIR Imaging in Routine Lumbar MRI: Uncovering Hidden Causes to Enhance the Diagnostic Yield of Back Pain and Sciatica

Authors: Maysoon Nasser Samhan, Somaya Alkiswani, Abdullah Alzibdeh

Abstract:

Background: Routine lumbar MRIs for back pain may yield normal results despite persistent symptoms, suggesting causes of pain that are not visible on the routine sequences. Research suggests including coronal STIR imaging to detect additional pathologies such as sacroiliitis. Objectives: This study aims to enhance diagnostic accuracy and aid in determining treatment for patients with persistent back pain whose routine lumbar MRI (T1 and T2 images) is normal, by incorporating a coronal STIR sequence into the examination. Methods: This prospective study involved 274 patients (115 males and 159 females, aged 6–92 years) whose medical records and imaging data were reviewed following lumbar spine MRI. The study included patients with back pain and sciatica as their primary complaints, all of whom underwent lumbar spine MRI at our hospital to identify potential pathologies. Using a GE Signa HD 1.5T MRI system, each patient received a standard MRI protocol that included T1 and T2 sagittal and axial sequences, as well as a coronal STIR sequence. We collected relevant MRI findings, including abnormalities and structural variations, from the radiology reports, tabulated them as counts and percentages, and used Fisher's exact test to assess differences between categorical variables. Statistical analysis was conducted using GraphPad Prism software, version 10.1.2. The study adhered to ethical guidelines, institutional review board approvals, and patient confidentiality regulations. Results: When the coronal STIR sequence was excluded, 83 subjects (30.29%) were classified as within normal limits on MRI examination. Of these, 36 patients without abnormalities on T1 and T2 sequences showed abnormalities on the coronal STIR sequence: 26 cases were attributed to spinal pathologies and 10 to non-spinal pathologies. In addition, Fisher's exact test demonstrated a significant association between sacroiliitis diagnosis and abnormalities identified solely through the coronal STIR sequence (P < 0.0001). Conclusion: Incorporating coronal STIR imaging into routine lumbar MRI protocols has the potential to improve patient care by facilitating a more comprehensive evaluation and management of persistent back pain.

Keywords: magnetic resonance imaging, lumbar MRI, radiology, neurology

Procedia PDF Downloads 16
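The association test reported in the abstract, Fisher's exact test on a 2x2 contingency table, can be computed directly from the hypergeometric distribution. The sketch below implements the standard two-sided version using only the Python standard library; the example table (STIR-only abnormality vs. sacroiliitis diagnosis) uses made-up counts, since the abstract does not report the underlying 2x2 cell values.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the table [[a, b], [c, d]].
    Sums hypergeometric probabilities of all tables with the same margins
    whose probability does not exceed that of the observed table."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # Hypergeometric probability of x in the top-left cell
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo = max(0, row1 + col1 - n)  # smallest feasible top-left cell
    hi = min(row1, col1)          # largest feasible top-left cell
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))  # tolerance for float ties

# Hypothetical counts: rows = sacroiliitis yes/no, columns = STIR-only
# abnormality yes/no. Not the study's actual data.
p = fisher_exact_2x2(8, 2, 3, 25)
print(f"two-sided p = {p:.5f}")
```

For real analyses, `scipy.stats.fisher_exact` provides the same test with additional options; the hand-rolled version above is just to make the computation explicit.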
763 Comparison of the Effectiveness of Pain Cognitive-Behavioral Therapy and Its Computerized Version in Reducing Pain Intensity, Depression, Anger, and Anxiety in Children with Cancer: A Randomized Controlled Trial

Authors: Najmeh Hamid, Vajiheh Hamedy, Zahra Rostamianasl

Abstract:

Background: Cancer is a medical condition that is frequently associated with pain, and this pain is often accompanied by negative emotions such as anxiety, depression, and anger. Poor pain management negatively affects quality of life, with effects that can persist long after the painful experience. Objectives: The aim of this research was to compare the effectiveness of conventional cognitive-behavioral therapy (CBT) for pain and its computerized version in reducing pain intensity, depression, anger, and anxiety in children with cancer. Methods: This randomized controlled clinical trial used a pre-test, post-test, and follow-up design with a control group. We examined the effectiveness of conventional CBT for pain and its computerized version on the reduction of pain intensity, anxiety, depression, and anger in children with cancer in Ahvaz, comparing the two psychological interventions with a control group. The sample was drawn from 60 children aged 8 to 12 years with different types of cancer at Shafa hospital in Ahvaz, identified according to inclusion and exclusion criteria such as age, socioeconomic status, and a clinical diagnostic interview. From these, 45 subjects were randomly selected and divided into three groups of 15 (two experimental groups and one control group). The research instruments included the Spielberger State-Trait Anxiety Inventory (STAI-2) and the International Pain Measurement Scale. The first experimental group received 6 weekly sessions of cognitive-behavioral therapy, the second group received the computerized version of cognitive-behavioral therapy for 6 weeks, and the control group received no intervention; for ethical reasons, the computerized version was provided to the control group afterward. After 6 weeks, all three groups were evaluated at post-test and again after a one-month follow-up. Results: Both interventions reduced the negative emotions (pain, anger, anxiety, depression) associated with cancer in children compared with the control group (p < 0.0001), and there were no significant differences between the two interventions. Both interventions are therefore useful for reducing the negative effects of pain and enhancing adjustment. Conclusion: The computerized version of CBT can be used in situations where access to psychologists and psychological services is limited, and it can be a useful alternative to conventional psychological interventions.

Keywords: pain, children, psychological intervention, cancer, anger, anxiety, depression

Procedia PDF Downloads 80
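The allocation step described in the abstract, randomly dividing 45 eligible children into three equal arms of 15, can be sketched with a simple seeded shuffle-and-split. The arm labels, subject IDs, and seed below are illustrative only; the study does not describe its actual randomization mechanism.

```python
import random

def randomize(subject_ids, arms, seed=None):
    """Shuffle subjects and split them evenly across the study arms."""
    if len(subject_ids) % len(arms):
        raise ValueError("subjects cannot be split into equal groups")
    rng = random.Random(seed)  # seeded for a reproducible allocation
    shuffled = list(subject_ids)
    rng.shuffle(shuffled)
    size = len(shuffled) // len(arms)
    return {arm: shuffled[i * size:(i + 1) * size]
            for i, arm in enumerate(arms)}

groups = randomize(range(1, 46),
                   ["face-to-face CBT", "computerized CBT", "control"],
                   seed=42)
for arm, ids in groups.items():
    print(arm, len(ids))
```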