Search results for: backtracking search algorithm
331 A Study of Secondary Particle Production from Carbon Ion Beam for Radiotherapy
Authors: Shaikah Alsubayae, Gianluigi Casse, Carlos Chavez, Jon Taylor, Alan Taylor, Mohammad Alsulimane
Abstract:
Achieving precise radiotherapy through carbon therapy necessitates the accurate monitoring of radiation dose distribution within the patient's body. This process is pivotal for targeted tumor treatment, minimizing harm to healthy tissues, and enhancing overall treatment effectiveness while reducing the risk of side effects. In our investigation, we adopted a methodological approach to monitor secondary proton doses in carbon therapy using Monte Carlo (MC) simulations. Initially, Geant4 simulations were employed to extract the initial positions of secondary particles generated during interactions between carbon ions and water, including protons, gamma rays, alpha particles, neutrons, and tritons. Subsequently, we explored the relationship between the carbon ion beam and these secondary particles. Interaction vertex imaging (IVI) proves valuable for monitoring dose distribution during carbon therapy, providing information about secondary particle locations and abundances, particularly protons. The IVI method relies on charged particles produced during ion fragmentation to gather range information by reconstructing particle trajectories back to their point of origin, known as the vertex. In the context of carbon ion therapy, our simulation results indicated a strong correlation between some secondary particles and the range of carbon ions. However, challenges arose due to the unique elongated geometry of the target, hindering the straightforward transmission of forward-generated protons. Consequently, the limited protons that did emerge predominantly originated from points close to the target entrance. Fragment (proton) trajectories were approximated as straight lines, and a beam back-projection algorithm, utilizing interaction positions recorded in Si detectors, was developed to reconstruct vertices.
The analysis revealed a correlation between the reconstructed and actual positions.
Keywords: radiotherapy, carbon therapy, monitor secondary proton doses, interaction vertex imaging
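The back-projection described above amounts to a point-of-closest-approach calculation. The following is a minimal illustrative sketch, not the authors' implementation: it assumes straight-line fragment tracks through two Si-detector hit positions and a beam axis running along z through the origin, and returns the z-coordinate of the point on the beam axis nearest the track as the vertex estimate.

```python
import numpy as np

def reconstruct_vertex_z(hit1, hit2):
    """Back-project a straight fragment track through two detector hits
    onto the beam axis (here the z-axis) and return the z of the point
    of closest approach -- the reconstructed vertex depth."""
    p1 = np.asarray(hit1, dtype=float)
    d = np.asarray(hit2, dtype=float) - p1
    d /= np.linalg.norm(d)                  # unit track direction
    b = np.array([0.0, 0.0, 1.0])           # unit beam direction (assumed)
    c = d @ b
    if abs(1.0 - c * c) < 1e-12:
        raise ValueError("track is parallel to the beam axis")
    # Minimise |p1 + t*d - s*b|^2 over (t, s); s is the beam-axis
    # parameter of the closest point.
    s = (b @ p1 - c * (d @ p1)) / (1.0 - c * c)
    return float(s)
```

For a track emitted from (0, 0, 5) along direction (1, 0, 10), hits at (1, 0, 15) and (2, 0, 25) reconstruct z = 5 exactly; in practice detector resolution and multiple scattering would smear this estimate.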
Procedia PDF Downloads 78
330 Physics Informed Deep Residual Networks Based Type-A Aortic Dissection Prediction
Abstract:
Purpose: Acute Type A aortic dissection is a well-known cause of extremely high mortality. A highly accurate and cost-effective non-invasive predictor is critically needed so that the patient can be treated at an earlier stage. Although various CFD approaches have been tried to establish prediction frameworks, they are sensitive to uncertainty in both image segmentation and boundary conditions. Tedious pre-processing and demanding calibration requirements further compound the issue, hampering their clinical applicability. Using the latest physics-informed deep learning methods to establish an accurate and cost-effective predictor framework is among the main goals for better Type A aortic dissection treatment. Methods: By training a novel physics-informed deep residual network with non-invasive 4D MRI displacement vectors as inputs, the trained model can cost-effectively calculate these biomarkers: aortic blood pressure, WSS, and OSI, which are used to predict potential Type A aortic dissection and avoid high-mortality events down the road. Results: The proposed deep learning method has been successfully trained and tested with both a synthetic 3D aneurysm dataset and a clinical dataset in the aortic dissection context using the Google Colab environment. In both cases, the model has generated aortic blood pressure, WSS, and OSI results matching the patient's expected health status. Conclusion: The proposed novel physics-informed deep residual network shows great potential to create a cost-effective, non-invasive predictor framework. An additional physics-based de-noising algorithm will be added to make the model more robust to clinical data noise.
Further studies will be conducted in collaboration with major institutions such as the Cleveland Clinic, with more clinical samples, to further improve the model's clinical applicability.
Keywords: type-a aortic dissection, deep residual networks, blood flow modeling, data-driven modeling, non-invasive diagnostics, deep learning, artificial intelligence
Procedia PDF Downloads 89
329 A Systematic Review of Antimicrobial Resistance in Fish and Poultry – Health and Environmental Implications for Animal Source Food Production in Egypt, Nigeria, and South Africa
Authors: Ekemini M. Okon, Reuben C. Okocha, Babatunde T. Adesina, Judith O. Ehigie, Babatunde M. Falana, Boluwape T. Okikiola
Abstract:
Antimicrobial resistance (AMR) has evolved to become a significant threat to global public health and food safety. The development of AMR in animals has been associated with antimicrobial overuse. In recent years, the number of antimicrobials used in food animals such as fish and poultry has escalated. It therefore becomes imperative to understand the patterns of AMR in fish and poultry and map out future directions for better surveillance efforts. This study used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) approach to assess the trend, patterns, and spatial distribution of AMR research in Egypt, Nigeria, and South Africa. A literature search was conducted through the Scopus and Web of Science databases, in which published studies on AMR between 1989 and 2021 were assessed. A total of 172 articles were relevant for this study. The results showed progressively increasing attention to AMR studies in fish and poultry from 2018 to 2021 across the selected countries. The period between 2018 (23 studies) and 2021 (25 studies) showed a significant increase in AMR publications, with a peak in 2019 (28 studies). Egypt was the leading contributor to AMR research (43%, n=74), followed by Nigeria (40%, n=69), then South Africa (17%, n=29). AMR studies in fish received relatively little attention across countries. The majority of the AMR studies were on poultry in Egypt (82%, n=61), Nigeria (87%, n=60), and South Africa (83%, n=24). Further, most of the studies were on Escherichia and Salmonella species. The antimicrobials most frequently researched were the ampicillin, erythromycin, tetracycline, trimethoprim, chloramphenicol, and sulfamethoxazole groups. Multiple drug resistance was prevalent, as demonstrated by antimicrobial resistance patterns.
In poultry, Escherichia coli isolates were resistant to cefotaxime, streptomycin, chloramphenicol, enrofloxacin, gentamycin, ciprofloxacin, oxytetracycline, kanamycin, nalidixic acid, tetracycline, trimethoprim/sulphamethoxazole, erythromycin, and ampicillin. Salmonella enterica serovars were resistant to tetracycline, trimethoprim/sulphamethoxazole, cefotaxime, and ampicillin. Staphylococcus aureus showed high-level resistance to streptomycin, kanamycin, erythromycin, cefoxitin, trimethoprim, vancomycin, ampicillin, and tetracycline. Campylobacter isolates were resistant to ceftriaxone, erythromycin, ciprofloxacin, tetracycline, and nalidixic acid to varying degrees. In fish, Enterococcus isolates showed resistance to penicillin, ampicillin, chloramphenicol, vancomycin, and tetracycline but were sensitive to ciprofloxacin, erythromycin, and rifampicin. Isolated strains of Vibrio species showed sensitivity to florfenicol and ciprofloxacin, but resistance to trimethoprim/sulphamethoxazole and erythromycin. Isolates of Aeromonas and Pseudomonas species exhibited resistance to ampicillin and amoxicillin. Specifically, Aeromonas hydrophila isolates showed sensitivity to cephradine, doxycycline, erythromycin, and florfenicol. However, resistance was also exhibited against augmentin and tetracycline. The findings constitute public and environmental health threats and suggest the need to promote and advance AMR research in other countries, particularly those in global hotspots for antimicrobial use.
Keywords: antibiotics, antimicrobial resistance, bacteria, environment, public health
Procedia PDF Downloads 200
328 Assessing Sexual and Reproductive Health Literacy and Engagement Among Refugee and Immigrant Women in Massachusetts: A Qualitative Community-Based Study
Authors: Leen Al Kassab, Sarah Johns, Helen Noble, Nawal Nour, Elizabeth Janiak, Sarrah Shahawy
Abstract:
Introduction: Immigrant and refugee women experience disparities in sexual and reproductive health (SRH) outcomes, partially as a result of barriers to SRH literacy and to regular healthcare access and engagement. Despite existing data highlighting growing needs for culturally relevant and structurally competent care, interventions are scarce and not well documented. Methods: In this IRB-approved study, we used a community-based participatory research approach, with the assistance of a community advisory board, to conduct a qualitative needs assessment of SRH knowledge and service engagement with immigrant and refugee women from Africa or the Middle East currently residing in Boston. We conducted a total of nine focus group discussions (FGDs) in partnership with medical, community, and religious centers, in six languages: Arabic, English, French, Somali, Pashtu, and Dari. A total of 44 individuals participated. We explored migrant and refugee women's current and evolving SRH care needs and gaps, specifically related to the development of interventions and clinical best practices targeting SRH literacy, healthcare engagement, and informed decision-making. Recordings of the FGDs were transcribed verbatim and translated by interpreter services. We used open coding with multiple coders, who resolved discrepancies through consensus and iteratively refined our codebook while coding data in batches using Dedoose software. Results: Participants reported immigrant adaptation experiences, discrimination, and feelings of trust, autonomy, privacy, and connectedness to family, community, and the healthcare system as factors surrounding SRH knowledge and needs. Previously learned SRH knowledge was commonly noted to have come from schools, at the onset of menstruation, before marriage, and from family members, partners, friends, and online search engines.
Common themes included empowering strength drawn from religious and cultural communities, difficulties bridging educational gaps with their US-born daughters, and a desire for more SRH education from multiple sources, including family, health care providers, and religious experts and communities. Regarding further SRH education, participants' preferences varied regarding the ideal platform (virtual vs. in-person), location (in religious and community centers or not), smaller group sizes, and the involvement of men. Conclusions: Based on these results, empowering SRH initiatives should include both community and religious center-based, as well as clinic-based, interventions. Interventions should be composed of frequent educational workshops in small groups involving age-grouped women, daughters, and (sometimes) men, tailored SRH messaging, and the promotion of culturally, religiously, and linguistically competent care.
Keywords: community, immigrant, religion, sexual & reproductive health, women's health
Procedia PDF Downloads 127
327 Nigerian Football System: Examining Micro-Level Practices against a Global Model for Integrated Development of Mass and Elite Sport
Authors: Iorwase Derek Kaka’an, Peter Smolianov, Steven Dion, Christopher Schoen, Jaclyn Norberg, Charles Gabriel Iortimah
Abstract:
This study examines the current state of football in Nigeria to identify the country's practices that could be useful internationally and to determine areas for improvement. Over 200 sources of literature on sport delivery systems in successful sports nations were analyzed to construct a globally applicable model of elite football integrated with mass participation, comprising the following three levels: macro (socio-economic, cultural, legislative, and organizational), meso (infrastructures, personnel, and services enabling sports programs), and micro (operations, processes, and methodologies for the development of individual athletes). The model has received scholarly validation and has been shown to be a framework for program analysis that is not culturally bound. It has recently been utilized to further understand such sports systems as US rugby, tennis, soccer, swimming, and volleyball, as well as Dutch and Russian swimming. A questionnaire was developed using the above-mentioned model. Survey questions were validated by 12 experts, including academicians, executives from sports governing bodies, football coaches, and administrators. To identify best practices and determine areas for improvement of football in Nigeria, 116 coaches completed the questionnaire. Useful exemplars and possible improvements were further identified through semi-structured discussions with 10 Nigerian football administrators and experts. Finally, a content analysis of the Nigeria Football Federation's website and organizational documentation was conducted. This paper focuses on the micro level of Nigerian football delivery, particularly talent search and development as well as advanced athlete preparation and support. Results suggested that Nigeria could share such progressive practices as the provision of football programs in all schools and full-time coaches paid by governments based on the level of coach education.
Nigerian football administrators and coaches could provide better football services affordable for all, where success in mass and elite sports is guided by science focused on athletes' needs. International best practices that could be better implemented include lifelong guidelines for the health and excellence of everyone; integration of fitness tests into player development and ranking, as done in the best Dutch, English, French, Russian, Spanish, and other European clubs; integration of educational and competitive events for elite and developing athletes as well as fans, as done at the 2018 World Cup in Russia; and academies with multi-stage athlete nurturing, as done by Ajax in Africa as well as FC Barcelona and other top clubs expanding across the world. The methodical integration of these practices into the balanced development of mass and elite football will contribute to international sports success as well as national health, education, crime control, and social harmony in Nigeria.
Keywords: football, high performance, mass participation, Nigeria, sport development
Procedia PDF Downloads 70
326 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings
Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir
Abstract:
Acute myocardial infarction is a major cause of death worldwide; therefore, its fast and reliable diagnosis is a major clinical need. ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pain together with changes in the ST segment and T wave of the ECG occurs shortly before the start of myocardial infarction. In this study, a technique that detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings was constituted, containing records from 75 patients presenting symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs, the pre-inflation ECG acquired before any catheter insertion and the occlusion ECG acquired during balloon inflation, are analyzed for each patient. Using the pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified, and the most discriminative features are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events using ST-T derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize the SVM hyperparameters via grid search and 10-fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters to obtain the optimal classification performance.
Implementing the developed classification technique on real ECG recordings shows that the proposed technique provides highly reliable detection of the anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy stage, the detection of acute myocardial ischemia based on ECG recordings obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint pdf of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to detect outliers that would correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy is applied by computing the average log-likelihood values of ECG segments and comparing them with a range of threshold values. For different discrimination threshold values and numbers of ECG segments, probability of detection and probability of false alarm values are computed, and the corresponding ROC curves are obtained. The results indicate that increasing the number of ECG segments provides higher performance for GMM-based classification. Moreover, the comparison between the performances of SVM- and GMM-based classification showed that SVM provides higher classification performance over the ECG recordings of a considerable number of patients.
Keywords: ECG classification, Gaussian mixture model, Neyman-Pearson approach, support vector machine
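The likelihood-threshold detection step can be illustrated with a simplified stand-in: a single Gaussian replaces the paper's mixture model, and synthetic two-dimensional features replace the actual ST-T features, which are not reproduced here. The sketch fits the model to one class, computes per-sample log-likelihoods, and sweeps a threshold to obtain (probability of detection, probability of false alarm) pairs, i.e. ROC points.

```python
import numpy as np

def fit_gaussian(X):
    # Maximum-likelihood mean and covariance; a single Gaussian stands in
    # for the paper's Gaussian mixture model.
    return X.mean(axis=0), np.cov(X, rowvar=False)

def log_likelihood(X, mu, cov):
    # Per-sample Gaussian log-likelihood (the paper averages over segments).
    d = X.shape[1]
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    diff = X - mu
    maha = np.einsum('ij,jk,ik->i', diff, inv, diff)
    return -0.5 * (maha + d * np.log(2.0 * np.pi) + logdet)

def roc_points(ll_positive, ll_negative, thresholds):
    # Neyman-Pearson style sweep: declare the positive class when the
    # log-likelihood exceeds the threshold; each threshold gives one
    # (detection probability, false-alarm probability) pair.
    pd = np.array([(ll_positive >= t).mean() for t in thresholds])
    pfa = np.array([(ll_negative >= t).mean() for t in thresholds])
    return pd, pfa
```

With well-separated classes, low thresholds already give near-perfect detection at a near-zero false-alarm rate; the paper's ROC curves come from sweeping this threshold over averaged segment likelihoods.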
Procedia PDF Downloads 162
325 Modeling Biomass and Biodiversity across Environmental and Management Gradients in Temperate Grasslands with Deep Learning and Sentinel-1 and -2
Authors: Javier Muro, Anja Linstadter, Florian Manner, Lisa Schwarz, Stephan Wollauer, Paul Magdon, Gohar Ghazaryan, Olena Dubovyk
Abstract:
Monitoring the trade-off between biomass production and biodiversity in grasslands is critical to evaluate the effects of management practices across environmental gradients. New generations of remote sensing sensors and machine learning approaches can model grasslands' characteristics with varying accuracies. However, studies often fail to cover a sufficiently broad range of environmental conditions, and evidence suggests that prediction models may be case-specific. In this study, biomass production and biodiversity indices (species richness and Fisher's α) are modeled in 150 grassland plots for three sites across Germany. These sites represent a north-south gradient and are characterized by distinct soil types, topographic properties, climatic conditions, and management intensities. Predictors used are derived from Sentinel-1 and -2 and a set of topoedaphic variables. The transferability of the models is tested by training and validating at different sites. The performance of feed-forward deep neural networks (DNN) is compared to a random forest algorithm. While biomass predictions across gradients and sites were acceptable (r² = 0.5), predictions of biodiversity indices were poor (r² = 0.14). DNN showed higher generalization capacity than random forest when predicting biomass across gradients and sites (relative root mean squared error of 0.5 for DNN vs. 0.85 for random forest). DNN also achieved high performance when using the Sentinel-2 surface reflectance data rather than different combinations of spectral indices, Sentinel-1 data, or topoedaphic variables, simplifying dimensionality. This study demonstrates the necessity of training biomass and biodiversity models using a broad range of environmental conditions and ensuring spatial independence to obtain realistic and transferable models in which plot-level information can be upscaled to the landscape scale.
Keywords: ecosystem services, grassland management, machine learning, remote sensing
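The two metrics reported above, r² and relative root mean squared error, can be computed as follows. This is a minimal sketch: the normalization of the relative RMSE (here, by the mean of the observations) is an assumption, since the abstract does not state the study's convention.

```python
import numpy as np

def r_squared(y_obs, y_pred):
    # Coefficient of determination for plot-level predictions.
    ss_res = np.sum((y_obs - y_pred) ** 2)
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def relative_rmse(y_obs, y_pred):
    # RMSE normalized by the mean observation -- one common convention;
    # the study's exact normalization is an assumption here.
    return np.sqrt(np.mean((y_obs - y_pred) ** 2)) / np.mean(y_obs)
```

On hypothetical observations y = (1, 2, 3, 4), a constant prediction offset of 1 gives r² = 0.2 and a relative RMSE of 0.4, showing how the two metrics penalize the same error differently.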
Procedia PDF Downloads 218
324 Improving a Stagnant River Reach Water Quality by Combining Jet Water Flow and Ultrasonic Irradiation
Authors: A. K. Tekile, I. L. Kim, J. Y. Lee
Abstract:
Human activities put freshwater quality at risk, mainly due to the expansion of agriculture and industries, damming, diversion, and the discharge of inadequately treated wastewaters. Rapid human population growth and climate change have escalated the problem. External controls on point and non-point pollution sources are the long-term solution to managing water quality, but for a holistic approach, these mechanisms should be coupled with in-water control strategies. The available in-lake or in-river methods are either costly or have adverse effects on the ecological system, so the search for an alternative and effective solution with a reasonable balance is still ongoing. This study aimed at physical and chemical water quality improvement in a stagnant reach of the Yeo-cheon River (Korea), which has recently shown signs of water quality problems such as scum formation and fish death. The river water quality was monitored for a duration of three months, by operating only the water flow generator for the first two weeks and then coupling the ultrasonic irradiation device to the flow unit for the remainder of the experiment. In addition to assessing the water quality improvement, the correlation among the parameters was analyzed to explain the contribution of the ultra-sonication. Generally, the combined strategy showed localized improvement of water quality in terms of dissolved oxygen, chlorophyll-a, and dissolved reactive phosphate. At locations under limited influence of the system operation, chlorophyll-a increased greatly, but within 25 m of the operation the low initial value was maintained. The inverse correlation coefficient between dissolved oxygen and chlorophyll-a decreased from 0.51 to 0.37 when the ultrasonic irradiation unit was used with the flow, showing that ultrasonic treatment reduced chlorophyll-a concentration and inhibited photosynthesis.
The relationship between dissolved oxygen and reactive phosphate also indicated that the influence of ultra-sonication on the reactive phosphate concentration was greater than that of flow. Even though flow increased turbidity by suspending sediments, ultrasonic waves canceled out the effect through the agglomeration of suspended particles and their subsequent settling out. Interactions in the water column also varied, as the decrease of pH and dissolved oxygen from the surface to the bottom played a role in phosphorus release into the water column. The variation of nitrogen and dissolved organic carbon concentrations showed mixed trends, probably due to the complex chemical reactions subsequent to the operation. In addition, the intensive rainfall and strong wind around the end of the field trial had an apparent impact on the results. The combined effect of water flow and ultrasonic irradiation was a cumulative water quality improvement, and it maintained the dissolved oxygen and chlorophyll-a requirements of the river for healthy ecological interaction. However, overall improvement of water quality is not guaranteed, as the effectiveness of ultrasonic technology requires long-term monitoring of water quality before, during, and after treatment. Although the short duration of the study limited the characterization of nutrient patterns, the use of ultrasound at field scale to improve water quality is promising.
Keywords: stagnant, ultrasonic irradiation, water flow, water quality
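The correlation analysis described above reduces to computing Pearson coefficients between paired water-quality series. A minimal sketch follows; the numbers in the usage note are hypothetical, not the study's measurements.

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation coefficient between two paired measurement series,
    # e.g. dissolved oxygen vs. chlorophyll-a at the same sampling times.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))
```

With hypothetical data, a chlorophyll-a series that falls as dissolved oxygen rises yields a negative coefficient, matching the inverse correlations reported above; comparing the coefficient's magnitude before and after coupling the ultrasonic unit is what the study's 0.51 vs. 0.37 figures describe.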
Procedia PDF Downloads 193
323 Constructing a Semi-Supervised Model for Network Intrusion Detection
Authors: Tigabu Dagne Akal
Abstract:
While advances in computer and communications technology have made the network ubiquitous, they have also rendered networked systems vulnerable to malicious attacks devised from a distance. These attacks or intrusions start with attackers infiltrating a network through a vulnerable host and then launching further attacks on the local network or intranet. Nowadays, system administrators and network professionals can attempt to prevent such attacks by developing intrusion detection tools and systems using data mining technology. In this study, the experiments were conducted following the Knowledge Discovery in Databases process model, which starts with the selection of datasets. The dataset used in this study was taken from the Massachusetts Institute of Technology Lincoln Laboratory. The data were then pre-processed; the major pre-processing activities included filling in missing values, removing outliers, resolving inconsistencies, integrating data containing both labelled and unlabelled records, dimensionality reduction, size reduction, and data transformation tasks such as discretization. A total of 21,533 intrusion records were used for training the models. For validating the performance of the selected model, a separate set of 3,397 records was used for testing. For building a predictive model for intrusion detection, the J48 decision tree and Naïve Bayes algorithms were tested as classification approaches, both with and without feature selection. The model created using 10-fold cross-validation with the J48 decision tree algorithm and the default parameter values showed the best classification accuracy, with a prediction accuracy of 96.11% on the training dataset and 93.2% on the test dataset for classifying new instances into the normal, DOS, U2R, R2L, and probe classes.
The findings of this study show that data mining methods generate interesting rules that are crucial for intrusion detection and prevention in the networking industry. Future research directions are suggested to develop an applicable system in the area of study.
Keywords: intrusion detection, data mining, computer science
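The 10-fold cross-validation procedure used above for model selection can be sketched generically. The classifier passed in is a placeholder, not J48 or Naïve Bayes; any `fit_predict` callable with the shown signature can be plugged in.

```python
import numpy as np

def kfold_indices(n, k=10, seed=0):
    # Shuffle once, split into k near-equal folds; each fold is held out
    # as the test set exactly once (as in the study's 10-fold validation).
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), k)
    for i in range(k):
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, folds[i]

def cross_val_accuracy(fit_predict, X, y, k=10):
    # fit_predict(X_train, y_train, X_test) -> predicted labels; a stand-in
    # for any classifier (J48 or Naive Bayes in the study itself).
    accs = [np.mean(fit_predict(X[tr], y[tr], X[te]) == y[te])
            for tr, te in kfold_indices(len(y), k)]
    return float(np.mean(accs))
```

The mean fold accuracy returned here is the figure one would compare across candidate models, mirroring how the J48 configuration was selected in the study.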
Procedia PDF Downloads 296
322 Executive Function and Attention Control in Bilingual and Monolingual Children: A Systematic Review
Authors: Zihan Geng, L. Quentin Dixon
Abstract:
It has been proposed that early bilingual experience confers a number of advantages in the development of executive control mechanisms. Although the literature provides empirical evidence for bilingual benefits, some studies have reported null or mixed results. To make sense of these contradictory findings, the current review synthesizes recent empirical studies investigating bilingual effects on children's executive function and attention control. The publication dates of the studies included in the review range from 2010 to 2017. The key search terms were bilingual, bilingualism, children, executive control, executive function, and attention. The key terms were combined within each of the following databases: ERIC (EBSCO), Education Source, PsycINFO, and Social Science Citation Index. Studies involving both children and adults were also included, but the analysis was based only on the data generated by the children's group. The initial search yielded 137 distinct articles. Twenty-eight studies from 27 articles, with a total of 3,367 participants, were finally included based on the selection criteria. The selected studies were then coded in terms of (a) the setting (i.e., the country where the data were collected), (b) the participants (i.e., age and languages), (c) sample size (i.e., the number of children in each group), (d) cognitive outcomes measured, (e) data collection instruments (i.e., cognitive tasks and tests), and (f) statistical analysis models (e.g., t-test, ANOVA). The results show that the majority of the studies were undertaken in Western countries, mainly the U.S., Canada, and the UK. A variety of languages, such as Arabic, French, Dutch, Welsh, German, Spanish, Korean, and Cantonese, were involved. In relation to cognitive outcomes, the studies examined children's overall planning and problem-solving abilities, inhibition, cognitive complexity, working memory (WM), and sustained and selective attention.
The results indicate that though bilingualism is associated with several cognitive benefits, the advantages seem to be weak, at least for children. Additionally, the nature of the cognitive measures was found to greatly moderate the results. No significant differences were observed between bilinguals and monolinguals in overall planning and problem-solving ability, indicating that there is no bilingual benefit in the cooperation of executive function components at an early age. In terms of inhibition, the mixed results suggest that bilingual children, especially young children, may have better conceptual inhibition as measured in conflict tasks, but not better response inhibition as measured by delay tasks. Further, bilingual children showed better inhibitory control for bivalent displays, which resemble the process of maintaining two language systems. Null results were obtained for both cognitive complexity and WM, suggesting no bilingual advantage in these two cognitive components. Finally, findings on children's attention system associate bilingualism with heightened attention control. Together, these findings support the hypothesis of cognitive benefits for bilingual children. Nevertheless, whether these advantages are observable appears to depend highly on the cognitive assessments. Therefore, future research should be more specific about the cognitive outcomes (e.g., the type of inhibition) and should report the validity of the cognitive measures consistently.
Keywords: attention, bilingual advantage, children, executive function
Procedia PDF Downloads 185
321 Study of Morning-Glory Spillway Structure in Hydraulic Characteristics by CFD Model
Authors: Mostafa Zandi, Ramin Mansouri
Abstract:
Spillways are among the most important hydraulic structures of dams, providing the stability of the dam and downstream areas at times of flood. The morning-glory spillway is a common type for discharging the overflow water behind dams; these spillways are constructed in dams with small reservoirs. In this research, the hydraulic flow characteristics of a morning-glory spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method. The PISO scheme was applied for velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power-law scheme was used for discretization of the momentum, k, and ε equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. The results show that a fine computational grid, a velocity condition for the flow inlet boundary, and a pressure condition for the boundaries in contact with air provide the best possible results. The standard wall function was chosen for the wall treatment, and the standard k-ε turbulence model gave the results most consistent with the experimental data. As the jet approaches the end of the basin, the differences between the numerical and experimental results increase. The lower profile of the water jet showed less sensitivity than the upper profile. In the pressure test, it was also found that the numerical pressure values at the lower landing point differ greatly from the experimental results. The characteristics of the complex flows over a morning-glory spillway were studied numerically using a RANS solver. A grid study showed that the numerical results of a 57,512-node grid had the best agreement with the experimental values.
The preferred downstream channel length was 1.5 m, and the standard k-ε turbulence model produced the best results for the morning-glory spillway. The numerical free-surface profiles followed the theoretical equations very well.
Keywords: morning-glory spillway, CFD model, hydraulic characteristics, wall function
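Grid studies of the kind reported above are commonly summarized with Roache's Grid Convergence Index (GCI). A minimal sketch: estimate the observed order of convergence and the fine-grid GCI from three systematically refined solutions. The solution values and refinement ratio below are invented placeholders, not data from the paper.

```python
# Roache's Grid Convergence Index from three grid solutions:
# f1 (fine), f2 (medium), f3 (coarse), refinement ratio r.
# All numbers here are illustrative, not the 57,512-node study's values.
import math

def gci(f_fine, f_medium, f_coarse, r, fs=1.25):
    """Observed order p and GCI (%) for the fine grid (safety factor fs)."""
    p = math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)
    e_fine = abs((f_medium - f_fine) / f_fine)       # relative error, fine pair
    gci_fine = fs * e_fine / (r ** p - 1) * 100.0    # percent
    return p, gci_fine

p, g = gci(0.971, 0.962, 0.934, r=2.0)
print(f"observed order p = {p:.2f}, GCI_fine = {g:.2f}%")
```

A small GCI on the finest grid is the usual quantitative justification for statements like "the 57,512-node grid had the best agreement".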
Procedia PDF Downloads 77
320 Cultural Competence in Palliative Care
Authors: Mariia Karizhenskaia, Tanvi Nandani, Ali Tafazoli Moghadam
Abstract:
Hospice palliative care (HPC) is one of the most complex philosophies of care, in which the physical, social/cultural, and spiritual aspects of human life are intermingled, each playing an undeniably significant role. Among these dimensions of care, culture occupies an outstanding position in the process and goal determination of HPC. This study shows the importance of cultural elements in the establishment of effective and optimized structures of HPC in the Canadian healthcare environment. Our systematic search included Medline, Google Scholar, and the St. Lawrence College Library, considering original, peer-reviewed research papers published from 1998 to 2023, to identify recent national literature connecting culture and palliative care delivery. The feature most frequently presented among the articles is the role of culture in the efficiency of HPC. It has been shown repeatedly that including the culture-specific parameters of each nation in this system of care is vital for its success. On the other hand, ignorance of the cultural trends exclusive to a specific location has been accompanied by significant failure rates. Accordingly, implementing a culturally adaptable approach is mandatory for multicultural societies. A further outcome of research studies in this field underscores the importance of culture-oriented education for healthcare staff, so that all practitioners involved in HPC recognize the importance of traditions, religions, and social habits when processing care requirements. Cultural competency training is a telling example of the establishment of this strategy in health care and has come to the aid of HPC in recent years. Another complexity of culturally informed HPC today is the long-standing issue of racialization. Systematic and subconscious deprivation of minorities has always been an obstacle to advanced levels of care.
The last part of the constellation of our research outcomes comprises the ethical considerations of culturally driven HPC. This is the most sophisticated aspect of our topic because almost all the analyses, arguments, and justifications are subjective. While there was no standard measure for ethical elements in clinical studies with palliative interventions, many research teams endorsed applying ethical principles to all the patients involved. Notably, interpretations and projections of ethics differ across cultural backgrounds. Therefore, healthcare providers should always be aware of the most respectful methodologies of HPC on a case-by-case basis. Cultural training programs have been utilized as one of the main tactics to improve the ability of healthcare providers to address the cultural needs and preferences of diverse patients and families. In this way, most of the healthcare practitioners involved will be equipped with cultural competence. Considering the ethical and racial specifics of the clients of this service will boost the effectiveness and fruitfulness of HPC. Canadian society is a colorful compilation of multiple nationalities; accordingly, healthcare clients are diverse, and this diversity is also reflected among HPC patients. This fact justifies the importance of studying all the cultural aspects of HPC to provide optimal care across this enormous land.
Keywords: cultural competence, end-of-life care, hospice, palliative care
Procedia PDF Downloads 74
319 Investigation of the Working Processes in Thermocompressor Operating on Cryogenic Working Fluid
Authors: Evgeny V. Blagin, Aleksandr I. Dovgjallo, Dmitry A. Uglanov
Abstract:
This article deals with research into the working process of a thermocompressor operating on a cryogenic working fluid. A thermocompressor is a device for converting heat energy directly into the potential energy of pressure. The suggested thermocompressor is intended for operation during liquefied natural gas (LNG) re-gasification and is placed after the evaporator. Such an application allows the cold energy of the LNG to be used to raise the working-fluid pressure, which can then be used for electricity generation or other purposes. The thermocompressor consists of two chambers divided by a regenerative heat exchanger. A calculation algorithm for the unsteady simulation of the thermocompressor working process is suggested. The results of this investigation are the changes of the chambers' temperature and pressure during the working cycle. These distributions help to identify the parameters that significantly influence thermocompressor efficiency, including the regenerative heat exchanger's coefficient of performance (COP), the dead volume of the chambers, and the working frequency of the thermocompressor. Exergy analysis was performed to estimate thermocompressor efficiency. A cryogenic thermocompressor operating on nitrogen was chosen as a prototype. The temperature and pressure changes were calculated taking into account the heat fluxes through the regenerator and the thermocompressor walls. The temperature of the cold chamber differs significantly from the results of the steady calculation, which is caused by friction of the working fluid in the regenerator and heat fluxes from the hot chamber. The rise of the cold-chamber temperature leads to a decrease in the thermocompressor delivery volume. The temperature of the hot chamber differs negligibly because losses due to heat fluxes to the cold chamber are compensated by the friction of the working fluid in the regenerator. An optimal working frequency was selected.
Main results of the investigation:
- theoretical confirmation of the thermocompressor's capability to operate on a cryogenic working fluid;
- the optimal working frequency was found;
- the cold-chamber temperature departs from its starting value much more than the hot-chamber temperature;
- the main parameters influencing thermocompressor performance are the regenerative heat exchanger COP and the heat fluxes through the regenerator and thermocompressor walls.
Keywords: cold energy, liquid natural gas, thermocompressor, regenerative heat exchanger
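The abstract's full unsteady model is not reproduced here, but the basic pressure mechanism of a two-chamber machine can be illustrated with a much simpler isothermal (Schmidt-style) sketch: a fixed gas mass shuttled between a cold and a hot chamber through a regenerator, with the shared pressure following from the ideal-gas law summed over the spaces. All numerical values (mass, temperatures, volumes) are illustrative assumptions, not the paper's nitrogen prototype data, and the isothermal assumption is a simplification of the paper's method.

```python
# Isothermal two-chamber pressure sketch: p = m R / sum(V_i / T_i).
# All numbers below are assumed for illustration only.
import math

R = 296.8      # J/(kg K), specific gas constant of nitrogen
m = 0.01       # kg of working fluid (assumed)
T_cold, T_hot, T_reg = 120.0, 300.0, 210.0   # K (assumed)
V_swept, V_dead, V_reg = 1e-3, 1e-4, 2e-4    # m^3 (assumed)

def pressure(theta):
    """Pressure when the displacer phase angle is theta (rad)."""
    V_c = V_dead + 0.5 * V_swept * (1 + math.cos(theta))  # cold chamber
    V_h = V_dead + 0.5 * V_swept * (1 - math.cos(theta))  # hot chamber
    # m = sum(p V_i / (R T_i))  =>  p = m R / sum(V_i / T_i)
    return m * R / (V_c / T_cold + V_h / T_hot + V_reg / T_reg)

cycle = [pressure(2 * math.pi * k / 360) for k in range(360)]
print(f"p_min = {min(cycle)/1e3:.1f} kPa, p_max = {max(cycle)/1e3:.1f} kPa,"
      f" ratio = {max(cycle)/min(cycle):.2f}")
```

The pressure ratio produced by moving the gas from the cold space to the hot space is the "heat to pressure" conversion the abstract describes; the paper's unsteady model additionally tracks wall heat fluxes and regenerator friction, which this sketch omits.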
Procedia PDF Downloads 582
318 Effect of Renin Angiotensin Pathway Inhibition on the Efficacy of Anti-programmed Cell Death (PD-1/L-1) Inhibitors in Advanced Non-small Cell Lung Cancer Patients - Comparison of Single Hospital Retrospective Assessment to the Published Literature
Authors: Esther Friedlander, Philip Friedlander
Abstract:
The use of immunotherapy that inhibits programmed death-1 (PD-1) or its ligand PD-L1 confers survival benefits in patients with non-small cell lung cancer (NSCLC). However, approximately 45% of patients experience primary treatment resistance, necessitating the development of strategies to improve efficacy. While the renin-angiotensin system (RAS) has systemic hemodynamic effects, tissue-specific regulation also exists, along with modulation of immune activity, in part through regulation of myeloid cell activity; this leads to the hypothesis that RAS inhibition may improve anti-PD-1/L-1 efficacy. A retrospective analysis was conducted of 173 advanced solid tumor patients treated with a PD-1/L-1 inhibitor over a defined time period at Valley Hospital, a community hospital in New Jersey, USA. It showed a statistically significant relationship between RAS pathway inhibition (RASi, through concomitant treatment with an ACE inhibitor or angiotensin receptor blocker) and positive efficacy of the immunotherapy that was independent of age, gender, and cancer type. Subset analysis revealed a strong numerical efficacy benefit in patients with both squamous and nonsquamous NSCLC, as determined by documented clinician assessment of efficacy and by duration of therapy. A PubMed literature search was then conducted to identify studies assessing the effect of RAS pathway inhibition on anti-PD-1/L1 efficacy in advanced solid tumor patients and to compare these findings to those of the Valley Hospital retrospective study, with a focus on NSCLC specifically. A total of 11 articles were identified assessing the effects of RAS pathway inhibition on the efficacy of checkpoint inhibitor immunotherapy in advanced cancer patients. Of the 11 studies, 10 assessed the effect of RASi on survival in the context of treatment with anti-PD-1/PD-L1, while one assessed the effect on CTLA-4 inhibition.
Eight of the studies included patients with NSCLC, while the remaining two were specific to genitourinary malignancies. Of the eight studies, two were specific to NSCLC patients, with the remaining six including a range of cancer types, of which NSCLC was one. Of these six studies, only two reported specific survival data for the NSCLC subpopulation. Patient characteristics, multivariate analysis data, and efficacy data from the two NSCLC-specific studies and the two basket studies that provided data on the NSCLC subpopulation were compared to those of the Valley Hospital retrospective study, supporting a broader effect of RASi on anti-PD-1/L1 efficacy in advanced NSCLC, with the majority of studies showing statistically significant benefit or strong statistical trends but with one study demonstrating worsened outcomes. This comparison of studies extends published findings to the community hospital setting and supports prospective assessment, through randomized clinical trials, of efficacy in NSCLC patients, with pharmacodynamic components to determine the effect on immune cell activity in tumors and on the composition of the tumor microenvironment.
Keywords: immunotherapy, cancer, angiotensin, efficacy, PD-1, lung cancer, NSCLC
Procedia PDF Downloads 69
317 Modelling and Simulation of Hysteresis Current Controlled Single-Phase Grid-Connected Inverter
Authors: Evren Isen
Abstract:
In grid-connected renewable energy systems, input power is controlled by an AC/DC converter and/or a DC/DC converter, depending on the output voltage of the input source. The power is injected into the DC link, and the DC-link voltage is regulated by the inverter controlling the grid current. Inverter performance is a key consideration in grid-connected renewable energy systems if the utility standards are to be met. In this paper, the modelling and simulation of a hysteresis current controlled single-phase grid-connected inverter, as utilized in renewable energy systems such as wind and solar systems, are presented. A 2 kW single-phase grid-connected inverter is simulated in Simulink and modeled in a Matlab m-file. Grid current synchronization is obtained by the phase-locked loop (PLL) technique in the dq synchronous rotating frame. Although a dq-PLL can easily be implemented in three-phase systems, generating the β component of the grid voltage is difficult in a single-phase system because only a single-phase grid voltage exists. An inverse-Park PLL with a low-pass filter is used to generate the β component for grid angle determination. As the grid current is controlled by a constant-bandwidth hysteresis current control (HCC) technique, the average switching frequency and the variation of the switching frequency over a fundamental period are considered. A grid-current total harmonic distortion of 3.56% is achieved with a 0.5 A bandwidth. Curves of the average switching frequency and total harmonic distortion for different hysteresis bandwidths are obtained from the m-file model. The average switching frequency is 25.6 kHz, while the switching frequency varies between 14 kHz and 38 kHz over a fundamental period. Both the average and the maximum switching frequency should be considered when selecting the solid-state switching device and designing the driver circuit. The steady-state and dynamic response performance of the inverter for different input powers is presented with waveforms.
The control algorithm regulates the DC-link voltage by adjusting the output power.
Keywords: grid-connected inverter, hysteresis current control, inverter modelling, single-phase inverter
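The constant-bandwidth HCC behaviour described above (current error kept inside a ±h band, with a switching frequency that varies over the fundamental period) can be sketched in a few lines. The circuit parameters below (DC-link voltage, filter inductance, grid amplitude, band) are illustrative assumptions, not the paper's 2 kW design.

```python
# Hysteresis current control of a single-phase grid-tied inverter:
# the bridge output toggles between +Vdc and -Vdc whenever the current
# error leaves the +/- h band. Parameter values are assumed.
import math

Vdc, L, Rf = 400.0, 5e-3, 0.1        # DC link (V), filter L (H), R (ohm)
Vg_amp, f = 325.0, 50.0              # grid peak voltage (V), frequency (Hz)
I_ref_amp, h = 12.0, 0.5             # reference peak current (A), band (A)
dt, T = 1e-6, 0.02                   # sim step (s), one fundamental period

i, u = 0.0, +1                       # inductor current, switch state (+/-1)
switchings = 0
for k in range(int(T / dt)):
    t = k * dt
    i_ref = I_ref_amp * math.sin(2 * math.pi * f * t)
    err = i_ref - i
    if err > h and u < 0:            # current too low -> apply +Vdc
        u, switchings = +1, switchings + 1
    elif err < -h and u > 0:         # current too high -> apply -Vdc
        u, switchings = -1, switchings + 1
    vg = Vg_amp * math.sin(2 * math.pi * f * t)
    di = (u * Vdc - vg - Rf * i) / L
    i += di * dt                     # forward-Euler inductor dynamics

f_sw_avg = switchings / 2 / T        # two togglings per switching cycle
print(f"average switching frequency ~ {f_sw_avg/1e3:.1f} kHz")
```

Running this sketch also reproduces the qualitative effect the abstract reports: the instantaneous switching frequency is highest near the grid-voltage zero crossings and lowest near the peaks, so the device must be rated for the maximum, not just the average.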
Procedia PDF Downloads 479
316 Dependence of the Photoelectric Exponent on the Source Spectrum of the CT
Authors: Rezvan Ravanfar Haghighi, V. C. Vani, Suresh Perumal, Sabyasachi Chatterjee, Pratik Kumar
Abstract:
The X-ray attenuation coefficient [µ(E)] of any substance, at energy E, is the sum of the contributions from Compton scattering [µCom(E)] and the photoelectric effect [µPh(E)]. In terms of the electron density (ρe) and the effective atomic number (Zeff), µCom(E) is proportional to ρe·fKN(E), while µPh(E) is proportional to (ρe·Zeff^x)/E^y, with fKN(E) being the Klein-Nishina formula and x and y the exponents for the photoelectric effect. By taking the sample's HU at two different excitation voltages (V = V1, V2) of the CT machine, we can solve for X = ρe and Y = ρe·Zeff^x from these two independent equations, as is attempted in DECT inversion. Since µCom(E) and µPh(E) are both energy dependent, the coefficients of inversion also depend on (a) the source spectrum S(E,V) and (b) the detector efficiency D(E) of the CT machine. In the present paper we tabulate these coefficients of inversion for different practical manifestations of S(E,V) and D(E). The HU(V) values from the CT follow <µ(V)> = <µw(V)>[1 + HU(V)/1000], where the subscript 'w' refers to water and the averaging process <…> accounts for the source spectrum S(E,V) and the detector efficiency D(E). Linearity of µ(E) with respect to X and Y implies that (a) <µ(V)> is a linear combination of X and Y and (b) for inversion, X and Y can be written as linear combinations of two independent observations <µ(V1)> and <µ(V2)> with V1 ≠ V2. These coefficients of inversion naturally depend upon S(E,V) and D(E). We numerically investigate this dependence for some practical cases, taking V = 100 and 140 kVp, as used in cardiological investigations. The S(E,V) are generated using the Boone-Seibert source spectrum, superposed on aluminium filters of thickness lAl with 7 mm ≤ lAl ≤ 12 mm, and D(E) is taken to be that of a typical Si(Li) solid-state or GdOS scintillator detector.
With X and Y found using the calculated inversion coefficients, the errors are below 2% for data with solutions of glycerol, sucrose, and glucose. For low-Zeff materials like propionic acid, Zeff^x is overestimated by 20%, with X accurate to within 1%. For high-Zeff materials like KOH, Zeff^x is underestimated by 22%, while the error in X is +15%. These results imply that the source may have additional filtering beyond the aluminium filter specified by the manufacturer. It is also found that the difference between the inversion coefficients for the two types of detectors is negligible: the type of detector does not affect the DECT inversion algorithm used to find the unknown chemical characteristics of the scanned materials. The effect of the source, however, should be considered an important factor in calculating the coefficients of inversion.
Keywords: attenuation coefficient, computed tomography, photoelectric effect, source spectrum
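Once the spectrum-averaged coefficients are tabulated, the DECT inversion described above reduces to a 2x2 linear solve. A minimal sketch follows; the coefficients a(V) and b(V) are made-up placeholders standing in for the tabulated inversion coefficients, not values from the paper.

```python
# DECT inversion sketch: <mu(V)> = a(V)*X + b(V)*Y with X = rho_e and
# Y = rho_e * Zeff^x; two scans at V1 != V2 give a 2x2 linear system.
# The a, b values below are assumed placeholders (arbitrary units).
import numpy as np

a = {100: 0.180, 140: 0.172}   # Compton-like coefficient per kVp (assumed)
b = {100: 0.055, 140: 0.028}   # photoelectric-like coefficient (assumed)

def invert(mu100, mu140):
    """Solve for X = rho_e and Y = rho_e * Zeff^x from two measurements."""
    A = np.array([[a[100], b[100]],
                  [a[140], b[140]]])
    X, Y = np.linalg.solve(A, np.array([mu100, mu140]))
    return X, Y

# Forward-simulate a material with X = 1.0, Y = 2.0, then invert:
X_true, Y_true = 1.0, 2.0
mu100 = a[100] * X_true + b[100] * Y_true
mu140 = a[140] * X_true + b[140] * Y_true
X, Y = invert(mu100, mu140)
print(f"recovered X = {X:.3f}, Y = {Y:.3f}")
```

The abstract's point is precisely that the entries of the matrix A are not universal: they shift with the source spectrum S(E,V) and filtration, which is why unmodelled extra filtering biases Zeff^x while leaving X nearly intact.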
Procedia PDF Downloads 400
315 Multi-Objective Discrete Optimization of External Thermal Insulation Composite Systems in Terms of Thermal and Embodied Energy Performance
Authors: Berfin Yildiz
Abstract:
These days, increasing global warming effects, the limited amount of energy resources, etc., necessitate an awareness that must be present in every professional group. The architecture and construction sectors are responsible for both the embodied and the operational energy of materials. This responsibility has led designers to seek alternative solutions for energy-efficient material selection. The choice of energy-efficient materials requires consideration of the entire life cycle, including the building's production, use, and disposal energy. The aim of this study is to investigate a material selection method for external thermal insulation composite systems (ETICS). Embodied and in-use energy values of the material alternatives were used for the evaluation. The operational energy is calculated according to the u-value calculation method defined in the TS 825 (Thermal Insulation Requirements) standard for Turkey, and the embodied energy is calculated based on the manufacturer's Environmental Product Declaration (EPD). An ETICS consists of wall, adhesive, insulation, lining, mechanical, mesh, and exterior finishing materials. In this study, the lining, mechanical, and mesh materials were ignored because EPD documents could not be obtained. The material selection problem is designed around a hypothetical volume (5x5x3 m) and defined as a multi-objective discrete optimization problem for external thermal insulation composite systems. Defining the problem as a discrete optimization problem is important in order to choose between materials of various thicknesses and sizes. Since the production and use energy values, which are the optimization objectives of the study, are often conflicting, material selection is defined as a multi-objective optimization problem, and the aim is to obtain many solution alternatives by using the Hypervolume Estimation (HypE) algorithm. The evolution process started with 100 individuals and continued for 50 generations.
According to the obtained results, autoclaved aerated concrete and pumice block as wall materials, and glass wool as insulation material, gave the best results.
Keywords: embodied energy, multi-objective discrete optimization, performative design, thermal insulation
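HypE itself is an evolutionary algorithm, but the discrete two-objective trade-off it searches can be illustrated by brute-force enumeration of a tiny design space and extraction of the non-dominated (Pareto) set. All material names and energy numbers below are invented placeholders, not values derived from TS 825 or any EPD.

```python
# Toy two-objective ETICS selection: enumerate wall/insulation
# combinations, keep the Pareto front over (embodied, operational)
# energy. Every number here is an assumed placeholder.
from itertools import product

walls = {"AAC block": (400, 0.45), "pumice block": (350, 0.55)}   # (MJ/m2, U-like)
insulations = {                                                    # per thickness
    "glass wool 5cm": (60, 0.60), "glass wool 10cm": (120, 0.33),
    "EPS 5cm": (90, 0.62), "EPS 10cm": (180, 0.35),
}

def objectives(wall, ins):
    embodied = walls[wall][0] + insulations[ins][0]               # MJ/m2
    operational = 1000.0 * walls[wall][1] * insulations[ins][1]   # toy in-use energy
    return embodied, operational

designs = {(w, i): objectives(w, i) for w, i in product(walls, insulations)}

def dominated(p, q):
    """True if q is at least as good as p in both objectives, better in one."""
    return q[0] <= p[0] and q[1] <= p[1] and q != p

pareto = [d for d, p in designs.items()
          if not any(dominated(p, q) for q in designs.values())]
for d in sorted(pareto, key=lambda d: designs[d][0]):
    print(d, designs[d])
```

Because the two objectives conflict (more insulation lowers in-use energy but raises embodied energy), no single design wins; the output is a set of alternatives, which is exactly why the study asks a multi-objective algorithm for "many solution alternatives" rather than one optimum.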
Procedia PDF Downloads 141
314 Systematic Review of Quantitative Risk Assessment Tools and Their Effect on Racial Disproportionality in Child Welfare Systems
Authors: Bronwen Wade
Abstract:
Over the last half-century, child welfare systems have increasingly relied on quantitative risk assessment tools, such as actuarial or predictive risk tools. These tools are developed by statistically analyzing how attributes captured in administrative data relate to future child maltreatment. Some scholars argue that attributes in administrative data can serve as proxies for race and that quantitative risk assessment tools reify racial bias in decision-making. Others argue that these tools provide more “objective” and “scientific” guides for decision-making than subjective social worker judgment. This study performs a systematic review of the literature on the impact of quantitative risk assessment tools on racial disproportionality; it examines methodological biases in work on this topic, summarizes key findings, and provides suggestions for further work. A search of CINAHL, PsycINFO, the ProQuest Social Science Premium Collection, and the ProQuest Dissertations and Theses Collection was performed. Academic and grey literature were included. The review includes studies that use quasi-experimental methods as well as development, validation, or re-validation studies of quantitative risk assessment tools. PROBAST (Prediction model Risk of Bias Assessment Tool) and CHARMS (CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies) were used to assess the risk of bias and guide data extraction for the risk development, validation, or re-validation studies. ROBINS-I (Risk of Bias in Non-Randomized Studies of Interventions) was used to assess bias and guide data extraction for the quasi-experimental studies identified. Due to heterogeneity among the papers, a meta-analysis was not feasible, and a narrative synthesis was conducted. Eleven papers met the eligibility criteria, and each has a high overall risk of bias based on the PROBAST and ROBINS-I assessments.
This is deeply concerning, as major policy decisions have been made based on a limited number of studies with a high risk of bias. The findings on racial disproportionality have been mixed and depend on the tool and approach used. Authors use various definitions for racial equity, fairness, or disproportionality. These concepts of statistical fairness are connected to theories about the reason for racial disproportionality in child welfare or social definitions of fairness that are usually not stated explicitly. Most findings from these studies are unreliable, given the high degree of bias. However, some of the less biased measures within studies suggest that quantitative risk assessment tools may worsen racial disproportionality, depending on how disproportionality is mathematically defined. Authors vary widely in their approach to defining and addressing racial disproportionality within studies, making it difficult to generalize findings or approaches across studies. This review demonstrates the power of authors to shape policy or discourse around racial justice based on their choice of statistical methods; it also demonstrates the need for improved rigor and transparency in studies of quantitative risk assessment tools. Finally, this review raises concerns about the impact that these tools have on child welfare systems and racial disproportionality.
Keywords: actuarial risk, child welfare, predictive risk, racial disproportionality
Procedia PDF Downloads 54
313 Corpus-Based Neural Machine Translation: Empirical Study Multilingual Corpus for Machine Translation of Opaque Idioms - Cloud AutoML Platform
Authors: Khadija Refouh
Abstract:
Culture-bound expressions have been a bottleneck for natural language processing (NLP) and comprehension, especially in the case of machine translation (MT). In the last decade, the field of machine translation has greatly advanced. Neural machine translation (NMT) has recently achieved considerable improvements in translation quality, outperforming previous traditional translation systems in many language pairs. NMT applies artificial intelligence (AI) and deep neural networks to language processing. Despite this development, serious challenges remain for NMT when translating culture-bound expressions, especially for low-resource language pairs such as Arabic-English and Arabic-French, which is not the case with well-established language pairs such as English-French. Machine translations of opaque idioms from English into French are likely to be more accurate than translations from English into Arabic. For example, the Google Translate application translated the sentence “What a bad weather! It runs cats and dogs.” into the target language Arabic as “يا له من طقس سيء! تمطر القطط والكلاب”, an inaccurate literal translation. The translation of the same sentence into the target language French was “Quel mauvais temps! Il pleut des cordes.”, where the Google Translate application used the accurate corresponding French idiom. This paper aims to perform NMT experiments towards a better translation of opaque idioms using a high-quality, clean multilingual corpus, collected analytically from human-generated idiom translations. AutoML Translation, a Google neural machine translation platform, is used as a custom translation model to improve the translation of opaque idioms. The automatic evaluation of the custom model will be compared to Google NMT using the Bilingual Evaluation Understudy (BLEU) score.
BLEU is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. Human evaluation is integrated to test the reliability of the BLEU score. The researcher will examine syntactic, lexical, and semantic features using Halliday's functional theory.
Keywords: multilingual corpora, natural language processing (NLP), neural machine translation (NMT), opaque idioms
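The BLEU metric used here combines clipped n-gram precisions (n = 1..4) with a brevity penalty, following the standard Papineni et al. formulation. A toy, unsmoothed sentence-level sketch (real evaluations use smoothing and corpus-level statistics, e.g. sacrebleu):

```python
# Unsmoothed sentence-level BLEU: geometric mean of clipped n-gram
# precisions (n = 1..4) times a brevity penalty.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    cand, ref = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        c_ngr, r_ngr = ngrams(cand, n), ngrams(ref, n)
        clipped = sum(min(c, r_ngr[g]) for g, c in c_ngr.items())
        total = max(sum(c_ngr.values()), 1)
        if clipped == 0:
            return 0.0            # unsmoothed: any zero precision -> score 0
        log_prec += math.log(clipped / total) / max_n
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(log_prec)

print(bleu("il pleut des cordes", "il pleut des cordes"))  # exact match
print(bleu("il pleut des chats et des chiens", "il pleut des cordes"))
```

The second call illustrates why BLEU penalizes literal idiom translations: a word-for-word rendering shares few higher-order n-grams with the idiomatic reference, so the unsmoothed score collapses, which is also why the study pairs BLEU with human evaluation.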
Procedia PDF Downloads 149
312 Groundwater Potential Delineation Using Geodetector Based Convolutional Neural Network in the Gunabay Watershed of Ethiopia
Authors: Asnakew Mulualem Tegegne, Tarun Kumar Lohani, Abunu Atlabachew Eshete
Abstract:
Groundwater potential delineation is essential for efficient water resource utilization and long-term development. The scarcity of potable and irrigation water has become a critical issue due to natural and anthropogenic activities and the demands of human survival and productivity. Under these constraints, groundwater resources are now being used extensively in Ethiopia. Therefore, an innovative convolutional neural network (CNN) is successfully applied in the Gunabay watershed to delineate groundwater potential based on the selected major influencing factors. Groundwater recharge, lithology, drainage density, lineament density, transmissivity, and geomorphology were selected as the major influencing factors for the groundwater potential of the study area. Of the 128 samples in total, 70% were selected for training and 30% were used for testing. The spatial distribution of groundwater potential was classified into five groups: very low (10.72%), low (25.67%), moderate (31.62%), high (19.93%), and very high (12.06%). The area receives high rainfall but has a very low amount of recharge due to a lack of proper soil and water conservation structures. The major outcome of the study is that moderate and low potential are dominant. Geodetector results revealed that the magnitudes of influence on groundwater potential rank as transmissivity (0.48), recharge (0.26), lineament density (0.26), lithology (0.13), drainage density (0.12), and geomorphology (0.06). The model results showed that, using a convolutional neural network (CNN), groundwater potential can be delineated with high predictive capability and accuracy. CNN-based AUC validation showed accuracies of 81.58% and 86.84% for the training and testing values, respectively. Based on the findings, the local government can receive technical assistance for groundwater exploration and sustainable water resource development in the Gunabay watershed.
Finally, the use of a geodetector-based deep learning algorithm can provide a new platform for industrial sectors, groundwater experts, scholars, and decision-makers.
Keywords: CNN, geodetector, groundwater influencing factors, groundwater potential, Gunabay watershed
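The geodetector ranking quoted above (transmissivity 0.48, recharge 0.26, ...) is the factor detector's q-statistic, q = 1 - Σ_h N_h σ_h² / (N σ²), where h indexes the strata of one factor. A small sketch with invented sample data (a "potential" value per sample and a lithology class):

```python
# Geodetector factor detector: q measures how much of the variance of a
# response is explained by stratifying on one factor (0 = none, 1 = all).
# The sample values and lithology classes below are invented.
import statistics

def q_statistic(values, strata):
    """Power of determinant of one factor over a response variable."""
    n = len(values)
    total_var = statistics.pvariance(values)
    groups = {}
    for v, s in zip(values, strata):
        groups.setdefault(s, []).append(v)
    within = sum(len(g) * statistics.pvariance(g) for g in groups.values())
    return 1.0 - within / (n * total_var)

potential = [0.9, 0.8, 0.85, 0.2, 0.3, 0.25, 0.55, 0.6]
lithology = ["basalt", "basalt", "basalt", "shale", "shale", "shale",
             "alluvium", "alluvium"]
print(f"q(lithology) = {q_statistic(potential, lithology):.3f}")
```

A factor whose strata cleanly separate high- and low-potential samples, as in this toy data, gets a q near 1; in the study, transmissivity's q of 0.48 marks it as the strongest single determinant.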
Procedia PDF Downloads 22
311 Using Optical Character Recognition to Manage the Unstructured Disaster Data into Smart Disaster Management System
Authors: Dong Seop Lee, Byung Sik Kim
Abstract:
In the Fourth Industrial Revolution, various intelligent technologies have been developed in many fields. These artificial intelligence technologies are applied in various services, including disaster management. Disaster information management does not just support disaster work; it is also the foundation of smart disaster management, and it recovers historical disaster information using artificial intelligence technology. Disaster information is one of the important elements of the entire disaster cycle. Disaster information management refers to the act of managing and processing electronic data about the disaster cycle from occurrence to progress, response, and planning. However, information about status control, response, and recovery from natural and social disaster events is mainly managed in structured and unstructured report form, existing as handouts or hard copies of reports. Such unstructured data is often lost or destroyed due to inefficient management, so it is necessary to manage unstructured data as disaster information. In this paper, an Optical Character Recognition (OCR) approach is used to convert handouts, hard copies, and images of reports, whether printed or generated by scanners, into electronic documents. The converted disaster data is then organized into the disaster code system as disaster information and stored in the disaster database system. Gathering and creating disaster information based on Optical Character Recognition for unstructured data is an important element of smart disaster management. In this paper, Korean character recognition was improved to a rate of over 90% by using an upgraded OCR. The character recognition rate depends on the fonts, size, and special symbols of the characters; we improved it through a machine learning algorithm.
The converted structured data is managed in a standardized disaster information form connected with the disaster code system. The disaster code system ensures that the structured information can be stored and retrieved across the entire disaster cycle, covering historical disaster progress, damages, response, and recovery. The expected effect of this research is that it can be applied to smart disaster management and decision making by combining artificial intelligence technologies and historical big data.
Keywords: disaster information management, unstructured data, optical character recognition, machine learning
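A character recognition rate like the "over 90%" figure reported above is typically computed from the edit distance between the OCR output and a ground-truth transcription. A minimal sketch with invented sample strings (a real pipeline would first run scanned report images through an OCR engine):

```python
# Character recognition rate as 1 - (edit distance / ground-truth length).
# The sample strings below are invented for illustration.
def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def char_recognition_rate(ocr_text, ground_truth):
    dist = levenshtein(ocr_text, ground_truth)
    return max(0.0, 1.0 - dist / len(ground_truth))

truth = "landslide reported at 14:20, two roads closed"
ocr = "landslide reported at 14:2O, twa roads c1osed"
print(f"recognition rate = {char_recognition_rate(ocr, truth):.1%}")
```

The typical OCR confusions in the sample (0/O, l/1) are exactly the font- and symbol-dependent errors the abstract says the machine learning step was trained to reduce.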
Procedia PDF Downloads 129
310 Pricing Techniques to Mitigate Recurring Congestion on Interstate Facilities Using Dynamic Feedback Assignment
Authors: Hatem Abou-Senna
Abstract:
Interstate 4 (I-4) is a primary east-west transportation corridor between the cities of Tampa and Daytona, serving commuter, commercial, and recreational traffic. I-4 is known to have severe recurring congestion during peak hours. The congestion spans about 11 miles during the evening peak period in the central corridor area, as I-4 is the only non-tolled limited-access facility connecting the Orlando Central Business District (CBD) and the tourist attractions area (Walt Disney World). Florida officials had been skeptical of tolling I-4 prior to the recent legislation, and the public, through the media, had been complaining about the excessive toll facilities in Central Florida. So, in search of a plausible mitigation of the congestion on the I-4 corridor, this research evaluates the effectiveness of different toll pricing alternatives that might divert traffic from I-4 to the toll facilities during the peak period. The network is composed of two main diverging limited-access highways, the freeway (I-4) and a toll road (SR 417), in addition to two east-west parallel toll roads, SR 408 and SR 528, intersecting the above-mentioned highways at both ends. I-4 and toll road SR 408 are the routes most frequently used by commuters. SR 417 is a relatively uncongested toll road that is 15 miles longer than I-4 and carries a $5 toll, compared to no monetary cost on I-4 for the same trip. The results of the calibrated Orlando PARAMICS network showed that the percentage of route diversion varies from one route to another and depends primarily on the travel cost between specific origin-destination (O-D) pairs. Most drivers going from Disney (O1) or Lake Buena Vista (O2) to Lake Mary (D1) were found to have a high propensity towards using I-4, even when tolls were eliminated and/or real-time information was provided.
However, diversion from I-4 to SR 417 for these O-D pairs occurred only in the cases of an incident or lane closure on I-4, due to the increase in delay and travel costs, and when information was provided to travelers. Furthermore, drivers who diverted from I-4 to SR 417 and SR 528 did not gain significant travel-time savings. This was attributed to the limited spare capacity of the alternative routes in the peak period and the longer traveling distance. When the remaining origin-destination pairs were analyzed, average travel time savings on I-4 ranged between 10 and 16%, amounting to 10 minutes at most, with a 10% increase in the network average speed. The propensity for diversion in the network increased significantly when tolls on SR 417 and SR 528 were eliminated while the tolls on SR 408 were doubled, in combination with the incident and lane-closure scenarios on I-4 and with real-time information provided. The toll roads were found to be a viable alternative to I-4 for these specific O-D pairs, depending on the users' perception of the toll cost, which was reflected in their specific travel times. However, on the macroscopic level, it was concluded that route diversion through toll reduction or elimination on the surrounding toll roads would have only a minimal impact on reducing I-4 congestion during the peak period.
Keywords: congestion pricing, dynamic feedback assignment, microsimulation, paramics, route diversion
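The diversion propensities discussed above hinge on generalized cost: travel time plus the toll converted through a value of time. A binary logit sketch illustrates the mechanism; the times, toll, value of time, and scale parameter are illustrative assumptions, not calibrated PARAMICS outputs.

```python
# Binary logit split between a free congested route (I-4-like) and a
# longer tolled route (SR 417-like). All parameter values are assumed.
import math

def generalized_cost(time_min, toll_usd, vot_usd_per_hr=15.0):
    return time_min + toll_usd / vot_usd_per_hr * 60.0   # all in minutes

def logit_share(cost_a, cost_b, theta=0.1):
    """Probability of choosing route A over route B."""
    ua, ub = -theta * cost_a, -theta * cost_b
    return math.exp(ua) / (math.exp(ua) + math.exp(ub))

# Normal peak: the freeway is congested but free; the toll road is
# longer in distance and carries a $5 toll.
p_i4 = logit_share(generalized_cost(40.0, 0.0),          # freeway
                   generalized_cost(35.0, 5.0))          # toll road
# Incident on the freeway: delay grows, diversion propensity rises.
p_i4_incident = logit_share(generalized_cost(70.0, 0.0),
                            generalized_cost(35.0, 5.0))
print(f"freeway share: normal {p_i4:.2f}, incident {p_i4_incident:.2f}")
```

The sketch reproduces the qualitative finding: under normal peak conditions the toll term keeps most drivers on the free route, and only a large incident delay flips the generalized-cost comparison in favor of the toll road.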
Procedia PDF Downloads 178

309 Research on the Conservation Strategy of Territorial Landscape Based on Characteristics: The Case of Fujian, China
Authors: Tingting Huang, Sha Li, Geoffrey Griffiths, Martin Lukac, Jianning Zhu
Abstract:
Territorial landscapes have experienced a gradual loss of their typical characteristics during long-term human activities. In order to protect the integrity of regional landscapes, it is necessary to characterize, evaluate and protect them in a graded manner. The study takes Fujian, China, as an example and classifies the landscape characters of the site at the regional, middle, and detailed scales. A multi-scale approach combining parametric and holistic methods is used to classify and partition the landscape character types (LCTs) and landscape character areas (LCAs) at different scales, and a multi-element landscape assessment approach is adopted to explore conservation strategies for the landscape character. Firstly, multiple fields and elements of geography, nature and the humanities were selected as the basis of assessment according to the scales. Secondly, the study takes a parametric approach to the classification and partitioning of landscape character, applying Principal Component Analysis and a two-stage cluster analysis (k-means and GMM) in MATLAB to obtain the LCTs, combining these with the Canny edge detection algorithm to obtain landscape character contours, and correcting the LCTs and LCAs by field survey and manual identification. Finally, the study adopts the Landscape Sensitivity Assessment method to perform landscape character conservation analysis and formulates five strategies for different LCAs: conservation, enhancement, restoration, creation, and combination. This multi-scale identification approach can efficiently integrate multiple types of landscape character elements, reduce the difficulty of broad-scale operations in the process of landscape character conservation, and provide a basis for landscape character conservation strategies.
Based on the natural background and the restoration of regional characteristics, the results of the landscape character assessment are scientific and objective and can provide a strong reference in regional and national scale territorial spatial planning.
Keywords: parameterization, multi-scale, landscape character identification, landscape character assessment
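The parametric pipeline described in this abstract (Principal Component Analysis followed by two-stage clustering to derive LCTs) can be sketched in miniature. The study used MATLAB; the sketch below uses Python with synthetic data purely for illustration, and the GMM refinement and Canny contour steps are noted but omitted. The data, number of components, and number of clusters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for multi-element landscape data (illustrative):
# rows = map cells, columns = geographic/natural/human attributes.
X = np.vstack([
    rng.normal(0.0, 1.0, (50, 6)),   # one hypothetical character type
    rng.normal(5.0, 1.0, (50, 6)),   # another
])

# Step 1: Principal Component Analysis via SVD of the centered data.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T               # keep the first two components

# Step 2: first-stage k-means clustering on the PCA scores.
def kmeans(data, k, iters=100):
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(data[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([
            data[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
    return labels

labels = kmeans(scores, k=2)
# In the paper's pipeline, a GMM then refines these clusters and Canny
# edge detection extracts the landscape character contours (not shown).
```

Each resulting label corresponds to a candidate LCT; spatially contiguous runs of one label would form the LCAs.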
Procedia PDF Downloads 99

308 Automatic Furrow Detection for Precision Agriculture
Authors: Manpreet Kaur, Cheol-Hong Min
Abstract:
Increasing advances in robotics equipped with machine vision sensors applied to precision agriculture offer a promising solution for various problems on agricultural farms. An important issue related to the machine vision system concerns crop row and weed detection. This paper proposes an automatic furrow detection system based on real-time processing for identifying crop rows in maize fields in the presence of weeds. The vision system is designed to be installed on farming vehicles, that is, subjected to vibration and other undesired movements. The images are captured in perspective and are affected by the above undesired effects. The goal is to identify crop rows for vehicle navigation, which includes weed removal, where weeds are identified as plants outside the crop rows. The image quality is affected by different lighting conditions and by gaps along the crop rows due to lack of germination and incorrect planting. The proposed image processing method consists of four different processes. First, image segmentation is performed based on an HSV (hue, saturation, value) decision tree. The proposed algorithm uses the HSV color space to discriminate crops, weeds and soil. The region of interest is defined by filtering each of the HSV channels between maximum and minimum threshold values. Then, noise in the images is eliminated by means of a hybrid median filter. Further, mathematical morphological processes, i.e., erosion to remove smaller objects followed by dilation to gradually enlarge the boundaries of regions of foreground pixels, are applied to enhance the image contrast. To accurately detect the position of the crop rows, the region of interest is defined by creating a binary mask. Finally, edge detection and the Hough transform are applied to detect lines represented in polar coordinates, with furrow directions appearing as accumulations on the angle axis of the Hough space.
The experimental results show that the method is effective.
Keywords: furrow detection, morphological, HSV, Hough transform
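The final step of the pipeline, the Hough transform over the binary edge mask, can be illustrated with a minimal accumulator. A deployed system would typically call OpenCV's `cv2.HoughLines` on the morphologically cleaned image; the numpy version below is only a sketch on a synthetic mask with a single vertical "furrow" line.

```python
import numpy as np

# Synthetic binary edge mask standing in for a segmented crop-row image:
# a single furrow line along column 20 of a 40x40 field.
mask = np.zeros((40, 40), dtype=bool)
mask[:, 20] = True

# Hough transform: a line is parameterized as rho = x*cos(t) + y*sin(t).
thetas = np.deg2rad(np.arange(0, 180))           # angle axis of Hough space
diag = int(np.ceil(np.hypot(*mask.shape)))       # max possible |rho|
rhos = np.arange(-diag, diag + 1)
acc = np.zeros((len(rhos), len(thetas)), dtype=int)

ys, xs = np.nonzero(mask)
for x, y in zip(xs, ys):
    # Each edge pixel votes for every (rho, theta) line passing through it.
    rho = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
    acc[rho + diag, np.arange(len(thetas))] += 1

# The strongest accumulator peak gives the furrow offset and direction.
r_idx, t_idx = np.unravel_index(acc.argmax(), acc.shape)
rho_peak, theta_peak = rhos[r_idx], np.rad2deg(thetas[t_idx])
```

For the vertical line at column 20, the peak lands at theta = 0 degrees and rho = 20, with all 40 edge pixels voting for it; this is exactly the "accumulation on the angle axis" the abstract describes.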
Procedia PDF Downloads 231

307 Feasibility of Voluntary Deep Inspiration Breath-Hold Radiotherapy Technique Implementation without Deep Inspiration Breath-Hold-Assisting Device
Authors: Auwal Abubakar, Shazril Imran Shaukat, Noor Khairiah A. Karim, Mohammed Zakir Kassim, Gokula Kumar Appalanaido, Hafiz Mohd Zin
Abstract:
Background: Voluntary deep inspiration breath-hold radiotherapy (vDIBH-RT) is an effective cardiac dose reduction technique during left breast radiotherapy. This study aimed to assess the accuracy of the implementation of the vDIBH technique among left breast cancer patients without the use of a special device such as a surface-guided imaging system. Methods: The vDIBH-RT technique was implemented among thirteen (13) left breast cancer patients at the Advanced Medical and Dental Institute (AMDI), Universiti Sains Malaysia. Breath-hold monitoring was performed based on breath-hold skin marks and laser light congruence observed on zoomed CCTV images from the control console during each delivery. The initial setup was verified using cone beam computed tomography (CBCT) during breath-hold. Each field was delivered using multiple beam segments to allow a delivery time of 20 seconds, which can be tolerated by patients in breath-hold. The data were analysed using an in-house developed MATLAB algorithm. The planning target volume (PTV) margin was computed based on van Herk's margin recipe. Results: The setup error analysed from CBCT shows that the population systematic error in the lateral (x), longitudinal (y), and vertical (z) axes was 2.28 mm, 3.35 mm, and 3.10 mm, respectively. Based on the CBCT image guidance, the PTV margin that would be required for vDIBH-RT using the CCTV/laser monitoring technique is 7.77 mm, 10.85 mm, and 10.93 mm in the x, y, and z axes, respectively. Conclusion: It is feasible to safely implement vDIBH-RT among left breast cancer patients without special equipment. The breath-hold monitoring technique is cost-effective, radiation-free, easy to implement, and allows real-time breath-hold monitoring.
Keywords: vDIBH, cone beam computed tomography, radiotherapy, left breast cancer
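The margin computation above follows van Herk's recipe, M = 2.5 Σ + 0.7 σ, where Σ is the population systematic error and σ the population random error per axis. The abstract reports Σ and the resulting margins but not σ, so the σ values in this sketch are back-calculated from the reported margins purely for illustration.

```python
# Sketch of the van Herk margin recipe: margin = 2.5 * Sigma + 0.7 * sigma.
def van_herk_margin(Sigma_mm: float, sigma_mm: float) -> float:
    """PTV margin (mm) for given systematic (Sigma) and random (sigma) errors."""
    return 2.5 * Sigma_mm + 0.7 * sigma_mm

# Population systematic errors reported in the abstract (x, y, z), in mm:
systematic = {"x": 2.28, "y": 3.35, "z": 3.10}
# Random errors are not quoted in the abstract; these values are
# back-calculated from the reported margins for illustration only:
random_err = {"x": 2.96, "y": 3.54, "z": 4.54}

margins = {ax: round(van_herk_margin(systematic[ax], random_err[ax]), 2)
           for ax in "xyz"}
```

With these inputs, the computed margins reproduce the reported 7.77 mm, 10.85 mm, and 10.93 mm per axis.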
Procedia PDF Downloads 57

306 A Semi-supervised Classification Approach for Trend Following Investment Strategy
Authors: Rodrigo Arnaldo Scarpel
Abstract:
Trend following is a widely accepted investment strategy that adopts a rule-based trading mechanism rather than striving to predict market direction or relying on information gathering to decide when to buy and when to sell a stock. Thus, in trend following one must respond to the market's recent and current movements rather than to what will happen. The optimum in a trend following strategy is to catch a bull market at its early stage, ride the trend, and liquidate the position at the first evidence of the subsequent bear market. To apply the trend following strategy, one needs to find the trend and identify trade signals. In order to avoid false signals, i.e., to distinguish short-, mid- and long-term fluctuations and to separate noise from real changes in the trend, most academic works rely on moving averages and other technical analysis indicators, such as the moving average convergence divergence (MACD) and the relative strength index (RSI), to uncover intelligible stock trading rules following the trend following philosophy. Recently, some works have applied machine learning techniques for trade rule discovery. In those works, the process of rule construction is based on evolutionary learning, which aims to adapt the rules to the current environment and searches for the globally optimal rules in the search space. In this work, instead of focusing on the usage of machine learning techniques for creating trading rules, a time series trend classification employing a semi-supervised approach was used to identify early both the beginning and the end of upward and downward trends. Such a classification model can be employed to identify trade signals, and the decision-making procedure is that if an up-trend (down-trend) is identified, a buy (sell) signal is generated.
Semi-supervised learning is used for model training when only part of the data is labeled, and semi-supervised classification aims to train a classifier from both the labeled and unlabeled data such that it is better than the supervised classifier trained only on the labeled data. For illustrating the proposed approach, daily trade information was employed, including the open, high, low and closing values and volume, from January 1, 2000 to December 31, 2022, of the São Paulo Exchange Composite Index (IBOVESPA). Over this time period, consistent changes in price, upwards or downwards, were visually identified for assigning labels, leaving the rest of the days (when there is no consistent change in price) unlabeled. For training the classification model, a pseudo-label semi-supervised learning strategy was used, employing different technical analysis indicators. In this learning strategy, the core is to use unlabeled data to generate a pseudo-label for supervised training. For evaluating the achieved results, the annualized return and excess return and the Sortino and Sharpe indicators were considered. Through the evaluated time period, the obtained results were very consistent and can be considered promising for generating the intended trading signals.
Keywords: evolutionary learning, semi-supervised classification, time series data, trading signals generation
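The core of the pseudo-label strategy, predicting unlabeled days and adopting the confident predictions as training labels, can be sketched with a toy nearest-centroid classifier. The one-dimensional "indicator" feature, the class separation, and the confidence threshold below are illustrative assumptions, not the paper's actual indicators or model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for a technical-analysis indicator: up-trend days
# cluster near +1, down-trend days near -1.
X = np.concatenate([rng.normal(1, 0.3, 100), rng.normal(-1, 0.3, 100)])
y_true = np.r_[np.ones(100), np.zeros(100)]     # 1 = up-trend, 0 = down-trend

# Only a few visually identified days carry labels; the rest are -1 (unlabeled).
y = np.full(len(X), -1.0)
y[[0, 1, 100, 101]] = y_true[[0, 1, 100, 101]]

def centroids(X, y):
    # Class centroids computed from currently labeled days only.
    return {c: X[y == c].mean() for c in (0.0, 1.0)}

# Pseudo-label loop: predict unlabeled days, adopt the confident ones.
for _ in range(5):
    cen = centroids(X, y)
    pred = np.where(np.abs(X - cen[1.0]) < np.abs(X - cen[0.0]), 1.0, 0.0)
    # Confidence = margin between the distances to the two centroids.
    conf = np.abs(np.abs(X - cen[0.0]) - np.abs(X - cen[1.0]))
    adopt = (y < 0) & (conf > 0.5)              # high-confidence pseudo-labels
    y[adopt] = pred[adopt]

# Final trend classification = trading signal (1 = buy, 0 = sell).
cen = centroids(X, y)
signals = np.where(np.abs(X - cen[1.0]) < np.abs(X - cen[0.0]), 1.0, 0.0)
accuracy = (signals == y_true).mean()
```

Starting from only four labels, the loop propagates labels to almost all days and recovers the underlying trend classes, mirroring the idea of using unlabeled days to strengthen the supervised signal.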
Procedia PDF Downloads 89

305 Awareness of Organic Products in Bangladesh: A Marketing Perspective
Authors: Sheikh Mohammed Rafiul Huque
Abstract:
Since its inception, Bangladesh has been an economy fuelled by agriculture, which contributes significantly to its GDP. Agriculture in Bangladesh has historically depended predominantly on organic inputs, though over recent decades inorganic inputs have taken their place due to the high demand for food from a rapidly growing population. Meanwhile, a new niche market segment favoring organic products has been evolving in urban areas, though the 71.1% of the population living in rural areas depends mainly on conventional products. This new segment is in search of healthier and safer sources of food and believes that organic products are the solution. In Bangladesh, food adulteration is a very common practice among shopkeepers to extend the shelf life of raw vegetables and fruits. The niche group of city dwellers is aware of this and is gradually shifting its buying behavior to organic products. A recent survey on organic farming revealed 16,200 hectares under organic farming at present, up from only 2,500 hectares in 2008. This study focuses on consumer awareness of organic products and explores the factors affecting organic food consumption among high-income groups. Hypotheses were developed to explore the effects of gender (GENDER), ability to purchase (ABILITY) and health awareness (HEALTH) on purchase intention (INTENTION). Snowball sampling was administered among 150 high-income respondents in Dhaka city; in this sampling process, only respondents who had consumed organic products were included. A Partial Least Squares (PLS) method was used to analyze the data using path analysis. The analysis revealed that the coefficient of determination R2 is 0.829 for the INTENTION endogenous latent variable.
This means that the three latent variables (GENDER, ABILITY, and HEALTH) significantly explain 82.9% of the variance in the INTENTION of purchasing organic products. Moreover, GENDER solely explains 6.3% and 8.6% of the variability of ABILITY and HEALTH, respectively. The inner model suggests that HEALTH has the strongest effect on INTENTION, a negative one (-0.647), followed by ABILITY (0.344) and GENDER (0.246). The hypothesized path relationships ABILITY->INTENTION, HEALTH->INTENTION and GENDER->INTENTION are statistically significant. Furthermore, the hypothesized path relationships GENDER->ABILITY (0.262) and GENDER->HEALTH (-0.292) are also statistically significant. The purpose of the study is to demonstrate how an organic product producer can improve his participatory guarantee system (PGS) while marketing the products. The study focuses on understanding the gender (GENDER), ability (ABILITY) and health (HEALTH) factors while positioning the products (INTENTION) in the mind of the consumer. In this study, the respondents were found to care about high price and ability to purchase, variables with loadings of -0.920 and 0.898; these are good indicators of ability to purchase (ABILITY). Marketers should consider the price of organic relative to conventional products while marketing; otherwise, this will create a negative intention to buy, with a loading of -0.939. Meanwhile, it is also revealed that the believability of the chemical-free component in organic products and health awareness affect the health (HEALTH) component, with high loadings of -0.941 and 0.682. The study finds that low believability of the chemical-free component and the high price of organic products affect the intention to buy. Marketers should not overlook this point while targeting consumers in Bangladesh.
Keywords: health awareness, organic products, purchase ability, purchase intention
Procedia PDF Downloads 376

304 A Risk Assessment Tool for the Contamination of Aflatoxins on Dried Figs Based on Machine Learning Algorithms
Authors: Kottaridi Klimentia, Demopoulos Vasilis, Sidiropoulos Anastasios, Ihara Diego, Nikolaidis Vasileios, Antonopoulos Dimitrios
Abstract:
Aflatoxins are highly poisonous and carcinogenic compounds produced by species of the genus Aspergillus that can infect a variety of agricultural foods, including dried figs. Biological and environmental factors, such as the population, pathogenicity, and aflatoxinogenic capacity of the strains, as well as the topography, soil, and climate parameters of the fig orchards, are believed to have a strong effect on aflatoxin levels. Existing methods for aflatoxin detection and measurement, such as high-performance liquid chromatography (HPLC) and enzyme-linked immunosorbent assay (ELISA), can provide accurate results, but the procedures are usually time-consuming, sample-destructive, and expensive. Predicting aflatoxin levels prior to crop harvest is useful for minimizing the health and financial impact of a contaminated crop. Consequently, there is interest in developing a tool that predicts aflatoxin levels based on topography and soil analysis data of fig orchards. This paper describes the development of a risk assessment tool for the contamination of aflatoxin on dried figs, based on the location and altitude of the fig orchards, the population of the fungus Aspergillus spp. in the soil, and soil parameters such as pH, saturation percentage (SP), electrical conductivity (EC), organic matter, particle size analysis (sand, silt, clay), the concentration of the exchangeable cations (Ca, Mg, K, Na), extractable P, and trace elements (B, Fe, Mn, Zn and Cu), by employing machine learning methods.
In particular, our proposed method integrates three machine learning techniques, i.e., dimensionality reduction on the original dataset (principal component analysis), metric learning (Mahalanobis metric for clustering), and the k-nearest neighbors learning algorithm (KNN), into an enhanced model with mean performance equal to 85% in terms of the Pearson correlation coefficient (PCC) between observed and predicted values.
Keywords: aflatoxins, Aspergillus spp., dried figs, k-nearest neighbors, machine learning, prediction
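The three-stage model (PCA, a Mahalanobis metric, and KNN) can be sketched on synthetic data. The feature values below are invented stand-ins for the soil parameters named in the abstract, not the study's dataset, and the metric-learning step is approximated by the inverse covariance of the reduced data; the sketch also predicts a discrete risk class rather than a continuous aflatoxin level.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented stand-ins for orchard soil features (pH, SP, EC):
low  = rng.normal([6.5, 40.0, 1.0], [0.3, 5.0, 0.2], (60, 3))  # low-risk
high = rng.normal([7.5, 55.0, 2.0], [0.3, 5.0, 0.2], (60, 3))  # high-risk
X = np.vstack([low, high])
y = np.r_[np.zeros(60), np.ones(60)]

# Step 1: dimensionality reduction with PCA.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
Z = (X - mean) @ Vt[:2].T                    # keep two components

# Step 2: Mahalanobis metric from the reduced data (a simple proxy for
# the metric-learning stage in the paper).
VI = np.linalg.inv(np.cov(Z, rowvar=False))  # inverse covariance matrix

def mahalanobis(a, b):
    d = a - b
    return float(np.sqrt(d @ VI @ d))

# Step 3: k-nearest-neighbours majority vote under that metric.
def knn_predict(query_features, k=5):
    q = (query_features - mean) @ Vt[:2].T
    dists = [mahalanobis(q, z) for z in Z]
    nearest = np.argsort(dists)[:k]
    return int(round(y[nearest].mean()))

risk = knn_predict(np.array([7.4, 54.0, 1.9]))   # orchard near high-risk mean
```

A query orchard with soil values near the high-risk cluster is classified as high risk (1), and one near the low-risk cluster as low risk (0).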
Procedia PDF Downloads 184

303 Design and Development of an Autonomous Underwater Vehicle for Irrigation Canal Monitoring
Authors: Mamoon Masud, Suleman Mazhar
Abstract:
The Indus river basin's irrigation system in Pakistan is extremely complex, spanning over 50,000 km. Maintenance and monitoring of this system demand enormous resources. This paper describes the development of a streamlined and low-cost autonomous underwater vehicle (AUV) for the monitoring of irrigation canals, including water quality monitoring and water theft detection. The vehicle is a hovering-type AUV designed mainly for monitoring irrigation canals, with a fully documented design and open source code. It has a length of 17 inches and a radius of 3.5 inches, with a depth rating of 5 m. Multiple sensors are present onboard the AUV for monitoring water quality parameters, including pH, turbidity, total dissolved solids (TDS) and dissolved oxygen. A 9-DOF inertial measurement unit (IMU), the GY-85, is used, which incorporates an accelerometer (ADXL345), a gyroscope (ITG-3200) and a magnetometer (HMC5883L). The readings from these sensors are fused together using the direction cosine matrix (DCM) algorithm, providing the AUV with its heading angle, while a pressure sensor gives the depth of the AUV. Two sonar-based range sensors are used for obstacle detection, enabling the vehicle to align itself with the edges of the irrigation canals. Four thrusters control the vehicle's surge, heading and heave, providing 3 DOF. The thrusters are controlled using a proportional-integral-derivative (PID) feedback control system, with the heading angle and depth as the controller's inputs and the thruster motor speeds as the outputs. A flow sensor has been incorporated to monitor the canal water level and detect water-theft events in the irrigation system. In addition to water theft detection, the vehicle also provides information on water quality, giving the ability to identify the source(s) of water contamination. Detection of such events can provide useful policy inputs for improving irrigation efficiency and reducing water contamination.
Being low-cost, small-sized, and suitable for autonomous maneuvering and for water level and quality monitoring in irrigation canals, the AUV can be used for irrigation network monitoring at a large scale.
Keywords: autonomous underwater vehicle, irrigation canal monitoring, water quality monitoring, underwater line tracking
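The PID feedback loop described in this abstract, with depth as the controller input and thruster output as the result, can be sketched against a toy plant. The gains, time step, and first-order damped dynamics below are illustrative assumptions, not the vehicle's identified model.

```python
# Sketch of the PID feedback loop for depth (heave) control; the gains
# and the damped plant model are illustrative assumptions.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt                  # accumulate error
        deriv = (err - self.prev_err) / self.dt         # rate of change
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy plant: vertical velocity responds to thrust with linear damping.
dt = 0.1
pid = PID(kp=2.0, ki=0.1, kd=0.5, dt=dt)
depth, velocity = 0.0, 0.0
for _ in range(600):                                    # 60 s of simulated time
    thrust = pid.update(setpoint=3.0, measured=depth)   # target: 3 m depth
    velocity += (thrust - 0.8 * velocity) * dt
    depth += velocity * dt
```

Over the simulated minute the controller drives the vehicle to the 3 m setpoint and holds it there; a second, identical loop would regulate the heading angle from the DCM-fused IMU readings.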
Procedia PDF Downloads 147

302 Screening for Women with Chorioamnionitis: An Integrative Literature Review
Authors: Allison Herlene Du Plessis, Dalena (R.M.) Van Rooyen, Wilma Ten Ham-Baloyi, Sihaam Jardien-Baboo
Abstract:
Introduction: Women die in pregnancy and childbirth for five main reasons: severe bleeding, infections, unsafe abortions, hypertensive disorders (pre-eclampsia and eclampsia), and medical complications, including cardiac disease, diabetes, or HIV/AIDS, complicated by pregnancy. In 2015, the WHO classified sepsis as the third highest cause of maternal mortality in the world. Chorioamnionitis is a clinical syndrome of intrauterine infection during any stage of pregnancy; it refers to bacteria ascending from the vaginal canal up into the uterus, causing infection. While the incidence rates for chorioamnionitis are not well documented, complications related to chorioamnionitis are well documented, and midwives still struggle to identify this condition in time due to its complex nature. Few diagnostic methods are available in public health services due to escalated laboratory costs. Often the affordable biomarkers, such as C-reactive protein (CRP), full blood count (FBC) and white blood cell count (WBC), have low significance in diagnosing chorioamnionitis. A lack of screening impacts the effective and timeous management of chorioamnionitis, and early identification and management of risks could help to prevent neonatal complications and reduce the subsequent series of morbidities and healthcare costs of infants affected by perinatal infections. Objective: This integrative literature review provides an overview of current best research evidence on the screening of women at risk of chorioamnionitis. Design: An integrative literature review was conducted using a systematic electronic literature search through EBSCOhost, Cochrane Online, Wiley Online, PubMed, Scopus and Google. Guidelines, research studies, and reports in English related to chorioamnionitis from 2008 up until 2020 were included in the study. Findings: After critical appraisal, 31 articles were included.
More than two thirds (67%) of the literature included ranked at the three highest levels of evidence (Levels I, II and III). Data extracted regarding screening for chorioamnionitis were synthesized into four themes, namely: screening by clinical signs and symptoms, screening by causative factors of chorioamnionitis, screening of obstetric history, and essential biomarkers to diagnose chorioamnionitis. Key conclusions: There are factors that can be used by midwives to identify women at risk of chorioamnionitis. However, there is a paucity of established sociological, epidemiological and behavioral factors with which to screen this population. Several biomarkers are available to diagnose chorioamnionitis. Increased interleukin-6 in amniotic fluid is the better indicator and strongest predictor of histological chorioamnionitis, whereas the available rapid matrix metalloproteinase-8 test requires further testing. Maternal white blood cell count (WBC) has shown poor selectivity and sensitivity, and C-reactive protein (CRP) thresholds varied among studies and are not ideal for a conclusive diagnosis of subclinical chorioamnionitis. Implications for practice: Screening of women at risk of chorioamnionitis by health care providers caring for pregnant women, including midwives, is important for diagnosis and management before complications arise, particularly in resource-constrained settings.
Keywords: chorioamnionitis, guidelines, best evidence, screening, diagnosis, pregnant women
Procedia PDF Downloads 123