Search results for: edge detection algorithm

460 Multi-Objective Discrete Optimization of External Thermal Insulation Composite Systems in Terms of Thermal and Embodied Energy Performance

Authors: Berfin Yildiz

Abstract:

Increasing global warming effects, the limited amount of energy resources, and similar pressures necessitate awareness in every professional group. The architecture and construction sectors are responsible for both the embodied and operational energy of the materials they use. This responsibility has led designers to seek alternative solutions for energy-efficient material selection. The choice of energy-efficient materials requires consideration of the entire life cycle, including the building's production, use, and disposal energy. The aim of this study is to investigate a material selection method for external thermal insulation composite systems (ETICS). Embodied and in-use energy values of material alternatives were used for the evaluation in this study. The operational energy is calculated according to the U-value calculation method defined in the TS 825 (Thermal Insulation Requirements) standard for Turkey, and the embodied energy is calculated based on the manufacturer's Environmental Product Declaration (EPD). ETICS consists of wall, adhesive, insulation, lining, mechanical fixing, mesh, and exterior finishing materials. In this study, lining, mechanical fixing, and mesh materials were ignored because EPD documents could not be obtained. The material selection problem is designed for a hypothetical volume (5x5x3 m) and defined as a multi-objective discrete optimization problem for external thermal insulation composite systems. Defining the problem as a discrete optimization problem is important in order to choose between materials of various thicknesses and sizes. Since the production and use energy values, which are determined as optimization objectives in the study, are often conflicting, material selection is defined as a multi-objective optimization problem, and the aim is to obtain many solution alternatives by using the Hypervolume Estimation (HypE) algorithm. The evolutionary process started with a population of 100 individuals and continued for 50 generations. According to the obtained results, autoclaved aerated concrete and Ponce (pumice) block as wall materials and glass wool as insulation material gave better results.
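The discrete two-objective selection described above can be illustrated with a short Python sketch. This is a minimal illustration only: the material lists, U-value-based operational energy, and embodied energy figures are hypothetical placeholders, and the HypE evolutionary search is replaced here by exhaustive enumeration with a simple Pareto filter, which is feasible because the illustrative design space is tiny.

```python
from itertools import product

# Hypothetical material options: (name, embodied energy per m2, thermal resistance m2K/W).
# These values are placeholders, not the EPD data used in the study.
walls = [("aerated_concrete", 250.0, 1.10), ("ponce_block", 300.0, 0.80), ("brick", 420.0, 0.45)]
insulations = [("glass_wool_5cm", 90.0, 1.40), ("glass_wool_10cm", 180.0, 2.80), ("eps_10cm", 220.0, 2.60)]

AREA_M2 = 2 * (5 + 5) * 3          # envelope area of the 5x5x3 m volume, openings ignored
DEGREE_HOURS_KWH = 75.0            # placeholder climate factor converting U-value to kWh/m2 per year

def objectives(wall, ins):
    """Return (operational energy, embodied energy) for one wall+insulation combination."""
    r_total = wall[2] + ins[2] + 0.17          # surface resistances, placeholder constant
    u_value = 1.0 / r_total                    # W/m2K
    operational = u_value * DEGREE_HOURS_KWH * AREA_M2
    embodied = (wall[1] + ins[1]) * AREA_M2
    return operational, embodied

def pareto_front(solutions):
    """Keep solutions not dominated in both objectives (both minimized)."""
    front = []
    for name, f in solutions:
        if not any(g[0] <= f[0] and g[1] <= f[1] and g != f for _, g in solutions):
            front.append((name, f))
    return front

candidates = [((w[0], i[0]), objectives(w, i)) for w, i in product(walls, insulations)]
for name, (op, em) in pareto_front(candidates):
    print(f"{name}: operational={op:.0f} kWh/yr, embodied={em:.0f} MJ")
```

In the study itself, the larger discrete search space (many thicknesses and sizes) is what motivates an evolutionary method such as HypE instead of this exhaustive filter.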

Keywords: embodied energy, multi-objective discrete optimization, performative design, thermal insulation

Procedia PDF Downloads 141
459 Enhancing Tower Crane Safety: A UAV-based Intelligent Inspection Approach

Authors: Xin Jiao, Xin Zhang, Jian Fan, Zhenwei Cai, Yiming Xu

Abstract:

Tower cranes play a crucial role in the construction industry, facilitating the vertical and horizontal movement of materials and aiding in building construction, especially for high-rise structures. However, tower crane accidents can lead to severe consequences, highlighting the importance of effective safety management and inspection. This paper presents an innovative approach to tower crane inspection utilizing Unmanned Aerial Vehicles (UAVs) and an Intelligent Inspection APP System. The system leverages UAVs equipped with high-definition cameras to conduct efficient and comprehensive inspections, reducing manual labor, inspection time, and risk. By integrating advanced technologies such as Real-Time Kinematic (RTK) positioning and digital image processing, the system enables precise route planning and collection of safety hazard images. A case study conducted on a construction site demonstrates the practicality and effectiveness of the proposed method, showcasing its potential to enhance tower crane safety. On-site testing of UAV intelligent inspections reveals key findings: efficient tower crane hazard inspection within 30 minutes, with full-identification coverage rates of 76.3%, 64.8%, and 76.2% for major, significant, and general hazards, respectively, and preliminary-identification coverage rates of 18.5%, 27.2%, and 19%, respectively. Notably, UAVs effectively identify various tower crane hazards, except for those requiring auditory detection. The limitations of this study primarily involve two aspects: firstly, during the initial inspection, manual drone piloting is required for marking tower crane points, followed by automated flight inspections that reuse the marked route; secondly, images captured by the drone require manual identification and review, which can be time-consuming for equipment management personnel, particularly when dealing with a large volume of images. Subsequent research efforts will focus on AI training and recognition of safety hazard images, as well as the automatic generation of inspection reports and corrective management based on recognition results. Development in this area is ongoing, and outcomes will be released in due course.

Keywords: tower crane, inspection, unmanned aerial vehicle (UAV), intelligent inspection app system, safety management

Procedia PDF Downloads 42
458 Fire Safety Assessment of At-Risk Groups

Authors: Naser Kazemi Eilaki, Carolyn Ahmer, Ilona Heldal, Bjarne Christian Hagen

Abstract:

Older people and people with disabilities are recognized as at-risk groups when it comes to egress and travel from a hazard zone to a safe place. A disability can negatively influence one's escape time, and this becomes even more important when people from this target group live alone. This research addresses the fire safety of these occupants' buildings by means of probabilistic methods. For this purpose, fire safety is addressed by modeling the egress of the target group from a hazardous zone to a safe zone. A common type of detached house with a prevalent plan has been chosen for the safety analysis, and a limit state function has been developed according to a timeline evacuation model, which is based on a two-zone fire and smoke development model. An analytical computer model (B-RISK) is used to simulate smoke development. Since most of the parameters involved in the fire development model carry uncertainty, an appropriate probability distribution function has been assigned to each of the non-deterministic variables. To assess the safety and reliability of the at-risk groups, the fire safety index method has been chosen to define the probability of failure (casualties) and the safety index (beta index). An improved harmony search meta-heuristic optimization algorithm has been used to compute the beta index. A sensitivity analysis has been carried out to identify the most important and effective parameters for the fire safety of the at-risk group. Results showed that the area of openings and the distance to egress exits are the more important building parameters, and the safety of occupants improves with increasing dimensions of the occupant space (building). Fire growth is more critical than other parameters in a home without a detector and fire extinguishing system, but in a home equipped with these facilities, it is less important. The type of disability has a great effect on the safety level of people who live in the same home layout, and people with visual impairment face a higher risk of being trapped than those with other disability types considered.
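A hedged sketch of the probabilistic egress assessment described above: the limit state is taken as available safe egress time (ASET) minus required safe egress time (RSET), the failure probability is estimated by plain Monte Carlo sampling rather than the harmony-search-assisted index method used in the study, and all distribution parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 200_000

# Illustrative random variables (seconds); the distributions are assumptions only.
aset = rng.lognormal(mean=np.log(600.0), sigma=0.30, size=N)   # time until untenable conditions
pre_movement = rng.lognormal(mean=np.log(120.0), sigma=0.50, size=N)
travel = rng.normal(loc=90.0, scale=30.0, size=N).clip(min=0)  # slower for some disability profiles
rset = pre_movement + travel

# Limit state g = ASET - RSET; failure (potential casualty) when g <= 0.
g = aset - rset
pf = np.mean(g <= 0.0)
beta = -norm.ppf(pf)   # reliability (safety) index corresponding to the failure probability

print(f"P(failure) = {pf:.4f}, beta = {beta:.2f}")
```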

Keywords: fire safety, at-risk groups, zone model, egress time, uncertainty

Procedia PDF Downloads 103
457 RiboTaxa: Combined Approaches for Taxonomic Resolution Down to the Species Level from Metagenomics Data Revealing Novelties

Authors: Oshma Chakoory, Sophie Comtet-Marre, Pierre Peyret

Abstract:

Metagenomic classifiers are widely used for the taxonomic profiling of metagenomic data and estimation of taxa relative abundance. Small subunit rRNA genes are nowadays a gold standard for the phylogenetic resolution of complex microbial communities, although the power of this marker depends on its use at full length. We benchmarked the performance and accuracy of rRNA-specialized versus general-purpose read mappers, reference-targeted assemblers, and taxonomic classifiers. We then built a pipeline called RiboTaxa to provide a highly sensitive and specific metataxonomic approach. Using metagenomics data, RiboTaxa gave the best results compared to other tools (Kraken2, Centrifuge (1), METAXA2 (2), PhyloFlash (3)), with precise taxonomic identification and relative abundance description and no false positive detections. Using real datasets from various environments (ocean, soil, human gut) and from different approaches (metagenomics and gene capture by hybridization), RiboTaxa revealed microbial novelties not seen by current bioinformatics analyses, opening new biological perspectives in human and environmental health. In a study focused on coral health involving 20 metagenomic samples (4), the affiliation of prokaryotes was limited to the family level, with Endozoicomonadaceae characterising healthy octocoral tissue. RiboTaxa highlighted 2 species of uncultured Endozoicomonas that were dominant in the healthy tissue. Both species belonged to a genus not yet described, opening new research perspectives on coral health. Applied to metagenomics data from a study on the human gut and extreme longevity (5), RiboTaxa detected the presence of an uncultured archaeon in semi-supercentenarians (aged 105 to 109 years), highlighting an archaeal genus not yet described, and 3 uncultured species belonging to the genus Enorma that could be species of interest participating in the longevity process. RiboTaxa is user-friendly and rapid, allows microbiota structure description from any environment, and its results can be easily interpreted. The software is freely available at https://github.com/oschakoory/RiboTaxa under the GNU Affero General Public License 3.0.

Keywords: metagenomics profiling, microbial diversity, SSU rRNA genes, full-length phylogenetic marker

Procedia PDF Downloads 121
456 Corpus-Based Neural Machine Translation: Empirical Study Multilingual Corpus for Machine Translation of Opaque Idioms - Cloud AutoML Platform

Authors: Khadija Refouh

Abstract:

Culture-bound expressions have been a bottleneck for Natural Language Processing (NLP) and comprehension, especially in the case of machine translation (MT). In the last decade, the field of machine translation has greatly advanced. Neural machine translation (NMT) has recently achieved considerable improvement in translation quality and has outperformed previous traditional translation systems in many language pairs. Neural machine translation applies Artificial Intelligence (AI) and deep neural networks to language processing. Despite this development, there remain serious challenges facing NMT when translating culture-bound expressions, especially for low-resource language pairs such as Arabic-English and Arabic-French, which is not the case with well-established language pairs such as English-French. Machine translation of opaque idioms from English into French is likely to be more accurate than translating them from English into Arabic. For example, the Google Translate application translated the sentence “What a bad weather! It rains cats and dogs.” into the target language Arabic as “يا له من طقس سيء! تمطر القطط والكلاب”, which is an inaccurate literal translation. The translation of the same sentence into the target language French was “Quel mauvais temps! Il pleut des cordes.”, where the Google Translate application used the accurate French corresponding idiom. This paper aims to perform NMT experiments towards better translation of opaque idioms using a high-quality, clean multilingual corpus. This corpus will be collected analytically from human-generated idiom translations. AutoML Translation, a Google neural machine translation platform, is used as a custom translation model to improve the translation of opaque idioms. The automatic evaluation of the custom model will be compared to Google NMT using the Bilingual Evaluation Understudy (BLEU) score. BLEU is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. Human evaluation is integrated to test the reliability of the BLEU score. The researcher will examine syntactical, lexical, and semantic features using Halliday's functional theory.
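For the automatic evaluation step, BLEU can be computed with NLTK, a commonly used implementation. The sentences below are illustrative placeholders rather than the study's corpus, and sentence-level smoothing is applied because short idiomatic sentences otherwise yield degenerate scores.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Hypothetical reference (human) and candidate (machine) translations, tokenized.
reference = ["il", "pleut", "des", "cordes"]
candidate_custom = ["il", "pleut", "des", "cordes"]
candidate_literal = ["il", "pleut", "des", "chats", "et", "des", "chiens"]

smooth = SmoothingFunction().method1
score_custom = sentence_bleu([reference], candidate_custom, smoothing_function=smooth)
score_literal = sentence_bleu([reference], candidate_literal, smoothing_function=smooth)

print(f"custom model BLEU:   {score_custom:.3f}")   # idiomatic translation scores higher
print(f"literal output BLEU: {score_literal:.3f}")
```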

Keywords: multilingual corpora, natural language processing (NLP), neural machine translation (NMT), opaque idioms

Procedia PDF Downloads 149
455 A Rare Case Report of Non-Langerhans Cell Cutaneous Histiocytosis in a 6-Month Old Infant

Authors: Apoorva D. R.

Abstract:

INTRODUCTION: Hemophagocytic lymphohistiocytosis (HLH) is a severe, potentially fatal syndrome in which there is excessive immune activation. The disease is seen in children and people of all ages, but infants from birth to 18 months are most frequently affected. HLH is a sporadic or familial condition that can be triggered by various events that disturb immunological homeostasis. In cases with a genetic predisposition and in sporadic occurrences, infection is a frequent trigger. Because of the rarity of this disease, the diverse clinical presentation, and the lack of specificity in the clinical and laboratory results, prompt treatment is essential, but the biggest obstacle to a favorable outcome is frequently a delay in identification. CASE REPORT: Here we report the case of a 6-month-old male infant who presented to the dermatology outpatient clinic with disseminated skin lesions over the face, abdomen, scalp, and bilateral upper and lower limbs for the past month. The lesions were insidious in onset, initially started over the abdomen, and gradually progressed to involve other body parts. The patient also had a history of fever, moderate in grade and on and off in nature, for 1 month. There was no significant past, family, or drug history. There was no history of feeding difficulties in the baby. The parents reported developmental milestones appropriate for age. Examination findings included multiple well-defined monomorphic erythematous papules with a central crater over the bilateral cheeks and a few lichenoid shiny papules over the bilateral arms, legs, and abdomen. Ultrasound of the abdomen and pelvis showed mild hepatosplenomegaly, intraabdominal lymphadenopathy, and bilateral inguinal lymphadenopathy. Routine blood investigations showed anemia and lymphopenia. Multiple X-rays of the skull, chest, and bilateral upper and lower limbs were done and were normal. Histopathology features were suggestive of non-Langerhans cell cutaneous histiocytosis. CONCLUSION: HLH is a fatal and rare disease. A high level of suspicion and an interdisciplinary approach among experienced clinicians, pathologists, and microbiologists to define the diagnosis and causative disease are key to diagnosing such cases. Early detection and treatment can reduce patient morbidity and mortality.

Keywords: histiocytosis, non langerhans cell, case report, fatal, rare

Procedia PDF Downloads 89
454 Groundwater Potential Delineation Using Geodetector Based Convolutional Neural Network in the Gunabay Watershed of Ethiopia

Authors: Asnakew Mulualem Tegegne, Tarun Kumar Lohani, Abunu Atlabachew Eshete

Abstract:

Groundwater potential delineation is essential for efficient water resource utilization and long-term development. The scarcity of potable and irrigation water has become a critical issue due to natural and anthropogenic activities in meeting the demands of human survival and productivity. Under these constraints, groundwater resources are now being used extensively in Ethiopia. Therefore, an innovative convolutional neural network (CNN) is successfully applied in the Gunabay watershed to delineate groundwater potential based on the selected major influencing factors. Groundwater recharge, lithology, drainage density, lineament density, transmissivity, and geomorphology were selected as the major influencing factors for the groundwater potential of the study area. Of the total 128 samples, 70% were selected for training and 30% were reserved for testing. The spatial distribution of groundwater potential has been classified into five groups: very low (10.72%), low (25.67%), moderate (31.62%), high (19.93%), and very high (12.06%). The area receives high rainfall but has a very low amount of recharge due to a lack of proper soil and water conservation structures. The major outcome of the study showed that the moderate and low potential classes are dominant. Geodetector results revealed that the magnitude of influence on groundwater potential is ranked as transmissivity (0.48), recharge (0.26), lineament density (0.26), lithology (0.13), drainage density (0.12), and geomorphology (0.06). The model results showed that, using a convolutional neural network (CNN), groundwater potential can be delineated with high predictive capability and accuracy. AUC validation of the CNN yielded values of 81.58% and 86.84% for training and testing, respectively. Based on the findings, the local government can receive technical assistance for groundwater exploration and sustainable water resource development in the Gunabay watershed. Finally, the use of a geodetector-based deep learning algorithm can provide a new platform for industrial sectors, groundwater experts, scholars, and decision-makers.
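A minimal sketch of a CNN-based potential classifier of the kind described, written in PyTorch under assumptions of my own: each sample is a small multi-band patch stacking the six influencing-factor rasters, labels are binary (potential / no potential) so that an AUC can be computed, and the architecture, patch size, and training settings are illustrative rather than those of the study.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from sklearn.metrics import roc_auc_score

# Placeholder data: 128 samples, 6 factor layers, 16x16 patches (random stand-ins for real rasters).
X = torch.randn(128, 6, 16, 16)
y = torch.randint(0, 2, (128,)).float()
perm = torch.randperm(len(X))
n_train = int(0.7 * len(X))                       # 70/30 split as in the study
X_tr, y_tr = X[perm[:n_train]], y[perm[:n_train]]
X_te, y_te = X[perm[n_train:]], y[perm[n_train:]]

model = nn.Sequential(
    nn.Conv2d(6, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 4 * 4, 1),                     # single logit for groundwater potential
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
loader = DataLoader(TensorDataset(X_tr, y_tr), batch_size=16, shuffle=True)

for epoch in range(20):
    for xb, yb in loader:
        opt.zero_grad()
        loss = loss_fn(model(xb).squeeze(1), yb)
        loss.backward()
        opt.step()

with torch.no_grad():
    auc = roc_auc_score(y_te.numpy(), torch.sigmoid(model(X_te)).squeeze(1).numpy())
print(f"test AUC: {auc:.3f}")
```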

Keywords: CNN, geodetector, groundwater influencing factors, Groundwater potential, Gunabay watershed

Procedia PDF Downloads 22
453 Using Optical Character Recognition to Manage the Unstructured Disaster Data into Smart Disaster Management System

Authors: Dong Seop Lee, Byung Sik Kim

Abstract:

In the 4th Industrial Revolution, various intelligent technologies have been developed in many fields. These artificial intelligence technologies are applied in various services, including disaster management. Disaster information management does not just support disaster work; it is also the foundation of smart disaster management, which draws on historical disaster information using artificial intelligence technology. Disaster information is one of the important elements of the entire disaster cycle. Disaster information management refers to the act of managing and processing electronic data about the disaster cycle, from its occurrence through progress, response, and planning. However, information about status control, response, recovery from natural and social disaster events, etc., is mainly managed in the form of structured and unstructured reports, which exist as handouts or hard copies. Such unstructured data is often lost or destroyed due to inefficient management, so it is necessary to manage unstructured data as disaster information. In this paper, an Optical Character Recognition (OCR) approach is used to convert handouts, hard copies, images, and reports, whether printed or generated by scanners, into electronic documents. The converted disaster data is then organized into the disaster code system as disaster information and stored in the disaster database system. Gathering and creating disaster information based on Optical Character Recognition for unstructured data is an important element of smart disaster management. In this paper, recognition of Korean characters was improved to over 90% by using an upgraded OCR. The character recognition rate depends on the fonts, size, and special symbols of the characters; we improved it through a machine learning algorithm. The converted structured data is managed in a standardized disaster information form connected with the disaster code system. The disaster code system covers the storage and retrieval of structured information across the entire disaster cycle, such as historical disaster progress, damages, response, and recovery. The expected effect of this research is that it can be applied to smart disaster management and decision making by combining artificial intelligence technologies and historical big data.
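A minimal sketch of the OCR conversion step using the open-source Tesseract engine via pytesseract; this is an illustrative stand-in for the upgraded OCR described in the paper, and the file path and the Korean language pack ("kor") are assumptions about the local Tesseract installation.

```python
from PIL import Image
import pytesseract

# Hypothetical scanned disaster report page; the path is a placeholder.
page = Image.open("disaster_report_page1.png")

# Extract Korean text (requires the 'kor' traineddata to be installed for Tesseract).
text = pytesseract.image_to_string(page, lang="kor")

# Downstream, the recognized text would be mapped into the disaster code system
# and stored in the disaster database; here we only print the raw OCR output.
print(text)
```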

Keywords: disaster information management, unstructured data, optical character recognition, machine learning

Procedia PDF Downloads 129
452 Exploring Attitudes and Experiences of the Cervical Screening Programme in Brighton, United Kingdom

Authors: Kirsty Biggs, Peter Larsen-Disney

Abstract:

Background: The UK cervical screening programme significantly reduces cancer mortality through the early detection of abnormal cells. Despite this, over a quarter of eligible women choose not to attend their appointment. Objective: To qualitatively explore patients’ barriers to attending cervical smear appointments and identify key trends in cervical screening behaviour, knowledge, and attitudes in primary and secondary care. Methods: A cross-sectional study was conducted to evaluate smear services in Brighton and Hove using questionnaires in general practice and colposcopy. 226 patients participated in the voluntary questionnaire between 10/11/2017 and 02/02/2018. 118 patients were recruited from general practice surgeries and 108 from the colposcopy department. Women were asked about their smear knowledge, self-perceived risk factors, prior experiences, and reasons for non-attendance. Demographic data was also collected. Results: Approximately a third of women did not engage with smear testing services. This was consistent across primary and secondary care groups. Over 90% were aware of the role of the screening process in relation to cervical cancer; however, over two thirds believed the smear was also a tool to screen for other pathologies. The most commonly cited reasons for non-attendance were negative emotions or previous experiences. Inconvenient appointment times were also commonly described. In a comparison of attenders versus non-attenders, previous negative experiences (p < 0.01) and the number of identified risk factors (p = 0.02) were statistically significant, with non-attenders describing more prior negative smears and identifying more risk factors. Smear knowledge, risk perception, and perceived importance of screening were not significant. Negative previous experiences were described in relation to poor bedside manner, pain, embarrassment, and staff competency. Conclusions: Contrary to the literature, our white Caucasian cohort experienced significant barriers to accessing smear services. Women’s prior negative experiences are overriding their perceived importance of attending the screening programme; therefore, efforts need to focus on improving clinical experiences through auditing tools, training, and a supportive appointment setting. Positive changes can also be expected from improving appointment availability with extended hours and self-booking systems.

Keywords: barriers, cervical, Papanicolaou, screening, smear

Procedia PDF Downloads 149
451 Feasibility of Voluntary Deep Inspiration Breath-Hold Radiotherapy Technique Implementation without Deep Inspiration Breath-Hold-Assisting Device

Authors: Auwal Abubakar, Shazril Imran Shaukat, Noor Khairiah A. Karim, Mohammed Zakir Kassim, Gokula Kumar Appalanaido, Hafiz Mohd Zin

Abstract:

Background: Voluntary deep inspiration breath-hold radiotherapy (vDIBH-RT) is an effective cardiac dose reduction technique during left breast radiotherapy. This study aimed to assess the accuracy of the implementation of the vDIBH technique among left breast cancer patients without the use of a special device such as a surface-guided imaging system. Methods: The vDIBH-RT technique was implemented among thirteen (13) left breast cancer patients at the Advanced Medical and Dental Institute (AMDI), Universiti Sains Malaysia. Breath-hold monitoring was performed based on breath-hold skin marks and laser light congruence observed on zoomed CCTV images from the control console during each delivery. The initial setup was verified using cone beam computed tomography (CBCT) during breath-hold. Each field was delivered using multiple beam segments to allow a delivery time of 20 seconds, which can be tolerated by patients in breath-hold. The data were analysed using an in-house developed MATLAB algorithm. PTV margin was computed based on van Herk's margin recipe. Results: The setup error analysed from CBCT shows that the population systematic error in lateral (x), longitudinal (y), and vertical (z) axes was 2.28 mm, 3.35 mm, and 3.10 mm, respectively. Based on the CBCT image guidance, the Planning target volume (PTV) margin that would be required for vDIBH-RT using CCTV/Laser monitoring technique is 7.77 mm, 10.85 mm, and 10.93 mm in x, y, and z axes, respectively. Conclusion: It is feasible to safely implement vDIBH-RT among left breast cancer patients without special equipment. The breath-hold monitoring technique is cost-effective, radiation-free, easy to implement, and allows real-time breath-hold monitoring.
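Van Herk's margin recipe, used above to derive the PTV margins, combines the population systematic error Σ and random error σ per axis as M = 2.5Σ + 0.7σ. The sketch below applies it to the systematic errors reported in the abstract; the random-error values are placeholders, since they are not quoted there.

```python
def van_herk_margin(sigma_systematic_mm, sigma_random_mm):
    """PTV margin (mm) per van Herk's recipe: M = 2.5*Sigma + 0.7*sigma."""
    return 2.5 * sigma_systematic_mm + 0.7 * sigma_random_mm

# Reported population systematic errors (mm) for lateral, longitudinal, vertical axes.
systematic = {"x": 2.28, "y": 3.35, "z": 3.10}
# Placeholder random errors (mm); the actual values would come from the CBCT analysis.
random_err = {"x": 3.0, "y": 3.0, "z": 3.0}

for axis in ("x", "y", "z"):
    m = van_herk_margin(systematic[axis], random_err[axis])
    print(f"{axis}-axis PTV margin: {m:.2f} mm")
```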

Keywords: vDIBH, cone beam computed tomography, radiotherapy, left breast cancer

Procedia PDF Downloads 57
450 High Resolution Satellite Imagery and Lidar Data for Object-Based Tree Species Classification in Quebec, Canada

Authors: Bilel Chalghaf, Mathieu Varin

Abstract:

Forest characterization in Quebec, Canada, is usually assessed based on photo-interpretation at the stand level. For species identification, this often results in a lack of precision. Very high spatial resolution imagery, such as DigitalGlobe, and Light Detection and Ranging (LiDAR), have the potential to overcome the limitations of aerial imagery. To date, few studies have used that data to map a large number of species at the tree level using machine learning techniques. The main objective of this study is to map 11 individual high tree species ( > 17m) at the tree level using an object-based approach in the broadleaf forest of Kenauk Nature, Quebec. For the individual tree crown segmentation, three canopy-height models (CHMs) from LiDAR data were assessed: 1) the original, 2) a filtered, and 3) a corrected model. The corrected CHM gave the best accuracy and was then coupled with imagery to refine tree species crown identification. When compared with photo-interpretation, 90% of the objects represented a single species. For modeling, 313 variables were derived from 16-band WorldView-3 imagery and LiDAR data, using radiance, reflectance, pixel, and object-based calculation techniques. Variable selection procedures were employed to reduce their number from 313 to 16, using only 11 bands to aid reproducibility. For classification, a global approach using all 11 species was compared to a semi-hierarchical hybrid classification approach at two levels: (1) tree type (broadleaf/conifer) and (2) individual broadleaf (five) and conifer (six) species. Five different model techniques were used: (1) support vector machine (SVM), (2) classification and regression tree (CART), (3) random forest (RF), (4) k-nearest neighbors (k-NN), and (5) linear discriminant analysis (LDA). Each model was tuned separately for all approaches and levels. For the global approach, the best model was the SVM using eight variables (overall accuracy (OA): 80%, Kappa: 0.77). With the semi-hierarchical hybrid approach, at the tree type level, the best model was the k-NN using six variables (OA: 100% and Kappa: 1.00). At the level of identifying broadleaf and conifer species, the best model was the SVM, with OA of 80% and 97% and Kappa values of 0.74 and 0.97, respectively, using seven variables for both models. This paper demonstrates that a hybrid classification approach gives better results and that using 16-band WorldView-3 with LiDAR data leads to more precise predictions for tree segmentation and classification, especially when the number of tree species is large.
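The model-comparison step can be reproduced in outline with scikit-learn; the feature matrix below is a random placeholder standing in for the 16 selected WorldView-3/LiDAR variables, and the per-model tuning grids of the study are omitted.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))          # placeholder: 16 selected spectral/LiDAR variables per crown
y = rng.integers(0, 11, size=300)       # placeholder labels for the 11 tree species

models = {
    "SVM": SVC(kernel="rbf", C=10, gamma="scale"),
    "RF": RandomForestClassifier(n_estimators=500, random_state=0),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}

for name, model in models.items():
    pred = cross_val_predict(model, X, y, cv=5)
    oa = accuracy_score(y, pred)
    kappa = cohen_kappa_score(y, pred)
    print(f"{name}: OA={oa:.2f}, Kappa={kappa:.2f}")
```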

Keywords: tree species, object-based, classification, multispectral, machine learning, WorldView-3, LiDAR

Procedia PDF Downloads 134
449 Factors Contributing to Adverse Maternal and Fetal Outcome in Patients with Eclampsia

Authors: T. Pradhan, P. Rijal, M. C. Regmi

Abstract:

Background: Eclampsia is a multisystem disorder that involves vital organs; failure of these organs may lead to deterioration of the maternal condition and to hypoxia and acidosis of the fetus, resulting in high maternal and perinatal mortality and morbidity. Thus, evaluation of the contributing factors for this condition and of its complications leading to maternal deaths should be a priority, and formulating plans and protocols to decrease these losses should be our goal. Aims and Objectives: To evaluate the risk factors associated with adverse maternal and fetal outcomes in patients with eclampsia and to correlate these risk factors with maternal and fetal morbidity and mortality. Methods: All patients with eclampsia admitted to the Department of Obstetrics and Gynecology, B. P. Koirala Institute of Health Sciences, were enrolled after informed consent from February 2013 to February 2014. A proforma-based questionnaire covering items such as antenatal clinic visits, parity, number of episodes of seizures, and duration from onset of seizure to magnesium sulphate administration was completed for each patient and attendant, and the patients were followed as per the hospital protocol; the mode of delivery, outcome of the baby, and postpartum maternal condition, including maternal Intensive Care Unit admission, neurological impairment, and mortality, were noted before discharge. Statistical analysis was done using the Statistical Package for the Social Sciences (SPSS 11). Means and percentages were calculated for demographic variables. Pearson's correlation test and the chi-square test were applied to find the relation between the risk factors and the outcomes. A p value less than 0.05 was considered significant. Results: There were 10,000 antenatal deliveries during the study period. Fifty-two patients with eclampsia were admitted, all of whom were unbooked at our institute. Thirty-nine patients had antepartum eclampsia. Thirty-one patients required mechanical ventilator support. Twenty-four patients were delivered by emergency caesarean section, 21 babies were of low birth weight, and there were 9 stillbirths. There was one maternal death, and 45 patients were discharged with improvement, but 3 patients had neurological impairment. Mortality was significantly related to the number of seizure episodes and the time interval between seizure onset and administration of magnesium sulphate. Conclusion: Early detection and management of hypertensive disorders complicating pregnancy during antenatal check-ups, together with early hospitalization and management of eclampsia with magnesium sulphate, can help to minimize adverse maternal and fetal outcomes.
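The association tests reported above can be illustrated with SciPy; the 2x2 contingency table below (seizure episodes versus maternal outcome) is entirely hypothetical and is shown only to demonstrate the chi-square workflow that the study applied in SPSS.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = seizure episodes (<3, >=3),
# columns = maternal outcome (improved, adverse). Not study data.
table = np.array([[30, 2],
                  [15, 5]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
if p < 0.05:
    print("Association is statistically significant at the 0.05 level.")
```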

Keywords: eclampsia, maternal mortality, perinatal mortality, risk factors

Procedia PDF Downloads 169
448 Effects of Sacubitril and Valsartan on Gut Microbiome

Authors: Wei-Ju Huang, Hung-Pin Hsu

Abstract:

[Background] In congestive heart failure (CHF), it has always been a principle of clinical treatment to control the water retention mechanisms in the body to prevent excessive fluid retention. Early control of the sympathetic nervous system and the Renin-Angiotensin-Aldosterone system (RAA system, RAAS), or strengthening of Atrial Natriuretic Peptide (ANP), has been the main point. In the RAA system, related hormones such as angiotensin, or enzymes in the pathway such as ACE, can be targeted with corresponding inhibitors to reduce water content. [Aim] In recent years, clinical studies have indicated that combining different mechanisms seems to give better control. For example, recent studies showed that ENTRESTO, a combination of Sacubitril and Valsartan, is a good new drug for CHF. Sacubitril is a prodrug; after activation, it inhibits neprilysin, reducing the breakdown of natriuretic peptides (ANP), and together with valsartan it forms an angiotensin receptor-neprilysin inhibitor (ARNI). Valsartan is an angiotensin receptor blocker (ARB). Both are used to treat heart failure at the same time and have excellent curative effects. [Materials and Methods] Considering the side effects of this drug, coughing and a few cases of diarrhea have been observed; however, the effect of this drug on the patient's intestinal tract has not been confirmed. On the other hand, studies have pointed out that ANP supplementation can improve CHF and increase the inhibitory effect on cancer cells. Therefore, the purpose of this study is to use a specific microbial detection method to determine whether the oral drugs have an effect on microorganisms. The experimental method uses Nissui Compact Dry plates to observe the growth of different types of microorganisms. After the drug is dissolved in water, it is plated in a petri dish, and the presence of different microorganisms is detected through different reactions to confirm whether the drug has a toxic effect in the gut. [Results and Discussion] From the experimental results, it can be seen that, among the effects of Sacubitril and Valsartan on the basic microbial flora of the human body, low doses had no significant effect on Escherichia coli or intestinal bacteria. If Sacubitril or Valsartan at a high concentration of 3 mg/ml is used alone, or under the stimulation of a high concentration of the two drugs together, there is a significant inhibitory effect on Escherichia coli. However, in terms of the effect on intestinal bacteria, a high concentration of Sacubitril has a more significant inhibitory effect, while a high concentration of Valsartan has a less significant inhibitory effect; the inhibitory effect of the combination of the two drugs on intestinal bacteria is also less significant. [Conclusion] The results of this study can be used as a further reference for the possible side effects of the clinical use of Sacubitril and Valsartan on the intestinal tract of patients.

Keywords: sacubitril, valsartan, entresto, congestive heart failure (CHF)

Procedia PDF Downloads 67
447 Adolescent-Parent Relationship as the Most Important Factor in Preventing Mood Disorders in Adolescents: An Application of Artificial Intelligence to Social Studies

Authors: Elżbieta Turska

Abstract:

Introduction: One of the most difficult times in a person’s life is adolescence. The experiences of this period may shape the future life of the person to a large extent. This is the reason why many young people experience sadness, dejection, hopelessness, a sense of worthlessness, and loss of interest in various activities and social relationships, all of which are often classified as mood disorders. As many as 15-40% of adolescents experience depressed moods, and for most of them these resolve and are not carried into adulthood. However, 5-6% of those affected by mood disorders develop the depressive syndrome, and as many as 1-3% develop full-blown clinical depression. Materials: A large questionnaire was given to 2508 students aged 13–16 years old, and one of its parts was the Burns checklist, i.e., the standard test for identifying depressed mood. The questionnaire asked about many aspects of the student’s life; it included a total of 53 questions, most of which had subquestions. It is important to note that the data suffered from many problems, the most important of which were missing data and collinearity. Aim: In order to identify the correlates of mood disorders, we built predictive models which were then trained and validated. Our aim was not to predict which students suffer from mood disorders but rather to explore the factors influencing mood disorders. Methods: The problems with the data described above practically excluded the use of all classical statistical methods. For this reason, we attempted the following Artificial Intelligence (AI) methods: classification trees with surrogate variables, random forests, and xgboost. All analyses were carried out with the use of the mlr package for the R programming language. Results: The predictive model built by the classification tree algorithm outperformed the other algorithms by a large margin. As a result, we were able to rank the variables (questions and subquestions from the questionnaire) from the most to the least influential as far as protection against mood disorders is concerned. Thirteen out of the twenty most important variables reflect relationships with parents. This seems to be a really significant result both from the cognitive point of view and from the practical point of view, i.e., as far as interventions to correct mood disorders are concerned.
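The study's workflow used classification trees via the mlr package in R; as an illustration only, the sketch below reproduces the analogous steps (imputation of missing values, forest fitting, and ranking of variable importances) in Python with scikit-learn rather than the authors' mlr/R code, on placeholder data.

```python
import numpy as np
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder questionnaire matrix: 2508 respondents x 53 items, with missing answers.
X = pd.DataFrame(rng.integers(0, 5, size=(2508, 53)).astype(float),
                 columns=[f"q{i+1}" for i in range(53)])
X = X.mask(rng.random(X.shape) < 0.05)             # simulate missing data
y = rng.integers(0, 2, size=2508)                  # placeholder depressed-mood indicator

model = Pipeline([
    ("impute", SimpleImputer(strategy="most_frequent")),   # stands in for surrogate splits
    ("forest", RandomForestClassifier(n_estimators=300, random_state=0)),
])
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())

model.fit(X, y)
importances = pd.Series(model.named_steps["forest"].feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(20))   # most influential questionnaire items
```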

Keywords: mood disorders, adolescents, family, artificial intelligence

Procedia PDF Downloads 101
446 Chemical, Physical and Microbiological Characteristics of a Texture-Modified Beef-Based 3D Printed Functional Product

Authors: Elvan G. Bulut, Betul Goksun, Tugba G. Gun, Ozge Sakiyan Demirkol, Kamuran Ayhan, Kezban Candogan

Abstract:

Dysphagia, difficulty in swallowing solid foods and thin liquids, is one of the common health threats among the elderly who require foods with modified texture in their diet. Although there are some commercial food formulations or hydrocolloids to thicken the liquid foods for dysphagic individuals, there is still a need for developing and offering new food products with enriched nutritional, textural and sensory characteristics to safely nourish these patients. 3D food printing is an appealing alternative in creating personalized foods for this purpose with attractive shape, soft and homogenous texture. In order to modify texture and prevent phase separation, hydrocolloids are generally used. In our laboratory, an optimized 3D printed beef-based formulation specifically for people with swallowing difficulties was developed based on the research project supported by the Scientific and Technological Research Council of Turkey (TÜBİTAK Project # 218O017). The optimized formulation obtained from response surface methodology was 60% beef powder, 5.88% gelatin, and 0.74% kappa-carrageenan (all in a dry basis). This product was enriched with powders of freeze-dried beet, celery, and red capia pepper, butter, and whole milk. Proximate composition (moisture, fat, protein, and ash contents), pH value, CIE lightness (L*), redness (a*) and yellowness (b*), and color difference (ΔE*) values were determined. Counts of total mesophilic aerobic bacteria (TMAB), lactic acid bacteria (LAB), mold and yeast, total coliforms were conducted, and detection of coagulase positive S. aureus, E. coli, and Salmonella spp. were performed. The 3D printed products had 60.11% moisture, 16.51% fat, 13.68% protein, and 1.65% ash, and the pH value was 6.19, whereas the ΔE* value was 3.04. Counts of TMAB, LAB, mold and yeast and total coliforms before and after 3D printing were 5.23-5.41 log cfu/g, < 1 log cfu/g, < 1 log cfu/g, 2.39-2.15 log EMS/g, respectively. Coagulase positive S. aureus, E. coli, and Salmonella spp. were not detected in the products. The data obtained from this study based on determining some important product characteristics of functional beef-based formulation provides an encouraging basis for future research on the subject and should be useful in designing mass production of 3D printed products of similar composition.

Keywords: beef, dysphagia, product characteristics, texture-modified foods, 3D food printing

Procedia PDF Downloads 111
445 Poultry in Motion: Text Mining Social Media Data for Avian Influenza Surveillance in the UK

Authors: Samuel Munaf, Kevin Swingler, Franz Brülisauer, Anthony O’Hare, George Gunn, Aaron Reeves

Abstract:

Background: Avian influenza, more commonly known as Bird flu, is a viral zoonotic respiratory disease stemming from various species of poultry, including pets and migratory birds. Researchers have purported that the accessibility of health information online, in addition to the low-cost data collection methods the internet provides, has revolutionized the methods in which epidemiological and disease surveillance data is utilized. This paper examines the feasibility of using internet data sources, such as Twitter and livestock forums, for the early detection of the avian flu outbreak, through the use of text mining algorithms and social network analysis. Methods: Social media mining was conducted on Twitter between the period of 01/01/2021 to 31/12/2021 via the Twitter API in Python. The results were filtered firstly by hashtags (#avianflu, #birdflu), word occurrences (avian flu, bird flu, H5N1), and then refined further by location to include only those results from within the UK. Analysis was conducted on this text in a time-series manner to determine keyword frequencies and topic modeling to uncover insights in the text prior to a confirmed outbreak. Further analysis was performed by examining clinical signs (e.g., swollen head, blue comb, dullness) within the time series prior to the confirmed avian flu outbreak by the Animal and Plant Health Agency (APHA). Results: The increased search results in Google and avian flu-related tweets showed a correlation in time with the confirmed cases. Topic modeling uncovered clusters of word occurrences relating to livestock biosecurity, disposal of dead birds, and prevention measures. Conclusions: Text mining social media data can prove to be useful in relation to analysing discussed topics for epidemiological surveillance purposes, especially given the lack of applied research in the veterinary domain. The small sample size of tweets for certain weekly time periods makes it difficult to provide statistically plausible results, in addition to a great amount of textual noise in the data.
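A minimal sketch of the keyword filtering and weekly time-series step described above, using pandas; the tweet records are placeholders (in practice they would come from the Twitter API for 2021), and the keyword list mirrors the filters named in the abstract.

```python
import pandas as pd

# Placeholder tweet records; real data would be pulled from the Twitter API for 2021.
tweets = pd.DataFrame({
    "created_at": pd.to_datetime(["2021-10-02", "2021-10-30", "2021-11-03", "2021-11-05"]),
    "text": [
        "Keep an eye on biosecurity this autumn",
        "#birdflu confirmed near the coast",
        "avian flu housing order announced",
        "Swollen head and dullness in the flock, vet called",
    ],
})

keywords = ["#avianflu", "#birdflu", "avian flu", "bird flu", "h5n1"]
pattern = "|".join(keywords)
matched = tweets[tweets["text"].str.lower().str.contains(pattern)]

# Weekly frequency of avian-influenza-related tweets prior to a confirmed outbreak.
weekly = matched.set_index("created_at").resample("W").size()
print(weekly)
```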

Keywords: veterinary epidemiology, disease surveillance, infodemiology, infoveillance, avian influenza, social media

Procedia PDF Downloads 105
444 3-D Strain Imaging of Nanostructures Synthesized via CVD

Authors: Sohini Manna, Jong Woo Kim, Oleg Shpyrko, Eric E. Fullerton

Abstract:

CVD techniques have emerged as a promising approach for the formation of a broad range of nanostructured materials. The realization of many practical applications will require efficient and economical synthesis techniques that preferably avoid the need for templates or costly single-crystal substrates and also afford process adaptability. Towards this end, we have developed a single-step route for the reduction-type synthesis of nanostructured Ni materials using a thermal CVD method. By tuning the CVD growth parameters, we can synthesize morphologically dissimilar nanostructures, including single-crystal cubes and Au nanostructures, which form atop untreated amorphous SiO2||Si substrates. An understanding of the new properties that emerge in these nanostructured materials and their relationship to function will open the way to a broad range of magnetostrictive devices as well as catalysis, fuel cell, sensor, and battery applications based on high-surface-area transition-metal nanostructures. We use the coherent X-ray diffraction imaging technique to obtain 3-D images and strain maps of individual nanocrystals. Coherent x-ray diffractive imaging (CXDI) is a technique that provides the overall shape of a nanostructure and its lattice distortion based on the combination of highly brilliant coherent x-ray sources and a phase retrieval algorithm. We observe a fine interplay between the reduction of surface energy and internal stress, which plays an important role in the morphology of nanocrystals. The strain distribution is influenced by the metal-substrate interface and the metal-air interface, which arise due to differences in their thermal expansion. We find that the lattice strain at the surface of the octahedral gold nanocrystal agrees quantitatively with the predictions of the Young-Laplace equation but exhibits a discrepancy near the nanocrystal-substrate interface resulting from the interface. The strain on the bottom side of the Ni nanocube, which is in contact with the substrate surface, is compressive. This is caused by dissimilar thermal expansion coefficients of the Ni nanocube and the Si substrate. Research at UCSD was supported by NSF DMR Award # 1411335.
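For reference, the surface-strain comparison mentioned above rests on the Young-Laplace (capillary) pressure of a small crystal; a common back-of-the-envelope estimate of the resulting lattice strain, written here under the assumption of an isotropic particle of radius R, surface energy γ, and bulk modulus B (not the anisotropic analysis of the study), is:

```latex
% Laplace pressure of a particle of radius R and surface energy gamma
P = \frac{2\gamma}{R},
\qquad
% hydrostatic compression it induces in an isotropic crystal of bulk modulus B
\varepsilon = \frac{\Delta a}{a} \approx -\frac{P}{3B} = -\frac{2\gamma}{3BR}.
```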

Keywords: CVD, nanostructures, strain, CXRD

Procedia PDF Downloads 392
443 A Framework Based Blockchain for the Development of a Social Economy Platform

Authors: Hasna Elalaoui Elabdallaoui, Abdelaziz Elfazziki, Mohamed Sadgal

Abstract:

Outlines: The social economy is a moral approach to solidarity applied to project development. To reconcile economic activity and social equity, crowdfunding is an alternative means of financing social projects. Several collaborative blockchain platforms exist. Blockchain eliminates the need for a central authority or an unaccountable middleman, and the costs of a successful crowdfunding campaign are reduced, since there is no commission to be paid to an intermediary. It also improves the transparency of record keeping and reduces the delegation of authority to parties who may be prone to corruption. Objectives: The objectives are to define a software infrastructure for the participatory financing of projects within a social and solidarity economy, allowing transparent, secure, and fair management, and to provide a financial mechanism that improves financial inclusion. Methodology: The proposed methodology is: a literature review of crowdfunding platforms, a literature review of financing mechanisms, requirements analysis and project definition, a business plan, the platform development process and implementation technology, and testing of an MVP. Contributions: The solution consists of proposing a new approach to crowdfunding based on the principle of Musharaka, inspired by Islamic financing, which presents a financial innovation that integrates ethics and the social dimension into contemporary banking practices. Conclusion: Crowdfunding platforms need to secure projects and allow only quality projects while also offering a wide range of options to funders. Thus, a framework based on blockchain technology and Islamic financing is proposed to manage this arbitration between quality and quantity of options. The proposed financing system, "Musharaka", is a mode of financing that prohibits interest and uncertainty. The implementation is offered on the secure Ethereum platform: investors sign and initiate transactions for contributions using their digital signature wallet, managed by a cryptographic algorithm and smart contracts. Our proposal is illustrated by a crop irrigation project in the Marrakech region.
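As an illustration of the Musharaka principle the platform builds on (capital pooled by partners, losses shared in proportion to capital, profits shared per an agreed ratio), here is a small Python sketch; it is a conceptual model only, not the Ethereum smart contract of the proposed framework, and the project name and amounts are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class MusharakaProject:
    name: str
    contributions: dict = field(default_factory=dict)   # investor -> capital contributed

    def contribute(self, investor: str, amount: float) -> None:
        self.contributions[investor] = self.contributions.get(investor, 0.0) + amount

    def share_result(self, result: float) -> dict:
        """Distribute a profit (positive) or loss (negative) pro rata to capital.

        In a fuller model, profits could follow a separately agreed ratio,
        while losses remain proportional to capital contributions.
        """
        total = sum(self.contributions.values())
        return {inv: result * amt / total for inv, amt in self.contributions.items()}

# Hypothetical crop irrigation project in the Marrakech region.
project = MusharakaProject("marrakech_irrigation")
project.contribute("investor_a", 6000.0)
project.contribute("investor_b", 4000.0)
print(project.share_result(1500.0))    # profit split: {'investor_a': 900.0, 'investor_b': 600.0}
print(project.share_result(-500.0))    # loss split:   {'investor_a': -300.0, 'investor_b': -200.0}
```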

Keywords: social economy, Musharaka, blockchain, smart contract, crowdfunding

Procedia PDF Downloads 77
442 Association between Organophosphate Pesticides Exposure and Cognitive Behavior in Taipei Children

Authors: Meng-Ying Chiu, Yu-Fang Huang, Pei-Wei Wang, Yi-Ru Wang, Yi-Shuan Shao, Mei-Lien Chen

Abstract:

Background: Organophosphate pesticides (OPs) are the most heavily used pesticides in agriculture in Taiwan. Therefore, they are commonly detected in the general public, including pregnant women and children. These compounds are proven endocrine disrupters that may affect neural development in humans. The aim of this study is to assess the OPs exposure of children at 2 years of age and to examine the association between exposure concentrations and neurodevelopmental effects in children. Methods: In a prospective cohort of 280 mother-child pairs, prenatal and postnatal urine samples were collected from each participant and analyzed for metabolites of OPs using gas chromatography-mass spectrometry. Six analytes were measured: dimethylphosphate (DMP), dimethylthiophosphate (DMTP), dimethyldithiophosphate (DMDTP), diethylphosphate (DEP), diethylthiophosphate (DETP), and diethyldithiophosphate (DEDTP). This study created combined concentration measures for dimethyl compounds (DMs), consisting of the three dimethyl metabolites (DMP, DMTP, and DMDTP); for diethyl compounds (DEs), consisting of the three diethyl metabolites (DEP, DETP, and DEDTP); and for the six dialkyl phosphates (DAPs) together. The Bayley Scales of Infant and Toddler Development (Bayley-III) was used to assess children's cognitive behavior at 2 years old. The association between OPs exposure and Bayley-III scale scores was determined using the Mann-Whitney U test. Results: The measurements of urine samples are still ongoing. These preliminary data are reported for 56 children aged 2 years from the cohort. The detection rates for DMP, DMTP, DMDTP, DEP, DETP, and DEDTP are 80.4%, 69.6%, 64.3%, 64.3%, 62.5%, and 75%, respectively. After adjusting for the creatinine concentrations of urine, the median values (nmol/g creatinine) of urinary DMP, DMTP, DMDTP, DEP, DETP, DEDTP, DMs, DEs, and DAPs are 153.14, 53.32, 52.13, 19.24, 141.65, 192.17, 308.8, 311.6, and 702.11, respectively. These urinary concentrations are considerably higher than those reported in other countries. Children’s cognitive behavior was assessed using the three Bayley-III scales: cognitive, language, and motor. In the Mann-Whitney U test, children with higher levels of DEs had significantly lower motor scores (p=0.037), but no significant association was found between OPs exposure levels and either the cognitive or the language score. Conclusion: Despite the limited sample size, the results suggest that Taipei children are commonly exposed to OPs and that OPs exposure might affect the cognitive behavior of young children. This report will present more data to verify the results. Predictors of OPs concentrations, such as dietary patterns, will also be included.
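The group comparison reported above (higher versus lower DE levels against motor scores) can be run with SciPy's Mann-Whitney U test; the score arrays below are hypothetical placeholders, not the cohort data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical Bayley-III motor scores for children with low vs. high urinary DE levels.
motor_low_de = [102, 98, 105, 110, 95, 101, 99, 108]
motor_high_de = [90, 94, 88, 97, 85, 92, 96, 89]

stat, p = mannwhitneyu(motor_low_de, motor_high_de, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")
```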

Keywords: biomonitoring, children, neurodevelopment, organophosphate pesticides exposure

Procedia PDF Downloads 141
441 Monte Carlo Simulation of Thyroid Phantom Imaging Using Geant4-GATE

Authors: Parimalah Velo, Ahmad Zakaria

Abstract:

Introduction: Monte Carlo simulations of preclinical imaging systems enable new research that can range from designing hardware to the discovery of new imaging applications. A simulation system that can accurately model an imaging modality provides a platform for imaging developments that might be inconvenient in physical experimental systems due to expense, unnecessary radiation exposure, and technological difficulties. The aim of the present study is to validate a Monte Carlo simulation of thyroid phantom imaging using Geant4-GATE for Siemens' e-cam single-head gamma camera. Upon validation of the gamma camera simulation model by comparing physical characteristics such as energy resolution, spatial resolution, sensitivity, and dead time, the GATE simulation of thyroid phantom imaging is carried out. Methods: A thyroid phantom is defined geometrically which comprises 2 lobes of 80 mm in diameter, 1 hot spot, and 3 cold spots. This geometry accurately resembles the actual dimensions of the thyroid phantom. A planar image of 500k counts with a 128x128 matrix size was acquired using the simulation model and in the actual experimental setup. Upon image acquisition, quantitative image analysis was performed by investigating the total number of counts in the image, the contrast of the image, the radioactivity distribution in the image, and the dimensions of the hot spot. The algorithm for each quantification is described in detail. The difference between estimated and actual values in both the simulation and experimental setups is analyzed for radioactivity distribution and the dimension of the hot spot. Results: The results show that the difference between the contrast level of the simulated image and the experimental image is within 2%. The difference in total counts between the simulation and the actual study is 0.4%. The results of activity estimation show that the relative difference between estimated and actual activity for the experimental and simulation studies is 4.62% and 3.03%, respectively. The deviation in the estimated diameter of the hot spot for both the simulation and experimental studies is the same, 0.5 pixel. In conclusion, the comparisons show good agreement between the simulation and experimental data.
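A hedged sketch of the quantitative image analysis step: it computes total counts and a simple hot-spot contrast from ROI means of the planar images and compares simulation against experiment. The arrays, ROI masks, and the particular contrast definition (hot-to-background ratio minus one) are assumptions for illustration, not the in-house algorithm of the study.

```python
import numpy as np

def roi_contrast(image, hot_mask, bkg_mask):
    """Hot-spot contrast as (mean ROI counts - mean background counts) / mean background counts."""
    hot = image[hot_mask].mean()
    bkg = image[bkg_mask].mean()
    return (hot - bkg) / bkg

rng = np.random.default_rng(1)
# Placeholder 128x128 planar images standing in for the GATE simulation and the measurement.
sim_img = rng.poisson(30, size=(128, 128)).astype(float)
exp_img = rng.poisson(30, size=(128, 128)).astype(float)

# Placeholder circular ROI for the hot spot and an annular background region.
yy, xx = np.mgrid[:128, :128]
r = np.hypot(yy - 40, xx - 64)
hot_mask, bkg_mask = r < 6, (r > 20) & (r < 30)
sim_img[hot_mask] += 60          # make the synthetic hot spot visible
exp_img[hot_mask] += 58

print("total counts sim/exp:", sim_img.sum(), exp_img.sum())
c_sim, c_exp = roi_contrast(sim_img, hot_mask, bkg_mask), roi_contrast(exp_img, hot_mask, bkg_mask)
print(f"contrast sim={c_sim:.2f}, exp={c_exp:.2f}, relative diff={abs(c_sim - c_exp) / c_exp:.1%}")
```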

Keywords: gamma camera, Geant4 application of tomographic emission (GATE), Monte Carlo, thyroid imaging

Procedia PDF Downloads 271
440 The Importance of Efficient and Sustainable Water Resources Management and the Role of Artificial Intelligence in Preventing Forced Migration

Authors: Fateme Aysin Anka, Farzad Kiani

Abstract:

Forced migration is a situation in which people are forced to leave their homes against their will due to political conflicts, wars, natural disasters, climate change, economic crises, or other emergencies. This type of migration takes place under conditions in which people cannot lead a sustainable life for reasons such as security, shelter, and meeting their basic needs, and it may occur in connection with different factors that affect people's living conditions. In addition to these general and widespread reasons, water security and water resources are a cause that is emerging now and will be encountered more and more in the future. Forced migration may occur due to insufficient or depleted water resources in the areas where people live. In this case, people's living conditions become unsustainable, and they may have to go elsewhere, as they cannot meet their basic needs, such as drinking water and water for agriculture and industry. To cope with these situations, it is important to minimize the causes, and international organizations and societies must provide assistance (for example, humanitarian aid, shelter, medical support, and education) and protection to address (or mitigate) this problem. From the international perspective, plans such as the Green New Deal (GND) and the European Green Deal (EGD) draw attention to the need for people to live equally in a cleaner and greener world. Especially recently, with the advancement of technology, science and methods have become more efficient. In this regard, this article presents a multidisciplinary case model that addresses the water problem with an engineering approach within the framework of the social dimension. It is worth emphasizing that this problem is largely linked to climate change and the lack of a sustainable water management perspective. As a matter of fact, the United Nations Development Agency (UNDA) draws attention to this problem in its universally accepted sustainable development goals. Therefore, an artificial intelligence-based approach has been applied to the problem by focusing on water management. The most general but also most important aspect of the management of water resources is its correct consumption. In this context, the artificial intelligence-based system undertakes tasks such as water demand forecasting and distribution management, emergency and crisis management, water pollution detection and prevention, and maintenance and repair control and forecasting.

Keywords: water resource management, forced migration, multidisciplinary studies, artificial intelligence

Procedia PDF Downloads 87
439 Evaluation of the Photo Neutron Contamination inside and outside of Treatment Room for High Energy Elekta Synergy® Linear Accelerator

Authors: Sharib Ahmed, Mansoor Rafi, Kamran Ali Awan, Faraz Khaskhali, Amir Maqbool, Altaf Hashmi

Abstract:

Medical linear accelerators (LINACs) used in radiotherapy treatments produce undesired neutrons when they are operated at energies above 8 MeV, in both electron and photon configurations. Neutrons are produced by high-energy photons and electrons through electronuclear (e, n) and photonuclear giant dipole resonance (GDR) reactions. These reactions occur when incoming photons or electrons are incident on the various materials of the target, flattening filter, collimators, and other shielding components in the LINAC structure. These neutrons may reach the patient directly, or they may interact with the surrounding materials until they become thermalized. A study was set up to investigate the effect of different parameters on the production of neutrons around the room by photonuclear reactions induced by photons above ~8 MeV. A commercially available neutron detector (Ludlum Model 42-31H Neutron Detector) was used for the detection of thermal and fast neutrons (0.025 eV to approximately 12 MeV) inside and outside the treatment room. Measurements were performed for different field sizes at 100 cm source-to-surface distance (SSD) of the detector, at different distances from the isocenter, and at the locations of the primary and secondary walls. Other measurements were performed at the door and at the treatment console to address the potential radiation safety concerns of the therapists, who must walk in and out of the room for the treatments. Exposures were made with Elekta Synergy® linear accelerators at two different energies (10 MV and 18 MV) for 200 MU at a dose rate of 600 MU per minute. Results indicate that neutron doses at 100 cm SSD depend on accelerator characteristics, i.e., jaw settings: since the jaws are made of high-atomic-number material, they provide significant photon interactions that produce neutrons, while doses at larger distances from the isocenter are strongly influenced by the treatment room geometry, and backscattering from the walls causes greater doses compared with the dose at 100 cm from the isocenter. In the treatment room, the ambient dose equivalent due to photons produced during the decay of activation nuclei varies from 4.22 mSv.h−1 to 13.2 mSv.h−1 (at the isocenter), 6.21 mSv.h−1 to 29.2 mSv.h−1 (primary wall), and 8.73 mSv.h−1 to 37.2 mSv.h−1 (secondary wall) for 10 and 18 MV, respectively. The ambient dose equivalent for neutrons at the door is 5 μSv.h−1 to 2 μSv.h−1, while at the treatment console it is 2 μSv.h−1 to 0 μSv.h−1 for 10 and 18 MV, respectively, which shows that a 2 m thick and 5 m long concrete maze provides sufficient shielding for neutrons at the door as well as at the treatment console for 10 and 18 MV photons.

Keywords: equivalent doses, neutron contamination, neutron detector, photon energy

Procedia PDF Downloads 449
438 The Connection Between the Semiotic Theatrical System and the Aesthetic Perception

Authors: Păcurar Diana Istina

Abstract:

The indissoluble link between aesthetics and semiotics, and the harmonization and semiotic understanding of the interactions between the viewer and the object being viewed, form the basis of this practical demonstration of the importance of aesthetic perception within the theater performance. The design of a theater performance includes several structures: some are art forms from the outset (i.e., the text), while others are simple, common objects (e.g., scenographic elements) which, when brought together, can trigger a certain aesthetic perception. The team involved in the performance delivers to the audience a series of auditory and visual signs with which it interacts. Some notions about the physiological basis of how different types of stimuli are transformed at the level of the cerebral hemispheres therefore need to be explained. The cortex, considered the superior integration center of extrinsic and intrinsic stimuli, continuously processes the information it receives, but even when that information arrives at a constant rate, the response it generates is individualized and conditioned by a number of factors. Each changing situation presents the viewer with a new situation to cope with, producing feelings of varying intensity that influence the generation of meanings and, therefore, the management of interactions. In this sense, aesthetic perception depends on detecting the “correctness” of signs whose forms are associated with an aesthetic property. Correctness and aesthetic properties can take positive or negative values. Evaluating the emotions that generate judgment, and implicitly aesthetic perception, whether visual or auditory, involves the integration of three areas of interest: valence, arousal, and context control. In this context, superior human cognitive processes (memory, interpretation, learning, the attribution of meanings, etc.) help trigger the mechanism of anticipation and, no less importantly, the identification of error. This ability to locate a short circuit produced in a series of successive events is fundamental to the process of forming an aesthetic perception. Our main purpose in this research is to investigate the possible conditions under which aesthetic perception, and its minimum content, is generated by all these structures and, in particular, by interactions with forms that are not commonly considered aesthetic forms. In order to demonstrate the quantitative and qualitative importance of the categories of signs used to construct a code for reading a certain message, and to emphasize the importance of the order in which these indices are used, we have structured a mathematical analysis centered on the percentage of each category of sign used in a theater performance.
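
A minimal sketch of the kind of percentage analysis referred to above is given below; the sign categories and their counts are invented placeholders, not the performance data analyzed in the study.

```python
from collections import Counter

# Hypothetical tally of signs observed in one performance; categories and counts are invented.
signs = Counter({"verbal text": 420, "gesture": 310, "music/sound": 140,
                 "lighting": 95, "scenographic object": 60, "costume": 35})

# Percentage share of each sign category in the overall code delivered to the audience.
total = sum(signs.values())
for category, count in signs.most_common():
    print(f"{category:>20}: {count:4d} ({100 * count / total:5.1f}%)")
```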

Keywords: semiology, aesthetics, theatre semiotics, theatre performance, structure, aesthetic perception

Procedia PDF Downloads 90
437 Tumor Size and Lymph Node Metastasis Detection in Colon Cancer Patients Using MR Images

Authors: Mohammadreza Hedyehzadeh, Mahdi Yousefi

Abstract:

Colon cancer is one of the most common cancers, and its prevalence is predicted to increase because of poor eating habits. As the pace of life accelerates, the consumption of fast food is rising, which makes the diagnosis and treatment of this disease particularly important. To determine the best treatment approach for each colon cancer patient, the oncologist needs to know the stage of the tumor. The most common method for determining tumor stage is the TNM staging system, in which M indicates the presence of metastasis, N the extent of spread to the lymph nodes, and T the size of the tumor. Determining all three of these parameters requires an imaging method, and the gold-standard imaging protocols for this purpose are CT and PET/CT. In CT imaging, the use of X-rays means that the absorbed dose to the patient, and with it the cancer risk, is high, while access to PET/CT is limited by its high cost. In this study, we therefore aimed to estimate the tumor size and the extent of its spread to the lymph nodes using MR images. More than 1300 MR images were collected from the TCIA portal. In the first (pre-processing) step, histogram equalization was applied to improve image quality, and the images were resized to a common size. Two expert radiologists, each with more than 21 years of experience with colon cancer cases, segmented the images and extracted the tumor regions. The next steps were feature extraction from the segmented images and classification of the data into three classes: T0N0, T3N1, and T3N2. In this article, the VGG-16 convolutional neural network was used to perform both tasks, i.e., feature extraction and classification. This network has 13 convolutional layers for feature extraction and three fully connected layers with a softmax activation function for classification. To validate the proposed method, 10-fold cross-validation was applied: the data were randomly divided into three parts, training (70% of the data), validation (10%), and the remainder for testing. This was repeated ten times; in each repetition, the accuracy, sensitivity, and specificity of the model were calculated, and the average over the ten repetitions is reported as the result. The accuracy, specificity, and sensitivity of the proposed method on the test dataset were 89.09%, 95.8%, and 96.4%, respectively. Compared to previous studies, the use of a safe imaging technique (MRI) and the absence of predefined hand-crafted imaging features for determining the stage of colon cancer patients are among the advantages of this study.
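
The following Keras sketch illustrates a pipeline of the kind described: a VGG-16 backbone used as a feature extractor with a three-class softmax head. The directory layout, image size, and training settings are assumptions for illustration, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Assumed settings; the paper's exact preprocessing and hyperparameters are not given here.
IMG_SIZE = (224, 224)
NUM_CLASSES = 3          # hypothetical classes: T0N0, T3N1, T3N2

# VGG-16 backbone: 13 convolutional layers used as a fixed feature extractor.
base = VGG16(weights="imagenet", include_top=False, input_shape=(*IMG_SIZE, 3))
base.trainable = False

# Three fully connected layers with a softmax output, as in the described classifier head.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(4096, activation="relu"),
    layers.Dense(4096, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical folders of pre-segmented, histogram-equalized tumor ROIs, one subfolder per class.
# (Pixel scaling / VGG-specific preprocessing is omitted here for brevity.)
train_ds = tf.keras.utils.image_dataset_from_directory(
    "colon_mri/train", image_size=IMG_SIZE, batch_size=16)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "colon_mri/val", image_size=IMG_SIZE, batch_size=16)

model.fit(train_ds, validation_data=val_ds, epochs=10)
```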

Keywords: colon cancer, VGG-16, magnetic resonance imaging, tumor size, lymph node metastasis

Procedia PDF Downloads 59
436 Neuroevolution Based on Adaptive Ensembles of Biologically Inspired Optimization Algorithms Applied for Modeling a Chemical Engineering Process

Authors: Sabina-Adriana Floria, Marius Gavrilescu, Florin Leon, Silvia Curteanu, Costel Anton

Abstract:

Neuroevolution is a subfield of artificial intelligence used to solve various problems in different application areas. Specifically, neuroevolution is a technique that applies biologically inspired methods to generate neural network architectures and optimize their parameters automatically. In this paper, we use different biologically inspired optimization algorithms in an ensemble strategy with the aim of training multilayer perceptron neural networks, resulting in regression models used to simulate the industrial chemical process of obtaining bricks from silicone-based materials. Installations in the raw ceramics industry, i.e., brick production, are characterized by significant energy consumption and large quantities of emissions. In addition, the initial conditions that were taken into account during the design and commissioning of the installation can change over time, which leads to the need to add new mixes to adjust the operating conditions for the desired purpose, e.g., material properties and energy saving. The present approach studies the process of obtaining bricks from silicone-based materials by simulation, i.e., through modeling and optimization of the process. Optimization aims to determine the working conditions that minimize the emissions, represented by nitrogen monoxide. We first use a search procedure to find the best values for the parameters of various biologically inspired optimization algorithms. Then, we propose an adaptive ensemble strategy that uses only a subset of the best algorithms identified in the search stage. The adaptive ensemble strategy combines the results of the selected algorithms and automatically assigns more processing capacity to the more efficient ones, whose efficiency may also vary at different stages of the optimization process. In a given ensemble iteration, the most efficient algorithms aim to maintain good convergence, while the less efficient algorithms can improve population diversity. The proposed adaptive ensemble strategy outperforms both the individual optimizers and a non-adaptive ensemble strategy in convergence speed, and the obtained results have lower error values.
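
The sketch below illustrates the adaptive-ensemble idea on a toy regression problem: two stand-in population-based optimizers train a small multilayer perceptron, and the iteration budget is reallocated each generation toward the optimizer that recently improved the best fitness the most. The data, the stand-in optimizers, and the credit rule are illustrative assumptions, not the algorithms or process model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data standing in for the brick-process dataset (inputs -> emission value).
X = rng.uniform(-1, 1, size=(200, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=200)

def mlp_error(w, hidden=8):
    """Mean squared error of a 4-input, 8-hidden, 1-output MLP whose weights are packed in w."""
    W1 = w[:4 * hidden].reshape(4, hidden)
    b1 = w[4 * hidden:5 * hidden]
    W2 = w[5 * hidden:6 * hidden]
    b2 = w[-1]
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((pred - y) ** 2)

DIM = 4 * 8 + 8 + 8 + 1

def de_step(pop, fit):
    """One differential-evolution-style step (a stand-in for one bio-inspired optimizer)."""
    for i in range(len(pop)):
        a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
        trial = a + 0.7 * (b - c)
        f = mlp_error(trial)
        if f < fit[i]:
            pop[i], fit[i] = trial, f
    return pop, fit

def ga_step(pop, fit):
    """One mutation-only genetic-algorithm step (a second stand-in optimizer)."""
    order = np.argsort(fit)
    parents = pop[order[:len(pop) // 2]]
    children = parents + rng.normal(0, 0.1, parents.shape)
    for child in children:
        f = mlp_error(child)
        worst = np.argmax(fit)
        if f < fit[worst]:
            pop[worst], fit[worst] = child, f
    return pop, fit

optimizers = {"DE": de_step, "GA": ga_step}
pop = rng.normal(0, 0.5, size=(30, DIM))
fit = np.array([mlp_error(w) for w in pop])
credit = {name: 1.0 for name in optimizers}            # processing capacity per optimizer

for generation in range(50):
    weights = np.array([credit[n] for n in optimizers])
    shares = weights / weights.sum()
    for (name, step), share in zip(optimizers.items(), shares):
        # More efficient optimizers receive proportionally more steps in this generation.
        for _ in range(max(1, int(round(5 * share)))):
            before = fit.min()
            pop, fit = step(pop, fit)
            credit[name] = 0.9 * credit[name] + max(before - fit.min(), 0.0)

print("best MSE:", fit.min(), "final credit:", {k: round(v, 4) for k, v in credit.items()})
```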

Keywords: optimization, biologically inspired algorithm, neuroevolution, ensembles, bricks, emission minimization

Procedia PDF Downloads 116
435 Miniaturized PVC Sensors for Determination of Fe2+, Mn2+ and Zn2+ in Buffalo-Cows’ Cervical Mucus Samples

Authors: Ahmed S. Fayed, Umima M. Mansour

Abstract:

Three polyvinyl chloride (PVC) membrane sensors were developed for the electrochemical evaluation of ferrous, manganese, and zinc ions. The sensors were used for assaying these metal ions in the cervical mucus (CM) of Egyptian river buffalo-cows (Bubalus bubalis), as their levels vary with the cyclical hormone variation during the different phases of the estrus cycle. The presented sensors are based on the ionophores β-cyclodextrin (β-CD), hydroxypropyl-β-cyclodextrin (HP-β-CD), and sulfocalix-4-arene (SCAL) for sensors 1, 2, and 3, for Fe2+, Mn2+, and Zn2+, respectively. Dioctyl phthalate (DOP) was used as the plasticizer in a polymeric matrix of polyvinyl chloride. To increase the selectivity and sensitivity of the sensors, each sensor was enriched with a suitable complexing agent, which enhanced its response. For sensor 1, β-CD was mixed with bathophenanthroline; for sensor 2, porphyrin was incorporated with HP-β-CD; while for sensor 3, oxine was used as the complexing agent with SCAL. Linear responses over 10−7 to 10−2 M, with cationic slopes of 53.46, 45.01, and 50.96 mV/decade over the pH range 4-8, were obtained using coated graphite sensors for ferrous, manganese, and zinc ionic solutions, respectively. The three sensors were validated according to the IUPAC guidelines. The results obtained by the presented potentiometric procedures were statistically analyzed and compared with those obtained by an atomic absorption spectrophotometric (AAS) method; no significant differences in either accuracy or precision were observed between the two techniques. The sensors were successfully applied to the determination of the three studied cations in CM for the purpose of determining the proper time for artificial insemination (AI), and the results were again compared with those obtained by AAS. Proper detection of estrus and the correct timing of AI are necessary to maximize the production of buffaloes. In this experiment, 30 multiparous buffalo-cows, in their second to third lactation and weighing 415-530 kg, were synchronized with the OVSynch protocol. Samples were taken at three times around ovulation: on day 8 of the OVSynch protocol, on day 9 (20 h before AI), and on day 10 (1 h before AI). Besides the analysis of the trace elements (Fe2+, Mn2+, and Zn2+) in CM using the three sensors, the three cations and Cu2+ were also determined by AAS in the CM and blood samples. The results were correlated with the hormonal analysis of serum samples and with ultrasonography in order to determine the optimum time of AI. The results showed significant differences and a strong correlation between the Zn2+ content of CM during the heat phase and the ovulation time, indicating that this parameter could be used as a tool to decide the optimal time of AI in buffalo-cows.
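
As an illustration of how such a linear (Nernstian-type) response is used, the sketch below fits a calibration slope to invented potential readings over the quoted concentration range and reads an unknown sample back from the calibration line; the numbers are placeholders, not the reported sensor data.

```python
import numpy as np

# Hypothetical calibration data for one ion-selective electrode: potential (mV) read at
# decade concentration steps; the values are invented for illustration only.
conc = np.array([1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2])       # mol/L
emf = np.array([-30.0, 23.5, 77.0, 130.2, 183.9, 237.1])    # mV

# Linear response E = E0 + S * log10(C); the slope S (mV/decade) is the figure of merit.
slope, intercept = np.polyfit(np.log10(conc), emf, 1)
print(f"slope = {slope:.2f} mV/decade, E0 = {intercept:.1f} mV")

# An unknown sample concentration is then recovered from its measured potential.
unknown_emf = 100.0
unknown_conc = 10 ** ((unknown_emf - intercept) / slope)
print(f"estimated concentration: {unknown_conc:.2e} mol/L")
```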

Keywords: PVC Sensors, buffalo-cows, cyclodextrins, atomic absorption spectrophotometry, artificial insemination, OVSynch protocol

Procedia PDF Downloads 219
434 Psychological Consultation of Married Couples at Various Stages of Formation of the Young Family

Authors: Gulden Aykinbaeva, Assem Umirzakova, Assel Makhadiyeva

Abstract:

Studying young married couples in the context of changes in the social institution of family and marriage is highly relevant to family consultation, given the role of the family in the development of modern society. Numerous studies indicate that the young-family period is one of the most difficult stages in the formation and stabilization of a marriage. This period is characterized by various processes of integration, adaptation, and emotional compatibility between the spouses. During this period, the young family goes through its first normative crisis, which leaves an imprint on the further development of the family scenario. The emergence of new, previously non-existent systems of values strongly influences the formation of the young family and of each spouse individually. Family tasks can be solved through the development of a unified system of family relations in which socially mature individuals, capable of treating the family as their joint creation, act as its subjects. In line with the research objective, the following techniques were used: V. V. Stolin's marriage satisfaction questionnaire, A. N. Volkova's technique for detecting the coherence of family values and role expectations in a married couple, and content analysis. Developing the internal foundation of a family through the mutual clarification of values is important when working with married couples. A "mature view" of the partner in the marriage union provides coherence between the partner's expected and actual behavior, which is important for achieving adaptation within the family. To examine the relationships among the data obtained with A. N. Volkova's and V. V. Stolin's techniques and the content analysis, correlation analysis using Spearman's criterion was applied. The analysis of the results allowed us to identify a number of consistent patterns: 1. The way spouses' satisfaction with marriage changes shows that matrimonial relations undergo qualitative changes at different stages of the formation of a young family. 2. In the course of their development, formation, and functioning in a young marriage, matrimonial relations undergo considerable changes at the psychological and socio-psychological levels and only insignificant changes at the psychophysiological and sociocultural levels. The material obtained allows us to plan further detailed research on the development of matrimonial relations, not only in young marriages but also at later stages of a matrimony. We believe that the results of this research can be applied in practice to the creation of algorithms for selecting marriage partners, to the diagnosis of the nature and content of matrimonial disharmony, and to forecasting the stability of marriage and the family.
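
A minimal sketch of the Spearman correlation step mentioned above is given below, using scipy; the questionnaire scores are invented placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical questionnaire scores for 12 couples; the values are invented for illustration.
marriage_satisfaction = np.array([34, 28, 41, 22, 37, 30, 45, 26, 39, 31, 24, 42])
value_coherence = np.array([0.71, 0.52, 0.80, 0.40, 0.66, 0.58, 0.88, 0.49, 0.73, 0.60, 0.45, 0.82])

# Rank-based (Spearman) correlation between satisfaction and coherence of family values.
rho, p_value = spearmanr(marriage_satisfaction, value_coherence)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```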

Keywords: married couples, formation of the young family, psychological consultation, matrimony

Procedia PDF Downloads 395
433 Analysis and Modeling of Graphene-Based Percolative Strain Sensor

Authors: Heming Yao

Abstract:

Graphene-based percolative strain gauges could find applications in many areas, such as touch panels, artificial skins, and human motion detection, because of advantages over conventional strain gauges such as flexibility and transparency. These strain gauges rely on a novel sensing mechanism that depends on strain-induced morphology changes: once a compressive or tensile strain is applied, the overlap area between neighboring flakes becomes smaller or larger, which is reflected in a considerable change of resistance. A tiny change in strain can thus produce a very large change in the sensor's resistance, which gives graphene-based percolative strain gauges a higher gauge factor. Despite ongoing research into the underlying sensing mechanism and the limits of sensitivity, there is still no adequate understanding of which intrinsic factors play the key role in setting the gauge factor, nor of how the sensitivity of the strain gauge can be enhanced; such understanding would be highly valuable and would provide guidelines for designing novel, easily produced strain sensors with a high gauge factor. Here, we simulated the straining process by modeling graphene flakes and their percolative networks. We constructed a 3D resistance network by simulating the overlapping of graphene flakes and interconnecting the very large number of resistance elements obtained by subdividing each piece of graphene. As the strain increases, the overlapping flakes are displaced on the stretched simulated film, and a new resistance network with a smaller flake number density is formed. By solving the resistance network, we obtain the resistance of the simulated film under different strains. Furthermore, by varying parameters such as the out-of-plane resistance, the in-plane resistance, and the flake size, we obtained the trend of the gauge factor with each of these parameters. Comparison with experimental data verified the feasibility of our model and analysis. Increasing the out-of-plane resistance of the graphene flakes and the initial resistance of the flake-network sensor both improved the gauge factor, while smaller graphene flakes gave a greater gauge factor. This work can serve not only as a guideline for improving the sensitivity and applicability of graphene-based strain sensors in the future, but also as a method for finding the gauge factor limits of strain sensors based on graphene flakes. Moreover, our method can easily be transferred to predict the gauge factor of strain sensors based on other nano-structured transparent conductors, such as nanowires and carbon nanotubes, or their hybrids with graphene flakes.
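
As a greatly simplified stand-in for the 3D percolative network described above, the sketch below models a one-dimensional chain of rigid overlapping flakes on a stretching substrate: strain shrinks each overlap, the junction resistance rises as the inverse of the overlap, and the gauge factor is computed from the resulting resistance change. The geometry and resistance values are illustrative assumptions, not the paper's model parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1D chain of overlapping flakes as a stand-in for the full 3D percolative network.
# Flakes are rigid; straining the substrate moves flake centres apart, shrinking each overlap.
N_FLAKES = 200
FLAKE_LEN = 1.0                                         # arbitrary length units
centres = np.cumsum(rng.uniform(0.6, 0.9, N_FLAKES))    # spacing < flake length -> overlaps exist
R_SHEET = 1.0                                           # in-plane resistance per flake (relative)
RHO_C = 0.5                                             # out-of-plane contact resistivity (relative)

def chain_resistance(strain):
    stretched = centres * (1.0 + strain)            # substrate (and flake spacings) stretch
    overlaps = FLAKE_LEN - np.diff(stretched)       # remaining overlap at each junction
    overlaps = np.clip(overlaps, 1e-6, None)        # avoid division by zero at junction breakup
    r_junction = RHO_C / overlaps                   # smaller overlap -> larger junction resistance
    return N_FLAKES * R_SHEET + r_junction.sum()    # series sum of flakes and junctions

r0 = chain_resistance(0.0)
for strain in (0.005, 0.01, 0.02):
    r = chain_resistance(strain)
    gauge_factor = (r - r0) / (r0 * strain)         # GF = (dR/R0) / strain
    print(f"strain {strain:.3f}: R/R0 = {r / r0:.3f}, gauge factor = {gauge_factor:.1f}")
```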

Keywords: graphene, gauge factor, percolative transport, strain sensor

Procedia PDF Downloads 416
432 Filtering Momentum Life Cycles, Price Acceleration Signals and Trend Reversals for Stocks, Credit Derivatives and Bonds

Authors: Periklis Brakatsoulas

Abstract:

Recent empirical research shows a growing interest in investment decision-making under market anomalies that contradict the rational paradigm. Momentum is undoubtedly one of the most robust anomalies in empirical asset pricing research and has remained surprisingly lucrative ever since it was first documented. Although momentum was predominantly identified in equities, momentum premia are now evident across various asset classes. Yet few attempts have been made so far to provide traders with a diversified portfolio of strategies across different assets and markets. Moreover, the literature focuses on patterns in past returns rather than on mechanisms that signal future price directions prior to momentum runs. The aim of this paper is to develop a diversified portfolio approach to price distortion signals using daily position data on stocks, credit derivatives, and bonds. An algorithm allocates assets periodically, and new investment tactics take over upon price momentum signals and across different ranking groups. We focus on momentum life cycles, trend reversals, and price acceleration signals. The main effort concentrates on the density, time span, and maturity of momentum phenomena in order to identify consistent patterns over time and measure the predictive power of the buy-sell signals generated by these anomalies. To tackle this, we propose a two-stage modelling process. First, we generate forecasts of the core macroeconomic drivers. Second, satellite models generate market risk forecasts using the core driver projections from the first stage as input. Moreover, using a combination of ARFIMA and FIGARCH models, we examine the dependence of consecutive observations across time and across portfolio assets, since long-memory behavior in the volatilities of one market appears to trigger persistent volatility patterns across other markets. We believe that this is the first work that employs evidence of volatility transmission among derivatives, equities, and bonds to identify momentum life cycle patterns.
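
The sketch below illustrates, on simulated prices, the kind of signals discussed above: a 12-1 cross-sectional momentum ranking combined with a simple price-acceleration filter that vetoes longs whose short-horizon trend has started to decelerate. The tickers, look-back windows, and portfolio rule are illustrative assumptions, not the paper's two-stage ARFIMA/FIGARCH framework.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Hypothetical daily closing prices for a small cross-asset universe; the data are simulated
# and the column names are placeholders.
dates = pd.bdate_range("2020-01-01", periods=750)
prices = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, size=(len(dates), 4)), axis=0)),
    index=dates, columns=["EQ_1", "CDS_1", "BOND_1", "BOND_2"])

# Classic 12-1 momentum: past ~12-month return, skipping the most recent month.
momentum = prices.pct_change(252).shift(21)

# Price acceleration: change in the short-horizon trend, a crude early-reversal indicator.
trend_fast = prices.pct_change(21)
acceleration = trend_fast - trend_fast.shift(21)

# Rank assets each day; go long the top half, short the bottom half, and veto longs whose
# acceleration has turned negative (a possible late stage of the momentum life cycle).
ranks = momentum.rank(axis=1, pct=True)
signal = np.where(ranks > 0.5, 1, -1)
signal = np.where((signal == 1) & (acceleration < 0), 0, signal)
positions = pd.DataFrame(signal, index=prices.index, columns=prices.columns)

daily_pnl = (positions.shift(1) * prices.pct_change()).mean(axis=1)
print("annualised Sharpe (toy data):", round(daily_pnl.mean() / daily_pnl.std() * np.sqrt(252), 2))
```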

Keywords: forecasting, long memory, momentum, returns

Procedia PDF Downloads 102
431 Geochemical Study of Natural Bitumen, Condensate and Gas Seeps from Sousse Area, Central Tunisia

Authors: Belhaj Mohamed, M. Saidi, N. Boucherab, N. Ouertani, I. Bouazizi, M. Ben Jrad

Abstract:

Natural hydrocarbon seepage has long helped petroleum exploration as a direct indicator of subsurface gas and/or oil accumulations. Surface macro-seeps are generally an indication of a fault in an active petroleum seepage system belonging to a total petroleum system. This paper describes a case study in which multiple analytical techniques were used to identify and characterize trace petroleum-related hydrocarbons and other volatile organic compounds in groundwater samples collected from the Sousse aquifer (Central Tunisia). The analytical techniques used for the water samples included gas chromatography-mass spectrometry (GC-MS), capillary GC with flame-ionization detection, compound-specific isotope analysis, and Rock-Eval pyrolysis. The objective of the study was to confirm the presence of gasoline, other petroleum products, or other volatile organic pollutants in those samples in order to assess the respective contribution of each of the potentially responsible parties to the contamination of the aquifer. In addition, the degree of contamination at different depths in the aquifer was also of interest. The oil and gas seeps were investigated using biomarker and stable carbon isotope analyses to perform oil-oil and oil-source rock correlations. The seepage gases are characterized by a high CH4 content, very low δ13CCH4 values (-71.9‰), high C1/C1–5 ratios (0.95–1.0), light deuterium-hydrogen isotope ratios (-198‰), and light δ13CC2 and δ13CCO2 values (-23.8‰ and -23.8‰, respectively), indicating a thermogenic origin with a contribution of biogenic gas. An organic geochemistry study was carried out on more than ten oil seep samples, including light hydrocarbon and biomarker analyses (hopanes, steranes, n-alkanes, acyclic isoprenoids, and aromatic steroids) using GC and GC-MS. The studied samples fall into at least two distinct families, suggesting two different origins for the crude oils: the first group of oil seeps appears to be highly mature, shows evidence of chemical and/or biological degradation, was derived from a clay-rich source rock deposited under suboxic conditions, and was sourced mainly by the lower Fahdene (Albian) source rocks. The second group was derived from a carbonate-rich source rock deposited under anoxic conditions and correlates well with the Bahloul (Cenomanian-Turonian) source rock.
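
As a small worked example of the gas-composition ratio quoted above, the sketch below computes the C1/C1–5 (dryness) ratio from an invented molar composition; the values are placeholders, not the analyzed seep data.

```python
# Hypothetical molar composition of a gas seep (mol %); the values are invented for illustration.
composition = {"C1": 96.5, "C2": 1.2, "C3": 0.6, "C4": 0.3, "C5": 0.1}

# C1/C1-5 ratio: methane over the sum of the C1 to C5 hydrocarbons.
c1 = composition["C1"]
total_c1_c5 = sum(composition.values())
dryness = c1 / total_c1_c5
print(f"C1/C1-5 = {dryness:.3f}")

# A ratio close to 1 (very dry gas), taken together with the carbon isotope values, is the
# kind of evidence the abstract uses to argue for a thermogenic gas with a biogenic contribution.
```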

Keywords: biomarkers, oil and gas seeps, organic geochemistry, source rock

Procedia PDF Downloads 443