Search results for: extraction tool
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6760

5680 American Sign Language Recognition System

Authors: Rishabh Nagpal, Riya Uchagaonkar, Venkata Naga Narasimha Ashish Mernedi, Ahmed Hambaba

Abstract:

The rapid evolution of technology in the communication sector continually seeks to bridge the gap between different communities, notably between the deaf community and the hearing world. This project develops a comprehensive American Sign Language (ASL) recognition system, leveraging the advanced capabilities of convolutional neural networks (CNNs) and vision transformers (ViTs) to interpret and translate ASL in real-time. The primary objective of this system is to provide an effective communication tool that enables seamless interaction through accurate sign language interpretation. The architecture of the proposed system integrates dual networks: VGG16 for precise spatial feature extraction and vision transformers for contextual understanding of the sign language gestures. The system processes live input, extracting critical features through these sophisticated neural network models, and combines them to enhance gesture recognition accuracy. This integration facilitates a robust understanding of ASL by capturing detailed nuances and broader gesture dynamics. The system is evaluated through a series of tests that measure its efficiency and accuracy in real-world scenarios. Results indicate a high level of precision in recognizing diverse ASL signs, substantiating the potential of this technology in practical applications. Challenges such as enhancing the system’s ability to operate in varied environmental conditions and further expanding the dataset for training were identified and discussed. Future work will refine the model’s adaptability and incorporate haptic feedback to enhance the interactivity and richness of the user experience. This project demonstrates the feasibility of an advanced ASL recognition system and lays the groundwork for future innovations in assistive communication technologies.
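
The dual-network idea lends itself to a compact sketch. The following is a minimal, hypothetical Keras example (not the authors' code) showing how frozen VGG16 spatial features could be fused with a small transformer-style branch for gesture classification; the class count, patch size, and layer widths are assumptions.

```python
# Sketch (assumed architecture): fuse VGG16 spatial features with a small
# self-attention branch for ASL gesture classification.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 29  # assumed: 26 letters plus a few control signs

def build_asl_model(img_size=224):
    inputs = layers.Input(shape=(img_size, img_size, 3))

    # Spatial branch: pre-trained VGG16 as a frozen feature extractor
    vgg = VGG16(include_top=False, weights="imagenet", pooling="avg")
    vgg.trainable = False
    spatial_feats = vgg(inputs)                               # (batch, 512)

    # Contextual branch: transformer-style attention over image patches
    patches = layers.Conv2D(64, 16, strides=16)(inputs)       # 14x14 patches
    tokens = layers.Reshape((-1, 64))(patches)                 # (batch, 196, 64)
    attn = layers.MultiHeadAttention(num_heads=4, key_dim=64)(tokens, tokens)
    context_feats = layers.GlobalAveragePooling1D()(attn)      # (batch, 64)

    # Fuse both feature sets, then classify
    fused = layers.Concatenate()([spatial_feats, context_feats])
    x = layers.Dense(256, activation="relu")(fused)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return Model(inputs, outputs)

model = build_asl_model()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```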

Keywords: sign language, computer vision, vision transformer, VGG16, CNN

Procedia PDF Downloads 36
5679 An Automated Approach to Consolidate Galileo System Availability

Authors: Marie Bieber, Fabrice Cosson, Olivier Schmitt

Abstract:

Europe's Global Navigation Satellite System, Galileo, provides worldwide positioning and navigation services. The satellites in space are only one part of the Galileo system. An extensive ground infrastructure is essential to oversee the satellites and ensure accurate navigation signals. High reliability and availability of the entire Galileo system are crucial to continuously provide positioning information of high quality to users. Outages are tracked, and operational availability is regularly assessed. A highly flexible and adaptive tool has been developed to automate the Galileo system availability analysis. Not only does it enable quick availability consolidation, but it also provides first steps towards improving the data quality of the maintenance tickets used for the analysis. This includes data import and data preparation, with a focus on processing the strings used for classification and on identifying faulty data. Furthermore, the tool is designed to handle small amounts of data, which is a major constraint when the aim is to provide accurate statistics.
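
The ticket-processing steps described above can be illustrated with a small, hypothetical pandas sketch; the column names, cleaning rules, and availability formula are placeholders rather than the actual tool.

```python
# Hypothetical sketch of ticket cleaning and availability consolidation.
# Column names ("element", "start", "end") are assumptions.
import pandas as pd

def load_tickets(path: str) -> pd.DataFrame:
    df = pd.read_csv(path, parse_dates=["start", "end"])
    # Normalize free-text classification strings and flag faulty rows
    df["element"] = df["element"].str.strip().str.upper()
    df["faulty"] = df["end"].isna() | (df["end"] < df["start"])
    return df

def availability(df: pd.DataFrame, period_hours: float) -> float:
    ok = df[~df["faulty"]]
    outage_hours = (ok["end"] - ok["start"]).dt.total_seconds().sum() / 3600.0
    return 1.0 - outage_hours / period_hours

# Example: consolidate availability over a 30-day reporting period
# tickets = load_tickets("maintenance_tickets.csv")
# print(f"Operational availability: {availability(tickets, 30 * 24):.4f}")
```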

Keywords: availability, data quality, system performance, Galileo, aerospace

Procedia PDF Downloads 162
5678 Fault Tree Analysis and Bayesian Network for Fire and Explosion of Crude Oil Tanks: Case Study

Authors: B. Zerouali, M. Kara, B. Hamaidi, H. Mahdjoub, S. Rouabhia

Abstract:

In this paper, a safety analysis is carried out for crude oil tanks to prevent undesirable events that may cause catastrophic accidents. The estimation of the probability of damage to industrial systems is carried out through a series of steps and in accordance with a specific methodology. In this context, this work involves developing an assessment and risk analysis tool at the level of the crude oil tank system, based primarily on the identification of the various potential causes of crude oil tank fire and explosion using Fault Tree Analysis (FTA), followed by improved risk modelling with Bayesian Networks (BNs). The Bayesian approach to evaluating failure and quantifying risk is a dynamic analysis approach; for this reason, it has been selected as the analytical tool in this study. The research concludes that Bayesian networks offer a distinct and effective method for safety analysis because of the flexibility of their structure, which makes them suitable for a wide variety of accident scenarios.
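
As an illustration of the FTA-to-BN mapping described above, the sketch below encodes a two-cause OR gate for a tank fire as a discrete Bayesian network using pgmpy; the event names and probabilities are invented for demonstration and are not taken from the case study.

```python
# Illustrative mapping of an OR gate from a fault tree into a Bayesian network.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

bn = BayesianNetwork([("LightningStrike", "TankFire"), ("StaticSpark", "TankFire")])

cpd_lightning = TabularCPD("LightningStrike", 2, [[0.99], [0.01]])
cpd_spark = TabularCPD("StaticSpark", 2, [[0.97], [0.03]])
# OR gate: fire occurs if at least one basic event occurs
cpd_fire = TabularCPD(
    "TankFire", 2,
    [[1.0, 0.0, 0.0, 0.0],   # P(no fire | parents)
     [0.0, 1.0, 1.0, 1.0]],  # P(fire | parents)
    evidence=["LightningStrike", "StaticSpark"], evidence_card=[2, 2],
)
bn.add_cpds(cpd_lightning, cpd_spark, cpd_fire)

infer = VariableElimination(bn)
print(infer.query(["TankFire"]))                                    # prior risk of fire
print(infer.query(["LightningStrike"], evidence={"TankFire": 1}))   # diagnostic update
```

Unlike the static fault tree, the network can be updated with observed evidence, which is the flexibility the abstract refers to.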

Keywords: bayesian networks, crude oil tank, fault tree, prediction, safety

Procedia PDF Downloads 655
5677 Cytotoxic Activity against MCF-7 Breast Cancer Cells and Antioxidant Property of Aqueous Tempe Extracts from Extended Fermentation

Authors: Zatil Athaillah, Anastasia Devi, Dian Muzdalifah, Wirasuwasti Nugrahani, Linar Udin

Abstract:

During tempe fermentation, some chemical changes occur, and they contribute to the sensory properties, appearance, and health benefits of soybeans. Many studies on the health properties of tempe have focused on their isoflavones. In this study, other components of tempe, particularly water-soluble chemicals, were investigated for their biofunctionality. The study focused on the ability to suppress MCF-7 breast cancer cell growth and on the antioxidant activity, as expressed by DPPH radical scavenging activity, total phenols and total flavonoids, of the water extracts. The fermentation time of tempe was extended up to 120 hr to increase the possibility of finding functional components. Extraction yield and soluble nitrogen content were also quantified as accompanying data. Our findings suggest that the yield of water extraction of tempe increased as fermentation was extended up to 120 hr, except for a slight decrease at 72 hr. Water extracts of tempe showed inhibition of MCF-7 breast cancer cell growth, as shown by lower IC50 values when compared to the control (unfermented soybeans). Among the varied fermentation times, the 60-hr period showed the highest activity (IC50 of 8.7 ± 4.95 µg/ml). The anticancer activity of extracts obtained from different fermentation times was positively correlated with total soluble nitrogen, but less relevant to the antioxidant data. During 48-72 hr fermentation, at which cancer suppression activity was significant, the antioxidant properties from the three assays were not higher than the control. These findings indicate that water extracts of tempe from extended fermentation can inhibit breast cancer cell growth, but further study should be conducted to determine the mechanism and the compounds that play an important role in this activity.

Keywords: tempe, anticancer, antioxidant, phenolic compounds

Procedia PDF Downloads 240
5676 Effects of Drying and Extraction Techniques on the Profile of Volatile Compounds in Banana Pseudostem

Authors: Pantea Salehizadeh, Martin P. Bucknall, Robert Driscoll, Jayashree Arcot, George Srzednicki

Abstract:

Banana is one of the most important crops produced in large quantities in tropical and sub-tropical countries. Of the total plant material grown, approximately 40% is considered waste and left in the field to decay. This practice allows fungal diseases such as Sigatoka Leaf Spot to develop, limiting plant growth and spreading spores in the air that can cause respiratory problems in the surrounding population. The pseudostem is considered a waste residue of production (60 to 80 tonnes/ha/year), although it is a good source of dietary fiber and volatile organic compounds (VOCs). Strategies to process banana pseudostem into palatable, nutritious and marketable food materials could provide significant social and economic benefits. Extraction of VOCs with desirable odor from dried and fresh pseudostem could improve the smell of products from the confectionery and bakery industries. Incorporation of banana pseudostem flour into bakery products could provide cost savings and improve nutritional value. The aim of this study was to determine the effects of drying methods and of different banana species on the profile of volatile aroma compounds in dried banana pseudostem. The banana species analyzed were Musa acuminata and Musa balbisiana. Fresh banana pseudostem samples were processed by either freeze-drying (FD) or heat pump drying (HPD). The extraction of VOCs was performed at ambient temperature using vacuum distillation, and the resulting, mostly aqueous, distillates were analyzed using headspace solid phase microextraction (SPME) gas chromatography – mass spectrometry (GC-MS). Optimal SPME adsorption conditions were 50 °C for 60 min using a Supelco 65 μm PDMS/DVB Stableflex fiber. Compounds were identified by comparison of their electron impact mass spectra with those from the Wiley 9 / NIST 2011 combined mass spectral library. The results showed that the two species have notably different VOC profiles. Both species contained VOCs that have been established in the literature to have pleasant, appetizing aromas. These included l-Menthone, D-Limonene, trans-linalool oxide, 1-Nonanol, cis-6-Nonen-1-ol, 2,6-Nonadien-1-ol, Benzenemethanol 4-methyl-, 1-Butanol 3-methyl-, hexanal, 1-Propanol 2-methyl-, and 2-Methyl-2-butanol. Results show that banana pseudostem VOCs are better preserved by FD than by HPD. This study is still in progress and should lead to the optimization of processing techniques that would promote the utilization of banana pseudostem in the food industry.

Keywords: heat pump drying, freeze drying, SPME, vacuum distillation, VOC analysis

Procedia PDF Downloads 327
5675 Metropolitan Governance in Statutory Plan Making Process

Authors: Vibhore Bakshi

Abstract:

This research paper is a step towards understanding the role of governance in the plan preparation process. It addresses the complexities of the peri-urban, historical constructions, politics and policies of sustainability, and legislative frameworks. The paper reflects on Delhi NCT as one of the classical cases that has witnessed different structural changes in the master plan around 1981, 2001, 2021, and the Proposed Draft 2041. The Delhi Landsat imageries for 1989 and 2018 show an increase in the built-up areas around the periphery of the NCT. The peri-urbanization has been a result of increasing in-migration to the peri-urban areas of Delhi. The built-up extraction for the years 1981, 1991, 2001, 2011, and 2018 highlights the growing peri-urbanization on scarce land; therefore, it becomes equally important to research the history of the land and its legislative measures. It is interesting to understand the streaks of changes that have occurred in the land of Delhi in accordance with the different master plans and land legislative policies. The master planning process in Delhi has experienced many complexities in juxtaposition to other metropolitan regions of the world. The paper identifies the shortcomings in the current master planning approach with regard to the stage of the planning process, the traditional planning approach, and lagging ICT-based interventions. Metropolitan governance systems across the globe and in India show diversity in their organizational setup and varied dissemination of functions.

Keywords: governance, land provisions, built-up areas, in migration, built up extraction, master planning process, legislative policies, metropolitan governance systems

Procedia PDF Downloads 168
5674 Extraction of Dye from Coconut Husk and Its Application on Wool and Silk

Authors: Deepali Rastogi

Abstract:

Natural dyes are considered to be eco-friendly as they cause no pollution and are safe to use. With the growing interest in natural dyes, new sources of natural dyes are being explored. Coconut (Cocos nucifera) is native to the tropical eastern region. It is abundantly available in Asia, Africa and South America. While coconut has tremendous commercial value in the food, oil, pharmaceutical and cosmetic industries, the most important use of coconut husk has been as coir, which is used for making mats, ropes, etc. In the present study, an attempt has been made to extract dye from the coconut husk and study its application on wool and silk. Dye was extracted from coconut husk in an aqueous medium at three different pH values. The coconut husk fibres were boiled in water at pH 4, 7 and 9 for one hour. On visual inspection of the extracted dye solution, maximum colour was found to be extracted at pH 9. Some dye was also obtained in the neutral medium, whereas no dye was extracted in the acidic medium. Therefore, the alkaline medium at pH 9 was selected for the extraction of dye from coconut husk. The extracted dye was applied on wool and silk at three different pH values, viz., 4, 7 and 9. The effect of pre- and post-mordanting with alum and ferrous sulphate on the colour value of the coconut husk dye was also studied. The L*a*b*/L*c*h* values were measured to see the effect of the mordants on the colour values of all the dyed and mordanted samples. Bright golden brown to dark brown colours were obtained at pH 4 on both wool and silk. The colour yield was not very good at pH 7 and 9. Mordanting with alum resulted in darker and brighter shades of brown, whereas mordanting with ferrous sulphate resulted in darker and duller shades. All the samples were tested for colourfastness to light, rubbing, washing and perspiration. Both wool and silk dyed with the dye extracted from coconut husk exhibited good to excellent wash, rub and perspiration fastness. Fastness to light was moderate to good.

Keywords: coconut husk, wool, silk, natural dye, mordants

Procedia PDF Downloads 424
5673 Experimental Investigation of Physical Properties of Bambusa Oldhamii and Yushania Alpina on the Influence of Age and Harvesting Season

Authors: Tigist Girma Kedane

Abstract:

The purpose of the current research work is to measure the physical properties of bamboo species in Ethiopia as influenced by age, harvesting season and culm height. Three representative bamboo plants were harvested for each of three age groups, in two harvesting months, and from three regions of Ethiopia. Research has not been done on the physical properties of bamboo species in Ethiopia so far. Moisture content and shrinkage of the bamboo culm increase as the culm gets younger and as the position moves from top to bottom. Culms harvested in November have higher moisture content and shrinkage than those harvested in February. One-year-old Injibara, Kombolcha, and Mekaneselam bamboo culms have 40%, 30%, and 33% higher moisture content, 29%, 24%, and 28% higher radial shrinkage, and 32%, 37%, and 32% higher tangential shrinkage compared to 3-year-old culms, respectively. The bottom position of Injibara, Kombolcha, and Mekaneselam culms harvested in November has 45%, 28%, and 25% higher moisture content, 41%, 29%, and 34% higher radial shrinkage, and 29%, 28%, and 42% higher tangential shrinkage than the top position, respectively. The basic density increases as the bamboo becomes older and as the position moves from the bottom to the top. November has the lowest basic density compared to February. Three-year-old Injibara, Kombolcha, and Mekaneselam culms have 32%, 50%, and 24% higher basic density compared to 1-year-old culms, whereas the top position has 35%, 26%, and 22% higher basic density than the bottom position in February, respectively. The current research suggests that 3-year-old culms harvested in February are suited for structural purposes and furniture making, whereas 1-year-old culms harvested in November are suited for fiber extraction in the composite industry. The presence of water in the culm enables easy extraction of the fibers from the culm without damage.

Keywords: bamboo age, bamboo height, harvesting seasons, physical properties

Procedia PDF Downloads 58
5672 Qualitative Profiling Model and Competencies Evaluation to Fighting Unemployment

Authors: Francesca Carta, Giovanna Linfante, Laura Agneni, Debora Radicchia, Camilla Micheletta, Angelo Del Cimmuto

Abstract:

Overcoming competence mismatches and fostering career pathways congruent with the individual skills profile would significantly contribute to fighting unemployment. The aim of this paper is to examine the usefulness and efficiency of qualitative tools in supporting and improving the quality of caseworkers’ activities during the jobseekers’ profile analysis and career guidance process. The selected target groups are long-term and medium-term unemployed, job seekers, young people at the end of the vocational training pathway, and unemployed women with social disadvantages. The experimentation was conducted in Italy at public employment services in 2017. In the framework of the Italian labour market reform, the experimentation represents the first step in developing a customized qualitative profiling model; the final general objective is to improve the quality of the public employment services. The experimentation tests the transferability of an OECD self-assessment competences tool to the Italian public employment services. On the one hand, the first analysis results will indicate the users’ perception concerning the tool’s application and their different competence levels (literacy, numeracy, problem solving, career interest, subjective well-being and health, behavioural competencies) with reference to the specific target. On the other hand, the experimentation outcomes will show caseworkers’ understanding regarding the tool’s usability and efficiency for career guidance and for reskilling and upskilling programs.

Keywords: career guidance, evaluation competences, reskilling pathway, unemployment

Procedia PDF Downloads 316
5671 Developing an Automated Protocol for the Wristband Extraction Process Using Opentrons

Authors: Tei Kim, Brooklynn McNeil, Kathryn Dunn, Douglas I. Walker

Abstract:

To better characterize the relationship between complex chemical exposures and disease, our laboratory uses an approach that combines low-cost, polydimethylsiloxane (silicone) wristband samplers, which absorb many of the chemicals we are exposed to, with untargeted high-resolution mass spectrometry (HRMS) to characterize thousands of chemicals at a time. In studies with human populations, these wristbands can provide an important measure of our environment; however, there is a need to use this approach in large cohorts to study exposures associated with disease. To facilitate the use of silicone samplers in large-scale population studies, the goal of this research project was to establish automated sample preparation methods that improve the throughput, robustness, and scalability of analytical methods for silicone wristbands. Using the Opentrons OT-2 automated liquid handling platform, which provides a low-cost and open-source framework for automated pipetting, we created two separate workflows that translate the manual wristband preparation method to a fully automated protocol requiring only minor intervention by the operator. These protocols include a sequence generation step, which defines the location of all plates and labware according to user-specified settings, and a transfer protocol that includes all necessary instrument parameters and instructions for automated solvent extraction of wristband samplers. These protocols were written in Python and uploaded to GitHub for use by others in the research community. Results from this project show it is possible to establish automated and open-source methods for the preparation of silicone wristband samplers to support profiling of many environmental exposures. Ongoing studies include deployment in longitudinal cohort studies to investigate the relationship between personal chemical exposure and disease.
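
The actual protocols live on the authors' GitHub rather than being reproduced here; as a rough illustration of what an OT-2 transfer protocol of this kind looks like, the sketch below uses the Opentrons Python API v2 to dispense extraction solvent onto wristband samplers in a deep-well plate. The labware, pipette, volumes, and well count are assumptions, not the laboratory's published settings.

```python
# Hedged sketch of an OT-2 solvent-dispensing step for wristband extraction.
from opentrons import protocol_api

metadata = {"protocolName": "Wristband solvent extraction (sketch)", "apiLevel": "2.13"}

def run(protocol: protocol_api.ProtocolContext):
    # Labware and pipette choices are illustrative assumptions
    tips = protocol.load_labware("opentrons_96_tiprack_1000ul", "1")
    plate = protocol.load_labware("nest_96_wellplate_2ml_deep", "2")
    reservoir = protocol.load_labware("nest_12_reservoir_15ml", "3")
    p1000 = protocol.load_instrument("p1000_single_gen2", "right", tip_racks=[tips])

    # Dispense extraction solvent onto each wristband sampler in the deep-well plate
    for well in plate.wells()[:24]:
        p1000.transfer(800, reservoir["A1"], well, new_tip="always")
```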

Keywords: bioinformatics, automation, opentrons, research

Procedia PDF Downloads 107
5670 Analysis of Real Time Seismic Signal Dataset Using Machine Learning

Authors: Sujata Kulkarni, Udhav Bhosle, Vijaykumar T.

Abstract:

Because seismic and non-seismic signals closely resemble each other, it is difficult to detect earthquakes reliably using conventional methods. In order to distinguish between seismic events and non-seismic events depending on their amplitude, our study processes the data that come from seismic sensors. The authors suggest a robust noise suppression technique that makes use of a bandpass filter, an IIR Wiener filter, a recursive short-term average/long-term average (STA/LTA), and a Carl short-term average/long-term average (STA/LTA) for event identification. The trigger ratio used in the proposed study to differentiate between seismic and non-seismic activity is determined. The proposed work focuses on significant feature extraction for machine learning-based seismic event detection. This serves as motivation for compiling a dataset of all features for the identification and forecasting of seismic signals. We place a focus on feature vector dimension-reduction techniques due to the temporal complexity. The proposed notable features were experimentally tested using a machine learning model, and the results on unseen data are optimal. Finally, a demonstration using a hybrid dataset (captured by different sensors) shows how this model may also be employed in a real-time setting while lowering false alarm rates. The planned study is based on the examination of seismic signals obtained from both individual sensors and sensor networks (SN). The experimental dataset is made up of wideband seismic signals from the BSVK and CUKG station sensors, located near Basavakalyan, Karnataka, and at the Central University of Karnataka, respectively.
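
To make the triggering step concrete, here is a minimal sketch (not the authors' code) that bandpass-filters a single trace and computes a recursive STA/LTA trigger ratio with ObsPy; the sampling rate, window lengths and thresholds are assumed values.

```python
# Bandpass filtering followed by a recursive STA/LTA trigger ratio (illustrative).
import numpy as np
from scipy.signal import butter, sosfiltfilt
from obspy.signal.trigger import recursive_sta_lta, trigger_onset

FS = 100.0  # assumed sampling rate (Hz)

def detect_events(trace: np.ndarray):
    # 1-20 Hz bandpass to suppress out-of-band noise
    sos = butter(4, [1.0, 20.0], btype="bandpass", fs=FS, output="sos")
    filtered = sosfiltfilt(sos, trace)

    # STA/LTA trigger ratio: 1 s short window, 30 s long window
    cft = recursive_sta_lta(filtered, int(1 * FS), int(30 * FS))
    # On/off trigger thresholds chosen for illustration only
    return trigger_onset(cft, 3.5, 1.0)

# events = detect_events(np.loadtxt("bsvk_channel_z.txt"))
```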

Keywords: Carl STA/LTA, features extraction, real time, dataset, machine learning, seismic detection

Procedia PDF Downloads 121
5669 Recent Advancement in Fetal Electrocardiogram Extraction

Authors: Savita, Anurag Sharma, Harsukhpreet Singh

Abstract:

Fetal Electrocardiogram (fECG) is a widely used technique to assess fetal well-being, to identify changes that might be associated with problems during pregnancy, and to evaluate the health and condition of the fetus. Various techniques and methods have been employed to extract the fECG from the abdominal signal. This paper describes a facile approach for the estimation of the fECG known as the Adaptive Comb Filter (ACF). The ACF can adjust itself according to the temporal variations in the fundamental frequency, which is used for the estimation of the quasi-periodic ECG signal.
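
For orientation only, the sketch below shows a basic fixed-period comb filter in NumPy; the adaptive version discussed here would additionally track the fundamental period over time, and the sampling rate and heart rate used below are assumptions.

```python
# Simplified illustration of the comb-filtering idea (not the paper's adaptive
# algorithm): subtract a delayed copy of the abdominal signal at the estimated
# maternal ECG period, attenuating the quasi-periodic maternal component.
import numpy as np

def comb_filter(abdominal: np.ndarray, period_samples: int, gain: float = 1.0) -> np.ndarray:
    """y[n] = x[n] - gain * x[n - N], a notch comb at the maternal fundamental."""
    delayed = np.zeros_like(abdominal)
    delayed[period_samples:] = abdominal[:-period_samples]
    return abdominal - gain * delayed

# Example with an assumed 1 kHz sampling rate and 75 bpm maternal heart rate
fs = 1000
maternal_period = int(fs * 60 / 75)   # ~800 samples between maternal beats
# residual = comb_filter(abdominal_signal, maternal_period)
```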

Keywords: aECG, ACF, fECG, mECG

Procedia PDF Downloads 405
5668 Optimization Model for Support Decision for Maximizing Production of Mixed Fresh Fruit Farms

Authors: Andrés I. Ávila, Patricia Aros, César San Martín, Elizabeth Kehr, Yovana Leal

Abstract:

Planning models for fresh products are a very useful tool for improving net profits. To get an efficient supply chain model, several functions should be considered to obtain a complete simulation of several operational units. We consider a linear programming model that helps farmers decide whether it is convenient to plant, and what area should be planted, for three kinds of export fruits, considering their future investment. We consider area, investment, water, minimum productivity unit, and harvest restrictions to develop a monthly based model that computes the average income over five years. Also, conditions on the field such as area, water availability, and initial investment are required. Using Chilean costs and the dollar-peso exchange rate, we can simulate several scenarios to understand the possible risks associated with this market. This tool also helps to support decisions by the government and by individual farmers.
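
A minimal sketch of such a planting-decision model, using the PuLP library with invented placeholder coefficients (the paper's actual Chilean data, monthly structure and integer restrictions are not reproduced):

```python
# Hypothetical planting LP: choose hectares of three export fruits to maximise
# income subject to area, water and investment limits. All numbers are placeholders.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

fruits = ["cherry", "blueberry", "apple"]
income_per_ha = {"cherry": 9000, "blueberry": 7000, "apple": 4000}      # USD/ha/year
water_per_ha = {"cherry": 7.5, "blueberry": 6.0, "apple": 5.0}           # ML/ha/year
invest_per_ha = {"cherry": 25000, "blueberry": 18000, "apple": 10000}    # USD/ha

area = {f: LpVariable(f"area_{f}", lowBound=0) for f in fruits}

model = LpProblem("fruit_planting_plan", LpMaximize)
model += lpSum(income_per_ha[f] * area[f] for f in fruits)            # objective
model += lpSum(area[f] for f in fruits) <= 50                         # total area (ha)
model += lpSum(water_per_ha[f] * area[f] for f in fruits) <= 300      # water (ML)
model += lpSum(invest_per_ha[f] * area[f] for f in fruits) <= 800000  # investment

model.solve()
for f in fruits:
    print(f, value(area[f]), "ha")
```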

Keywords: mixed integer problem, fresh fruit production, support decision model, agricultural and biosystems engineering

Procedia PDF Downloads 435
5667 Beneficiation of Low Grade Chromite Ore and Its Characterization for the Formation of Magnesia-Chromite Refractory by Economically Viable Process

Authors: Amit Kumar Bhandary, Prithviraj Gupta, Siddhartha Mukherjee, Mahua Ghosh Chaudhuri, Rajib Dey

Abstract:

Chromite ores are primarily used for the extraction of chromium, which is an expensive metal. For low-grade chromite ores (containing less than 40% Cr2O3), chromium extraction is not usually economically viable. India possesses huge quantities of low-grade chromite reserves. This deposit can be utilized after proper physical beneficiation. Magnetic separation techniques may be useful after reduction for the beneficiation of low-grade chromite ore. The sample collected from the Sukinda mines is characterized by XRD, which shows predominant phases like maghemite, chromite, silica, magnesia and alumina. The raw ore is crushed and ground to below 75 micrometer size. The microstructure of the ore shows chromite grains surrounded by a silicate matrix, with porosity observed on the exposed side of the chromite ore. However, this ore may be utilized in refractory applications. Chromite ores contain Cr2O3, FeO, Al2O3 and other oxides, and oxide pairs like Fe-Cr and Mg-Cr have a high tendency to form spinel compounds, which usually show high refractoriness. Initially, the low-grade chromite ore (containing 34.8% Cr2O3) was reduced at 1200 °C for 80 minutes with 30% coke fines by weight, before being subjected to magnetic separation. The reduction by coke converts the iron oxides from higher to lower oxidation states. The pre-reduced samples were then characterized by XRD. The magnetically inert mass was then reacted with 20% MgO by weight at 1450 °C for 2 hours. The resultant product was then tested for various refractoriness parameters like apparent porosity, slag resistance, etc. The results were satisfactory, indicating that the resultant spinel compounds are suitable for refractory applications in elevated temperature processes.

Keywords: apparent porosity, beneficiation, low-grade chromite, refractory, spinel compounds, slag resistance

Procedia PDF Downloads 382
5666 A Comprehensive Methodology for Voice Segmentation of Large Sets of Speech Files Recorded in Naturalistic Environments

Authors: Ana Londral, Burcu Demiray, Marcus Cheetham

Abstract:

Speech recording is a methodology used in many different studies related to cognitive and behaviour research. Modern advances in digital equipment brought the possibility of continuously recording hours of speech in naturalistic environments and building rich sets of sound files. Speech analysis can then extract from these files multiple features for different scopes of research in Language and Communication. However, tools for analysing a large set of sound files and automatically extracting relevant features from these files are often inaccessible to researchers who are not familiar with programming languages. Manual analysis is a common alternative, with a high cost in time and efficiency. In the analysis of long sound files, the first step is voice segmentation, i.e. detecting and labelling segments containing speech. We present a comprehensive methodology aiming to support researchers in voice segmentation, as the first step of data analysis for a big set of sound files. Praat, an open-source software package, is suggested as a tool to run a voice detection algorithm, label segments and files, and extract other quantitative features on a structure of folders containing a large number of sound files. We present the validation of our methodology with a set of 5000 sound files that were collected in the daily life of a group of voluntary participants aged over 65. A smartphone device was used to collect sound using the Electronically Activated Recorder (EAR): an app programmed to record 30-second sound samples randomly distributed throughout the day. Results demonstrated that automatic segmentation and labelling of files containing speech segments was 74% faster when compared to a manual analysis performed by two independent coders. Furthermore, the methodology presented allows manual adjustment of voiced segments with visualisation of the sound signal and the automatic extraction of quantitative information on speech. In conclusion, we propose a comprehensive methodology for voice segmentation, to be used by researchers who have to work with large sets of sound files and are not familiar with programming tools.
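
The abstract points to Praat for the voice-detection step; one possible rendering of that step in Python is sketched below, driving Praat's built-in silence detection through the parselmouth library over a folder of sound files. The threshold values and folder layout are assumptions, not the authors' settings.

```python
# Batch voice segmentation via Praat's "To TextGrid (silences)" command (sketch).
from pathlib import Path
import parselmouth
from parselmouth.praat import call

def voiced_intervals(wav_path: Path):
    snd = parselmouth.Sound(str(wav_path))
    # Label "silent" vs "sounding" intervals using an assumed -25 dB threshold
    textgrid = call(snd, "To TextGrid (silences)",
                    100, 0.0, -25.0, 0.1, 0.1, "silent", "sounding")
    n = call(textgrid, "Get number of intervals", 1)
    segments = []
    for i in range(1, n + 1):
        if call(textgrid, "Get label of interval", 1, i) == "sounding":
            start = call(textgrid, "Get starting point", 1, i)
            end = call(textgrid, "Get end point", 1, i)
            segments.append((start, end))
    return segments

for wav in sorted(Path("ear_samples").rglob("*.wav")):
    print(wav.name, voiced_intervals(wav))
```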

Keywords: automatic speech analysis, behavior analysis, naturalistic environments, voice segmentation

Procedia PDF Downloads 279
5665 A Geoprocessing Tool for Early Civil Work Notification to Optimize Fiber Optic Cable Installation Cost

Authors: Hussain Adnan Alsalman, Khalid Alhajri, Humoud Alrashidi, Abdulkareem Almakrami, Badie Alguwaisem, Said Alshahrani, Abdullah Alrowaished

Abstract:

Most of the cost of installing a new fiber optic cable is attributed to the civil work (trenching) cost. In many cases, information technology departments receive project proposals in their eReview system, but not all projects are visible to everyone. Additionally, if there is no IT scope in the proposed project, it is not likely to be visible to IT. Sometimes it is too late to add IT scope after project budgets have been finalized. Finally, the eReview system is a repository of PDF files for each project, which commits the reviewer to manual work and limits automation potential. This paper details a solution to address the late notification in the eReview system by integrating IT sites GIS data (site locations) with land use permit (LUP) data (civil work activity). An LUP request is the first step before securing the required land usage authorizations, meaning no detailed designs exist for any relevant project before an approved LUP request. To address the manual nature of the eReview system, both the LUP system data and the IT data are handled in ArcGIS Desktop, which enables the creation of a geoprocessing tool with either Python or Model Builder to automate finding and evaluating potentially usable LUP requests to reduce trenching between two sites in need of a new FOC. To achieve this, a weekly dump was taken from LUP system production data and loaded manually into ArcMap Desktop. A custom tool was then developed in Model Builder, which takes a two-column table containing all the pairs of sites in need of new fiber connectivity. The tool iterates over all rows of this table, taking one site pair at a time and finding potential LUPs between them that satisfy the provided search radius. If a group of LUPs is found, an iterator goes through each LUP to find the required civil work between the two sites, the LUP polyline feature, and the distance along the line, which would be counted as cost avoidance if an IT scope were added. Finally, the tool exports an Excel file named after the site pair, containing one row per LUP that met the search radius, with trenching and pulling information and cost. As a result, multiple projects have been identified: historical, missed-opportunity, and proposed projects. For the proposed project, the savings were about 75% ($750,000) compared with installing a new fiber along the Euclidean distance between the Abqaiq GOSP2 and GOSP3 DCOs. In conclusion, the current tool setup identifies opportunities to bundle civil work on a single project at a time and between two sites. More work is needed to allow the bundling of multiple projects between two sites to achieve even more cost avoidance in both capital cost and carbon footprint.
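
A rough arcpy sketch of the Model Builder logic described above is shown below: for each pair of sites needing new fiber, it selects LUP polylines within a search radius of the straight line between the sites and totals the potentially shared trenching length. The dataset names, field names, and search radius are placeholders, not the actual geodatabase.

```python
# Hedged sketch of the site-pair / LUP proximity search (illustrative names only).
import arcpy

arcpy.env.workspace = r"C:\data\fiber_planning.gdb"   # assumed geodatabase
SEARCH_RADIUS = "500 Meters"

# Point geometry of each IT site, keyed by an assumed SITE_NAME field
sites = {name: shape for name, shape in
         arcpy.da.SearchCursor("IT_Sites", ["SITE_NAME", "SHAPE@"])}

arcpy.MakeFeatureLayer_management("LUP_Requests", "lup_lyr")

with arcpy.da.SearchCursor("Site_Pairs", ["SITE_A", "SITE_B"]) as pairs:
    for site_a, site_b in pairs:
        # Straight line between the two sites, used as the search geometry
        pair_line = arcpy.Polyline(
            arcpy.Array([sites[site_a].centroid, sites[site_b].centroid]),
            sites[site_a].spatialReference)

        arcpy.SelectLayerByLocation_management(
            "lup_lyr", "WITHIN_A_DISTANCE", pair_line, SEARCH_RADIUS, "NEW_SELECTION")

        shared_m = sum(length for (length,) in
                       arcpy.da.SearchCursor("lup_lyr", ["SHAPE@LENGTH"]))
        print(f"{site_a}-{site_b}: {shared_m:.0f} m of trenching could be shared")
```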

Keywords: GIS, fiber optic cable installation optimization, eliminate redundant civil work, reduce carbon footprint for fiber optic cable installation

Procedia PDF Downloads 216
5664 An Analysis of Eco-efficiency and GHG Emission of Olive Oil Production in Northeast of Portugal

Authors: M. Feliciano, F. Maia, A. Gonçalves

Abstract:

The olive oil production sector plays an important role in the Portuguese economy. It has experienced major growth over the last decade, increasing its weight in the overall national exports. International market penetration for Mediterranean traditional products is increasingly demanding, especially in the Northern European markets, where consumers are looking for more sustainable products. To support this growing demand, this study addresses olive oil production from the environmental and eco-efficiency perspectives. The analysis considers two consecutive product life cycle stages: olive tree farming and olive oil extraction in mills. For olive farming, data collection covered two different organizations: a middle-size farm (~12 ha) (F1) and a large-size farm (~100 ha) (F2). Results from both farms show that olive collection activities are responsible for the largest amounts of greenhouse gas (GHG) emissions. For these activities, the estimated carbon footprint per kg of olives was higher in F2 (188 g CO2e/kg olive) than in F1 (148 g CO2e/kg olive). Considering olive oil extraction, two different mills were considered: one using a two-phase system (2P) and the other a three-phase system (3P). Results from the study of the two mills show a much higher use of water in 3P. Energy intensity (EI) is similar in both mills. When evaluating the GHG generated, two conditions were evaluated: a biomass-neutral condition, resulting in a carbon footprint higher in 3P (184 g CO2e/L olive oil) than in 2P (92 g CO2e/L olive oil); and a non-neutral biomass condition, in which 2P increases its carbon footprint to 273 g CO2e/L olive oil. When addressing the carbon footprint of possible combinations among the studied subsystems, results suggest that olive harvesting is the major source of GHG emissions.

Keywords: carbon footprint, environmental indicators, farming subsystem, industrial subsystem, olive oil

Procedia PDF Downloads 283
5663 Hot Face of Cold War: 007 James Bond

Authors: Günevi Uslu Evren

Abstract:

Propaganda is one of the most effective methods for changing individual and mass opinions. Propaganda tries to get a message across to people or masses in order to influence them rather than to provide objective information. There are many types of propaganda. In particular, political propaganda is a very powerful method used by states during both war and peace. The aim of this method is to create a reaction by framing events around internal and external enemies. Propaganda can be practiced through many different methods. Especially during the Cold War era, the US and the USSR tried to create an ideological effect by using the mass media intensively. Cinema, foremost among these methods, is the most powerful weapon for influencing the masses. In this study, the historical process of the Cold War is examined. In particular, the propaganda used by the United States and the Soviet Union is investigated. The purposes of propaganda and its construction methods are presented. Cold War events and relations between the US and the USSR during the Cold War are discussed. The two countries' outlooks on each other during the Cold War, the propaganda techniques used during the Cold War, and the use of cinema as a propaganda tool are examined. The film "From Russia with Love" (James Bond 007), which was made during the Cold War, is examined to explain how cinema was used as a propaganda tool in this context.

Keywords: cinema, cold war, James Bond, propaganda

Procedia PDF Downloads 516
5662 The Combination Of Aortic Dissection Detection Risk Score (ADD-RS) With D-dimer As A Diagnostic Tool To Exclude The Diagnosis Of Acute Aortic Syndrome (AAS)

Authors: Mohamed Hamada Abdelkader Fayed

Abstract:

Background: To evaluate the diagnostic accuracy of the ADD-RS with D-dimer as a screening test to exclude AAS. Methods: We conducted a search for studies examining the diagnostic accuracy of the ADD-RS plus D-dimer for excluding the diagnosis of AAS. We searched MEDLINE, Embase, and the Cochrane trials register up to 31 December 2020. Results: We identified 3 studies using the ADD-RS with D-dimer as a diagnostic tool for AAS, involving 3261 patients, in whom AAS was diagnosed in 559 (17.14%) patients. Overall results showed that the pooled sensitivities were 97.6% (95% CI 95.6-99.6) for ADD-RS ≤ 1 (low-risk group) with D-dimer and 97.4% (95% CI 95.4-99.4) for ADD-RS > 1 (high-risk group) with D-dimer; the failure rate was 0.48% in the low-risk group and 4.3% in the high-risk group, respectively. Conclusions: The ADD-RS with D-dimer is a useful screening test with high sensitivity to exclude Acute Aortic Syndrome.

Keywords: aortic dissection detection risk score, D-dimer, acute aortic syndrome, diagnostic accuracy

Procedia PDF Downloads 212
5661 A Critical Look on Clustered Regularly Interspaced Short Palindromic Repeats Method Based on Different Mechanisms

Authors: R. Sulakshana, R. Lakshmi

Abstract:

Clustered Regularly Interspaced Short Palindromic Repeats and CRISPR-associated proteins (CRISPR/Cas) form an adaptive immunity system found in bacteria and archaea. It has been modified to serve as a potent gene editing tool. Moreover, it has found widespread use in the field of genome research because of its accessibility and low cost. Several bioinformatics methods have been created to aid in the construction of specific single guide RNAs (sgRNAs), which are highly active and crucial to CRISPR/Cas performance. Various Cas proteins, including Cas1, Cas2, Cas9, and Cas12, have been used to create genome engineering tools because of their programmable sequence specificity. Class 1 and 2 CRISPR/Cas systems, as well as the mechanisms of all known Cas proteins (including Cas9 and Cas12), are discussed in this review paper. In addition, the various CRISPR methodologies and the tools discovered so far are discussed. Finally, the challenges and issues in the CRISPR system, along with future work, are presented.

Keywords: gene editing tool, Cas proteins, CRISPR, guideRNA, programmable sequence

Procedia PDF Downloads 103
5660 Effectiveness of Technology Enhanced Learning in Orthodontic Teaching

Authors: Mohammed Shaath

Abstract:

Aims: Technological advancements in teaching and learning have made significant improvements over the past decade and have been incorporated in institutions to aid the learner's experience. This review aims to assess whether Technology Enhanced Learning (TEL) pedagogy is more effective at improving students' attitudes and knowledge retention in orthodontic training than traditional methods. Methodology: The searches comprised Systematic Reviews (SRs) related to the comparison of TEL and traditional teaching methods from the following databases: PubMed, SCOPUS, Medline, and Embase. One researcher performed the screening, data extraction, and analysis and assessed the risk of bias and quality using A Measurement Tool to Assess Systematic Reviews 2 (AMSTAR-2). Kirkpatrick's 4-level evaluation model was used to evaluate the educational values. Results: A total of 34 SRs was identified after the removal of duplicates and irrelevant SRs; 4 fit the inclusion criteria. On Level 1, students showed positivity towards TEL methods, although the harder the platforms were to use, the less favourable the responses; nonetheless, the students still showed high levels of acceptability. Level 2 showed no significant overall advantage of TEL methods in terms of increased knowledge. One SR showed that certain aspects of study within orthodontics deliver a statistical improvement with TEL. Level 3 was the least reported on; results showed that, without time restrictions, TEL methods may be advantageous. Level 4 shows that both methods are equally effective, but TEL has the potential to overtake traditional methods in the future as a form of active, student-centered approach. Conclusion: TEL has a high level of acceptability and the potential to improve learning in orthodontics. Current reviews have the potential to be improved, but the biggest aspect that needs to be addressed is the primary studies, which show a lower level of evidence and heterogeneity in their results. As it stands, the replacement of traditional methods with TEL cannot be fully supported in an evidence-based manner. The potential of TEL methods has been recognized, and there is already some evidence that TEL can be more effective in some aspects of learning, catering for a more technology-savvy generation.

Keywords: TEL, orthodontic, teaching, traditional

Procedia PDF Downloads 41
5659 Quality Assurance as an Educational Development Tool: Case from the European Higher Education

Authors: Maha Mourad

Abstract:

Higher education in any competitive European economy should serve the new information society by increasing the supply of good-quality education services and by creating good international brands in the international higher education market. Hence, continuous risk management techniques through higher education reform programs became one of the top priorities within the European Union to control the quality of higher education. Risk in higher education has been studied by several researchers, who agree that risk in higher education has a direct influence on the continuity of quality education and research contribution. The focus of this research is to highlight the Internal Quality Assurance (IQA) activities in the Polish higher education system as a risk management tool used to control the quality of education. This paper presents a qualitative empirical analysis conducted in 5 different universities in Poland. In addition, it aims to help identify good global practices and create a benchmark for policy makers concerning risk management techniques, based on the Polish experience.

Keywords: education development, quality assurance, sustainability, european higher education

Procedia PDF Downloads 463
5658 A Study on the Application of Machine Learning and Deep Learning Techniques for Skin Cancer Detection

Authors: Hritwik Ghosh, Irfan Sadiq Rahat, Sachi Nandan Mohanty, J. V. R. Ravindra

Abstract:

In the rapidly evolving landscape of medical diagnostics, the early detection and accurate classification of skin cancer remain paramount for effective treatment outcomes. This research delves into the transformative potential of Artificial Intelligence (AI), specifically Deep Learning (DL), as a tool for discerning and categorizing various skin conditions. Utilizing a diverse dataset of 3,000 images representing nine distinct skin conditions, we confront the inherent challenge of class imbalance. This imbalance, where conditions like melanomas are over-represented, is addressed by incorporating class weights during the model training phase, ensuring an equitable representation of all conditions in the learning process. Our pioneering approach introduces a hybrid model, amalgamating the strengths of two renowned Convolutional Neural Networks (CNNs), VGG16 and ResNet50. These networks, pre-trained on the ImageNet dataset, are adept at extracting intricate features from images. By synergizing these models, our research aims to capture a holistic set of features, thereby bolstering classification performance. Preliminary findings underscore the hybrid model's superiority over individual models, showcasing its prowess in feature extraction and classification. Moreover, the research emphasizes the significance of rigorous data pre-processing, including image resizing, color normalization, and segmentation, in ensuring data quality and model reliability. In essence, this study illuminates the promising role of AI and DL in revolutionizing skin cancer diagnostics, offering insights into its potential applications in broader medical domains.
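
Under stated assumptions, the hybrid idea can be sketched as below: pre-trained VGG16 and ResNet50 backbones feed a shared classifier that is trained with class weights to counter the imbalance across the nine skin conditions. Input size, layer widths, and the data-loading step are assumptions and are not taken from the paper.

```python
# Sketch of a VGG16 + ResNet50 feature-fusion classifier with class weighting.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, ResNet50
from sklearn.utils.class_weight import compute_class_weight

NUM_CLASSES = 9

inputs = layers.Input(shape=(224, 224, 3))
vgg = VGG16(include_top=False, weights="imagenet", pooling="avg")
resnet = ResNet50(include_top=False, weights="imagenet", pooling="avg")
vgg.trainable = resnet.trainable = False

fused = layers.Concatenate()([vgg(inputs), resnet(inputs)])   # 512 + 2048 features
x = layers.Dropout(0.3)(layers.Dense(256, activation="relu")(fused))
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Class weights computed from the training labels offset over-represented classes
# y_train: integer labels 0..8
# weights = compute_class_weight("balanced", classes=np.arange(NUM_CLASSES), y=y_train)
# model.fit(x_train, y_train, epochs=20, class_weight=dict(enumerate(weights)))
```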

Keywords: artificial intelligence, machine learning, deep learning, skin cancer, dermatology, convolutional neural networks, image classification, computer vision, healthcare technology, cancer detection, medical imaging

Procedia PDF Downloads 83
5657 Native Speaker's Role in Improving the Speaking Skills of Second Language Learners

Authors: May George

Abstract:

Native speakers can play a significant role in improving second language learners' speaking skills through weekly interaction. Speaking is one of the important skills that second language learners need to practice in order to be able to communicate in the language. This study will examine Talkaboard as an important tool to achieve better outcomes in speaking a language. The subjects of the study will be 16 advanced Arabic language learners at the college level. There will be a pre-test and a post-test to examine the conversation outcomes using the Talkaboard tool. The students will be asked to write a summary and talk about their weekly conversation experience with the native speaker in class. The teacher will use a checklist to determine the progress made in speaking the Arabic language. The results of this study will provide language teachers with information related to the native speakers' role in language learning and the progress the second language learners made after interacting with native speakers.

Keywords: speaking, language, interaction, culture

Procedia PDF Downloads 484
5656 Employee Engagement: Tool for Success of Higher Education in Thailand

Authors: Pooree Sakot, Marndarath Suksanga

Abstract:

Organizations are under increasing pressure to improve performance and maximize the contribution of every employee. Employee engagement has become an attractive business proposition. The triple bottom line consists of three Ps: profit, people and planet. It aims to measure the financial, social and environmental performance of the corporation over a period of time. People are the most important asset of every organization. Most of the studies suggest that employee engagement improves the bottom line in almost every instance, and it is well worth all organizational efforts to actively engage employees. Engaged employees have an impact on productivity and financial performance. Efficient leadership and effective management can take place if an emerging paradigm like employee engagement is appropriately understood and put into practice. Employee engagement spans from the first step, i.e. recruitment of an employee, to the last step, i.e. retirement. The HR practices of an organization play a major role in helping employees walk the extra mile. Effective employee engagement is the key component for improved organizational performance.

Keywords: employee engagement, higher education, tool, success

Procedia PDF Downloads 331
5655 Computed Tomography Brain and Inpatient Falls: An Audit Evaluating the Indications and Outcomes

Authors: Zain Khan, Steve Ahn, Kathy Monypenny, James Fink

Abstract:

In Australian public hospitals, there were approximately 34,000 reported inpatient falls between 2015 and 2016. The gold standard for diagnosing intracranial injury is non-contrast enhanced brain computed tomography (CTB). Over a three-month timeframe, a total of one hundred and eighty (180) falls were documented between the hours of 4pm and 8am at a large metropolitan hospital. Only three (3) of the resulting CTB scans demonstrated a positive intracranial finding. The rationale for scanning varied. The common indications included a fall with head strike, the presence of blood-thinning medication, loss of consciousness, reduced Glasgow Coma Scale (GCS), vomiting and new neurological findings. There are several validated tools to aid in decision-making around ordering CTB scans in the acute setting, but no such accepted tool exists for the inpatient space. With further data collection, spanning a greater length of time and involving multiple centres, work can be done towards generating such a tool that can be utilized for inpatient falls.

Keywords: computed tomography, falls, inpatient, intracranial hemorrhage

Procedia PDF Downloads 168
5654 The International Classification of Functioning, Disability and Health (ICF) as a Problem-Solving Tool in Disability Rehabilitation and Education Alliance in Metabolic Disorders (DREAM) at Sultan Bin Abdul Aziz Humanitarian City: A Prototype for Reh

Authors: Hamzeh Awad

Abstract:

Disability is considered to be a complex worldwide phenomenon which is rising at a phenomenal rate and is caused by many different factors. Chronic diseases such as cardiovascular disease and diabetes can lead to mobility disability in particular and to disability in general. The ICF is an integrative bio-psycho-social model of functioning and disability and is considered by the World Health Organization (WHO) to be a reference for disability classification, using its categories and core sets to classify a disorder's functional limitations. Specialist programs at Sultan Bin Abdul Aziz Humanitarian City (SBAHC), which provide both inpatient and outpatient services, have started to implement the ICF and use it as a problem-solving tool in rehabilitation. Diabetes is a leading contributing factor to disability and is considered epidemic in several Gulf countries, including the Kingdom of Saudi Arabia (KSA), where its prevalence continues to increase dramatically. Metabolic disorders, mainly diabetes, are not well covered in the rehabilitation field. The purpose of this study is to present DREAM and the ICF as a framework for clinical and research settings in rehabilitation services, and to shed light on the use of the ICF as a problem-solving tool at SBAHC. There are synergies between the causes of disability and wider public health priorities in relation to both chronic disease and disability prevention. Therefore, there is a need for strong advocacy and understanding of the role of the ICF as a reference in rehabilitation settings in the Middle East if we wish to seize the opportunity to reverse current trends of acquired disability in the region.

Keywords: international classification of functioning, disability and health (ICF), prototype, rehabilitation and diabetes

Procedia PDF Downloads 346
5653 Power Quality Modeling Using Recognition Learning Methods for Waveform Disturbances

Authors: Sang-Keun Moon, Hong-Rok Lim, Jin-O Kim

Abstract:

This paper presents Power Quality (PQ) modeling and filtering processes for distribution system disturbances using recognition learning methods. Typical PQ waveforms with mathematical applications and gathered field data are applied to the proposed models. The objective of this paper is to analyze PQ data with respect to monitoring, discriminating, and evaluating the waveforms of power disturbances, in order to support preventive protection against system failures and the estimation of complex system problems. The examined signal filtering techniques are used for field waveform noise removal and feature extraction. Using extraction and learning classification techniques, the efficiency of recognizing PQ disturbances was verified, with a focus on interactive modeling methods. The waveforms of 8 selected disturbances are modeled with randomized parameters within the IEEE 1159 PQ ranges. The range, parameters, and weights are updated according to the field waveforms obtained. Along with voltages, currents go through the same process to obtain waveform features, apart from some ratings and filters. Changing loads cause distortion in the voltage waveform due to the drawing of different patterns of current variation. In conclusion, PQ disturbances in the voltage and current waveforms indicate different types of patterns of variation and disturbance, and a modified technique based on symmetrical components in the time domain is proposed in this paper for PQ disturbance detection and subsequent classification. Our method is based on the fact that the waveforms obtained from the suggested trigger conditions contain potential information for abnormality detection. The extracted features are sequentially applied to estimation and recognition learning modules for further studies.
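
For reference, the standard symmetrical-component transform that the proposed detection technique builds on can be computed as in the short sketch below; the phasor values are placeholders, not field data.

```python
# Zero-, positive- and negative-sequence components from three-phase phasors.
import numpy as np

a = np.exp(2j * np.pi / 3)                      # 120-degree rotation operator
A_inv = np.array([[1, 1,    1],
                  [1, a,    a**2],
                  [1, a**2, a]]) / 3.0

# Example phasors for a voltage sag on phase B (per-unit magnitude, radians)
v_abc = np.array([1.0,
                  0.6 * np.exp(-1j * 2 * np.pi / 3),
                  1.0 * np.exp(+1j * 2 * np.pi / 3)])

v0, v1, v2 = A_inv @ v_abc
print(abs(v0), abs(v1), abs(v2))   # an unbalanced sag shows up in |v0| and |v2|
```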

Keywords: power quality recognition, PQ modeling, waveform feature extraction, disturbance trigger condition, PQ signal filtering

Procedia PDF Downloads 183
5652 Data Mining Spatial: Unsupervised Classification of Geographic Data

Authors: Chahrazed Zouaoui

Abstract:

In recent years, the volume of geospatial information has been increasing due to the evolution of communication and information technologies. This information is often presented through geographic information systems (GIS) and stored in spatial databases (BDS). Classical data mining revealed a weakness in knowledge extraction from these enormous amounts of data due to the particularity of spatial entities, which are characterized by the interdependence between them (the first law of geography). This gave rise to spatial data mining. Spatial data mining is a process of analyzing geographic data, which allows the extraction of knowledge and spatial relationships from geospatial data. Among the methods of this process, we distinguish monothematic and thematic methods. Geo-clustering is one of the main tasks of spatial data mining and belongs to the monothematic methods. It groups similar geo-spatial entities into the same class and assigns more dissimilar entities to different classes; in other words, it maximizes intra-class similarity and minimizes inter-class similarity, taking into account the particularity of geo-spatial data. Two approaches to geo-clustering exist: the dynamic processing of data, which involves applying algorithms designed for the direct treatment of spatial data, and the approach based on spatial data pre-processing, which consists of applying classic clustering algorithms to pre-processed data (by integration of spatial relationships). The latter approach, based on pre-treatment, is quite complex in many cases, so the search for approximate solutions involves the use of approximation algorithms, including the algorithms we are interested in: dedicated approaches (partitioning and density-based clustering methods) and the bees approach (a biomimetic approach). Our study proposes a design that is highly relevant to this problem, using different algorithms for automatically detecting geo-spatial neighbourhoods in order to implement geo-clustering by pre-treatment, and applying the bees algorithm to this problem for the first time in the geo-spatial field.
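
As a minimal illustration of the density-based family of dedicated approaches mentioned above (not the study's own algorithm), the sketch below clusters projected point entities with DBSCAN; the coordinates and parameters are invented.

```python
# Density-based geo-clustering of point entities with scikit-learn's DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

# Projected (x, y) coordinates of geographic entities, in metres (placeholders)
points = np.array([[0, 0], [50, 60], [30, 20],
                   [5000, 5100], [5050, 5080], [9000, 200]])

# eps defines the spatial neighbourhood radius; min_samples the density threshold
labels = DBSCAN(eps=200, min_samples=2).fit_predict(points)
print(labels)   # entities labelled -1 are treated as noise
```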

Keywords: mining, GIS, geo-clustering, neighborhood

Procedia PDF Downloads 370
5651 Calculate Product Carbon Footprint through the Internet of Things from Network Science

Authors: Jing Zhang

Abstract:

To reduce the carbon footprint of mankind and become more sustainable is one of the major challenges of our era. The Internet of Things (IoT) mainly involves three kinds of connections: Things to Things (T2T), Human to Things (H2T), and Human to Human (H2H). Borrowing this classification from the IoT, we find that the carbon footprints of industries can also be divided in these three ways. Therefore, monitoring the routes of generation and circulation of products may help calculate a product's carbon footprint. This paper does not consider any technique used by the IoT itself, but borrows its ideas to look at the connections between products. Carbon footprints are like a gene or mark of a product, from raw materials to the final product, which never leaves the product. The contribution of this paper is to combine the characteristics of the IoT and the methodology of network science to find a way to calculate the product carbon footprint. Life cycle assessment (LCA) is the traditional and main tool for calculating the carbon footprint of products, and it includes three kinds.

Keywords: product carbon footprint, Internet of Things, network science, life cycle assessment

Procedia PDF Downloads 114