Search results for: extraction tool
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6812

4622 Dissipation of Tebuconazole in Cropland Soils as Affected by Soil Factors

Authors: Bipul Behari Saha, Sunil Kumar Singh, P. Padmaja, Kamlesh Vishwakarma

Abstract:

A dissipation study of tebuconazole in alluvial, black, and deep-black clayey soils collected from paddy, mango, and peanut cropland in a tropical agro-climatic zone of India was carried out at three concentration levels to monitor water contamination through persistent residual toxicity. The soil-slurry samples were analyzed by a capillary GC-NPD method after ultrasound-assisted extraction (UAE) and cleanup. An excellent linear relationship between peak area and concentration was obtained in the range 1 to 50 μg kg⁻¹. The detection (S/N, 3 ± 0.5) and quantification (S/N, 7.5 ± 2.5) limits were 3 and 10 μg kg⁻¹, respectively. Spiked recoveries of 96.28 to 99.33% were achieved at the 5 and 20 μg kg⁻¹ levels, and method precision (% RSD) was ≤ 5%. Dissipation of tebuconazole in the soils fitted a first-order kinetic model, with half-lives between 34.48 and 48.13 days. The soil organic carbon (SOC) content correlated well with the dissipation rate constants (DRC) of the fungicide: an increase in SOC content resulted in faster dissipation. The results indicate that soil organic carbon and tebuconazole concentration play dominant roles in the dissipation process, and varying the initial concentration showed that the degradation rate of tebuconazole in soils is concentration dependent.
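
The first-order model reported above implies C(t) = C₀·e^(−kt), so each half-life follows directly from the rate constant as t₁/₂ = ln 2 / k. A minimal Python sketch, with rate constants back-calculated from the abstract's half-life range (the 90-day residue check is purely illustrative):

```python
import numpy as np

# First-order dissipation: C(t) = C0 * exp(-k * t), so t_half = ln(2) / k.
# Rate constants below are back-calculated from the reported 34.48-48.13 day
# half-lives; the 90-day residue figure is illustrative, not from the study.
def half_life(k_per_day: float) -> float:
    return np.log(2) / k_per_day

def residual_fraction(t_days: float, k_per_day: float) -> float:
    return np.exp(-k_per_day * t_days)

for k in (np.log(2) / 34.48, np.log(2) / 48.13):
    print(f"k = {k:.4f} /day -> half-life = {half_life(k):.2f} days, "
          f"residue after 90 days = {residual_fraction(90, k):.1%}")
```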

Keywords: cropland soil, dissipation, laboratory incubation, tebuconazole

Procedia PDF Downloads 253
4621 Masked Candlestick Model: A Pre-Trained Model for Trading Prediction

Authors: Ling Qi, Matloob Khushi, Josiah Poon

Abstract:

This paper introduces a pre-trained Masked Candlestick Model (MCM) for trading time-series data. The pre-trained model is based on three core designs. First, we represent the trading price data at each time step as a set of normalized elements and produce embeddings of each element. Second, we generate a masked sequence of such embedded elements as input for self-supervised learning. Third, we use the encoder mechanism from the transformer to train on these inputs. The masked model learns the contextual relations among the sequence of embedded elements, which can aid downstream classification tasks. To evaluate the performance of the pre-trained model, we fine-tune MCM on three different downstream classification tasks to predict future price trends. The fine-tuned models achieved better accuracy rates on all three tasks than the baseline models. To better analyze the effectiveness of MCM, we test the same architecture on three currency pairs, namely EUR/GBP, AUD/USD, and EUR/JPY. The experimental results demonstrate MCM's effectiveness on all three currency pairs and indicate MCM's capability for signal extraction from trading data.
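
A minimal PyTorch sketch of the masked pre-training idea described above, assuming each candlestick has already been embedded; the class name, masking ratio, and reconstruction loss are illustrative choices, not details taken from the paper:

```python
import torch
import torch.nn as nn

class MaskedCandlestickSketch(nn.Module):
    def __init__(self, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(d_model))  # learned [MASK] embedding
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, d_model)  # reconstructs the original embeddings

    def forward(self, x, mask_ratio=0.15):
        # x: (batch, sequence, d_model) embedded candlestick elements
        mask = torch.rand(x.shape[:2], device=x.device) < mask_ratio
        corrupted = torch.where(mask.unsqueeze(-1), self.mask_token, x)
        hidden = self.encoder(corrupted)
        return ((self.head(hidden) - x) ** 2)[mask].mean()  # loss on masked positions only

model = MaskedCandlestickSketch()
loss = model(torch.randn(8, 32, 64))  # 8 sequences of 32 embedded elements
loss.backward()
```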

Keywords: masked language model, transformer, time series prediction, trading prediction, embedding, transfer learning, self-supervised learning

Procedia PDF Downloads 128
4620 Subsidizing Local Health Policy Programs as a Public Management Tool in the Polish Health Care System

Authors: T. Holecki, J. Wozniak-Holecka, P. Romaniuk

Abstract:

Due to the highly centralized model of financing health care in Poland, local self-governments rarely undertook their own initiatives in the field of public health, particularly health promotion. Since 2017, however, they have been allowed to apply for subsidies for health policy programs, with the additional resources retrieved from the National Health Fund, the dominant payer in the health system. The amount of the subsidy depends on the number of inhabitants in a given unit and amounts to about 40% of the total cost of the program. The aim of this paper is to assess the impact of the newly implemented financing solutions on the management of public finances, as well as on the health promotion activity of local self-governments. We estimate the expenses that both local governments and the National Health Fund incurred on local health policy programs while implementing the new solutions. The research method is the analysis of financial data obtained from the National Health Fund and from local government units, as well as of reports published by the Agency for Health Technology Assessment and Pricing, which holds substantive control over the health policy programs and issues permits for their implementation. The study was based on a comparative analysis of expenditures on the implementation of health programs in Poland in the years 2010-2018. The results include the average annual expenditures of local government units per inhabitant, the total number of positively evaluated applications, and the percentage share in total local government expenditures (across the 16 voivodeship areas). The most essential purpose is to determine whether the assumptions of the subsidy program work correctly in practice and what the real effects of introducing the legislative changes are at the local government level in the context of public health tasks. The assumption of the study was that the use of this new motivational tool in public management would multiply the resources invested in the provision of health policy programs. Preliminary conclusions show that financial expenditures changed significantly after the introduction of public funding at the level of 40%, with local governments' own funding increasing by 80 to 90%.

Keywords: health care system, health policy programs, local self-governments, public health management

Procedia PDF Downloads 156
4619 Study on Construction of 3D Topography by UAV-Based Images

Authors: Yun-Yao Chi, Chieh-Kai Tsai, Dai-Ling Li

Abstract:

In this paper, a method for fast 3D topography modeling using high-resolution camera images is studied, based on the characteristics of Unmanned Aerial Vehicle (UAV) systems for low-altitude aerial photogrammetry and the need for three-dimensional (3D) urban landscape modeling. First, overlapping image acquisition with an existing high-resolution digital camera is designed by reconstructing and analyzing the auto-flying paths of the UAV; this improves the self-calibration function to achieve high-precision imaging in software and further increases the resolution of the imaging system. Second, multi-angle images, including vertical and oblique images captured by the UAV system, are used for the detailed measurement of urban land surfaces and for texture extraction. Finally, aerial photography and 3D topography construction were carried out both on the campus of Chang-Jung University and in the Gueiren District of Tainan, Taiwan, providing validation models for the construction of 3D topography from combined UAV-based camera images. The results demonstrate that a UAV system for low-altitude aerial photogrammetry can be used for 3D topography production, and the technical solution in this paper offers a new, fast plan for the 3D expression of the city landscape, fine modeling, and visualization.

Keywords: 3D, topography, UAV, images

Procedia PDF Downloads 304
4618 LCA/CFD Studies of Artisanal Brick Manufacture in Mexico

Authors: H. A. Lopez-Aguilar, E. A. Huerta-Reynoso, J. A. Gomez, J. A. Duarte-Moller, A. Perez-Hernandez

Abstract:

The environmental performance of artisanal brick manufacture in Mexico was studied by the Life Cycle Assessment (LCA) methodology and Computational Fluid Dynamics (CFD) analysis. The main objective of this paper is to evaluate the environmental impact of artisanal brick manufacture. The LCA cradle-to-gate approach was complemented with CFD analysis to carry out an Environmental Impact Assessment (EIA). The life cycle includes the stages of extraction, baking, and transportation to the gate. The functional unit of this study was the production of a single brick in Chihuahua, Mexico, and the impact categories studied were carcinogens, respiratory organics and inorganics, climate change, radiation, ozone layer depletion, ecotoxicity, acidification/eutrophication, land use, mineral use, and fossil fuels. Laboratory techniques for fuel characterization, in situ gas measurements, and AP42 emission factors were employed to calculate gas emissions for the inventory data. The results revealed that the categories with the greatest impacts are ecotoxicity and carcinogens. The CFD analysis is helpful in predicting the thermal diffusion and spread of contaminants from a defined source. The LCA-CFD synergy complemented the EIA and allowed us to identify the problem of thermal efficiency within the system.

Keywords: LCA, CFD, brick, artisanal

Procedia PDF Downloads 393
4617 Exploring Syntactic and Semantic Features for Text-Based Authorship Attribution

Authors: Haiyan Wu, Ying Liu, Shaoyun Shi

Abstract:

Authorship attribution is the task of extracting features to identify the authors of anonymous documents. Many previous works on authorship attribution focus on statistical style features (e.g., sentence/word length) and content features (e.g., frequent words, n-grams). Modeling these features by regression or other transparent machine learning methods gives a portrait of an author's writing style, but such methods do not capture syntactic (e.g., dependency relations) or semantic (e.g., topic) information. In recent years, some researchers have modeled syntactic trees or latent semantic information with neural networks, but few works combine the two. Besides, predictions by neural networks are difficult to explain, and explainability is vital in authorship attribution tasks. In this paper, we not only utilize the statistical style and content features but also take advantage of both syntactic and semantic features. Unlike an end-to-end neural model, our method separates feature selection and prediction into two steps: an attentive n-gram network is used to select useful features, and logistic regression is applied to give the prediction and an understandable representation of writing style. Experiments show that our extracted features improve on state-of-the-art methods on three benchmark datasets.
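
A hedged scikit-learn sketch of the two-step recipe above; here plain TF-IDF character n-grams stand in for the paper's attentive n-gram selection network, and the documents and author labels are placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["the quick brown fox jumps over the lazy dog",
        "once upon a midnight dreary, while I pondered, weak and weary"]
authors = ["author_a", "author_b"]

# Step 1: n-gram feature extraction; Step 2: an interpretable linear
# classifier whose coefficients profile each author's style.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4), max_features=5000),
    LogisticRegression(max_iter=1000),
)
clf.fit(docs, authors)
print(clf.predict(["the quick red fox leaps over the sleeping dog"]))
```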

Keywords: authorship attribution, attention mechanism, syntactic feature, feature extraction

Procedia PDF Downloads 136
4616 Enhanced COVID-19 Pharmaceuticals and Microplastics Removal from Wastewater Using Hybrid Reactor System

Authors: Reda Dzingelevičienė, Vytautas Abromaitis, Nerijus Dzingelevičius, Kęstutis Baranauskis, Saulius Raugelė, Malgorzata Mlynska-Szultka, Sergej Suzdalev, Reza Pashaei, Sajjad Abbasi, Boguslaw Buszewski

Abstract:

A unique hybrid technology was developed for the removal of COVID-19-specific contaminants from wastewater. Reactor testing was performed using model water samples contaminated with COVID-19 pharmaceuticals and microplastics. Different hydraulic retention times and concentrations of pollutants and dissolved ozone were tested. Liquid chromatography-mass spectrometry, solid-phase extraction, and surface area and porosity analyses were used to monitor the treatment efficiency and the remaining sorption capacity of the spent adsorbent. The combination of advanced oxidation and adsorption processes was found to be the most effective, removing 90-99% of molnupiravir and 89-95% of microplastic contaminants from the model wastewater. The research received funding from the European Regional Development Fund (project No 13.1.1-LMT-K-718-05-0014) under a grant agreement with the Research Council of Lithuania (LMTLT), and it was funded as part of the European Union's measure in response to the COVID-19 pandemic.
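
The headline percentages reduce to a simple influent/effluent balance; a small sketch with hypothetical concentrations:

```python
# Removal efficiency of the kind behind the reported 90-99% (molnupiravir)
# and 89-95% (microplastics) figures; the concentrations below are hypothetical.
def removal_efficiency(c_in: float, c_out: float) -> float:
    return 100.0 * (c_in - c_out) / c_in

print(f"molnupiravir: {removal_efficiency(50.0, 1.5):.1f}% removed")   # ug/L
print(f"microplastics: {removal_efficiency(1000, 80):.1f}% removed")   # particles/L
```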

Keywords: adsorption, hybrid reactor system, pharmaceuticals-microplastics, wastewater

Procedia PDF Downloads 87
4615 Maintenance Work Order Management Tool (Desktop & Mobile Solution)

Authors: Haitham Al Rawahi

Abstract:

Oman Electricity Transmission Company (OETC) has implemented a Computerized Maintenance Management System (CMMS) based on the Oracle enterprise asset management model e-AM, in cooperation with Nama Shared Services (NSS). CMMS is mainly used to create maintenance work orders with a preconfigured workflow of defined maintenance schedules/plans and required resources and materials, to obtain shutdown approvals, to complete maintenance activities, and to close the work orders. Furthermore, CMMS is also configured with asset failure classifications, the asset hierarchy, asset maintenance activities, integration with spare inventories, etc. Since 2017, site engineers have worked with CMMS by manually filling in all related maintenance and inspection records on paper forms and then scanning and attaching them in CMMS for further analysis. The site engineer finalizes all the paperwork on site and then returns to the office to scan and attach it to the work order in CMMS. This creates subtasks for the site engineer and makes the process difficult and lengthy. There is also a significant risk of fields being missed or illegible because the paper forms are filled in by pen. In addition, the site engineer may spend days working outside the office. Therefore, OETC decided to digitize these inspection and maintenance forms on one platform within CMMS, usable both online and offline. ArcGIS product formats and web-enabled solutions, which can be accessed from mobile and desktop devices via ArcMap modules, are used as well. The purpose of the interlinking is to attach the maintenance and inspection forms to the work orders in e-AM with which the site engineer interacts daily. This ArcGIS environment is designed to link with e-AM, so that when the site engineer opens the application on site, a window takes him through the same ArcGIS workflow: it opens the maintenance forms, shows the required fields to fill in, and saves the work through the mobile application. After the work is saved, whether online or offline, a notification is triggered to the engineer's line manager to review it and take further action (approve/reject/request more information). Through this function, users can see the work orders assigned to their departments as well as a chart of all work orders with their status, and the approver can see statistics for all the work.

Keywords: e-AM, GIS, CMMS, integration

Procedia PDF Downloads 97
4614 Using Technology to Deliver and Scale Early Childhood Development Services in Resource Constrained Environments: Case Studies from South Africa

Authors: Sonja Giese, Tess N. Peacock

Abstract:

South Africa-based Innovation Edge is experimenting with technology to drive positive behavior change, enable data-driven decision making, and scale quality early-years services. This paper uses five case studies to illustrate how technology can be used in resource-constrained environments: first, to encourage parenting practices that build early language development (using a stage-based mobile messaging pilot, ChildConnect); second, to improve the quality of ECD programs (using a mobile application, CareUp); third, to affordably scale services for the early detection of visual and hearing impairments (using a mobile tool, HearX); fourth, to build a transparent and accountable system for the registration and funding of ECD (using a blockchain-enabled platform, Amply); and finally, to enable rapid data collection and feedback to facilitate quality enhancement of programs at scale (the Early Learning Outcomes Measure). ChildConnect and CareUp were both developed using a design-based iterative research approach. The usage and uptake of ChildConnect and CareUp were evaluated with qualitative and quantitative methods; actual child outcomes were not measured in the initial pilots. Although parents who used and engaged with either platform felt more supported and informed, parent engagement and usage remain a challenge. This is in contrast to ECD practitioners, whose usage of CareUp showed both sustained engagement and knowledge improvement. HearX is an easy-to-use tool to identify hearing loss and visual impairment. The tool was tested with 10,000 children in an informal settlement, and the feasibility of cost-effectively decentralising screening services was demonstrated. Practical and financial barriers remain with respect to parental consent and successful referrals. Amply uses mobile and blockchain technology to increase the impact and accountability of public services; in the pilot project, Amply is being used to replace an existing paper-based system to register children for a government-funded pre-school subsidy in South Africa. The Early Learning Outcomes Measure (ELOM) defines what it means for a child to be developmentally 'on track' at age 50-69 months. ELOM administration is enabled via a tablet, which allows for easy and accurate data collection, transfer, analysis, and feedback. ELOM is being used extensively to drive quality enhancement of ECD programs across multiple modalities. The nature of ECD services in South Africa is that they are largely provided by disconnected private individuals or non-governmental organizations (in contrast to basic education, which is publicly provided by the government). It is a disparate sector, which makes scaling successful interventions that much harder. All five interventions show the potential of technology to support and enhance a range of ECD services, but pathways to scale are still being tested.

Keywords: assessment, behavior change, communication, data, disabilities, mobile, scale, technology, quality

Procedia PDF Downloads 133
4613 Functional Properties of Sunflower Protein Concentrates Extracted Using Different Anti-Greening Agents - Low-Fat Whipping Cream Preparation

Authors: Tamer M. El-Messery

Abstract:

By-products from sunflower oil extraction, such as sunflower cakes, are rich sources of proteins with desirable functional properties for the food industry. However, challenges such as sensory drawbacks and the presence of phenolic compounds have hindered their widespread use. In this study, sunflower protein concentrates were obtained from sunflower cakes using different anti-greening agents (ascorbic acid (ASC) and N-acetylcysteine (NAC)), and their functional properties were evaluated. The color of the extracted proteins ranged from dark green to yellow, with the ASC and NAC agents enhancing the color. The protein concentrates exhibited high solubility (>70%) and antioxidant activity, with hydrophobicity influencing emulsifying activity. Emulsions prepared with these proteins showed good stability and microencapsulation efficiency. Incorporation of the protein concentrates into low-fat whipping cream formulations increased overrun and affected color characteristics. Rheological studies demonstrated pseudoplastic behavior in the whipped cream, influenced by shear rate and protein content. Overall, the sunflower protein concentrates showed promising functional properties, indicating their potential as valuable ingredients in food formulations.

Keywords: functional properties, sunflower protein concentrates, antioxidant capacity, anti-greening agents, low-fat whipping cream

Procedia PDF Downloads 48
4612 Digital Retinal Images: Background and Damaged Areas Segmentation

Authors: Eman A. Gani, Loay E. George, Faisel G. Mohammed, Kamal H. Sager

Abstract:

Digital retinal images are well suited to automatic screening systems for diabetic retinopathy. Unfortunately, a significant percentage of these images are of poor quality that hinders further analysis, due to many factors (such as patient movement, inadequate or non-uniform illumination, acquisition angle, and retinal pigmentation). Retinal images of poor quality need to be enhanced before the extraction of features and abnormalities. Segmentation of the retinal image is essential for this purpose: it is employed to smooth and strengthen the image by separating the background and damaged areas from the overall image, resulting in retinal image enhancement and less processing time. In this paper, methods for segmenting colored retinal images are proposed to improve the quality of retinal image diagnosis. The methods generate two segmentation masks: a background segmentation mask for extracting the background area and a poor-quality mask for removing the noisy areas from the retinal image. The standard retinal image databases DIARETDB0, DIARETDB1, STARE, and DRIVE, plus some images obtained from ophthalmologists, were used to validate the proposed segmentation technique. Experimental results indicate that the introduced methods are effective and can lead to high segmentation accuracy.
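
A hedged OpenCV sketch of the first of the two masks, a background mask obtained by Otsu thresholding of the red channel (bright retinal disc against a dark surround) followed by morphological cleanup; the paper's exact mask-generation rules are not reproduced here:

```python
import cv2
import numpy as np

def background_mask(bgr_image: np.ndarray) -> np.ndarray:
    red = bgr_image[:, :, 2]                       # fundus disc is brightest in red
    _, mask = cv2.threshold(red, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # 255 = retina, 0 = background

img = cv2.imread("fundus.png")                     # e.g. a DRIVE or STARE image
cv2.imwrite("mask.png", background_mask(img))
```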

Keywords: retinal images, fundus images, diabetic retinopathy, background segmentation, damaged areas segmentation

Procedia PDF Downloads 403
4611 Enabling Wire Arc Additive Manufacturing in Aircraft Landing Gear Production and Its Benefits

Authors: Jun Wang, Chenglei Diao, Emanuele Pagone, Jialuo Ding, Stewart Williams

Abstract:

As a crucial component in aircraft, landing gear systems are responsible for supporting the plane during parking, taxiing, takeoff, and landing. Given the need for high load-bearing capacity over extended periods, 300M ultra-high strength steel (UHSS) is often the material of choice for crafting these systems due to its exceptional strength, toughness, and fatigue resistance. In the quest for cost-effective and sustainable manufacturing solutions, Wire Arc Additive Manufacturing (WAAM) emerges as a promising alternative for fabricating 300M UHSS landing gears. This is due to its advantages in near-net-shape forming of large components, cost-efficiency, and reduced lead times. Cranfield University has conducted an extensive preliminary study on WAAM 300M UHSS, covering feature deposition, interface analysis, and post-heat treatment. Both Gas Metal Arc (GMA) and Plasma Transferred Arc (PTA)-based WAAM methods were explored, revealing their feasibility for defect-free manufacturing. However, as-deposited 300M features showed lower strength but higher ductility compared to their forged counterparts. Subsequent post-heat treatments were effective in normalising the microstructure and mechanical properties, meeting qualification standards. A 300M UHSS landing gear demonstrator was successfully created using PTA-based WAAM, showcasing the method's precision and cost-effectiveness. The demonstrator, measuring Ø200 mm × 700 mm, was completed in 16 hours, using 7 kg of material at a deposition rate of 1.3 kg/hr. This resulted in a significant reduction in the Buy-to-Fly (BTF) ratio compared to traditional manufacturing methods, further validating WAAM's potential for this application. A "cradle-to-gate" environmental impact assessment, which considers the cumulative effects from raw material extraction to customer shipment, has revealed promising outcomes. Utilising Wire Arc Additive Manufacturing (WAAM) for landing gear components significantly reduces the need for raw material extraction and refinement compared to traditional subtractive methods. This, in turn, lessens the burden on subsequent manufacturing processes, including heat treatment, machining, and transportation. Our estimates indicate that the carbon footprint of the component could be halved when switching from traditional machining to WAAM. Similar reductions are observed in embodied energy consumption and other environmental impact indicators, such as emissions to air, water, and land. Additionally, WAAM offers the unique advantage of part repair by redepositing only the necessary material, a capability not available through conventional methods. Our research shows that WAAM-based repairs can drastically reduce environmental impact, even when accounting for additional transportation for repairs. Consequently, WAAM emerges as a pivotal technology for reducing environmental impact in manufacturing, aiding the industry in its crucial and ambitious journey towards Net Zero. This study paves the way for transformative benefits across the aerospace industry, as we integrate manufacturing into a hybrid solution that offers substantial savings and access to more sustainable technologies for critical component production.
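
A quick sanity check of the demonstrator arithmetic quoted above; the finished-part and forged-billet masses are hypothetical values introduced only to illustrate the Buy-to-Fly comparison:

```python
deposited_kg, rate_kg_per_hr = 7.0, 1.3      # figures from the abstract
final_part_kg, forged_billet_kg = 3.5, 20.0  # assumed masses for illustration only

print(f"arc-on time ~ {deposited_kg / rate_kg_per_hr:.1f} h of the 16 h build")
print(f"WAAM BTF ~ {deposited_kg / final_part_kg:.1f}:1 vs "
      f"forging BTF ~ {forged_billet_kg / final_part_kg:.1f}:1")
```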

Keywords: WAAM, aircraft landing gear, microstructure, mechanical performance, life cycle assessment

Procedia PDF Downloads 159
4610 Translation as a Foreign Language Teaching Tool: Results of an Experiment with University Level Students in Spain

Authors: Nune Ayvazyan

Abstract:

Since the proclamation of monolingual foreign-language learning methods (the Berlitz Method in the early 20ᵗʰ century and the like), the dilemma has been whether or not to allow learners' mother tongue in the foreign-language learning process. The rationale for excluding the mother tongue is to create a situation of immersion in which students use only the target language. It could be argued that this artificially monolingual situation is defective, mainly because there are very few truly monolingual situations in society: societies are nowadays increasingly multilingual, and plurilingual speakers are the norm rather than the exception. More recently, the use of learners' mother tongue and of translation has been put under the spotlight as a valid foreign-language teaching tool. Logic dictates that if learners were permitted to use their mother tongue in the foreign-language learning process, that would not only be natural but would also give them additional means of participation in class, which could eventually lead to learning. For example, when learners' metalinguistic skills in the target language are poor, a question they have could be asked in their mother tongue; otherwise, that question might be left unasked. Attempts at empirically testing the role of translation as a didactic tool in foreign-language teaching are still very scant. In order to fill this void, this study looks into the interaction patterns between students in two kinds of English-learning classes: one with translation and the other in English only (immersion). The experiment was carried out with 61 students enrolled in a second-year university subject in English grammar in Spain. All the students underwent both treatments, classes with translation and classes in English only, in order to see how they interacted under the different conditions. The analysis centered on four categories of interaction: teacher talk, teacher-initiated student interaction, student-initiated student-to-teacher interaction, and student-to-student interaction. Pre-experiment and post-experiment questionnaires and individual interviews also gathered information about the students' attitudes to translation. The findings show that translation elicited more student-initiated interaction than the English-only classes, while the difference in teacher-initiated interactional turns was not statistically significant. Also, student-initiated participation was higher in comprehension-based activities (into L1) as opposed to production-based activities (into L2). As evidenced by the questionnaires, the students' attitudes to translation were initially positive and mostly did not vary as a result of the experiment.

Keywords: foreign language, learning, mother tongue, translation

Procedia PDF Downloads 162
4609 Distributed Processing for Content Based Lecture Video Retrieval on Hadoop Framework

Authors: U. S. N. Raju, Kothuri Sai Kiran, Meena G. Kamal, Vinay Nikhil Pabba, Suresh Kanaparthi

Abstract:

There is a huge amount of lecture video data available for public use, and many more lecture videos are created and uploaded every day. Searching for videos on required topics in this huge database is a challenging task, so an efficient method for video retrieval is needed. An approach for automated video indexing and search in large lecture video archives is presented. Because the amount of lecture video data is huge, processing it in a centralized computation framework is very inefficient; hence, the Hadoop framework for distributed computing on big video data is used. The first step in the process is automatic video segmentation and key-frame detection, to offer a visual guideline for navigating the video content. In the next step, we extract textual metadata by applying video Optical Character Recognition (OCR) technology to the key-frames. The OCR output and detected slide text line types are used for keyword extraction, by which both video-level and segment-level keywords are extracted for content-based video browsing and search. The performance of the indexing process for a large database can be improved by using distributed computing on the Hadoop framework.
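
A hedged sketch of the first pipeline stage: picking key-frame candidates for OCR by flagging frames whose gray-level histogram differs sharply from the last accepted key-frame (e.g., a slide change). The bin count and threshold are illustrative:

```python
import cv2

def key_frames(path: str, threshold: float = 0.4):
    cap, prev_hist, frames, idx = cv2.VideoCapture(path), None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.normalize(cv2.calcHist([gray], [0], None, [64], [0, 256]), None).flatten()
        # compare against the last accepted key-frame, not the previous frame
        if prev_hist is None or cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
            frames.append(idx)
            prev_hist = hist
        idx += 1
    cap.release()
    return frames

print(key_frames("lecture.mp4"))  # frame indices to pass to the OCR stage
```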

Keywords: video lectures, big video data, video retrieval, hadoop

Procedia PDF Downloads 534
4608 Quantitative Polymerase Chain Reaction Analysis of Phytoplankton Composition and Abundance to Assess Eutrophication: A Multi-Year Study in Twelve Large Rivers across the United States

Authors: Chiqian Zhang, Kyle D. McIntosh, Nathan Sienkiewicz, Ian Struewing, Erin A. Stelzer, Jennifer L. Graham, Jingrang Lu

Abstract:

Phytoplankton plays an essential role in freshwater aquatic ecosystems and is the primary group synthesizing organic carbon and providing food sources or energy to ecosystems. Therefore, the identification and quantification of phytoplankton are important for estimating and assessing ecosystem productivity (carbon fixation), water quality, and eutrophication. Microscopy is the current gold standard for identifying and quantifying phytoplankton composition and abundance. However, microscopic analysis of phytoplankton is time-consuming, has a low sample throughput, and requires deep knowledge and rich experience in microbial morphology to implement. To improve this situation, quantitative polymerase chain reaction (qPCR) was considered for phytoplankton identification and quantification. Using qPCR to assess phytoplankton composition and abundance, however, has not been comprehensively evaluated. This study focused on: 1) conducting a comprehensive performance comparison of qPCR and microscopy techniques in identifying and quantifying phytoplankton and 2) examining the use of qPCR as a tool for assessing eutrophication. Twelve large rivers located throughout the United States were evaluated using data collected from 2017 to 2019 to understand the relation between qPCR-based phytoplankton abundance and eutrophication. This study revealed that temporal variation of phytoplankton abundance in the twelve rivers was limited within years (from late spring to late fall) and among different years (2017, 2018, and 2019). Midcontinent rivers had moderately greater phytoplankton abundance than eastern and western rivers, presumably because midcontinent rivers were more eutrophic. The study also showed that qPCR- and microscope-determined phytoplankton abundance had a significant positive linear correlation (adjusted R² 0.772, p-value < 0.001). In addition, phytoplankton abundance assessed via qPCR showed promise as an indicator of the eutrophication status of those rivers, with oligotrophic rivers having low phytoplankton abundance and eutrophic rivers having (relatively) high phytoplankton abundance. This study demonstrated that qPCR could serve as an alternative tool to traditional microscopy for phytoplankton quantification and eutrophication assessment in freshwater rivers.
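
A minimal sketch of the kind of regression behind the reported correlation (adjusted R² = 0.772); the data points below are synthetic, while the actual study pairs qPCR gene-copy counts with microscopy cell counts:

```python
import numpy as np
from scipy import stats

qpcr = np.array([4.1, 4.8, 5.2, 5.9, 6.3, 6.8])        # log10 gene copies/mL (synthetic)
microscopy = np.array([3.8, 4.6, 5.3, 5.7, 6.5, 6.6])  # log10 cells/mL (synthetic)

res = stats.linregress(qpcr, microscopy)
print(f"slope = {res.slope:.2f}, R^2 = {res.rvalue**2:.3f}, p = {res.pvalue:.4f}")
```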

Keywords: phytoplankton, eutrophication, river, qPCR, microscopy, spatiotemporal variation

Procedia PDF Downloads 101
4607 Learning Language through Story: Development of Storytelling Website Project for Amazighe Language Learning

Authors: Siham Boulaknadel

Abstract:

Every culture has a rich history of storytelling in oral, visual, and textual form. The Amazigh language, like many languages, has its own, which has entertained and informed across centuries and cultures, and its instructional potential continues to serve teachers. According to many researchers, listening to stories draws attention to the sounds of language and helps children develop sensitivity to the way language works. Stories with repetitive phrases, unique words, and enticing descriptions encourage students to join in actively, to repeat, chant, sing, or even retell the story. This kind of practice is important to language learners' oral language development, which is believed to correlate closely with students' academic success. Today, with the advent of multimedia, digital storytelling can be a practical and powerful learning tool, with the potential to transform traditional learning into a world of unlimited imaginative environments. This paper reports on a research project on the development of a multimedia storytelling website based on traditional Amazigh oral narratives, called 'Tell me a story'. It is a didactic tool created for the learning of good moral values in an interactive multimedia environment combining on-screen text, graphics, and audio in an enticing setting, enabling the positive values of stories to be projected. The website developed in this study is based on various pedagogical approaches and learning theories deemed suitable for children aged 8 to 9. Its design and development were based on a well-researched conceptual framework enabling users to (1) replay and share the stories at school or at home, and (2) access the website anytime and anywhere. Furthermore, the system stores the students' work and activities, allowing parents or teachers to monitor it and provide online feedback. The website contains the following main feature modules. Storytelling incorporates a variety of media, such as audio, text, and graphics, in presenting the stories and introduces children to various kinds of traditional Amazigh oral narratives. The focus of this module is to project the positive values and images of the stories using digital storytelling techniques. Besides developing a good moral sense in children through the projected positive images and moral values, it also allows children to practice their comprehension and listening skills. The Reading module is based on a multimedia-material approach, which offers the potential to address the challenges of reading instruction; it stimulates children and develops reading practice indirectly through the tutoring strategies of scaffolding, self-explanation, and hyperlinks. Word Enhancement assists children in understanding the story and appreciating the good moral values more efficiently: difficult words and vocabulary are linked to explanations, which help the children understand the vocabulary better. In conclusion, we believe that interactive multimedia storytelling is an interesting and exciting tool for learning Amazigh. We plan to address some remaining learning issues, in particular the use of activities to test and evaluate the children on their overall understanding of the stories and words presented in the learning modules.

Keywords: Amazigh language, e-learning, storytelling, language teaching

Procedia PDF Downloads 404
4606 Iterative Reconstruction Techniques as a Dose Reduction Tool in Pediatric Computed Tomography Imaging: A Phantom Study

Authors: Ajit Brindhaban

Abstract:

Background and Purpose: Computed Tomography (CT) scans have become the largest source of radiation in radiological imaging. The purpose of this study was to compare the quality of pediatric CT images reconstructed using Filtered Back Projection (FBP) with images reconstructed using different strengths of the Iterative Reconstruction (IR) technique, and to perform a feasibility study assessing the use of IR as a dose reduction tool. Materials and Methods: An anthropomorphic phantom representing a 5-year-old child was scanned, in two stages, using a Siemens Somatom CT unit. In stage one, scans of the head, chest, and abdomen were performed using the standard protocols recommended by the scanner manufacturer. Images were reconstructed using FBP and five different strengths of IR. Contrast-to-Noise Ratios (CNR) were calculated from the average CT number and its standard deviation measured in regions of interest placed in the lung, bone, and soft-tissue regions of the phantom. A paired t-test and one-way ANOVA were used to compare the CNR of the FBP images with that of the IR images, at the p = 0.05 level. The lowest IR strength that produced the highest CNR was identified. In the second stage, scans of the head were performed with mAs values decreased in proportion to the CNR increase over the standard FBP protocol. CNR values in this stage were compared using a paired t-test at the p = 0.05 level. Results: Images reconstructed using IR had higher CNR values (p < 0.01) in all regions compared to the FBP images, at all IR strengths. The CNR increased with IR strength up to strength 3 in the head and chest images; increases beyond this strength were insignificant. In the abdomen images, CNR continued to increase up to strength 5. The results also indicated that IR improves CNR by up to a factor of 1.5. Based on the CNR values of the IR images at strength 3 and the CNR values of the FBP images, a reduction in mAs of about 20% was identified. Head images acquired at 20% reduced mAs and reconstructed using IR at strength 3 had a CNR similar to that of FBP images at standard mAs. For the head scans of the phantom used in this study, it was thus demonstrated that a similar CNR can be achieved even when the mAs is reduced by about 20%, provided IR at strength 3 is used for reconstruction. Conclusions: IR produced better image quality at all strengths in comparison to FBP, and it can provide approximately 20% dose reduction in pediatric head CT while maintaining the same image quality as FBP.
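
A small numpy sketch of the CNR metric used throughout the comparison (one common definition: contrast between two ROIs divided by the noise standard deviation); the HU samples are synthetic:

```python
import numpy as np

def cnr(roi: np.ndarray, reference: np.ndarray) -> float:
    # contrast between two regions of interest over the reference noise
    return abs(roi.mean() - reference.mean()) / reference.std()

rng = np.random.default_rng(0)
bone = rng.normal(400, 30, 500)        # hypothetical HU samples in a bone ROI
soft_tissue = rng.normal(40, 25, 500)  # hypothetical HU samples in soft tissue
print(f"CNR = {cnr(bone, soft_tissue):.1f}")
```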

Keywords: filtered back projection, image quality, iterative reconstruction, pediatric computed tomography imaging

Procedia PDF Downloads 148
4605 Energy Efficiency Index Applied to Reactive Systems

Authors: P. Góes, J. Manzi

Abstract:

This paper focuses on the development of an energy efficiency index to be applied to reactive systems, based on the First and Second Laws of Thermodynamics and giving particular consideration to the concept of maximum entropy. Among the requirements of such an energy efficiency index, practical feasibility is essential. To illustrate the performance of the proposed index, it was used as the decisive evaluation factor in the optimization of an industrial reactor. The results allow the conclusion that the energy efficiency index applied to the reactive system is consistent, because it extracts the information expected of an efficient indicator, and that it is useful as an analytical tool as well as feasible from a practical standpoint. Furthermore, it has proved to be much simpler to use than tools based on traditional methodologies.

Keywords: energy, efficiency, entropy, reactive

Procedia PDF Downloads 412
4604 Development of Verification System of Workspace Clashes Between Construction Activities

Authors: Hyeon-Seung Kim, Sang-Mi Park, Min-Seo Kim, Jong-Myeung Shin, Leen-Seok Kang

Abstract:

Recently, the use of Building Information Modeling (BIM) in public construction works has become mandatory in some countries, and it is anticipated that BIM will be applied to actual civil engineering projects. However, BIM systems are still focused on architectural projects and the design phase. Because a civil engineering project is a linear-type project focused on the construction phase, in contrast to an architectural project, it is difficult to visualize with 3D simulation. This study suggests a method and a prototype system for resolving workspace conflicts among construction activities using a BIM simulation tool.

Keywords: BIM, workspace, conflict, visualization

Procedia PDF Downloads 408
4603 An Improved K-Means Algorithm for Gene Expression Data Clustering

Authors: Billel Kenidra, Mohamed Benmohammed

Abstract:

Data mining techniques used in the field of clustering are a subject of active research and assist in biological pattern recognition and the extraction of new knowledge from raw data. Clustering means partitioning an unlabeled dataset into groups of similar objects. Each group, called a cluster, consists of objects that are similar to one another and dissimilar to objects of other groups. Several clustering methods are based on partitional clustering. This category attempts to directly decompose the dataset into a set of disjoint clusters, leading to an integer number of clusters that optimizes a given criterion function. The criterion function may emphasize a local or a global structure of the data, and its optimization is an iterative relocation procedure. The K-Means algorithm is one of the most widely used partitional clustering techniques. Since K-Means is extremely sensitive to the initial choice of centers, and a poor choice may lead to a local optimum that is quite inferior to the global optimum, we propose a strategy for initializing the K-Means centers. The improved K-Means algorithm is compared with the original K-Means, and the results show that the efficiency is significantly improved.
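
The abstract does not spell out the proposed initialization strategy, so the sketch below shows a widely used alternative, k-means++ seeding, purely to make the sensitivity-to-initialization point concrete: each new center is drawn with probability proportional to its squared distance from the nearest center already chosen.

```python
import numpy as np

def kmeanspp_init(X: np.ndarray, k: int, rng=np.random.default_rng(0)) -> np.ndarray:
    centers = [X[rng.integers(len(X))]]            # first center uniformly at random
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

X = np.random.default_rng(1).normal(size=(300, 50))  # e.g. 300 genes x 50 expression values
print(kmeanspp_init(X, k=4).shape)                   # (4, 50) initial centers
```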

Keywords: microarray data mining, biological pattern recognition, partitional clustering, k-means algorithm, centroid initialization

Procedia PDF Downloads 190
4602 Democracy in Gaming: An Artificial Neural Network Based Approach towards Rule Evolution

Authors: Nelvin Joseph, K. Krishna Milan Rao, Praveen Dwarakanath

Abstract:

The explosive growth of smartphones around the world has shifted the primary engagement tool for entertainment from traditional consoles and music players to a single integrated device. Augmented reality is the next big shift, bringing a new dimension to play. The paper explores the construction and working of the community engine in Delta T, an augmented reality game that allows users to evolve the rules of the game through collective bargaining, mirroring democracy even in a gaming world.

Keywords: augmented reality, artificial neural networks, mobile application, human computer interaction, community engine

Procedia PDF Downloads 332
4601 Artificial Neural Network and Statistical Method

Authors: Tomas Berhanu Bekele

Abstract:

Traffic congestion is one of the main transportation problems in developed as well as developing countries. Traffic control systems are based on the idea of avoiding traffic instabilities and homogenizing traffic flow in such a way that the risk of accidents is minimized and traffic flow is maximized. Lately, Intelligent Transport Systems (ITS) have become an important area of research for solving such road-traffic issues by making smart decisions; they link people, roads, and vehicles together using communication technologies to increase safety and mobility. Moreover, accurate prediction of road traffic is important for managing congestion. The aim of this study is to develop an ANN model for the prediction of traffic flow and to compare it with a linear regression model of traffic flow prediction. Data extraction was carried out at 15-minute intervals from a video player. Video of mixed traffic flow was recorded during office hours and vehicles were counted to determine the traffic volume. Vehicles were classified into six categories: car, motorcycle, minibus, mid-bus, bus, and truck. The average time taken by each vehicle type to travel the trap length was measured from the time displayed on the video screen.
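
A hedged scikit-learn sketch of the comparison described above: predicting the next 15-minute volume from the four preceding intervals with a small ANN and a linear regression baseline. The traffic series is synthetic:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
volumes = 200 + 80 * np.sin(np.arange(300) / 10) + rng.normal(0, 10, 300)  # veh/15 min

X = np.column_stack([volumes[i:-4 + i] for i in range(4)])  # 4 lagged counts as features
y = volumes[4:]                                             # next-interval volume

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
lin = LinearRegression().fit(X, y)
print(f"ANN R^2 = {ann.score(X, y):.3f}, linear R^2 = {lin.score(X, y):.3f}")
```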

Keywords: intelligent transport system (ITS), traffic flow prediction, artificial neural network (ANN), linear regression

Procedia PDF Downloads 67
4600 Multi-Granularity Feature Extraction and Optimization for Pathological Speech Intelligibility Evaluation

Authors: Chunying Fang, Haifeng Li, Lin Ma, Mancai Zhang

Abstract:

Speech intelligibility assessment is an important measure for evaluating the functional outcomes of surgical and non-surgical treatment, speech therapy, and rehabilitation, and the assessment of pathological speech plays an important role in assisting experts. Pathological speech is usually non-stationary and mutational. In this paper, we describe a multi-granularity combined feature scheme, which is optimized by a hierarchical visual method. First, pathological features at different granularity levels are extracted: BAFS (basic acoustic feature set), local spectral characteristics MSCC (Mel s-transform cepstrum coefficients), and nonlinear dynamic characteristics based on chaotic analysis. Then, radar charts and F-scores are proposed to optimize the features by hierarchical visual fusion, reducing the feature set from 526 to 96 dimensions. The experimental results show that the new features with a support vector machine (SVM) achieve the best performance, with a recognition rate of 84.4% on the NKI-CCRT corpus. The proposed method is thus shown to be effective and reliable for pathological speech intelligibility evaluation.
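
A small numpy sketch of F-score (Fisher score) feature ranking, the selection criterion named above, used to retain the most discriminative dimensions; the feature matrix and labels are synthetic:

```python
import numpy as np

def fisher_scores(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    # between-class scatter over within-class scatter, per feature
    classes, overall = np.unique(y), X.mean(axis=0)
    between = sum((X[y == c].mean(axis=0) - overall) ** 2 for c in classes)
    within = sum(X[y == c].var(axis=0) for c in classes)
    return between / (within + 1e-12)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 526))   # 100 utterances x 526 raw acoustic features
y = rng.integers(0, 2, 100)       # intelligibility labels
keep = np.argsort(fisher_scores(X, y))[::-1][:96]  # indices of the top 96 features
print(keep[:10])
```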

Keywords: pathological speech, multi-granularity feature, MSCC (Mel s-transform cepstrum coefficients), F-score, radar chart

Procedia PDF Downloads 283
4599 Generative Pre-Trained Transformers (GPT-3) and Their Impact on Higher Education

Authors: Sheelagh Heugh, Michael Upton, Kriya Kalidas, Stephen Breen

Abstract:

This article aims to create awareness of the opportunities and issues that the artificial intelligence (AI) tool GPT-3 (Generative Pre-trained Transformer-3) brings to higher education. Technological disruptors have featured in higher education (HE) since Konrad Zuse developed the first functional programmable automatic digital computer. The flurry of technological advances, such as personal computers, smartphones, the world wide web, search engines, and artificial intelligence (AI), has regularly caused disruption and discourse across the educational landscape around harnessing the change for good. Accepting that AI influences are inevitable, we took a mixed-methods, participatory action research and evaluation approach. Joining HE communities, reviewing the literature, and conducting our own research around Chat GPT-3, we reviewed our institutional approach to changing our current practices and developing policy linked to assessments and the use of Chat GPT-3. We review the impact on HE of GPT-3, a high-powered natural language processing (NLP) system first seen in 2020. Historically, HE has flexed and adapted with each technological advancement, and the latest debates among educationalists focus on the issues around this version of AI, which creates natural human-language text from prompts and can also generate code and images. This paper explores how Chat GPT-3 affects the current educational landscape: we debate current views around plagiarism, research misconduct, and the credibility of assessment, and determine the tool's value in developing skills for the workplace and enhancing critical analysis skills. These questions led us to review our institutional policy, explore the effects on our current assessments, and develop new assessments. Conclusions: After exploring the pros and cons of Chat GPT-3, it is evident that this form of AI cannot be un-invented. Technology needs to be harnessed for positive outcomes in higher education. We have observed materials developed through AI and their potential effects on our development of future assessments and teaching methods. Materials developed through Chat GPT-3 can still aid student learning, but they have led us to redevelop our institutional policy around plagiarism and academic integrity.

Keywords: artificial intelligence, Chat GPT-3, intellectual property, plagiarism, research misconduct

Procedia PDF Downloads 89
4598 Hydroxyapatite from Biowaste for the Reinforcement of Polymer

Authors: John O. Akindoyo, M. D. H. Beg, Suriati Binti Ghazali, Nitthiyah Jeyaratnam

Abstract:

Regeneration of bone, needed to address the many health challenges arising from the traumatic effects of bone loss, bone tumours, and other bone infections, is fast becoming indispensable. Over time, several approaches have been undertaken to mitigate this challenge, including but not limited to xenografts, allografts, and autografts, as well as artificial substitutes like bioceramics, synthetic cements, and metals. However, most of these techniques come with peculiar limitations and problems, such as morbidity, limited availability, disease transmission, collateral site damage, or outright rejection by the body. Hydroxyapatite (HA) is very compatible and suitable for this application. However, most of the common methods for HA synthesis are expensive and environmentally unfriendly. Extraction of HA from bio-waste is perceived not only to be cost-effective but also environment-friendly. In this research, HA was produced from bio-waste, namely bovine bones, through a combination of hydrothermal chemical processes and ordinary calcination techniques. The structure and properties of the HA were characterized by different techniques (TGA, FTIR, DSC, XRD, and BET). The synthesized HA was found to possess properties similar to stoichiometric HA, with highly desirable thermal, degradation, structural, and porosity properties. This material is unique for its potentially minimal cost, environmental friendliness, and property controllability. It is also perceived to be suitable for tissue and bone engineering applications.

Keywords: biomaterial, biopolymer, bone, hydroxyapatite

Procedia PDF Downloads 321
4597 Control of a Wind Energy Conversion System Working in Two Operating Modes (Hypersynchronous and Hyposynchronous)

Authors: A. Moualdia, D. J. Boudana, O. Bouchhida, A. Medjber

Abstract:

Wind energy has many advantages: it does not pollute, and it is an inexhaustible source. However, the cost of this energy is still too high to compete with traditional fossil fuels, especially at less windy sites. The performance of a wind turbine depends on three parameters: the power of the wind, the power curve of the turbine, and the generator's ability to respond to wind fluctuations. This paper presents a control chain for a conversion system based on a doubly-fed asynchronous machine with flux-oriented control. The supply system comprises two identical converters, one connected to the rotor and the other connected to the network via a filter. Three controls are necessary for the operation of the turbine: extraction of the maximum power of the wind (MPPT); control of the rotor-side converter, regulating the electromagnetic torque and the stator reactive power; and control of the grid-side converter, regulating the DC-bus voltage and the active and reactive power exchanged with the network. The proposed control has been validated in both operating modes of a three-bladed 7.5 kW wind turbine, using Matlab/Simulink. The simulation results show good dynamic and static performance of the control scheme.
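
A hedged sketch of the MPPT reference computation used in such control chains: tracking the optimal tip-speed ratio so that the captured power follows P = ½ρπR²C_p v³. The turbine constants are illustrative, not those of the 7.5 kW test bench:

```python
import numpy as np

rho, radius = 1.225, 3.2        # air density (kg/m^3) and blade radius (m), assumed
cp_max, lambda_opt = 0.45, 7.0  # peak power coefficient and its tip-speed ratio, assumed

def mppt_reference(v_wind: float):
    omega_ref = lambda_opt * v_wind / radius                    # rotor speed target (rad/s)
    p_ref = 0.5 * rho * np.pi * radius**2 * cp_max * v_wind**3  # power reference (W)
    return omega_ref, p_ref

for v in (5.0, 8.0, 11.0):
    omega, p = mppt_reference(v)
    print(f"v = {v:4.1f} m/s -> omega_ref = {omega:5.1f} rad/s, P_ref = {p/1000:5.1f} kW")
```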

Keywords: DFIG, variable wind speed, hypersynchronous, hyposynchronous, energy quality

Procedia PDF Downloads 367
4596 Water Purification By Novel Nanocomposite Membrane

Authors: E. S. Johal, M. S. Saini, M. K. Jha

Abstract:

Currently, 1.1 billion people are at risk due to a lack of clean water, and about 35% of people in the developing world die from water-related problems. To alleviate these problems, water purification technology requires new approaches for the effective management and conservation of water resources. Electrospun nanofibre membranes have potential for water purification due to their large surface area and good mechanical strength. In the present study, a PAMAM dendrimer composite nylon-6 nanofibre membrane was prepared by a crosslinking method using glutaraldehyde. Further, the efficacy of the modified membrane can be renewed by mere exposure of the saturated membrane to a solution with an acidic pH. The modified membrane can be used as an effective tool for water purification.

Keywords: dendrimer, nanofibers, nanocomposite membrane, water purification

Procedia PDF Downloads 356
4595 Spatial Analysis as a Tool to Assess Risk Management in Peru

Authors: Josué Alfredo Tomas Machaca Fajardo, Jhon Elvis Chahua Janampa, Pedro Rau Lavado

Abstract:

A flood vulnerability index was developed for the Piura River watershed in northern Peru using Principal Component Analysis (PCA) to assess flood risk. The official methodology for assessing risk from natural hazards in Peru was introduced in 1980 and proved effective in aiding complex decision-making. This method relies in part on decision-makers defining subjective correlations between variables to identify high-risk areas. While risk identification and the ensuing response activities benefit from a qualitative understanding of influences, this method does not take advantage of national and international data collection efforts, which can supplement our understanding of risk. Nor does it take advantage of broadly applied statistical methods such as PCA, which highlight the central indicators of vulnerability. Nowadays, information processing is much faster and allows for more objective decision-making tools, such as PCA. The approach presented here develops a tool to improve the current flood risk assessment in this Peruvian basin. Spatial analysis of the census and other datasets provides a better understanding of current land occupation and the basin-wide distribution of services and human populations, a necessary step toward ultimately reducing flood risk in Peru. PCA allows the simplification of a large number of variables into a few factors covering the social, economic, physical, and environmental dimensions of vulnerability. There is a correlation between where people live and water availability, which is mainly concentrated along rivers; for this reason, a comprehensive view of the population distribution around the river basin is necessary to establish flood prevention policies. Grouping the territory into 5x5 km gridded cells allows a spatial analysis of flood risk rather than an assessment by political divisions. The index was applied to the Peruvian region of Piura, where several flood events have occurred in recent years, it being one of the regions most affected during ENSO events in Peru. The analysis evidenced inequalities in access to basic services, such as water, electricity, internet, and sewage, between rural and urban areas.
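
A hedged scikit-learn sketch of building such a PCA-based index over 5x5 km grid cells: standardize the indicators, keep the leading components, and combine them weighted by explained variance. The cell data and the number of indicators are synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
cells = rng.normal(size=(500, 8))  # 500 grid cells x 8 indicators (water, sewage, ...)

Z = StandardScaler().fit_transform(cells)       # put indicators on a common scale
pca = PCA(n_components=3).fit(Z)
scores = pca.transform(Z)
index = scores @ pca.explained_variance_ratio_  # variance-weighted component combination
index = (index - index.min()) / (index.max() - index.min())  # rescale to [0, 1]
print(f"variance explained by 3 components: {pca.explained_variance_ratio_.sum():.0%}")
```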

Keywords: assess risk, flood risk, indicators of vulnerability, principal component analysis

Procedia PDF Downloads 186
4594 VIAN-DH: Computational Multimodal Conversation Analysis Software and Infrastructure

Authors: Teodora Vukovic, Christoph Hottiger, Noah Bubenhofer

Abstract:

The development of VIAN-DH aims at bridging two linguistic approaches: conversation analysis/interactional linguistics (IL), so far a dominantly qualitative field, and computational/corpus linguistics with its quantitative and automated methods. Contemporary IL investigates the systematic organization of conversations and interactions composed of speech, gaze, gestures, and body positioning, among others. This highly integrated multimodal behaviour is analysed on the basis of video data, with the aim of uncovering so-called 'multimodal gestalts': patterns of linguistic and embodied conduct that recur in specific sequential positions and are employed for specific purposes. Multimodal analyses (and other disciplines using video) have so far depended on time- and resource-intensive manual transcription of each component of the video materials. Automating these tasks requires advanced programming skills, which are often not within the scope of IL. Moreover, the use of different tools makes the integration and analysis of different formats challenging. Consequently, IL research often deals with relatively small samples of annotated data, which are suitable for qualitative analysis but not sufficient for making generalized empirical claims derived quantitatively. VIAN-DH aims to create a workspace where the many annotation layers required for the multimodal analysis of videos can be created, processed, and correlated on one platform. VIAN-DH will provide a graphical interface that operates state-of-the-art tools for automating parts of the data processing. The integration of tools that already exist in computational linguistics and computer vision facilitates data processing for researchers lacking programming skills, speeds up the overall research process, and enables the processing of large amounts of data. The main features to be introduced are automatic speech recognition for the transcription of language, automatic image recognition for the extraction of gestures and other visual cues, and grammatical annotation for adding morphological and syntactic information to the verbal content. In the ongoing instance of VIAN-DH, we focus on gesture extraction (pointing gestures, in particular), making use of existing models created for sign language and adapting them for this specific purpose. In order to view and search the data, VIAN-DH will provide a unified format and enable the import of the main existing formats of annotated video data and the export to other formats used in the field, while integrating different data source formats in such a way that they can be combined in research. VIAN-DH will adapt querying methods from corpus linguistics to enable the parallel search of many annotation levels, combining token-level and chronological search for various types of data. VIAN-DH strives to bring crucial and potentially revolutionary innovation to the field of IL (which can also extend to other fields using video materials). It will allow large amounts of data to be processed automatically and quantitative analyses to be implemented, combining them with the qualitative approach. It will facilitate the investigation of correlations between linguistic patterns (lexical or grammatical) and conversational aspects (turn-taking or gestures). Users will be able to automatically transcribe and annotate visual, spoken, and grammatical information from videos, to correlate those different levels, and to perform queries and analyses.

Keywords: multimodal analysis, corpus linguistics, computational linguistics, image recognition, speech recognition

Procedia PDF Downloads 108
4593 Using Corpora in Semantic Studies of English Adjectives

Authors: Oxana Lukoshus

Abstract:

The methods of corpus linguistics, a well-established field of research, are being increasingly applied in cognitive linguistics. Corpus data are especially useful for quantitative studies of grammatical and other aspects of language. The main objective of this paper is to demonstrate how present-day corpora can be applied in semantic studies in general and in semantic studies of adjectives in particular. Polysemantic adjectives have been the subject of numerous studies, but most of them have been carried out on dictionaries. Undoubtedly, dictionaries are viewed as one of the basic data sources, but only at the initial steps of a research project: the author usually starts with an analysis of the lexicographic data, after which he or she comes up with a hypothesis. In the research conducted here, three polysemantic synonyms, true, loyal, and faithful, have been analyzed in terms of the differences and similarities in their semantic structure. A corpus-based approach to the study of these adjectives involves the following. After the analysis of the dictionary data, the following corpora were consulted to study the distributional patterns of the words under study: the British National Corpus (BNC) and the Corpus of Contemporary American English (COCA). These corpora are continually updated and contain thousands of examples of the words under research, which makes them a useful and convenient data source. For the purposes of this study there were no special requirements regarding the genre, mode, or time of the texts included in the corpora. Out of the range of possibilities offered by corpus-analysis software (e.g., word lists, word-frequency statistics, etc.), the most useful tool for the semantic analysis was the extraction of a list of co-occurrences for the given search words. Searching by lemmas (e.g., true, true to) and grouping the results by lemmas proved to be the most efficient corpus features for the adjectives under study. Following the search process, the corpora provided a list of co-occurrences, which were then analyzed and classified. Not every co-occurrence was relevant for the analysis. For example, phrases like 'An enormous sense of responsibility to protect the minds and hearts of the faithful from incursions by the state was perceived to be the basic duty of the church leaders' or ''True,' said Phoebe, 'but I'd probably get to be a Union Official immediately'' were left out, as in the first example the faithful is a substantivized adjective, and in the second example true is used alone, with no other parts of speech. The subsequent analysis of the corpus data provided the grounds for the distribution groups of the adjectives under study, which were then investigated with the help of a semantic experiment. To sum up, the corpus-based approach has proved to be a powerful, reliable, and convenient tool for obtaining data for further semantic study.

Keywords: corpora, corpus-based approach, polysemantic adjectives, semantic studies

Procedia PDF Downloads 314