Search results for: large language models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15823

11113 Geo-Spatial Distribution of Radio Refractivity and the Influence of Fade Depth on Microwave Propagation Signals over Nigeria

Authors: Olalekan Lawrence Ojo

Abstract:

Designing microwave terrestrial propagation networks requires a thorough evaluation of the severity of multipath fading, especially at frequencies below 10 GHz. In nations like Nigeria, which lack databases large enough to support the existing empirical models, the errors in the prediction techniques intended for this evaluation may be severe. The need for higher bandwidth for various satellite applications makes the investigation of the effects of radio refractivity, fading due to multipath, and geoclimatic factors on satellite propagation links more important. Clear-air effects are one of the key elements to take into account for the best functioning of microwave frequencies. This work considers the geographical distribution of radio refractivity and fade depth over a number of stations in Nigeria. Data from five locations in Nigeria—Akure, Enugu, Jos, Minna, and Sokoto—comprising five years (2017–2021) of measurements of atmospheric pressure, relative humidity, and temperature at two levels (ground surface and 100 m height)—are studied to deduce their effects on signals propagated through microwave communication links. The assessments included considerations for microwave communication systems as well as the impacts of the dry and wet components of radio refractivity, the effects of fade depth at various frequencies, and a 20 km link distance. The results demonstrate that the dry term dominated the radio refractivity at the surface level, contributing a minimum of about 78% and a maximum of about 92%, while at a height of 100 m it contributed a minimum of about 79% and a maximum of about 92%. The spatial distribution reveals that, regardless of height, the country's tropical rainforest (TRF) and freshwater swampy mangrove (FWSM) regions recorded the greatest values of radio refractivity. The statistical estimate shows that fading values can differ by as much as 1.5 dB, especially near the TRF and FWSM coastlines, even during clear-air conditions. The current findings will be helpful for budgeting Earth-space microwave links, particularly for the rollout of Nigeria's projected 5G and 6G microcellular networks.
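
For orientation, the dry and wet terms discussed above are commonly computed from the standard Smith-Weintraub expression for radio refractivity, N = 77.6·P/T + 3.73×10⁵·e/T² (P and e in hPa, T in K). The sketch below is not the authors' processing pipeline; it uses assumed sample values to illustrate how the percentage contribution of the dry term can be obtained.

```python
import math

# Minimal sketch (not the authors' code): dry and wet radio refractivity terms.
# N = N_dry + N_wet = 77.6*P/T + 3.73e5*e/T**2  (P, e in hPa; T in K)
def refractivity_terms(P_hPa, T_K, RH_percent):
    T_C = T_K - 273.15
    e_s = 6.1121 * math.exp(17.502 * T_C / (T_C + 240.97))  # saturation vapour pressure, hPa (Buck form)
    e = RH_percent / 100.0 * e_s                             # water vapour partial pressure, hPa
    N_dry = 77.6 * P_hPa / T_K                               # dry term, N-units
    N_wet = 3.73e5 * e / T_K**2                              # wet term, N-units
    return N_dry, N_wet

# Assumed sample surface conditions, not measured data
N_dry, N_wet = refractivity_terms(P_hPa=1010.0, T_K=300.0, RH_percent=50.0)
print(f"dry-term contribution: {100 * N_dry / (N_dry + N_wet):.1f}%")
```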

Keywords: fade depth, geoclimatic factor, refractivity, refractivity gradient

Procedia PDF Downloads 60
11112 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language

Authors: Wenjun Hou, Marek Perkowski

Abstract:

The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of the least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measure based on the number of oracle iterations, but to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover's algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating Grover's algorithm with an oracle that finds a successively lower cost each time makes it possible to transform the decision problem into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into a uniform (equal-amplitude) superposition by applying the Hadamard gate on each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, node index calculator, uniqueness checker, and comparator, which were all created using only quantum Toffoli gates, including their special forms, the Feynman (CNOT) and Pauli X gates. The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits and adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
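
To give a feel for the scale of the search described above, the short sketch below (in Python rather than the authors' Q#) estimates the register size N·log₂K and the optimal number of Grover iterations, approximately (π/4)·√(2ⁿ/M) for M marked states; the example node count, degree, and number of solutions are assumptions.

```python
import math

def grover_resources(n_nodes, k_degree, n_marked=1):
    """Rough resource estimate for the bounded-degree TSP encoding described above:
    n_nodes * ceil(log2(k_degree)) qubits, one edge choice per node (illustrative only)."""
    n_qubits = n_nodes * math.ceil(math.log2(k_degree))
    search_space = 2 ** n_qubits
    # Optimal number of Grover iterations for n_marked marked states
    iterations = math.floor((math.pi / 4) * math.sqrt(search_space / n_marked))
    return n_qubits, iterations

n_qubits, iters = grover_resources(n_nodes=6, k_degree=4, n_marked=1)
print(f"{n_qubits} qubits, about {iters} oracle calls per Grover run")
```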

Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language

Procedia PDF Downloads 171
11111 Review of Theories and Applications of Genetic Programing in Sediment Yield Modeling

Authors: Adesoji Tunbosun Jaiyeola, Josiah Adeyemo

Abstract:

Sediment yield can be considered to be the total sediment load that leaves a drainage basin. Knowledge of the quantity of sediment present in a river at a particular time can lead to better management of reservoir flood capacity and consequently help to control overbank flooding. Furthermore, as sediment accumulates, a reservoir gradually loses its ability to store water for the purposes for which it was built. The development of hydrological models to forecast the quantity of sediment present in a reservoir helps planners and managers of water resources systems to understand the system better in terms of its problems and alternative ways to address them. The application of artificial intelligence models and techniques to such real-life situations has proven to be an effective approach to solving complex problems. This paper presents an extensive review of the literature relevant to the theories and applications of evolutionary algorithms, and most especially genetic programming (GP). Successful applications of GP as a soft computing technique in sediment modelling and other branches of knowledge are reviewed. Some fundamental issues, such as benchmarking, generalization ability, bloat, over-fitting, and other open issues relating to the working principles of GP that need to be addressed by the GP community, are also highlighted. This review aims to give GP theoreticians, researchers, and the general GP community adequate research direction and valuable guidance, and to keep all stakeholders abreast of the issues that need attention during the next decade for the advancement of GP.

Keywords: benchmark, bloat, generalization, genetic programming, over-fitting, sediment yield

Procedia PDF Downloads 430
11110 Surface Characterization of Zincblende and Wurtzite Semiconductors Using Nonlinear Optics

Authors: Hendradi Hardhienata, Tony Sumaryada, Sri Setyaningsih

Abstract:

Current progress in the field of nonlinear optics has enabled precise surface characterization of semiconductor materials. Nonlinear optical techniques are favorable due to their nondestructive measurement and ability to work in non-vacuum and ambient conditions. Advances in bond hyperpolarizability models open a wide range of nanoscale surface investigations, including the possibility of detecting molecular orientation at the surface of silicon and zincblende semiconductors, investigation of electric-field-induced second harmonic fields at the semiconductor interface, detection of surface impurities, and, very recently, the study of surface defects such as twin boundaries in wurtzite semiconductors. In this work, we show, using nonlinear optical techniques such as nonlinear bond models, how arbitrary polarization of the incoming electric field in rotational anisotropy spectroscopy experiments can provide more information regarding the origin of the nonlinear sources in zincblende and wurtzite semiconductor structures. In addition, using hyperpolarizability considerations, we describe how the nonlinear susceptibility tensor describing SHG can be well modelled using only a few parameters because of the symmetry of the bonds. We also show how the third harmonic intensity features change considerably when the incoming field polarization angle is changed from s-polarized to p-polarized. Finally, we propose a method to investigate surface reconstruction and defects in wurtzite and zincblende structures at the nanoscale.

Keywords: surface characterization, bond model, rotational anisotropy spectroscopy, effective hyperpolarizability

Procedia PDF Downloads 145
11109 Lessons Learned from Interlaboratory Noise Modelling in Scope of Environmental Impact Assessments in Slovenia

Authors: S. Cencek, A. Markun

Abstract:

Noise assessment methods are regularly used in the scope of Environmental Impact Assessments for planned projects to assess (predict) the expected noise emissions of these projects. Different noise assessment methods can be used. In recent years, we have had the opportunity to collaborate in noise assessment procedures where noise assessments by different laboratories were performed simultaneously. We identified some significant differences in noise assessment results between laboratories in Slovenia. We find that, although good georeferenced input data for setting up acoustic models exist in Slovenia, there is no clear consensus on predictive noise assessment methods for planned projects. We analyzed the input data, methods, and results of predictive noise assessments for two planned industrial projects, each performed independently by two laboratories. We also analyzed the data, methods, and results of two interlaboratory collaborative noise models for two existing noise sources (a railway and a motorway). In the cases of predictive noise modelling, the validation of the acoustic models was performed by noise measurements of surrounding existing noise sources, but of varying durations. The acoustic characteristics of existing buildings were also not described identically, and the planned noise sources were described and digitized differently. Differences in noise assessment results between laboratories ranged up to 10 dBA, which considerably exceeds the acceptable uncertainty of 3 to 6 dBA. In contrast to predictive noise modelling, in the cases of collaborative noise modelling for the two existing noise sources, the possibility of performing validation noise measurements greatly increased the comparability of the modelling results. In both cases of collaborative noise modelling for the existing motorway and railway, the modelling results of the different laboratories were comparable: differences were below 5 dBA, which was the acceptable uncertainty set by the interlaboratory noise modelling organizer. The lessons learned from the study were: 1) Predictive noise calculation using formulae from the international standard SIST ISO 9613-2:1997 is not an appropriate method to predict noise emissions of planned projects, since, due to the complexity of the procedure, the formulae are not applied strictly; 2) Noise measurements are an important tool to minimize noise assessment errors for planned projects and, in the case of predictive noise modelling, should be performed at least for validation of the acoustic model; 3) National guidelines should be produced on the appropriate data, methods, noise source digitization, validation of the acoustic model, etc., in order to unify predictive noise models and their results in the scope of Environmental Impact Assessments for planned projects.

Keywords: environmental noise assessment, predictive noise modelling, spatial planning, noise measurements, national guidelines

Procedia PDF Downloads 223
11108 Segmented Pupil Phasing with Deep Learning

Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan

Abstract:

Context: The concept of the segmented telescope is unavoidable for building extremely large telescopes (ELTs) in the quest for spatial resolution, but it also allows one to fit a large telescope within a reduced volume (JWST) or into an even smaller one (a standard CubeSat). CubeSats have tight constraints on the available computational budget and the allowed payload volume. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet, one of the challenges for wavefront sensing is the non-linearity between the image intensity and the phase aberrations. Moreover, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested neural networks (NNs) for wavefront sensing, especially convolutional NNs, which are well known for being non-linear and image-friendly problem solvers. Aims: We study in this paper the prospect of using NNs to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without a dedicated wavefront sensor. Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution capacity. In order to reach the diffraction-limited regime at visible wavelengths, a wavefront error below lambda/50 is typically required. The telescope focal-plane detector, used for imaging, is used as a wavefront sensor. In this work, we study a point source, i.e. the point spread function (PSF) of the optical system, as the input of a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows promising results (about 2 nm RMS of residual wavefront error, which is below lambda/50, for 40-100 nm RMS of input WFE) with a computation time of less than 30 ms, which translates into a small computational burden. These results motivate further study with larger aberrations and with noise.
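
As a rough illustration of the approach (not the authors' VGG-based network), a minimal convolutional regressor mapping a focal-plane PSF image to per-segment piston estimates could look like the sketch below; the number of segments, image size, and layer widths are assumptions.

```python
import torch
import torch.nn as nn

N_SEGMENTS = 6  # assumed number of pupil segments

class PsfToPhasing(nn.Module):
    """Minimal CNN regressor: focal-plane PSF image -> per-segment piston estimates."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 4 * 4, 128),
                                  nn.ReLU(), nn.Linear(128, N_SEGMENTS))

    def forward(self, psf):                    # psf: (batch, 1, H, W)
        return self.head(self.features(psf))   # (batch, N_SEGMENTS) piston values

model = PsfToPhasing()
dummy_psf = torch.rand(8, 1, 64, 64)           # assumed 64x64 PSF crops
print(model(dummy_psf).shape)                  # torch.Size([8, 6])
```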

Keywords: wavefront sensing, deep learning, deployable telescope, space telescope

Procedia PDF Downloads 84
11107 Solid Waste Landfilling Practices, Related Problems and Sustainable Solutions in Turkey

Authors: Nükhet Konuk, N. Gamze Turan, Yüksel Ardalı

Abstract:

Solid waste management is one of the most pressing environmental problems in Turkey as a result of the rapid increase in solid waste generation caused by rapid population growth, urbanization, rapid industrialization, and economic development. The large quantity of waste generated necessitates systems of collection, transportation, and disposal. The landfill method for the ultimate disposal of solid waste continues to be widely accepted and used due to its economic advantages. In Turkey, most of the disposal sites are open dumps. Open dump sites may result in serious urban, sanitary, and environmental problems, such as unpleasant odor and the risk of explosion, as well as groundwater contamination because of leachate percolation. Unsuitable management practices also result in the loss of resources and energy, which could be recycled and recovered from a large part of the solid waste. Therefore, over the past few decades, particular attention has been drawn to sustainable solid waste management as a response to the increase in environmental problems related to the disposal of waste. The objective of this paper is to assess the state of landfilling practices in Turkey as a developing country and to identify any gaps in the system as currently applied. The results show that approximately 25 million tons of MSW are generated annually in Turkey. The percentage of MSW disposed of in sanitary landfills is only 45%, whereas more than 50% of MSW is disposed of without any control.

Keywords: developing countries, open dumping, solid waste management, sustainable landfilling, sustainable solid waste management

Procedia PDF Downloads 285
11106 Collaborative Online International Learning with Different Learning Goals: A Second Language Curriculum Perspective

Authors: Andrew Nowlan

Abstract:

During the Coronavirus pandemic, collaborative online international learning (COIL) emerged as an alternative to overseas sojourns. However, now that face-to-face classes have resumed and students are studying abroad, the rationale for doing COIL is not always clear to educators and students. Also, the logistics of COIL become increasingly complicated when participants in a potential collaboration have different second language (L2) learning goals. In this paper, the researcher reports on a study involving two bilingual, cross-cultural COIL courses between students at a university in Japan and those studying in North America, from April to December 2022. The students in Japan were enrolled in an intercultural communication class in their L2 of English, while the students in Canada and the United States were studying intermediate Japanese as their L2. Based on qualitative survey and journaling data from 31 students in Japan, and employing a transcendental phenomenological research design, the researcher highlights the essence of the students' experience during COIL. Essentially, students benefited from the experience through improved communicative competences and increased knowledge of the target culture, even when the L2 learning goals between institutions differed. Students also reported that the COIL experience was effective as preparation for actual study abroad, as opposed to a replacement for it, which challenges the existing literature. Both educators and administrators will be exposed to the perceptions of Japanese university students towards COIL, which could be generalized to other higher education contexts, including those in Southeast Asia. Readers will also be exposed to ideas for developing more effective pre-departure study abroad programs and domestic intercultural curricula through COIL, even when L2 learning goals differ between participants.

Keywords: collaborative online international learning, study abroad, phenomenology, EdTech, intercultural communication

Procedia PDF Downloads 69
11105 Fabrication of Pure and Doped MAPbI3 Thin Films by One Step Chemical Vapor Deposition Method for Energy Harvesting Applications

Authors: S. V. N. Pammi, Soon-Gil Yoon

Abstract:

In the present study, we report a facile chemical vapor deposition (CVD) method for perovskite MAPbI3 thin films doped with Br and Cl. We performed a systematic optimization of CVD parameters such as deposition temperature, working pressure, and annealing time and temperature to obtain high-quality films of CH3NH3PbI3, CH3NH3PbI3-xBrx, and CH3NH3PbI3-xClx perovskite. Scanning electron microscopy and X-ray diffraction patterns showed that the perovskite films have a large grain size when compared to traditional spin-coated thin films. To the best of our knowledge, there are very few reports on high-quality perovskite thin films with various dopants such as Br and Cl prepared by one-step CVD, and there is scope for significant improvement in device efficiency. In addition, their band gap can be conveniently and widely tuned via the doping process. This deposition process produces perovskite thin films with large grain size, long diffusion length, and high surface coverage. The enhancement of the output power of CH3NH3PbI3 (MAPbI3) films compared to spin-coated films, and the further enhancement obtained by doping, were demonstrated in detail. This facile one-step method for the deposition of perovskite thin films makes them a potential candidate for photovoltaic and energy harvesting applications.

Keywords: perovskite thin films, chemical vapor deposition, energy harvesting, photovoltaics

Procedia PDF Downloads 293
11104 Depth-Averaged Modelling of Erosion and Sediment Transport in Free-Surface Flows

Authors: Thomas Rowan, Mohammed Seaid

Abstract:

A fast finite volume solver for multi-layered shallow water flows with mass exchange and an erodible bed is developed. This enables the user to solve a number of complex sediment-based problems including (but not limited to) dam-break over an erodible bed, recirculation currents and bed evolution, as well as levee and dyke failure. This research develops methodologies crucial to the understanding of multi-sediment fluvial mechanics and waterway design. In this model, mass exchange between the layers is allowed and, in contrast to previous models, both sediment and fluid are able to transfer between layers. In the current study, we use a two-step finite volume method to avoid the solution of the Riemann problem. Entrainment and deposition rates are calculated for the first time in a model of this nature. In the first step, the governing equations are rewritten in a non-conservative form and the intermediate solutions are calculated using the method of characteristics. In the second stage, the numerical fluxes are reconstructed in conservative form and are used to calculate a solution that satisfies the conservation property. This method is found to be considerably faster than other comparable finite volume methods, and it also exhibits good shock capturing. For most entrainment and deposition equations, a bed-level concentration factor is used; this leads to inaccuracies in both the near-bed concentration and the total scour. To account for diffusion, as no vertical velocities are calculated, a capacity-limited diffusion coefficient is used. An additional advantage of this multilayer approach is that, unlike single-layer models, the bottom-layer fluid velocity varies: this dramatically reduces erosion, which is often overestimated in simulations of this nature using single-layer flows. The model is used to simulate a standard dam break. In the dam-break simulation, as expected, the number of fluid layers used creates variation in the resultant bed profile, with more layers giving a greater variation in fluid velocity. These results showed a marked difference in erosion profiles from standard models. Overall, the model provides new insight into the problems presented, at minimal computational cost.
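
For readers unfamiliar with the building blocks, the sketch below is a generic single-layer 1D shallow-water finite-volume update with a Rusanov flux applied to a dam break; it is not the authors' multilayer two-step scheme with mass exchange and an erodible bed, and all values are assumed.

```python
import numpy as np

# Generic illustration of a conservative finite-volume update for 1D shallow water.
g = 9.81

def flux(h, hu):
    u = hu / np.maximum(h, 1e-8)
    return np.array([hu, hu * u + 0.5 * g * h**2])

def rusanov_step(h, hu, dx, dt):
    U = np.array([h, hu])
    F = flux(h, hu)
    c = np.abs(hu / np.maximum(h, 1e-8)) + np.sqrt(g * h)        # local wave-speed bound
    a = np.maximum(c[:-1], c[1:])                                 # per-interface speed
    F_iface = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * a * (U[:, 1:] - U[:, :-1])
    U_new = U.copy()
    U_new[:, 1:-1] -= dt / dx * (F_iface[:, 1:] - F_iface[:, :-1])
    return U_new[0], U_new[1]

# Dam-break initial condition (assumed values)
x = np.linspace(0.0, 10.0, 200)
h = np.where(x < 5.0, 2.0, 1.0)
hu = np.zeros_like(x)
for _ in range(100):
    h, hu = rusanov_step(h, hu, dx=x[1] - x[0], dt=0.005)
print(h.min(), h.max())
```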

Keywords: erosion, finite volume method, sediment transport, shallow water equations

Procedia PDF Downloads 208
11103 Functionalized Carbon-Base Fluorescent Nanoparticles for Emerging Contaminants Targeted Analysis

Authors: Alexander Rodríguez-Hernández, Arnulfo Rojas-Perez, Liz Diaz-Vazquez

Abstract:

The rise in consumerism over the past century has resulted in the creation of larger amounts of plasticizers, personal care products, and other chemical substances, which enter and accumulate in water systems. In addition, Neotropical regions experience large inputs of nutrients along with these pollutants, resulting in eutrophication of water bodies that consumes large quantities of oxygen and leads to high fish mortality. This dilemma has created a need for the development of targeted detection in complex matrices and remediation of emerging contaminants. We have synthesized carbon nanoparticles from a macroalga (Ulva fasciata) by oxidizing the graphitic carbon network under extremely acidic conditions. The resulting material was characterized by STEM, yielding spherical nanoparticles with a 12 nm average diameter, which can be fixed into a polysaccharide aerogel synthesized from the same macroalga. Spectrophotometric analyses show a pH-dependent fluorescence behavior varying from 450-620 nm in aqueous media. Heavily oxidized edges provide for easy functionalization with enzymes for a more targeted analysis and remediation technique. Given the optical properties of the carbon-based nanoparticles and the numerous possibilities of functionalization, we have developed a selective and robust targeted bio-detection and bioremediation technique for the treatment of emerging contaminants in complex matrices such as estuarine embayments.

Keywords: aerogels, carbon nanoparticles, fluorescent, targeted analysis

Procedia PDF Downloads 229
11102 MB-Slam: A Slam Framework for Construction Monitoring

Authors: Mojtaba Noghabaei, Khashayar Asadi, Kevin Han

Abstract:

Simultaneous Localization and Mapping (SLAM) technology has recently attracted the attention of construction companies for real-time performance monitoring. To effectively use SLAM for construction performance monitoring, SLAM results should be registered to a Building Information Model (BIM). Registering SLAM to BIM can provide essential insights for construction managers to identify construction deficiencies in real time and ultimately reduce rework. Also, registering SLAM to BIM in real time can boost the accuracy of SLAM, since SLAM can then use features from both images and 3D models. However, registering SLAM with BIM in real time is a challenge. In this study, a novel SLAM platform named Model-Based SLAM (MB-SLAM) is proposed, which not only provides automated registration of SLAM and BIM but also improves the localization accuracy of the SLAM system in real time. This framework improves the accuracy of SLAM by aligning perspective features such as depth, vanishing points, and vanishing lines from the BIM to the SLAM system. The framework extracts depth features from a monocular camera's image and improves the localization accuracy of the SLAM system through a real-time iterative process. Initially, SLAM is used to calculate a rough camera pose for each keyframe. In the next step, each keyframe of the SLAM video sequence is registered to the BIM in real time by aligning the keyframe's perspective with the equivalent BIM view. The alignment method is based on perspective detection that estimates vanishing lines and points by detecting straight edges in images. This process generates the associated BIM views from the keyframes' views. The calculated poses are later refined by a real-time gradient-descent-based iteration method. Two case studies are presented to validate MB-SLAM. The validation process demonstrated promising results: it accurately registered SLAM to BIM and significantly improved the SLAM localization accuracy. Moreover, MB-SLAM achieved real-time performance in both indoor and outdoor environments. The proposed method can fully automate past studies and generate as-built models that are aligned with BIM. The main contribution of this study is a SLAM framework for both research and commercial usage, which aims to monitor construction progress and performance in a unified framework. Through this platform, users can improve the accuracy of SLAM by providing a rough 3D model of the environment. MB-SLAM further extends the applicability of SLAM to practical usage.
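
To illustrate the perspective-detection step described above (a generic sketch, not the MB-SLAM code), a vanishing point can be estimated from a set of nominally parallel image line segments as the least-squares intersection of their supporting lines; the segment coordinates below are assumed.

```python
import numpy as np

def vanishing_point(segments):
    """Least-squares intersection of 2D lines given as segments [(x1, y1, x2, y2), ...].
    Each line is written as n . p = d with unit normal n; stacking the lines gives A p = b."""
    A, b = [], []
    for x1, y1, x2, y2 in segments:
        d = np.array([x2 - x1, y2 - y1], dtype=float)
        n = np.array([-d[1], d[0]]) / np.linalg.norm(d)   # unit normal to the segment
        A.append(n)
        b.append(n @ np.array([x1, y1], dtype=float))
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return p    # (x, y) of the estimated vanishing point

# Two converging building edges in assumed pixel coordinates
segs = [(0, 0, 100, 50), (0, 200, 100, 160)]
print(vanishing_point(segs))
```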

Keywords: perspective alignment, progress monitoring, SLAM, stereo matching

Procedia PDF Downloads 206
11101 Microbiota Effect with Cytokine in Hl and NHL Patient Group

Authors: Ekin Ece Gürer, Tarık Onur Tiryaki, Sevgi Kalayoğlu Beşışık, Fatma Savran Oğuz, Uğur Sezerman, Fatma Erdem, Gülşen Günel, Dürdane Serap Kuruca, Zerrin Aktaş, Oral Öncül

Abstract:

Aim: Chemotherapy treatment in Hodgkin Lymphoma (HL) and Non-Hodgkin Lymphoma (NHL) causes gastrointestinal epithelial damage, disrupts the intestinal microbiota balance, and causes dysbiosis. Our study aimed to show the effect of the damage caused by chemotherapy on the microbiota and the effect of the changing microbiota flora on the course of the disease. Materials and Methods: Seven adult HL and seven adult NHL patients scheduled for chemotherapy were included in the study. Stool samples were taken twice, before chemotherapy treatment and after the third course of treatment. Samples were sequenced using the Next Generation Sequencing (NGS) method after nucleic acid isolation. OTU tables were prepared using NCBI blastn version 2.0.12 according to the NCBI general 16S bacterial taxonomy reference dated 10.08.2021. The generated OTU tables were processed with the R statistical language version 4.0.4 (readr, phyloseq, microbiome, vegan, descr, and ggplot2 packages) to calculate alpha diversity, and the corresponding graphics were created. Statistical analyses were also performed using R version 4.0.4 and RStudio IDE 1.4 (tidyverse, readr, xlsx, and ggplot2 packages). Expression of IL-12 and IL-17 cytokines was measured by rtPCR twice, before and after treatment. Results: In HL patients, a significant decrease was observed in the Ruminococcaceae_UCG-014 genus (p:0.036) and undefined Ruminococcaceae_UCG-014 species (p:0.036) compared to pre-treatment. When the post-treatment samples of HL patients were compared with healthy controls, a significant decrease was found in the Prevotella_7 genus (p:0.049) and Butyricimonas (p:0.006) in the post-treatment microbiota of HL patients. In NHL patients, a significant decrease was observed in the Coprococcus_3 genus (p:0.015) and undefined Ruminoclostridium_5 species (p:0.046) compared to pre-treatment. When post-treatment samples of NHL patients were compared with healthy controls, a significant abundance of the Bacilli class (p:0.029) and a significant decrease in the undefined Alistipes species (p:0.047) were observed in the post-treatment microbiota of NHL patients. While a decrease was observed in IL-12 cytokine expression relative to pre-treatment, an increase in IL-17 cytokine expression was detected. Discussion: Intestinal flora monitoring after chemotherapy treatment shows that it can be a guide in the treatment of the disease. It is thought that increasing the diversity of commensal bacteria can also positively affect the prognosis of the disease.

Keywords: hodgkin lymphoma, non-hodgkin, microbiota, cytokines

Procedia PDF Downloads 90
11100 Investigate the Mechanical Effect of Different Root Analogue Models to Soil Strength

Authors: Asmaa Al Shafiee, Erdin Ibraim

Abstract:

Stabilizing slopes by using vegetation is considered a cost-effective and eco-friendly alternative to conventional methods. The main aim of this study is to investigate the mechanical effect of analogue root systems on the shear strength of different soil types. Three objectives were defined to achieve this aim: firstly, to explore the effect of root architectural design on the shear strength parameters; secondly, to study the effect of root area ratio (RAR) on the shear strength of two different soil types; and finally, to investigate how different kinds of soil affect the behavior of the roots during shear failure. A 3D printing tool was used to develop different analogue tap-root models with different architectural designs. Direct shear tests were performed on Leighton Buzzard (LB) fraction B sand, which represents a coarse sand, and Huston sand, which represents a medium-coarse sand. All tests were done at the same relative density for both kinds of sand. The results of the direct shear tests indicated that the presence of plant roots increases both the friction angle and the cohesion of the soil. Additionally, different root designs affected the shear strength of the soil differently. Furthermore, a directly proportional relationship was found between the root area ratio (for the same root design) and the shear strength parameters of the soil. Finally, the root area ratio effect should be combined with branches penetrating the shear plane to obtain the greatest improvement.

Keywords: leighton buzzard sand, root area ratio, rooted soil, shear strength, slope stabilization

Procedia PDF Downloads 136
11099 Use of Front-Face Fluorescence Spectroscopy and Multiway Analysis for the Prediction of Olive Oil Quality Features

Authors: Omar Dib, Rita Yaacoub, Luc Eveleigh, Nathalie Locquet, Hussein Dib, Ali Bassal, Christophe B. Y. Cordella

Abstract:

The potential of front-face fluorescence coupled with chemometric techniques, namely parallel factor analysis (PARAFAC) and multiple linear regression (MLR), as a rapid analysis tool to characterize Lebanese virgin olive oils was investigated. Fluorescence fingerprints were acquired directly on 102 Lebanese virgin olive oil samples in the range of 280-540 nm in excitation and 280-700 nm in emission. A PARAFAC model with seven components was considered optimal, with a residual of 99.64% and a core consistency value of 78.65. The model revealed seven main fluorescence profiles in olive oil, mainly associated with tocopherols, polyphenols, chlorophyllic compounds, and oxidation/hydrolysis products. Twenty-three MLR models based on PARAFAC scores were generated, the majority of which showed a good correlation coefficient (R > 0.7 for 12 predicted variables) and thus satisfactory prediction performance. Acid value, peroxide value, and Delta K gave the models with the highest predictive ability, with R values of 0.89, 0.84, and 0.81, respectively. Among fatty acids, linoleic and oleic acids were also well predicted, with R values of 0.8 and 0.76, respectively. Factors contributing to the models' construction were related to common fluorophores found in olive oil, mainly chlorophyll, polyphenols, and oxidation products. This study demonstrates the interest of front-face fluorescence as a promising tool for quality control of Lebanese virgin olive oils.
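
A minimal sketch of the PARAFAC-then-MLR workflow described above is given below; it is not the authors' code, and the array shapes, random placeholder data, and library choices (tensorly, scikit-learn) are assumptions.

```python
import numpy as np
from tensorly.decomposition import parafac
from sklearn.linear_model import LinearRegression

# Placeholder excitation-emission fluorescence landscapes: 102 samples x excitation x emission
eem = np.random.rand(102, 27, 43)
acid_value = np.random.rand(102)      # placeholder quality feature to predict

# 1) Decompose the three-way fluorescence data into 7 trilinear components
cp = parafac(eem, rank=7)
sample_scores = cp.factors[0]         # (102, 7) sample-mode scores

# 2) Regress a quality feature on the PARAFAC sample scores
mlr = LinearRegression().fit(sample_scores, acid_value)
print("R^2 on training data:", mlr.score(sample_scores, acid_value))
```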

Keywords: front-face fluorescence, Lebanese virgin olive oils, multiple Linear regressions, PARAFAC analysis

Procedia PDF Downloads 438
11098 Effect of Hydraulic Diameter on Flow Boiling Instability in a Single Microtube with Vertical Upward Flow

Authors: Qian You, Ibrahim Hassan, Lyes Kadem

Abstract:

An experiment was conducted to fundamentally investigate flow oscillation characteristics in single microtubes of different sizes in the vertical upward flow direction. The three microtubes have hydraulic diameters of 0.889 mm, 0.533 mm, and 0.305 mm with an identical heated length of 100 mm. The mass flux of the working fluid FC-72 varies from 700 kg/m2•s to 1400 kg/m2•s, and the heat flux is uniformly applied on the tube surface up to 9.4 W/cm2. The subcooled inlet temperature is maintained around 24°C during the experiment. The effects of hydraulic diameter and mass flux are studied; the results show that the two parameters interact in determining the occurrence and behavior of flow oscillations. The onset of flow instability (OFI), which is a threshold of unstable flow, usually appears in the larger microtubes with diversified and sustained flow oscillations, while the transient point, at which the flow switches suddenly from one stable state to another, is observed mainly in the small microtube, without characteristic flow oscillations, due to bubble confinement. The OFI/transient point occurs earlier as the hydraulic diameter decreases at a given mass flux. An increased mass flux can delay the occurrence of the OFI/transient point in the larger hydraulic diameters, but has no significant effect in the smallest size. Although only the transient point is observed in the smallest tube, it appears at low heat flux and is not sensitive to mass flux; hence, the smallest microtube is not recommended, since increasing the heat flux may cause local dryout.

Keywords: flow boiling instability, hydraulic diameter effect, a single microtube, vertical upward flow

Procedia PDF Downloads 580
11097 Administrative Supervision of Local Authorities’ Activities in Selected European Countries

Authors: Alina Murtishcheva

Abstract:

The development of an effective system of administrative supervision is a prerequisite for the functioning of local self-government on the basis of the rule of law. Administrative supervision of local self-government is of particular importance in EU countries due to the influence of integration processes. The central authorities act on the international level; however, subnational authorities also have to implement European legislation in order to strengthen integration. Therefore, the central authority, being the connecting link between supranational and subnational authorities, should bear responsibility, including financial responsibility, for possible mistakes of subnational authorities. Consequently, the state should have sufficient mechanisms of control over local and regional authorities in order to correct their mistakes. At the same time, these control mechanisms do not negate the autonomy of local self-government. The paper analyses models of administrative supervision of local self-government in Ukraine, Poland, Lithuania, Belgium, Great Britain, Italy, and France. The research methods used in this paper are theoretical methods of analysis of scientific literature, constitutions, legal acts, reports of the Congress of Local and Regional Authorities of the Council of Europe, and constitutional court decisions, as well as comparative and logical analysis. The legislative basis of administrative supervision was scrutinized, and the models of administrative supervision were classified, including a priori control, ex-post control, or their combination. The advantages and disadvantages of these models of administrative supervision are analysed. Compliance with Article 8 of the European Charter of Local Self-Government is of great importance for countries achieving common goals and sharing common values. However, the countries under study have problems and, in some cases, demonstrate non-compliance with the provisions of Article 8. Instances of non-conformity, such as the endorsement of a mayor by the Flemish Government in Belgium, supervision with a view to expediency in Great Britain, and the tendency to overuse supervisory power in Poland, are analysed. On the basis of the research, the tendencies of administrative supervision of local authorities' activities in selected European countries are described. Several recommendations are formulated for Ukraine as a country that has been granted EU candidate status. Having emphasised its willingness to become a member of the European community, Ukraine should not only follow the best European practices but also avoid the mistakes of countries that have long-term experience in developing the institution of local self-government. This project has received funding from the Research Council of Lithuania (LMTLT), agreement № P-PD-22-194.

Keywords: administrative supervision, decentralisation, legality, local authorities, local self-government

Procedia PDF Downloads 46
11096 The Role of Artificial Intelligence in Creating Personalized Health Content for Elderly People: A Systematic Review Study

Authors: Mahnaz Khalafehnilsaz, Rozina Rahnama

Abstract:

Introduction: The elderly population is growing rapidly, and with this growth comes an increased demand for healthcare services. Artificial intelligence (AI) has the potential to revolutionize the delivery of healthcare services to the elderly population. In this study, the various ways in which AI is used to create health content for elderly people and its transformative impact on the healthcare industry will be explored. Method: A systematic review of the literature was conducted to identify studies that have investigated the role of AI in creating health content specifically for elderly people. Several databases, including PubMed, Scopus, and Web of Science, were searched for relevant articles published between 2000 and 2022. The search strategy employed a combination of keywords related to AI, personalized health content, and the elderly. Studies that utilized AI to create health content for elderly individuals were included, while those that did not meet the inclusion criteria were excluded. A total of 20 articles that met the inclusion criteria were identified. Finding: The findings of this review highlight the diverse applications of AI in creating health content for elderly people. One significant application is the use of natural language processing (NLP), which involves the creation of chatbots and virtual assistants capable of providing personalized health information and advice to elderly patients. AI is also utilized in the field of medical imaging, where algorithms analyze medical images such as X-rays, CT scans, and MRIs to detect diseases and abnormalities. Additionally, AI enables the development of personalized health content for elderly patients by analyzing large amounts of patient data to identify patterns and trends that can inform healthcare providers in developing tailored treatment plans. Conclusion: AI is transforming the healthcare industry by providing a wide range of applications that can improve patient outcomes and reduce healthcare costs. From creating chatbots and virtual assistants to analyzing medical images and developing personalized treatment plans, AI is revolutionizing the way healthcare is delivered to elderly patients. Continued investment in this field is essential to ensure that elderly patients receive the best possible care.

Keywords: artificial intelligence, health content, older adult, healthcare

Procedia PDF Downloads 49
11095 Cognitive and Behavioral Disorders in Patients with Precuneal Infarcts

Authors: F. Ece Cetin, H. Nezih Ozdemir, Emre Kumral

Abstract:

Ischemic stroke of the precuneal cortex (PC) alone is extremely rare. This study aims to evaluate the clinical, neurocognitive, and behavioural characteristics of isolated PC infarcts. We assessed neuropsychological and behavioural findings in 12 patients with isolated PC infarcts among 3800 patients with ischemic stroke. To determine the most frequently affected brain locus, we first overlapped the ischemic areas of patients with specific cognitive disorders and of patients without specific cognitive disorders. Secondly, we compared both overlap maps using the 'subtraction plot' function of MRIcroGL. Patients showed various types of cognitive disorders. All patients experienced more than one category of cognitive disorder, except for two patients with only one cognitive disorder. Lesion topographical analysis showed that damage within the anterior precuneal region may lead to consciousness disorders (25%), self-processing impairment (42%), and visuospatial disorders (58%), while lesions in the posterior precuneal region caused episodic and semantic memory impairment (33%). The whole precuneus is involved in at least one body awareness disorder. The cause of the stroke was cardioembolism in 5 patients (42%), large artery disease in 3 (25%), and unknown in 4 (33%). This study showed a wide variety of neuropsychological and behavioural disorders in patients with precuneal infarcts. Future studies are needed to achieve a proper definition of the function of the precuneus in relation to the extended cortical areas. Precuneal cortex infarcts were found to point to a source of embolism from the large arteries or the heart.

Keywords: cognition, pericallosal artery, precuneal cortex, ischemic stroke

Procedia PDF Downloads 119
11094 Deep Learning-Based Approach to Automatic Abstractive Summarization of Patent Documents

Authors: Sakshi V. Tantak, Vishap K. Malik, Neelanjney Pilarisetty

Abstract:

A patent is an exclusive right granted for an invention. It can be a product or a process that provides an innovative method of doing something, or offers a new technical perspective or solution to a problem. A patent can be obtained by making the technical information and details about the invention publicly available. The patent owner has exclusive rights to prevent or stop anyone from using the patented invention for commercial uses. Any commercial usage, distribution, import, or export of a patented invention or product requires the patent owner's consent. It has been observed that the central and important parts of patents are written in idiosyncratic and complex linguistic structures that can be difficult to read, comprehend, or interpret for the masses. The abstracts of these patents tend to obfuscate the precise nature of the patent instead of clarifying it via direct and simple linguistic constructs. This makes it necessary to have efficient access to this knowledge via concise and transparent summaries. However, due to complex and repetitive linguistic constructs and extremely long sentences, common extraction-oriented automatic text summarization methods should not be expected to perform remarkably well when applied to patent documents. Other, more content-oriented or abstractive summarization techniques are able to perform much better and generate more concise summaries. This paper proposes an efficient summarization system for patents using artificial intelligence, natural language processing, and deep learning techniques to condense the knowledge and essential information from a patent document into a single summary that is easier to understand, without any redundant formatting and difficult jargon.
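
As a baseline illustration of abstractive summarization (not the system proposed in this paper), a pretrained sequence-to-sequence model can be applied to patent text as in the sketch below; the model choice and the snippet of patent text are assumptions.

```python
from transformers import pipeline

# Generic abstractive summarization with a pretrained seq2seq model (assumed choice).
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

patent_text = (
    "A method for controlling a heat exchanger comprising a plurality of valves, "
    "wherein each valve is actuated according to a predicted thermal load, thereby "
    "reducing energy consumption relative to fixed-schedule actuation..."
)
summary = summarizer(patent_text, max_length=60, min_length=15, do_sample=False)
print(summary[0]["summary_text"])
```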

Keywords: abstractive summarization, deep learning, natural language processing, patent document

Procedia PDF Downloads 110
11093 Simulation of Multistage Extraction Process of Co-Ni Separation Using Ionic Liquids

Authors: Hongyan Chen, Megan Jobson, Andrew J. Masters, Maria Gonzalez-Miquel, Simon Halstead, Mayri Diaz de Rienzo

Abstract:

Ionic liquids offer excellent advantages over conventional solvents for the industrial extraction of metals from aqueous solutions, where such extraction processes bring opportunities for recovery, reuse, and recycling of valuable resources and more sustainable production pathways. Recent research on the use of ionic liquids for extraction confirms their high selectivity and low volatility, but there is relatively little focus on how their properties can best be exploited in practice. This work addresses gaps in research on process modelling and simulation, to support the development, design, and optimisation of these processes, focusing on the separation of the highly similar transition metals cobalt and nickel. The study exploits published experimental results, as well as new experimental results, relating to the separation of Co and Ni using trihexyl(tetradecyl)phosphonium chloride. This extraction agent is attractive because it is cheaper, more stable, and less toxic than fluorinated hydrophobic ionic liquids. The process modelling work concerns the selection and/or development of suitable models for the physical properties, the distribution coefficients, the mass transfer phenomena, the extractor unit, and the multi-stage extraction flowsheet. The distribution coefficient model for cobalt and HCl represents an anion exchange mechanism, supported by the literature and by COSMO-RS calculations. Parameters of the distribution coefficient models are estimated by fitting the model to published experimental extraction equilibrium results. The mass transfer model applies Newman's hard sphere model. Diffusion coefficients in the aqueous phase are obtained from the literature, while diffusion coefficients in the ionic liquid phase are fitted to dynamic experimental results. The mass transfer area is calculated from the surface-mean diameter of the liquid droplets of the dispersed phase, estimated from the Weber number inside the extractor. New experiments measure the interfacial tension between the aqueous and ionic liquid phases. Empirical models for predicting the density and viscosity of the solutions under different metal loadings are also fitted to new experimental data. The extractor is modelled as a continuous stirred tank reactor with mass transfer between the two phases and perfect phase separation of the outlet flows. A multistage separation flowsheet simulation is set up to replicate a published experiment and compare model predictions with the experimental results. This simulation model is implemented in the gPROMS software for dynamic process simulation. The results of single-stage and multi-stage flowsheet simulations are shown to be in good agreement with the published experimental results. The estimated diffusion coefficient of cobalt in the ionic liquid phase is in reasonable agreement with published data for the diffusion coefficients of various metals in this ionic liquid. A sensitivity study with the simulation model demonstrates the usefulness of the models for process design. The simulation approach has the potential to be extended to account for other metals, acids, and solvents for the development, design, and optimisation of extraction processes applying ionic liquids to metal separations, although a lack of experimental data currently limits the accuracy of the models within the whole framework. Future work will focus on process development more generally and on the extractive separation of rare earths using ionic liquids.
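
To make the stage model concrete, the sketch below integrates a single mixer-settler stage treated as a CSTR with linear interphase mass transfer toward a distribution-coefficient equilibrium; it is not the authors' gPROMS model, and every parameter value is an assumption.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic sketch of one extraction stage as a CSTR with interphase mass transfer.
K_D   = 5.0      # distribution coefficient of Co, organic/aqueous (assumed)
k_L_a = 0.01     # lumped mass-transfer coefficient x interfacial area, 1/s (assumed)
Q_aq, Q_org = 1e-4, 1e-4     # volumetric flow rates, m^3/s (assumed)
V_aq, V_org = 1e-2, 1e-2     # phase hold-up volumes, m^3 (assumed)
c_aq_in, c_org_in = 10.0, 0.0    # inlet Co concentrations, mol/m^3 (assumed)

def stage(t, y):
    c_aq, c_org = y
    # Transfer rate driven by departure from equilibrium c_org* = K_D * c_aq
    transfer = k_L_a * (K_D * c_aq - c_org)
    dc_aq  = Q_aq / V_aq * (c_aq_in - c_aq) - transfer * V_org / V_aq
    dc_org = Q_org / V_org * (c_org_in - c_org) + transfer
    return [dc_aq, dc_org]

sol = solve_ivp(stage, (0.0, 5000.0), [c_aq_in, c_org_in], rtol=1e-8)
print("steady-state aqueous / organic Co:", sol.y[0, -1], sol.y[1, -1])
```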

Keywords: distribution coefficient, mass transfer, COSMO-RS, flowsheet simulation, phosphonium

Procedia PDF Downloads 176
11092 Developing an Out-of-Distribution Generalization Model Selection Framework through Impurity and Randomness Measurements and a Bias Index

Authors: Todd Zhou, Mikhail Yurochkin

Abstract:

Out-of-distribution (OOD) detection is receiving increasing attention in the machine learning research community, boosted by recent technologies such as autonomous driving and image processing. This newly burgeoning field has created a need for more effective and efficient out-of-distribution generalization methods. Without access to label information, deploying machine learning models to out-of-distribution domains becomes extremely challenging, since it is impossible to evaluate model performance on unseen domains. To tackle this difficulty, we designed a model selection pipeline algorithm and developed a model selection framework with different impurity and randomness measurements to evaluate and choose the best-performing models for out-of-distribution data. By exploring different randomness scores based on predicted probabilities, we adopted the out-of-distribution entropy and developed a custom-designed score, "CombinedScore," as the evaluation criterion. This proposed score was created by adding labeled source information into the judging space of the uncertainty entropy score using the harmonic mean. Furthermore, prediction bias was explored through the equality-of-opportunity violation measurement. We also improved machine learning model performance through model calibration. The effectiveness of the framework with the proposed evaluation criteria was validated on the Folktables American Community Survey (ACS) datasets.
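
The abstract does not give the exact formula, so the sketch below is only one plausible reading of the "CombinedScore": the harmonic mean of a certainty term (one minus the normalized OOD predictive entropy) and the labeled source accuracy. All names and values are assumptions, not the authors' definition.

```python
import numpy as np

def predictive_entropy(probs):
    """Mean entropy of predicted class probabilities on unlabeled OOD data."""
    probs = np.clip(probs, 1e-12, 1.0)
    return float(np.mean(-np.sum(probs * np.log(probs), axis=1)))

def combined_score(probs_ood, source_accuracy):
    """Harmonic mean of OOD certainty and labeled source accuracy (higher is better)."""
    n_classes = probs_ood.shape[1]
    certainty = 1.0 - predictive_entropy(probs_ood) / np.log(n_classes)
    return 2 * certainty * source_accuracy / (certainty + source_accuracy + 1e-12)

# Compare two candidate models' predicted probabilities on OOD data (toy values)
probs_a = np.array([[0.9, 0.1], [0.8, 0.2]])
probs_b = np.array([[0.55, 0.45], [0.5, 0.5]])
print(combined_score(probs_a, source_accuracy=0.85),
      combined_score(probs_b, source_accuracy=0.90))
```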

Keywords: model selection, domain generalization, model fairness, randomness measurements, bias index

Procedia PDF Downloads 111
11091 A Kinetic Study of Radical Polymerization of Acrylic Monomers in the Presence of the Liquid Crystal and the Electro-Optical Properties of These Mixtures

Authors: A. Bouriche, D. Merah, L.Alachaher-Bedjaoui, U. Maschke

Abstract:

Intensive research continues in the field of liquid crystals (LCs) for their potential use in modern display applications. Nematic LCs have been most commonly used due to their large birefringence and their sensitivity to even weak perturbation forces induced by electric, magnetic, and optical fields. Polymer dispersed liquid crystals (PDLCs), composed of micron-sized nematic LC droplets dispersed in a polymer matrix, are an important class of materials for applications in different domains of technology involving large-area display devices, optical switches, phase modulators, variable attenuators, polarisers, flexible displays, and smart windows. In this study, the composites are prepared from mixtures of monofunctional acrylic monomers (butyl acrylate (ABu), 2-ethylhexyl acrylate (2-EHA), 2-hydroxyethyl methacrylate (HEMA), and hydroxybutyl methacrylate (HBMA)) and two liquid crystals: 4-cyano-4'-n-pentylbiphenyl (5CB) and E7, which is a eutectic mixture of four cyanoparaphenylenes. These mixtures are prepared by adding Darocur 1173 as photoinitiator and 1,6-hexanediol diacrylate (HDDA) as cross-linking agent, and are finally exposed to UV irradiation. The polymerization kinetics of the monomer/LC mixtures were investigated with Fourier transform infrared (FTIR) spectroscopy. The electro-optical properties of the PDLC films were determined by measuring the voltage dependence of the transmitted light.

Keywords: acrylic monomers, films PDLC, liquid crystal, polymerisation

Procedia PDF Downloads 317
11090 Factors Influencing Soil Organic Carbon Storage Estimation in Agricultural Soils: A Machine Learning Approach Using Remote Sensing Data Integration

Authors: O. Sunantha, S. Zhenfeng, S. Phattraporn, A. Zeeshan

Abstract:

The decline of soil organic carbon (SOC) in global agriculture is a critical issue requiring rapid and accurate estimation for informed policymaking. Previous studies have demonstrated the variability of SOC predictors derived from remote sensing data and environmental variables. However, the specific parameters most suitable for accurately estimating SOC in agricultural areas remain unclear. This study utilizes remote sensing data to precisely estimate SOC and identify influential factors in diverse agricultural areas, such as paddy, corn, sugarcane, cassava, and perennial crops. Extreme gradient boosting (XGBoost), random forest (RF), and support vector regression (SVR) models are employed to analyze these factors' impact on SOC estimation. The results show that the key factors influencing SOC estimation include slope, vegetation indices (EVI), spectral reflectance indices (red index, red edge2), temperature, land use, and surface soil moisture, as indicated by their averaged importance scores across the XGBoost, RF, and SVR models. Different machine learning algorithms therefore reveal different influential factors from the remote sensing data and environmental variables, underscoring the importance of feature selection for accurate SOC estimation.
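
A generic sketch of the averaged-importance idea is given below; it is not the authors' pipeline, and the feature names, placeholder data, and the use of permutation importance are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.inspection import permutation_importance
from xgboost import XGBRegressor

# Placeholder predictors and SOC values; feature names follow the factors listed above.
features = ["slope", "EVI", "red_index", "red_edge2", "temperature", "land_use", "soil_moisture"]
rng = np.random.default_rng(0)
X = rng.random((300, len(features)))
y = rng.random(300)

models = [XGBRegressor(n_estimators=200), RandomForestRegressor(n_estimators=200), SVR()]
scores = []
for m in models:
    m.fit(X, y)
    imp = permutation_importance(m, X, y, n_repeats=10, random_state=0).importances_mean
    scores.append(imp / (np.abs(imp).sum() + 1e-12))   # normalize per model

avg = np.mean(scores, axis=0)                          # averaged importance across models
for name, s in sorted(zip(features, avg), key=lambda t: -t[1]):
    print(f"{name:14s} {s:.3f}")
```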

Keywords: factors influencing SOC estimation, remote sensing data, environmental variables, machine learning

Procedia PDF Downloads 5
11089 Enhanced Magnetoelastic Response near Morphotropic Phase Boundary in Ferromagnetic Materials: Experimental and Theoretical Analysis

Authors: Murtaza Adil, Sen Yang, Zhou Chao, Song Xiaoping

Abstract:

The morphotropic phase boundary (MPB) has recently attracted considerable interest in ferromagnetic systems as a route to an enhanced magnetoelastic response. In the present study, the structural and magnetoelastic properties of the MPB-involved ferromagnetic Tb1-xGdxFe2 (0≤x≤1) system have been investigated. The change of the easy magnetic direction from <111> to <100> with increasing x up to the MPB composition of x=0.9 is detected by step-scanned [440] synchrotron X-ray diffraction reflections. The Gd substitution for Tb shifts the composition of anisotropy compensation to near the MPB composition of x=0.9, which was confirmed by the analysis of the detailed step-scanned XRD, magnetization curves, and the calculation of the first anisotropy constant K1. A spin configuration diagram accompanied by the different crystal structures of Tb1-xGdxFe2 was constructed. The calculated first anisotropy constant K1 shows a minimum value at the MPB composition of x=0.9. In addition, a large ratio between the magnetostriction and the absolute value of the first anisotropy constant, |λS/K1|, appears at the MPB composition, which makes this a potential material for magnetostrictive applications. Based on the experimental results, a theoretical approach was also proposed to show that the facilitated magnetization rotation and enhanced magnetoelastic effect near the MPB composition are a consequence of the anisotropic flattening of the free energy of the ferromagnetic crystal. Our work points to the universal existence of MPBs in ferromagnetic materials, which is important for the substantial improvement of magnetic and magnetostrictive properties and may provide a new route to develop advanced functional materials.

Keywords: free energy, magnetic anisotropy, magnetostriction, morphotropic phase boundary (MPB)

Procedia PDF Downloads 265
11088 Measuring Enterprise Growth: Pitfalls and Implications

Authors: N. Šarlija, S. Pfeifer, M. Jeger, A. Bilandžić

Abstract:

Enterprise growth is generally considered a key driver of competitiveness, employment, economic development and social inclusion. As such, it is perceived to be a highly desirable outcome of entrepreneurship by scholars and decision makers. An extensive academic debate has produced a multitude of theoretical frameworks focused on explaining growth stages, determinants and future prospects. It has been widely accepted that enterprise growth is most likely nonlinear, temporal and related to a variety of factors reflecting the individual, firm, organizational, industry or environmental determinants of growth. However, the factors that affect growth are not easily captured, the instruments to measure those factors are often arbitrary, and causality between variables and growth is elusive, indicating that growth is not easily modeled. Furthermore, in line with the heterogeneous nature of the growth phenomenon, there is a vast number of measurement constructs assessing growth which are used interchangeably. Differences among growth measures, at the conceptual as well as the operationalization level, can hinder theory development, which emphasizes the need for more empirically robust studies. In line with these observations, the purpose of this paper is threefold. Firstly, to compare the structure and performance of three growth prediction models based on the main growth measures: revenue, employment and asset growth. Secondly, to explore the prospects of financial indicators, as exact, visible, standardized and accessible variables, to serve as determinants of enterprise growth. Finally, to contribute to the understanding of how the choice of growth measure shapes research results and recommendations for growth. The models include a range of financial indicators as lagged determinants of enterprise performance during 2008-2013, extracted from the national register of financial statements of SMEs in Croatia. The design and testing stages of the modeling used logistic regression procedures. The findings confirm that growth prediction models based on different measures of growth have different sets of predictors. Moreover, the relationship between a particular predictor and a growth measure is inconsistent: the same predictor positively related to one growth measure may exert a negative effect on a different growth measure. Overall, financial indicators alone can serve as a good proxy of growth and yield adequate predictive power in the models. The paper sheds light on both the methodology and the conceptual framework of enterprise growth by using a range of variables which serve as a proxy for the multitude of internal and external determinants but, unlike them, are accessible, available, exact and free of perceptual nuances in building up the model. The selection of the growth measure appears to have a significant impact on the implications and recommendations related to growth. Furthermore, the paper points out potential pitfalls of measuring and predicting growth. Overall, the results and implications of the study are relevant for advancing academic debates on growth-related methodology and can contribute to evidence-based decisions of policy makers.
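
As a hedged illustration of the modeling stage, the sketch below fits a separate logistic regression for each of the three growth measures and compares the resulting coefficients; the file name, predictor columns, and binary growth targets are hypothetical placeholders, not the actual Croatian SME dataset.

```python
# Illustrative sketch only: one logistic regression per growth measure
# (revenue, employment, assets), with assumed column names.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("sme_financials_2008_2013.csv")  # hypothetical file name

financial_indicators = ["liquidity_ratio", "debt_ratio", "roa",
                        "asset_turnover", "cash_flow_to_assets"]  # assumed predictors
growth_targets = ["revenue_growth", "employment_growth", "asset_growth"]  # binary 0/1

for target in growth_targets:
    X_train, X_test, y_train, y_test = train_test_split(
        df[financial_indicators], df[target], test_size=0.3, random_state=42)
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    # Comparing fitted coefficients across targets shows how the same indicator
    # can enter with different signs for different growth measures.
    print(target, dict(zip(financial_indicators, model.coef_[0].round(3))),
          "AUC:", round(auc, 3))
```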

Keywords: growth measurement constructs, logistic regression, prediction of growth potential, small and medium-sized enterprises

Procedia PDF Downloads 236
11087 Influence of Single and Multiple Skin-Core Debonding on Free Vibration Characteristics of Innovative GFRP Sandwich Panels

Authors: Indunil Jayatilake, Warna Karunasena, Weena Lokuge

Abstract:

An Australian manufacturer has fabricated an innovative GFRP sandwich panel made from E-glass fiber skins and a modified phenolic core for structural applications. Debonding, which refers to the separation of the skin from the core material in composite sandwiches, is one of the most common types of damage in composites. The presence of debonding is of great concern because it not only severely affects the stiffness but also modifies the dynamic behaviour of the structure. The majority of research to date has been concerned with delamination of laminated structures, whereas skin-core debonding has received relatively little attention. Furthermore, research on composite slabs with multiple skin-core debonds is very limited. To address this gap, a comprehensive study investigating the dynamic behaviour of composite panels with single and multiple debonding is presented. The study uses finite element modelling and analyses to investigate the influence of debonding on the free vibration behaviour of single-layer and multilayer composite sandwich panels. A broad parametric investigation has been carried out by varying the debonding locations, debonding sizes and support conditions of the panels for both single and multiple debonding. Numerical models were developed with the Strand7 finite element package, selecting suitable elements to represent the actual behaviour of the panels. Three-dimensional finite element models were employed to simulate the physically real situation as closely as possible, using an experimentally and numerically validated finite element model. Comparative results and conclusions based on the analyses are presented. For similar extents and locations of debonding, the effect of debonding on the natural frequencies appears to depend strongly on the end conditions of the panel, with a greater decrease in natural frequency when the panels are more restrained. Some modes are more sensitive to debonding, and this sensitivity seems to be related to their vibration mode shapes. The fundamental mode is generally the least sensitive to debonding with respect to the variation in free vibration characteristics. The results indicate the effectiveness of the developed three-dimensional finite element models in assessing debonding damage in composite sandwich panels.
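
The free vibration analyses in the study were performed with validated three-dimensional Strand7 models; purely as a conceptual sketch (not the Strand7 workflow, and not a model of a sandwich panel), the snippet below shows the underlying generalized eigenvalue problem K·φ = ω²·M·φ on a toy spring-mass chain, with a local stiffness reduction standing in, very crudely, for the stiffness loss caused by a skin-core debond.

```python
# Toy illustration of free vibration analysis: natural frequencies drop when
# a local stiffness (a crude stand-in for a debonded region) is reduced.
import numpy as np
from scipy.linalg import eigh

def natural_frequencies(K, M):
    """Return natural frequencies (Hz) from stiffness K and mass M matrices."""
    eigvals, _ = eigh(K, M)                        # solves K·phi = lambda·M·phi
    omegas = np.sqrt(np.clip(eigvals, 0.0, None))  # rad/s
    return omegas / (2.0 * np.pi)

# 3-DOF spring-mass chain standing in for an "intact" structure (toy values).
k = 1.0e6
K_intact = k * np.array([[ 2, -1,  0],
                         [-1,  2, -1],
                         [ 0, -1,  2]], dtype=float)
M = np.diag([10.0, 10.0, 10.0])

# "Debonded" case: halve one coupling stiffness to mimic a lost skin-core bond.
K_debonded = K_intact.copy()
K_debonded[0, 1] = K_debonded[1, 0] = -0.5 * k
K_debonded[0, 0] -= 0.5 * k
K_debonded[1, 1] -= 0.5 * k

print("intact   :", natural_frequencies(K_intact, M))
print("debonded :", natural_frequencies(K_debonded, M))
```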

Keywords: debonding, free vibration behaviour, GFRP sandwich panels, three dimensional finite element modelling

Procedia PDF Downloads 298
11086 Awareness and Attitudes of Primary Grade Teachers (1-4th Grade) Towards Inclusive Education

Authors: Maheshwari Payal, Shapurkar Mayaan

Abstract:

The present research aimed at studying the awareness and attitudes of teachers towards inclusive education. The sample consisted of 60 teachers teaching in the primary section (1st-4th grade) of regular schools affiliated to the SSC board in Mumbai. The sample was selected using a multi-stage cluster sampling technique. A semi-structured, self-constructed interview schedule and a self-constructed attitude scale were used to study, respectively, the teachers' awareness of disability and inclusive education and their attitudes towards inclusive education. Themes were extracted from the interview data, and the quantitative data were analyzed using the SPSS package. Results revealed that teachers had some awareness but an inadequate amount of information on disabilities and inclusive education. Disability to most (37) teachers meant "an inability to do something". Most teachers described the difference between disability and handicap as the former being cognitive and the latter physical in nature. With regard to inclusive education, a large number (46) stated that they were unaware of the term and did not know what it meant. The majority (52) perceived the greatest challenges for themselves in an inclusive setup and emphasized the role of teacher training courses in providing knowledge (49) and training in teaching methodology (53). Although 83.3% of the teachers held a moderately positive attitude towards inclusive education, a large percentage (61.6%) of participants felt that an inclusive setup would be very challenging for both children with special needs and those without. Though most (49) of the teachers stated that children with special needs should be educated in a regular classroom, they further clarified that only those with mild or moderate physical impairments should be.

Keywords: attitude, awareness, inclusive education, teachers

Procedia PDF Downloads 306
11085 A First Step towards Automatic Evolutionary for Gas Lifts Allocation Optimization

Authors: Younis Elhaddad, Alfonso Ortega

Abstract:

Oil production by means of gas lift is a standard technique in the oil production industry. Optimizing total oil production with respect to the amount of gas injected is a key question in this domain. Different methods have been tested in the search for a general methodology. Many of them apply well-known numerical methods; some have drawn on the power of evolutionary approaches. Our goal is to provide the experts of the domain with a powerful automatic search engine into which they can introduce their knowledge in a format close to the one used in their domain, and obtain solutions comprehensible in the same terms. These proposals introduce into the genetic engine the most expressive formal models to represent the solutions to the problem. Such algorithms have proven to be as effective as other genetic systems but more flexible and comfortable for the researcher, although they usually require huge search spaces to justify their use because of the computational resources the formal models demand. The first step in evaluating the viability of applying our approaches to this realm is to fully understand the domain and to select an instance of the problem (gas lift optimization) for which genetic approaches seem promising. After analyzing the state of the art of this topic, we decided to choose a previous work from the literature that tackles the problem by means of numerical methods. That contribution includes enough detail to be reproduced and complete data to be carefully analyzed. We designed a classical, simple genetic algorithm just to try to reproduce the published results and to understand the problem in depth. We could easily incorporate the well model and the well data used by the authors and translate their mathematical model, originally optimized numerically, into a proper fitness function. We analyzed the 100 well curves they use in their experiment and observed similar results; in addition, our system automatically inferred an optimum total amount of injected gas for the field compatible with the sum of the optimum gas injected in each well reported by them. We identified several constraints that could be interesting to incorporate into the optimization process but that could be difficult to express numerically. It could also be interesting to automatically propose other mathematical models that fit both the individual well curves and the behaviour of the complete field. All these facts and conclusions justify continuing to explore the viability of applying the more sophisticated approaches previously proposed by our research group.
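
As a hedged illustration of this kind of search engine (not the authors' system or their formal models), the sketch below runs a plain genetic algorithm that allocates a fixed total amount of injected gas among a few wells so that the summed oil production is maximised; the quadratic well-performance curves are invented placeholders for real gas-lift performance data.

```python
# Simple illustrative GA for gas lift allocation under a total-gas constraint.
# Well curves, constants, and operators are assumptions, not the published model.
import random

random.seed(0)

N_WELLS = 5
TOTAL_GAS = 10.0  # total gas available for injection (arbitrary units)

# Hypothetical gas-lift performance curve per well: oil(q) = a*q - b*q^2
CURVES = [(4.0, 0.5), (3.5, 0.4), (5.0, 0.7), (2.5, 0.2), (4.5, 0.6)]

def fitness(alloc):
    """Total oil produced for a given per-well gas allocation."""
    return sum(a * q - b * q * q for (a, b), q in zip(CURVES, alloc))

def normalise(alloc):
    """Scale an allocation so it respects the total gas constraint."""
    s = sum(alloc)
    return [q * TOTAL_GAS / s for q in alloc] if s > 0 else alloc

def random_individual():
    return normalise([random.random() for _ in range(N_WELLS)])

def crossover(p1, p2):
    return normalise([(x + y) / 2.0 for x, y in zip(p1, p2)])

def mutate(ind, rate=0.2):
    return normalise([max(0.0, q + random.gauss(0, 0.3)) if random.random() < rate else q
                      for q in ind])

population = [random_individual() for _ in range(50)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # elitist selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(40)]
    population = parents + children

best = max(population, key=fitness)
print("best allocation:", [round(q, 2) for q in best], "oil:", round(fitness(best), 2))
```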

Keywords: evolutionary automatic programming, gas lift, genetic algorithms, oil production

Procedia PDF Downloads 150
11084 Neural Network-based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children

Authors: Budhvin T. Withana, Sulochana Rupasinghe

Abstract:

Dyslexia and Dysgraphia, two learning disabilities that affect reading and writing abilities respectively, are a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially challenging for children who speak it. Traditional risk detection methods for Dyslexia and Dysgraphia frequently rely on subjective assessments, which makes broad risk detection difficult and time-consuming. As a result, diagnoses may be delayed and opportunities for early intervention may be lost. The project addresses this by developing a hybrid model that utilizes several deep learning techniques for detecting the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16 and YOLOv8 were integrated to detect handwriting issues, and their outputs were fed into an MLP model along with several other inputs. The hyperparameters of the MLP model were fine-tuned using grid search cross-validation (Grid Search CV), which allowed the optimal values to be identified for the model. This approach proved effective in predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention. The ResNet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data. The VGG16 model achieved an accuracy of 0.9991 on the training data and 0.9891 on the validation data. The MLP model achieved a training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. These results demonstrate that the proposed hybrid model achieves a high level of accuracy in predicting the risk of Dyslexia and Dysgraphia.
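
A minimal sketch of the final stage described above, assuming the CNN/YOLO outputs and the other inputs have already been fused into a single feature matrix; the synthetic data and the hyperparameter grid are illustrative assumptions only.

```python
# Illustrative MLP hyperparameter tuning with grid search cross-validation.
# The synthetic features stand in for fused ResNet50/VGG16/YOLOv8 outputs
# plus the additional child-level inputs mentioned in the abstract.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=12, n_informative=8,
                           random_state=0)

param_grid = {
    "hidden_layer_sizes": [(32,), (64,), (64, 32)],
    "alpha": [1e-4, 1e-3],
    "learning_rate_init": [1e-3, 1e-2],
}

search = GridSearchCV(MLPClassifier(max_iter=1000, random_state=0),
                      param_grid, cv=5, scoring="accuracy")
search.fit(X, y)

print("best hyperparameters:", search.best_params_)
print("cross-validated accuracy:", round(search.best_score_, 4))
```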

Keywords: neural networks, risk detection system, Dyslexia, Dysgraphia, deep learning, learning disabilities, data science

Procedia PDF Downloads 80