Search results for: large deviation
7657 3D Stereoscopic Measurements from AR Drone Squadron
Authors: R. Schurig, T. Désesquelles, A. Dumont, E. Lefranc, A. Lux
Abstract:
A cost-efficient alternative is proposed to the use of a single drone carrying multiple cameras to take stereoscopic images and videos during flight. Such a drone has to be large enough to take off with its equipment and stable enough to make valid measurements; that level of performance in a single aircraft usually comes at a large cost. The proposed solution uses multiple smaller, cheaper aircraft, each carrying one camera, instead of a single expensive one. As a proof of concept, AR drones, quad-rotor UAVs from Parrot Inc., are used experimentally.
Keywords: drone squadron, flight control, rotorcraft, Unmanned Aerial Vehicle (UAV), AR drone, stereoscopic vision
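The geometric core of measuring depth with two synchronized cameras, whether on one drone or two, is triangulation from disparity. A minimal sketch, with all numbers invented for illustration (they are not taken from the paper's AR Drone setup):

```python
# Depth from stereo disparity: Z = f * B / d, where f is the focal length
# in pixels, B the baseline (distance between the two cameras, in metres),
# and d the horizontal disparity in pixels between the two views.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance to a scene point seen by two parallel, synchronized cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 1.5 m baseline between the two drones,
# 35 px measured disparity -> 30 m to the target.
print(depth_from_disparity(700, 1.5, 35))  # 30.0
```

One design implication of the squadron approach is visible in the formula: the baseline B is no longer fixed by a rigid camera mount, so it must be estimated from the drones' relative positions at capture time.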
Procedia PDF Downloads 472
7656 A Comparative Assessment of the Food Supply Vulnerability to Large-Scale Disasters in OECD Countries
Authors: Karolin Bauer, Anna Brinkmann
Abstract:
Vulnerabilities in critical infrastructure can cause significant difficulties for the affected population during crises. Securing the food supply, as part of the critical infrastructure, is an essential public service in crisis situations and a cornerstone of a successful civil protection concept. In most industrialized countries, there are currently no comparative studies on the food supply of the population during crises and disasters. In order to mitigate the potential impact of major disasters in Germany, it is essential to investigate how the food supply can be secured. The research project aims to provide in-depth research on the experiences gathered during past large-scale disasters in the 34 OECD member countries in order to discover alternatives for an updated civil protection system in Germany. The basic research question is: "Which international approaches and structures of civil protection have been proven and would be useful for modernizing German civil protection with regard to critical infrastructure and the food supply?" Research findings are extracted from an extensive literature review covering the entire research period as well as from personal and online interviews with experts and responsible persons from the institutions involved. The strength of the research project lies in its deliberate choice to investigate previous large-scale disasters in order to formulate practical approaches to modernizing civil protection in Germany.
Keywords: food supply, vulnerability, critical infrastructure, large-scale disaster
Procedia PDF Downloads 336
7655 An Automatic Large Classroom Attendance Conceptual Model Using Face Counting
Authors: Sirajdin Olagoke Adeshina, Haidi Ibrahim, Akeem Salawu
Abstract:
Large lecture theatres cannot be covered by a single camera because of their size, shape, and seating arrangements; although a smaller classroom can be captured with a single camera, a multicamera setup is required here. Therefore, the design and implementation of a multicamera setup for a large lecture hall were considered. Researchers have emphasized the impact of class attendance on the academic performance of students. However, the traditional method of taking attendance falls short, especially in large lecture theatres, because of the student population, the time required, its exhaustiveness, and its susceptibility to manipulation. An automated large classroom attendance system is therefore imperative. The common approach is face detection and recognition, where known student faces are captured and stored for recognition purposes. This approach requires constant face database updates due to changes in facial features. Alternatively, face counting can be performed by cropping the localized faces in the video or image into a folder and then counting them. This research aims to develop a face localization-based approach to detect student faces in classroom images captured with a multicamera setup. A selected Haar-like feature cascade face detector, trained with an asymmetric goal of minimizing the False Rejection Rate (FRR) relative to the False Acceptance Rate (FAR), was deployed on a Raspberry Pi 4B. A relationship between the two factors (FRR and FAR) was established using a constant (λ) as a trade-off between them for automatic adjustment during training. An evaluation of the proposed approach against conventional AdaBoost on classroom datasets shows an improvement of 8% in TPR (a consequence of the low FRR) and a 7% reduction in FRR. The average processing speed was also improved, with an execution time of 1.19 s per image compared to 2.38 s for the improved AdaBoost. Consequently, the proposed approach achieved 97% TPR with an overhead constraint time of 22.9 s, compared to 46.7 s for the improved AdaBoost, when evaluated on images obtained from a large lecture hall (DK5) at USM.
Keywords: automatic attendance, face detection, Haar-like cascade, manual attendance
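The asymmetric goal described above, trading false rejections against false acceptances via a constant λ, can be sketched as a threshold selection problem. The detector scores and threshold grid below are synthetic illustrations, not the paper's training data or its actual cascade-training procedure:

```python
# Among candidate detector thresholds, pick the one minimising
# FRR + lambda * FAR. A small lambda tolerates false acceptances to keep
# the false rejection rate low, which is the asymmetry the abstract
# describes.

def asymmetric_threshold(pos_scores, neg_scores, thresholds, lam):
    """Return the threshold minimising FRR + lam * FAR."""
    best_t, best_loss = None, float("inf")
    for t in thresholds:
        frr = sum(s < t for s in pos_scores) / len(pos_scores)   # faces rejected
        far = sum(s >= t for s in neg_scores) / len(neg_scores)  # non-faces accepted
        loss = frr + lam * far
        if loss < best_loss:
            best_t, best_loss = t, loss
    return best_t

pos = [0.9, 0.8, 0.7, 0.4]   # detector scores on true faces
neg = [0.6, 0.3, 0.2, 0.1]   # detector scores on background windows
print(asymmetric_threshold(pos, neg, [0.25, 0.5, 0.65], lam=0.2))  # 0.25
```

With λ = 0.2 the cheapest threshold is a permissive one (0.25, rejecting no faces); raising λ to 5 shifts the optimum to the strict threshold 0.65, illustrating how λ steers the FRR/FAR balance.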
Procedia PDF Downloads 71
7654 A Safety-Door for Earthquake Disaster Prevention - Part II
Authors: Daniel Y. Abebe, Jaehyouk Choi
Abstract:
The safety of doors has not been given much attention. The main problem with doors during and after an earthquake is that they cannot be opened, because lateral loads push the frame out of its original position. The aim of this research is to develop and evaluate a safety door whose frame keeps its original position, or at least keeps its edge angles perpendicular, during and after an earthquake. Nonlinear finite element analysis was conducted to evaluate the structural performance and behavior of the proposed door under both monotonic and cyclic loading.
Keywords: safety-door, earthquake disaster, low yield point steel, passive energy dissipating device, FE analysis
Procedia PDF Downloads 473
7653 Social Media and the Future of Veganism Influence on Gender Norms
Authors: Athena Johnson
Abstract:
Veganism has seen a rapid increase in adherents over recent years. Understanding the mechanisms of social change associated with these dietary practices in relation to gender is significant: although these groups may seem small, they have a large impact, influencing many people and changing the food market. The article's methodology is primarily an in-depth literature review combined with empirical research. The findings show that the popularity of veganism is growing, in large part due to the extensive use of social media, which dispels longstanding gendered connotations of food, such as the association between meat and masculinity.
Keywords: diversity, gender roles, social media, veganism
Procedia PDF Downloads 113
7652 EduEasy: Smart Learning Assistant System
Authors: A. Karunasena, P. Bandara, J. A. T. P. Jayasuriya, P. D. Gallage, J. M. S. D. Jayasundara, L. A. P. Y. P. Nuwanjaya
Abstract:
The use of smart learning concepts has increased rapidly all over the world recently as a means to better teaching and learning. Most educational institutes, such as universities, are experimenting with these concepts with their students. Smart learning concepts are especially useful for students in large classes, where the lecture method is the most popular method of teaching. In the lecture method, the lecturer presents the content mostly through lecture slides, and the students make their own notes based on the content presented. However, some students may find this method difficult for various reasons, such as the speed of delivery. The purpose of this research is to assist students in large classes in following the content. The research proposes a solution with four components, namely a note-taker, a slide matcher, a reference finder, and a question presenter, which help students obtain a summarized version of the lecture notes, easily navigate to the content and find resources, and revise the content using questions.
Keywords: automatic summarization, extractive text summarization, speech recognition library, sentence extraction, automatic web search, automatic question generator, sentence scoring, term weight
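The sentence-scoring idea behind the note-taker component (listed in the keywords as extractive text summarization with term weights) can be sketched minimally: score each sentence by the frequency of its terms across the whole transcript and keep the top-scoring ones. Tokenisation and weighting here are deliberately simplified, and the sample sentences are invented; the actual system combines this with speech recognition and slide matching:

```python
# Length-normalised term-frequency sentence scoring for extractive
# summarisation: frequent terms raise a sentence's score, and dividing by
# sentence length keeps long sentences from winning by default.

from collections import Counter

def summarize(sentences, k=1):
    words = [w.lower().strip(".,") for s in sentences for w in s.split()]
    tf = Counter(words)                      # term weight = corpus frequency
    def score(s):
        toks = [w.lower().strip(".,") for w in s.split()]
        return sum(tf[w] for w in toks) / len(toks)
    return sorted(sentences, key=score, reverse=True)[:k]

notes = [
    "Neural networks learn features from data.",
    "Lunch is at noon.",
    "Deep neural networks learn hierarchical features from raw data.",
]
print(summarize(notes, k=1))
```

A production note-taker would typically replace raw term frequency with TF-IDF or positional weights, but the extract-and-rank structure stays the same.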
Procedia PDF Downloads 146
7651 An Association Model to Correlate the Experimentally Determined Mixture Solubilities of Methyl 10-Undecenoate with Methyl Ricinoleate in Supercritical Carbon Dioxide
Authors: V. Mani Rathnam, Giridhar Madras
Abstract:
Fossil fuels are depleting rapidly as the demand for energy and its allied chemicals continuously increases in the modern world. Therefore, sustainable renewable energy sources based on non-edible oils are being explored as a viable option, as they do not compete with food commodities. Oils such as castor oil are rich in fatty acids and can thus be used for the synthesis of biodiesel, bio-lubricants, and many other fine industrial chemicals. Several processes are available for the synthesis of different chemicals from castor oil. One such process is the transesterification of castor oil, which yields a mixture of fatty acid methyl esters; the main products are methyl ricinoleate and methyl 10-undecenoate. To separate these compounds, supercritical carbon dioxide (SCCO₂) was used as a green solvent, chosen for its easy availability, non-toxicity, non-flammability, and low cost. The preliminary requirement for designing any separation process is solubility or phase equilibrium data. Therefore, the solubility of a mixture of methyl ricinoleate with methyl 10-undecenoate in SCCO₂ was determined in the present study over T = 313 K to 333 K and P = 10 MPa to 18 MPa. The solubility (mol·mol⁻¹) of methyl 10-undecenoate varied from 2.44 x 10⁻³ to 8.42 x 10⁻³, whereas that of methyl ricinoleate varied from 0.203 x 10⁻³ to 6.28 x 10⁻³ within the chosen operating conditions. These solubilities followed retrograde behavior (characterized by a decrease in solubility with increasing temperature) throughout the investigated range. An association theory model, coupled with regular solution theory for the activity coefficients, was developed in the present study. The deviation from the experimental data is quantified by the average absolute relative deviation (AARD). The AARD% is 4.69 for methyl 10-undecenoate and 8.08 for methyl ricinoleate in the mixture of the two esters. The maximum solubility enhancement, 32%, was observed for methyl ricinoleate, and the highest selectivity of SCCO₂, 12, was observed for methyl 10-undecenoate in the mixture.
Keywords: association theory, liquid mixtures, solubilities, supercritical carbon dioxide
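The deviation metric used above, average absolute relative deviation, is AARD% = (100/N) Σ |y_exp − y_model| / y_exp over the N data points. A short sketch with made-up solubility values (only the formula follows the abstract):

```python
# AARD% quantifies model-vs-experiment agreement as the mean relative
# deviation, expressed as a percentage.

def aard_percent(y_exp, y_model):
    assert len(y_exp) == len(y_model)
    return 100.0 / len(y_exp) * sum(
        abs(e, ) if False else abs(e - m) / e for e, m in zip(y_exp, y_model)
    )

exp_sol   = [2.44e-3, 5.10e-3, 8.42e-3]   # experimental solubilities, mol/mol
model_sol = [2.50e-3, 4.90e-3, 8.00e-3]   # association-model predictions
print(round(aard_percent(exp_sol, model_sol), 2))  # 3.79
```

The reported values of 4.69% and 8.08% indicate the model tracks the experimental isotherms to within roughly 5-8% on average.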
Procedia PDF Downloads 134
7650 Energy Budget Equation of Superfluid HVBK Model: LES Simulation
Authors: M. Bakhtaoui, L. Merahi
Abstract:
The reliability of the filtered HVBK model is investigated via large eddy simulations of freely decaying isotropic superfluid turbulence. For homogeneous turbulence at very high Reynolds numbers, comparison of the terms in the spectral kinetic energy budget equation indicates that, in the energy-containing range, the production and energy transfer effects are significant while dissipation is not. In the inertial range, where the two fluids are perfectly locked, the mutual friction may be neglected with respect to the other terms. The LES results for the remaining terms of the energy balance are also presented.
Keywords: superfluid turbulence, HVBK, energy budget, Large Eddy Simulation
Procedia PDF Downloads 374
7649 Real-Time Big-Data Warehouse: A Next-Generation Enterprise Data Warehouse and Analysis Framework
Authors: Abbas Raza Ali
Abstract:
Big Data technology is gradually becoming a dire need of large enterprises, which generate massive amounts of offline and streaming data, in both structured and unstructured formats, on a daily basis. It is challenging to effectively extract useful insights from such large-scale datasets; sometimes it is even a technology constraint to manage a transactional data history of more than a few months. This paper presents a framework to efficiently manage massively large and complex datasets. The framework has been tested at a communication service provider producing massively large, complex streaming data in binary format. The communication industry is bound by regulators to keep a history of subscribers' call records, where every action of a subscriber generates a record. Managing and analyzing transactional data also allows service providers to better understand customer behavior; for example, deep packet inspection requires transactional internet usage data to explain subscribers' internet usage behavior. However, current relational database systems limit service providers to maintaining history only at a semantic level, aggregated per subscriber. The framework addresses these challenges by leveraging Big Data technology, which optimally manages and allows deep analysis of complex datasets. The framework has been applied to offload the service provider's existing Intelligent Network Mediation and relational Data Warehouse onto Big Data. The service provider has a subscriber base of over 50 million with yearly growth of 7-10%. The end-to-end process takes no more than 10 minutes, which involves binary-to-ASCII decoding of call detail records, stitching of all the interrogations of a call (transformations), and aggregation of all the call records of a subscriber.
Keywords: big data, communication service providers, enterprise data warehouse, stream computing, Telco IN Mediation
Procedia PDF Downloads 175
7648 Predictive Analysis of Chest X-rays Using NLP and Large Language Models with the Indiana University Dataset and Random Forest Classifier
Authors: Azita Ramezani, Ghazal Mashhadiagha, Bahareh Sanabakhsh
Abstract:
This study investigates the combination of Random Forest classifiers with large language models (LLMs) and natural language processing (NLP) to improve diagnostic accuracy in chest X-ray analysis using the Indiana University dataset. Using advanced NLP techniques, the research preprocesses textual data from radiological reports to extract key features, which are then merged with image-derived data. This enriched dataset is analyzed with Random Forest classifiers to predict specific clinical results, focusing on the identification of health issues and the estimation of case urgency. The findings reveal that the combination of NLP, LLMs, and machine learning increases not only diagnostic precision but also reliability, especially in quickly identifying critical conditions. Achieving an accuracy of 99.35%, the model shows significant advances over conventional diagnostic techniques. The results emphasize the large potential of machine learning in medical imaging, suggesting that these technologies could greatly enhance clinician judgment and patient outcomes by offering quicker and more precise diagnostic approximations.
Keywords: natural language processing (NLP), large language models (LLMs), random forest classifier, chest X-ray analysis, medical imaging, diagnostic accuracy, Indiana University dataset, machine learning in healthcare, predictive modeling, clinical decision support systems
Procedia PDF Downloads 43
7647 The Search for Anomalous Higgs Boson Couplings at the Large Hadron Electron Collider and the Future Circular Electron Hadron Collider
Authors: Ilkay Turk Cakir, Murat Altinli, Zekeriya Uysal, Abdulkadir Senol, Olcay Bolukbasi Yalcinkaya, Ali Yilmaz
Abstract:
The Higgs boson was discovered by the ATLAS and CMS experiments in 2012 at the Large Hadron Collider (LHC). The production and decay properties of the Higgs boson, its Standard Model (SM) couplings, and limits on the effective scale of its couplings with other bosons are investigated at particle colliders. Deviations from SM estimates are parametrized by effective Lagrangian terms to investigate Higgs couplings; this is a model-independent way of describing new physics. In this study, the sensitivity to anomalous couplings of neutral gauge bosons with the Higgs boson is investigated using the parameters of the Large Hadron electron Collider (LHeC) and the Future Circular electron-hadron Collider (FCC-eh) with a model-independent approach. Using the MadGraph5_aMC@NLO multi-purpose event generator with the parameters of the LHeC and FCC-eh, bounds on the anomalous Hγγ, HγZ, and HZZ couplings in the e⁻p → e⁻qH process are obtained. Detector simulations are also taken into account in the calculations.
Keywords: anomalous couplings, FCC-eh, Higgs, Z boson
Procedia PDF Downloads 210
7646 Shape Management Method of Large Structure Based on Octree Space Partitioning
Authors: Gichun Cha, Changgil Lee, Seunghee Park
Abstract:
The objective of this study is to construct a shape management method contributing to the safety of large structures. In Korea, research on shape management is scarce because the technology has only recently been attempted. Terrestrial Laser Scanning (TLS) is used to measure large structures; it provides an efficient way to actively acquire accurate point clouds of object surfaces or environments. The point clouds provide a basis for rapid modeling in industrial automation, architecture, construction, and the maintenance of civil infrastructure. However, TLS produces a huge amount of point cloud data, and registration, extraction, and visualization require the processing of a massive amount of scan data. The octree can be applied to the shape management of large structures because it reduces the size of the scan data while maintaining its attributes. Octree space partitioning generates voxels of 3D space, and each voxel is recursively subdivided into eight sub-voxels. The point cloud of the scan data was converted to voxels and sampled. The experimental site is located at Sungkyunkwan University, and the scanned structure is a steel-frame bridge. The TLS used was a Leica ScanStation C10/C5. The scan data were condensed by 92%, and the octree model was constructed at a resolution of 2 mm. This study presents octree space partitioning for handling point clouds as a basis for the shape management of large structures such as double-deck tunnels, buildings, and bridges. The research is expected to improve the efficiency of structural health monitoring and maintenance. This work is financially supported by the 'U-City Master and Doctor Course Grant Program' and the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIP) (NRF-2015R1D1A1A01059291).
Keywords: 3D scan data, octree space partitioning, shape management, structural health monitoring, terrestrial laser scanning
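The voxel-sampling step described above, which is what condenses the cloud while preserving shape, can be sketched as follows. A full octree would subdivide occupied voxels recursively down to the target resolution; this minimal version performs one level of voxelisation with invented points and resolution, not the bridge-scan data:

```python
# Snap each scan point to a cubic voxel grid and keep one centroid per
# occupied voxel: the point count shrinks but the surface geometry is
# preserved to within the voxel resolution.

import numpy as np

def voxel_downsample(points: np.ndarray, resolution: float) -> np.ndarray:
    """points: (N, 3) array; returns one centroid per occupied voxel."""
    idx = np.floor(points / resolution).astype(np.int64)       # voxel index per point
    _, inverse = np.unique(idx, axis=0, return_inverse=True)   # group points by voxel
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    np.add.at(sums, inverse, points)                           # sum coordinates per voxel
    counts = np.bincount(inverse).reshape(-1, 1)
    return sums / counts                                       # centroid per voxel

pts = np.array([[0.0, 0.0, 0.0], [0.001, 0.001, 0.0], [0.05, 0.0, 0.0]])
print(voxel_downsample(pts, resolution=0.002).shape)  # (2, 3)
```

Here two points fall into the same 2 mm voxel and are merged into one centroid, which is exactly the size-versus-attributes trade the abstract describes.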
Procedia PDF Downloads 297
7645 Understanding and Political Participation in Constitutional Monarchy of Dusit District Residents
Authors: Sudaporn Arundee
Abstract:
The purposes of this research were (1) to study the political understanding of the constitutional monarchy among Dusit residents and (2) to study their level of political participation. The paper drew upon data collected from 395 Dusit residents using a questionnaire, with simple random sampling used to select respondents. The findings revealed that 94 percent of respondents had a very good understanding of the constitutional monarchy, with a mean of 4.8. However, the respondents overall had a very low level of participation, with a mean score of 1.69 and a standard deviation of 0.719.
Keywords: political participation, constitutional monarchy, management and social sciences
Procedia PDF Downloads 251
7644 Determination of Four Anions in the Ground Layer of Tomb Murals by Ion Chromatography
Authors: Liping Qiu, Xiaofeng Zhang
Abstract:
The ion chromatography method for the rapid determination of four anions (F⁻, Cl⁻, SO₄²⁻, NO₃⁻) in the ground layer of tomb murals was optimized. An L₉(3⁴) orthogonal test was used to determine the optimal sample pretreatment parameters: accurately weigh 2.000 g of sample, add 10 mL of ultrapure water, and extract for 40 min at a shaking temperature of 40 °C and a shaking speed of 180 r·min⁻¹. The eluent was 25 mmol/L KOH solution, the analytical column was an Ion Pac® AS11-SH (250 mm × 4.0 mm), and the purified filtrate was measured with a conductivity detector. Under this method, the detection limit of each ion is 0.066-0.078 mg/kg, the relative standard deviation is 0.86%-2.44% (n = 7), and the recovery rate is 94.6%-101.9%.
Keywords: ion chromatography, tomb, anions (F⁻, Cl⁻, SO₄²⁻, NO₃⁻), environmental protection
Procedia PDF Downloads 102
7643 GBKMeans: A Genetic Based K-Means Applied to the Capacitated Planning of Reading Units
Authors: Anderson S. Fonseca, Italo F. S. Da Silva, Robert D. A. Santos, Mayara G. Da Silva, Pedro H. C. Vieira, Antonio M. S. Sobrinho, Victor H. B. Lemos, Petterson S. Diniz, Anselmo C. Paiva, Eliana M. G. Monteiro
Abstract:
In Brazil, the National Electric Energy Agency (ANEEL) establishes that electrical energy companies are responsible for measuring and billing their customers. Among these regulations, it is defined that a company must bill its customers within 27-33 days; if a relocation or a change of period is required, the consumer must be notified in writing in advance of a billing period. To make it easier to organize a workday's measurements, these companies create a reading plan. These plans group customers into reading groups, each visited by an employee responsible for measuring consumption and billing. Creating such a plan efficiently and optimally is a capacitated clustering problem with constraints related to homogeneity and compactness, that is, the employee's workload and the geographical position of the consumer units. This process is done manually by experts with experience in the geographic formation of the region, which takes a large number of days to complete, and because it is a human activity, there is no guarantee of finding the best plan. In this paper, the GBKMeans method presents a technique based on K-Means and genetic algorithms for creating capacitated clusters that respect the established constraints in an efficient and balanced manner, minimizing the cost of relocating consumer units and the time required to create the final plan. The results obtained by the presented method are compared with the current planning of a real city, showing an improvement of 54.71% in the standard deviation of the workload and 11.97% in the compactness of the groups.
Keywords: capacitated clustering, k-means, genetic algorithm, districting problems
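The capacitated-assignment idea underlying such a clustering can be sketched minimally: each consumer unit goes to the nearest reading-group centroid that still has workload capacity. This is only the assignment step; the paper's genetic layer, which searches over centroid placements, is not reproduced, and the coordinates and capacity below are invented:

```python
# Greedy capacitated assignment: nearest centroid with spare capacity.
# The capacity bound is what makes this "capacitated" rather than plain
# K-Means assignment.

import math

def capacitated_assign(units, centroids, capacity):
    """Assign each (x, y) unit to the nearest centroid whose load < capacity."""
    load = [0] * len(centroids)
    assignment = []
    for ux, uy in units:
        order = sorted(range(len(centroids)),
                       key=lambda c: math.dist((ux, uy), centroids[c]))
        chosen = next(c for c in order if load[c] < capacity)
        load[chosen] += 1
        assignment.append(chosen)
    return assignment, load

units = [(0, 0), (0, 1), (0, 2), (10, 0), (10, 1)]
centroids = [(0, 0), (10, 0)]
assignment, load = capacitated_assign(units, centroids, capacity=3)
print(assignment, load)  # [0, 0, 0, 1, 1] [3, 2]
```

A genetic wrapper would then mutate and recombine candidate centroid sets, scoring each by workload standard deviation and group compactness, the two metrics the abstract reports improvements on.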
Procedia PDF Downloads 197
7642 Approximate-Based Estimation of Single Event Upset Effect on Static Random-Access Memory-Based Field-Programmable Gate Arrays
Authors: Mahsa Mousavi, Hamid Reza Pourshaghaghi, Mohammad Tahghighi, Henk Corporaal
Abstract:
Recently, Static Random-Access Memory-based (SRAM-based) Field-Programmable Gate Arrays (FPGAs) have been widely used in aeronautics and space systems, where high dependability is demanded and considered a mandatory requirement. Since the design's circuit is stored in configuration memory in SRAM-based FPGAs, they are very sensitive to Single Event Upsets (SEUs). In addition, the adverse effects of SEUs on electronics used in space are much higher than on Earth. Thus, developing fault tolerance techniques plays a crucial role in the use of SRAM-based FPGAs in space. However, fault tolerance techniques introduce additional penalties in system parameters, e.g., area, power, performance, and design time. In this paper, an accurate estimation of configuration memory vulnerability to SEUs is proposed for approximate-tolerant applications. This vulnerability estimation is highly required for compromising between the overhead introduced by fault tolerance techniques and system robustness. We study applications in which the exact final output value is not always a concern, meaning that some of the SEU-induced changes in output values are negligible. We therefore define and propose an Approximate-based Configuration Memory Vulnerability Factor (ACMVF) estimation to avoid overestimating configuration memory vulnerability to SEUs. We assess the vulnerability of configuration memory by injecting SEUs into configuration memory bits and comparing the output values of a given circuit in the presence of SEUs with the expected correct output. Unlike conventional vulnerability factor calculation methods, which count any deviation from the expected value as a failure, our proposed method considers a threshold margin that depends on the user's application: a failure occurs only when the difference between the erroneous output value and the expected output value is more than this margin. The ACMVF is then calculated as the ratio of failures to the total number of SEU injections. A test bench for emulating SEUs and calculating ACMVF is implemented on a Zynq-7000 FPGA platform. This system uses the Single Event Mitigation (SEM) IP core to inject SEUs into configuration memory bits of the target design implemented in the Zynq-7000 FPGA. Experimental results for a 32-bit adder show that, when 1% to 10% deviation from the correct output is tolerated, the counted number of failures is reduced by 41% to 59% compared with conventional vulnerability factor calculation. This means that the estimation accuracy of the configuration memory vulnerability to SEUs is improved by up to 58% when 10% deviation is acceptable in output results. Note that less than 10% deviation in an addition result is reasonably tolerable for many applications in the approximate computing domain, such as Convolutional Neural Networks (CNNs).
Keywords: fault tolerance, FPGA, single event upset, approximate computing
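The ACMVF idea, counting a flip as a failure only when its relative deviation exceeds the margin, can be sketched on the paper's own 32-bit adder example. Flipping output bits here is a software stand-in for real configuration-memory upsets injected via the SEM IP core, and the operand values are arbitrary:

```python
# Inject a single bit flip into each bit of a 32-bit sum and count a
# failure only when the relative deviation from the correct result
# exceeds the threshold margin. margin = 0 reproduces the conventional
# vulnerability factor; a positive margin tolerates small deviations.

def acmvf(a: int, b: int, margin: float) -> float:
    """Fraction of single-bit upsets whose relative error exceeds `margin`."""
    correct = (a + b) & 0xFFFFFFFF
    failures = 0
    for bit in range(32):
        upset = correct ^ (1 << bit)          # one SEU-induced bit flip
        deviation = abs(upset - correct) / max(correct, 1)
        if deviation > margin:                # within margin -> tolerated
            failures += 1
    return failures / 32

# With a 10% margin, flips of the low-order bits are tolerated, so the
# counted failure rate drops, which is the effect the abstract reports.
print(acmvf(40_000, 25_000, margin=0.0), acmvf(40_000, 25_000, margin=0.10))  # 1.0 0.59375
```

For this operand pair the 10% margin tolerates bits 0 through 12, leaving 19 of 32 injections as failures, a reduction of about 41%, at the low end of the 41-59% range the abstract quotes.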
Procedia PDF Downloads 198
7641 Distribution of Micro Silica Powder at a Ready Mixed Concrete
Authors: Kyong-Ku Yun, Dae-Ae Kim, Kyeo-Re Lee, Kyong Namkung, Seung-Yeon Han
Abstract:
Micro silica is collected as a by-product of silicon and ferrosilicon alloy production in electric arc furnaces using highly pure quartz, wood chips, coke, and the like. It consists of about 85% silicon dioxide and has spherical particles with an average particle size of about 0.15 μm. The bulk density of micro silica varies from 150 to 700 kg/m³, and the fineness ranges from 150,000 to 300,000 cm²/g. The amorphous structure and high silicon oxide content of micro silica induce an active reaction with the calcium hydroxide (Ca(OH)₂) generated by cement hydration over a large surface area (about 20 m²/g), forming calcium silicate hydrate (C-S-H). Micro silica tends to act as a filler because of its fine particles and spherical shape: the particles fit well in the space between the relatively rough cement grains. On the other hand, water demand increases, since micro silica particles tend to absorb water because of their large surface area. The overall effect of micro silica depends on the amount added, along with other parameters such as the water-to-(cement + micro silica) ratio and the availability of superplasticizer. This research studied cellular sprayed concrete, a method that re-produces ready mixed concrete into high-performance concrete directly at the job site. It can reduce the cost of construction by adding cellular foam and micro silica into a ready mixed concrete truck in the field. Micro silica, which is difficult to mix in the field because of its high fineness, can be added and dispersed in the concrete by increasing the fluidity of the ready mixed concrete through the surface activity of the cellular foam. The increased air content converges to a certain level on spraying, and high-performance concrete is produced by the remixing of powders during the spraying process. As no field mixing equipment is used, the cost of construction decreases, and construction can proceed after installing a special spray machine on a commercial pump car; the use of special equipment is thus minimized, providing economic feasibility through the utilization of existing equipment. This study evaluated a highly reliable method of confirming dispersion in high-performance cellular sprayed concrete. A mixture of 25 mm coarse aggregate and river sand was applied to the concrete. Silica fume and foam were applied, silica fume dispersion was confirmed as a function of foam mixing, and the mean and standard deviation were obtained; the coefficient of variation was then calculated to evaluate the dispersion. Before-and-after spraying comparisons were conducted for 21 L and 35 L of foam at 7% and 14% silica fume, respectively. A specimen was cast for each variable, and a five-day sample was taken from each specimen for EDS testing. The experimental materials, mix design, test methods, and equipment for evaluating dispersion as a function of micro silica and foam content are examined.
Keywords: micro silica, distribution, ready mixed concrete, foam
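The dispersion evaluation described above reduces to a coefficient of variation: the standard deviation of the measured silica content across EDS sampling points divided by its mean, with a lower value indicating more uniform dispersion. The measurement values below are invented for illustration:

```python
# Coefficient of variation (CV = standard deviation / mean) of silica-fume
# content across EDS sampling points; a lower CV means better dispersion.

import statistics

def coefficient_of_variation(values):
    return statistics.pstdev(values) / statistics.mean(values)

eds_counts = [7.1, 6.8, 7.3, 7.0, 6.9]  # silica content (%) at five EDS points
print(round(coefficient_of_variation(eds_counts), 4))  # 0.0245
```

Comparing this statistic before and after spraying, per foam volume and silica dosage, is how the study quantifies whether the cellular foam actually improved dispersion.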
Procedia PDF Downloads 218
7640 Imputation of Incomplete Large-Scale Monitoring Count Data via Penalized Estimation
Authors: Mohamed Dakki, Genevieve Robin, Marie Suet, Abdeljebbar Qninba, Mohamed A. El Agbani, Asmâa Ouassou, Rhimou El Hamoumi, Hichem Azafzaf, Sami Rebah, Claudia Feltrup-Azafzaf, Nafouel Hamouda, Wed a.L. Ibrahim, Hosni H. Asran, Amr A. Elhady, Haitham Ibrahim, Khaled Etayeb, Essam Bouras, Almokhtar Saied, Ashrof Glidan, Bakar M. Habib, Mohamed S. Sayoud, Nadjiba Bendjedda, Laura Dami, Clemence Deschamps, Elie Gaget, Jean-Yves Mondain-Monval, Pierre Defos Du Rau
Abstract:
In biodiversity monitoring, large datasets are becoming more and more widely available and are increasingly used globally to estimate species trends and conservation status. These large-scale datasets challenge existing statistical analysis methods, many of which are not adapted to their size, incompleteness, and heterogeneity. The development of scalable methods to impute missing data in incomplete large-scale monitoring datasets is crucial to balance sampling in time or space and thus better inform conservation policies. We developed a new method based on penalized Poisson models to impute and analyse incomplete monitoring data in a large-scale framework. The method allows parameterization of (a) space and time factors, (b) the main effects of predictor covariates, as well as (c) space-time interactions. It also benefits from robust statistical and computational capability in large-scale settings. The method was tested extensively on both simulated and real-life waterbird data, with the findings revealing that it outperforms six existing methods in terms of missing data imputation errors. Applying the method to 16 waterbird species, we estimated their long-term trends for the first time at the entire North African scale, a region where monitoring data suffer from many gaps in space and time series. This new approach opens promising perspectives for increasing the accuracy of species-abundance trend estimations. We made it freely available in the R package 'lori' (https://CRAN.R-project.org/package=lori) and recommend its use for large-scale count data, particularly in citizen science monitoring programmes.
Keywords: biodiversity monitoring, high-dimensional statistics, incomplete count data, missing data imputation, waterbird trends in North Africa
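The missing-cell problem the method solves can be illustrated with a much simpler stand-in: filling a missing site-by-year count with the expectation of an independence (main-effects only) model estimated from the observed cells. The paper's 'lori' method is an R package that adds penalisation, covariates, and space-time interactions; this Python sketch, on an invented waterbird table, shows only the basic idea:

```python
# Impute a missing count with the independence expectation
# row_mean * col_mean / grand_mean, computed over observed cells only.
# Observed cells are left untouched.

import numpy as np

def impute_independence(counts: np.ndarray) -> np.ndarray:
    """counts: 2-D float array (sites x years) with np.nan for missing cells."""
    filled = counts.copy()
    obs = ~np.isnan(counts)
    row_m = np.nanmean(counts, axis=1, keepdims=True)   # site effects
    col_m = np.nanmean(counts, axis=0, keepdims=True)   # year effects
    grand = np.nanmean(counts)
    estimate = row_m * col_m / grand                    # independence expectation
    filled[~obs] = estimate[~obs]
    return filled

counts = np.array([[20.0, 40.0, np.nan],
                   [10.0, 20.0, 30.0]])
print(impute_independence(counts))
```

The penalized Poisson approach improves on this by shrinking the interaction structure and propagating uncertainty, which is what makes it robust when, as in the North African data, entire site-year blocks are missing.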
Procedia PDF Downloads 1557639 Using the Semantic Web Technologies to Bring Adaptability in E-Learning Systems
Authors: Fatima Faiza Ahmed, Syed Farrukh Hussain
Abstract:
The last few decades have seen a large proportion of the population turning to e-learning technologies, from learning tools used in primary and elementary schools to competency-based e-learning systems designed for applications such as finance and marketing. The diversity of this audience poses a large number of challenges for the designers of e-learning systems, one of which is adaptability. This paper focuses on adaptability of the learning material in an e-learning course and on how artificial intelligence and the semantic web can serve as effective tools for this purpose. The study showed that the semantic web, still a hot topic in computer science, can be a powerful tool for designing and implementing adaptable e-learning systems.Keywords: adaptable e-learning, HTMLParser, information extraction, semantic web
Procedia PDF Downloads 3387638 Exploring the Sources of Innovation in Food Processing SMEs of Kerala
Authors: Bhumika Gupta, Jeayaram Subramanian, Hardik Vachhrajani, Avinash Shivdas
Abstract:
The Indian food processing industry is one of the largest in the world in terms of production, consumption, exports and growth opportunities, and SMEs play a crucial role within it. Innovation studies in India are largely dominated by large manufacturing firms, yet the innovation sources used by SMEs often differ from those of large firms. This paper explores the various sources of innovation adopted by food processing SMEs in Kerala, South India. The results suggest that SMEs draw on various sources, such as suppliers, competitors, employees, government/research institutions and customers, to get new ideas.Keywords: food processing, innovation, SMEs, sources of innovation
Procedia PDF Downloads 4167637 Effectiveness of Medication and Non-Medication Therapy on Working Memory of Children with Attention Deficit and Hyperactivity Disorder
Authors: Mohaammad Ahmadpanah, Amineh Akhondi, Mohammad Haghighi, Ali Ghaleiha, Leila Jahangard, Elham Salari
Abstract:
Background: Working memory is the capability to hold and manipulate information over a short period of time. This capability underlies complex reasoning and has been regarded as a specific and stable characteristic of the individual. Children with attention deficit and hyperactivity are among those suffering from working-memory deficits, and this deficiency has been attributed to frontal-lobe dysfunction. This study utilizes a new approach, with suitable tasks and methods, for training working memory and assessing the effects of the training. Participants: The children participating in this study were 7-15 years of age and were diagnosed as hyperactive with attention deficit by a psychiatrist and a psychologist based on DSM-IV criteria. The intervention group consisted of 8 boys and 6 girls with a mean age of 11 years (standard deviation 2), and the control group consisted of 2 girls and 5 boys with a mean age of 11.4 years (standard deviation 3). Three children in the intervention group and two in the control group were under medicinal therapy. Results: Working memory training meaningfully improved performance in untrained areas such as visual-spatial working memory, as well as performance on Raven's progressive matrices, a classic example of a non-verbal, complex reasoning task. In addition, motor activity, measured as the number of head movements during the computerized assessment, was meaningfully reduced in the medication group. The results of the second test showed that training similar exercises in teenagers and adults improves cognitive functions, as it does in hyperactive people. Discussion: The results of this study showed that working-memory performance is improved through training, and that these gains extend and generalize to other cognitive functions that received no training. 
Training resulted in improved performance on tasks related to the prefrontal cortex. It also had a positive and meaningful impact on the motor activity of hyperactive children.Keywords: attention deficit hyperactivity disorder, working memory, non-medical treatment, children
Procedia PDF Downloads 3677636 Evaluation of the Efficacy and Tolerance of Gabapentin in the Treatment of Neuropathic Pain
Authors: A. Ibovi Mouondayi, S. Zaher, R. Assadi, K. Erraoui, S. Sboul, J. Daoudim, S. Bousselham, K. Nassar, S. Janani
Abstract:
INTRODUCTION: Neuropathic pain (NP), caused by damage to the somatosensory nervous system, has a significant impact on quality of life and is associated with a high economic burden on the individual and society. The treatment of neuropathic pain relies on a wide range of therapeutic agents, including gabapentin. OBJECTIVE: The objective of this study was to evaluate the efficacy and tolerance of gabapentin in the treatment of neuropathic pain. MATERIAL AND METHOD: This is a monocentric, cross-sectional, descriptive, retrospective study conducted in our department over a period of 19 months, from October 2020 to April 2022. Missing parameters were collected during phone calls to the patients concerned. The diagnostic tool adopted was the DN4 questionnaire in its dialectal Arabic version. The impact of NP was assessed by the visual analog scale (VAS) for pain, sleep, and function. The impact of NP on mood was assessed by the Hospital Anxiety and Depression Scale (HAD) in its validated Arabic version. The exclusion criteria were patients followed for depression or other psychiatric pathologies. RESULTS: Data were collected for a total of 67 patients. The average age was 64 years (+/- 15 years), with extremes ranging from 26 to 94 years. There were 58 women and 9 men, with an M/F sex ratio of 0.15. Cervical radiculopathy was found in 21% of this population, and lumbosacral radiculopathy in 61%. Gabapentin was introduced at doses ranging from 300 to 1800 mg per day, with an average dose of 864 mg (+/- 346) per day, for an average duration of 12.6 months. Before treatment, 93% of patients had non-restorative sleep (VAS>3), and 54% of patients had a pain VAS greater than 5. Function was normal in only 9% of patients. The mean HAD anxiety score was 3.25 (standard deviation: 2.70), and the mean HAD depression score was 3.79 (standard deviation: 1.79). 
After treatment, all patients reported improved sleep quality (p<0.0001). A significant difference was also noted in pain VAS, function, and the anxiety and depression HAD scores. Gabapentin was stopped in case of side effects (dizziness and drowsiness) and/or unsatisfactory response. CONCLUSION: Our data demonstrate a favorable effect of gabapentin on the management of neuropathic pain, with a significant before-after difference in patients' quality of life, associated with an acceptable tolerance profile.Keywords: neuropathic pain, chronic pain, treatment, gabapentin
Procedia PDF Downloads 947635 Reliability-Based Life-Cycle Cost Model for Engineering Systems
Authors: Reza Lotfalian, Sudarshan Martins, Peter Radziszewski
Abstract:
The effect of reliability on the life-cycle cost of a system, including its initial and maintenance costs, is studied. The failure probability of a component is used to calculate the average maintenance cost over the operation cycle of the component. The standard deviation of the life-cycle cost is also calculated, as an error measure for the average life-cycle cost. As a numerical example, the model is used to study the average life-cycle cost of an electric motor.Keywords: initial cost, life-cycle cost, maintenance cost, reliability
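A minimal sketch of this kind of model (the per-cycle Bernoulli failure assumption, the function names, and the numbers below are illustrative, not the authors' formulation):

```python
import math

def life_cycle_cost(initial, maintenance, p_fail, cycles):
    """Expected life-cycle cost and its standard deviation, assuming a
    fixed maintenance cost is incurred whenever the component fails and
    failures in different operation cycles are independent Bernoulli(p).
    Mean = C0 + N*p*Cm; Std = Cm*sqrt(N*p*(1-p)) (binomial failure count)."""
    mean = initial + cycles * p_fail * maintenance
    std = maintenance * math.sqrt(cycles * p_fail * (1 - p_fail))
    return mean, std

# Hypothetical electric-motor example: $500 purchase, $120 per repair,
# 2% failure probability per cycle, 1000 operation cycles.
mean, std = life_cycle_cost(500.0, 120.0, 0.02, 1000)
```

Here the standard deviation plays exactly the role described in the abstract: an error bar on the average life-cycle cost driven by the component's reliability.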
Procedia PDF Downloads 6047634 Hyperspectral Image Classification Using Tree Search Algorithm
Authors: Shreya Pare, Parvin Akhter
Abstract:
Remote sensing image classification becomes a very challenging task owing to the high dimensionality of hyperspectral images. Pixel-wise classification methods fail to exploit the spatial structure information of an image; therefore, to improve classification performance, spatial information can be integrated into the classification process. In this paper, a multilevel thresholding algorithm based on a modified fuzzy entropy (MFE) function is used to perform the segmentation of hyperspectral images. The fuzzy parameters of the MFE function are optimized by a new meta-heuristic based on the tree-search algorithm. The segmented image is then classified by a large distribution machine (LDM) classifier. Experimental results are shown on a hyperspectral image dataset. The experimental outputs indicate that the proposed technique (MFE-TSA-LDM) achieves much higher classification accuracy for hyperspectral images when compared to state-of-the-art classification techniques. The proposed algorithm provides accurate segmentation and classification maps, thus becoming more suitable for image classification with large spatial structures.Keywords: classification, hyperspectral images, large distribution margin, modified fuzzy entropy function, multilevel thresholding, tree search algorithm
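For context, multilevel thresholding can in principle be done by exhaustive search over threshold combinations. The sketch below uses Otsu's between-class variance as the criterion (a common baseline, not the paper's method); the paper instead optimizes a modified fuzzy entropy with a tree-search meta-heuristic precisely because exhaustive search becomes intractable as the number of thresholds grows:

```python
import itertools
import numpy as np

def multilevel_thresholds(pixels, k=2, levels=64):
    """Exhaustive-search multilevel thresholding on a gray-level histogram.
    Baseline sketch only: maximizes Otsu's between-class variance rather
    than the modified fuzzy entropy criterion used in the paper."""
    hist, edges = np.histogram(pixels, bins=levels)
    p = hist / hist.sum()                     # bin probabilities
    mids = (edges[:-1] + edges[1:]) / 2       # bin centers
    mu_total = (p * mids).sum()               # global mean gray level
    best, best_t = -1.0, None
    for t in itertools.combinations(range(1, levels), k):
        cuts = (0,) + t + (levels,)
        var = 0.0                             # between-class variance
        for lo, hi in zip(cuts[:-1], cuts[1:]):
            w = p[lo:hi].sum()
            if w > 0:
                mu = (p[lo:hi] * mids[lo:hi]).sum() / w
                var += w * (mu - mu_total) ** 2
        if var > best:
            best, best_t = var, [edges[i] for i in t]
    return best_t
```

With k thresholds over L histogram bins the search visits C(L-1, k) combinations, which motivates meta-heuristics such as the tree-search algorithm for larger k.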
Procedia PDF Downloads 1777633 Steepest Descent Method with New Step Sizes
Authors: Bib Paruhum Silalahi, Djihad Wungguli, Sugi Guritman
Abstract:
The steepest descent method is a simple gradient method for optimization. It converges slowly toward the optimal solution because of the zigzag form of its steps. Barzilai and Borwein modified the algorithm so that it performs well for problems with large dimensions. The Barzilai and Borwein results have sparked a lot of research on the steepest descent method, including the alternate minimization gradient method and the Yuan method. Inspired by these works, we modified the step size of the steepest descent method. We then compared the modified method against the Barzilai and Borwein method, the alternate minimization gradient method, and the Yuan method on quadratic-function cases, in terms of the number of iterations and the running time. The average results indicate that the steepest descent method with the new step sizes provides good results for small dimensions and is able to compete with the Barzilai and Borwein method and the alternate minimization gradient method for large dimensions. The new step sizes converge faster than the other methods, especially for cases with large dimensions.Keywords: steepest descent, line search, iteration, running time, unconstrained optimization, convergence
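The Barzilai-Borwein modification referred to above can be sketched as follows. This is the generic first BB rule on a small quadratic test problem, not the paper's new step sizes:

```python
import numpy as np

def bb_gradient_descent(grad, x0, iters=100):
    """Gradient descent with the (first) Barzilai-Borwein step size
    alpha_k = s^T s / s^T y, where s = x_k - x_{k-1}, y = g_k - g_{k-1}.
    Generic sketch; the paper's own step-size rules are not shown here."""
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = x_prev - 1e-4 * g_prev            # small plain-gradient first step
    for _ in range(iters):
        g = grad(x)
        s, y = x - x_prev, g - g_prev
        sy = s @ y
        alpha = (s @ s) / sy if sy != 0 else 1e-4
        x_prev, g_prev = x, g
        x = x - alpha * g                 # BB step (no line search)
    return x

# Quadratic test case f(x) = 0.5 x^T A x - b^T x, minimized at A x = b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = bb_gradient_descent(lambda x: A @ x - b, np.zeros(2))
```

On strongly convex quadratics the BB iteration is nonmonotone but converges much faster than a fixed-step steepest descent, which is the behavior the comparisons in the abstract measure.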
Procedia PDF Downloads 5407632 Dosimetric Comparison of Conventional Plans versus Three Dimensional Conformal Simultaneously Integrated Boost Plans
Authors: Shoukat Ali, Amjad Hussain, Latif-ur-Rehman, Sehrish Inam
Abstract:
Radiotherapy plays an important role in the management of cancer patients; approximately 50% of cancer patients receive radiotherapy at one point or another during the course of treatment. Radiotherapy treatment of curative intent is divided into different phases, depending on the histology of the tumor, and established protocols are used to decide the total dose, fraction size, and number of phases. The objective of this study was to evaluate the dosimetric differences between conventional treatment protocols and three-dimensional conformal simultaneously integrated boost (SIB) plans for three different tumor sites (i.e. bladder, breast, and brain). A total of 30 patients with brain, breast and bladder cancers were selected for this retrospective study. All patients were initially CT-simulated. The primary physician contoured PTV1 and PTV2 on the axial slices. The conventional doses prescribed are 60 Gy/30 fractions for brain and breast, and 64.8 Gy/36 fractions for bladder treatment. For the SIB plans, biologically effective doses (BED) were calculated for 25 fractions. Two conventional plans (Phase I and Phase II) and a single SIB plan were generated for each patient on the Eclipse™ treatment planning system. Treatment plans were compared and analyzed for coverage index, conformity index, homogeneity index, dose gradient, and organs-at-risk doses. In both plans, 95% of the PTV volume received a minimum of 95% of the prescribed dose. Dose deviation in the optic chiasm was found to be less than 0.5%. There was no significant difference in lung V20 and heart V30 in the breast plans. In the rectum plans, V75%, V50% and V25% differed by less than 1.2%. Deviations in the tumor coverage, conformity and homogeneity indices were found to be less than 1%. SIB plans with the three-dimensional conformal radiotherapy technique reduce the overall treatment time without compromising target coverage and without increasing dose to the organs at risk. 
The higher dose per fraction may increase late effects to some extent. Further studies are required to evaluate these late effects, with the intention of standardizing the SIB technique for practical implementation.Keywords: coverage index, conformity index, dose gradient, homogeneity index, simultaneously integrated boost
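SIB prescriptions in fewer fractions are typically derived by matching the biologically effective dose of the linear-quadratic model, BED = nd(1 + d/(α/β)). As an illustrative sketch only (the α/β value of 10 Gy and the matching of a 60 Gy/30-fraction prescription are generic assumptions, not the study's actual parameters):

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose under the linear-quadratic model."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

def equivalent_dose_per_fraction(target_bed, n_fractions, alpha_beta):
    """Dose per fraction matching target_bed in n_fractions: the positive
    root of the quadratic (n/ab) d^2 + n d - BED = 0."""
    a, b, c = n_fractions / alpha_beta, n_fractions, -target_bed
    return (-b + (b * b - 4 * a * c) ** 0.5) / (2 * a)

# Hypothetical example: match a conventional 60 Gy / 30 fx prescription
# (2 Gy per fraction, alpha/beta = 10 Gy assumed) in 25 fractions.
target = bed(30, 2.0, alpha_beta=10.0)
d25 = equivalent_dose_per_fraction(target, 25, alpha_beta=10.0)
```

The higher dose per fraction produced by such a conversion is exactly why the abstract flags late effects, which depend on the (smaller) α/β of normal tissues.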
Procedia PDF Downloads 4767631 Viability of EBT3 Film in Small Dimensions to Be Use for in-Vivo Dosimetry in Radiation Therapy
Authors: Abdul Qadir Jangda, Khadija Mariam, Usman Ahmed, Sharib Ahmed
Abstract:
The Gafchromic EBT3 film has the characteristics of high spatial resolution, weak energy dependence, and near tissue equivalence, which make it viable for in-vivo dosimetry in external beam and brachytherapy applications. The aim of this study is to assess the smallest film dimension that may be feasible for use in in-vivo dosimetry. To evaluate viability, film sizes from 3 x 3 mm to 20 x 20 mm were calibrated with 6 MV photon and 6 MeV electron beams. The Gafchromic EBT3 film (Lot no. A05151201, Make: ISP) was cut into five different sizes in order to establish the relationship between absorbed dose and film dimension: 3 x 3, 5 x 5, 10 x 10, 15 x 15, and 20 x 20 mm. The films were irradiated on a Varian Clinac® 2100C linear accelerator over a dose range from 0 to 1000 cGy using a PTW solid water phantom. The irradiation was performed as per the clinical absolute dose rate calibration setup, i.e. 100 cm SAD, 5.0 cm depth, and 10 x 10 cm2 field size for photons, and 100 cm SSD, 1.4 cm depth, and a 15 x 15 cm2 applicator for electrons. The irradiated films were scanned in landscape orientation after a post-development time of at least 48 hours. Film scanning was accomplished using an Epson Expression 10000 XL flatbed scanner, and quantitative analysis was carried out with the ImageJ freeware software. Results show that the dose variation across film dimensions ranging from 3 x 3 mm to 20 x 20 mm is very minimal, with a maximum standard deviation in optical density of 0.0058 for a dose level of 3000 cGy; the standard deviation increases with increasing dose level, so precaution must be taken when using small-dimension films for higher doses. The analysis shows that there is insignificant variation in absorbed dose with a change in the dimension of EBT3 film. 
The study concludes that film dimensions down to 3 x 3 mm can safely be used up to a dose level of 3000 cGy without the need to recalibrate for the particular dimension in use. However, for higher dose levels, one may need to calibrate the films for the particular dimension in use to achieve higher accuracy. It was also noticed that the crystalline structure of the film got damaged at the edges while cutting the film, which can contribute to a wrong dose reading if the region of interest includes the damaged area of the film.Keywords: external beam radiotherapy, film calibration, film dosimetry, in-vivo dosimetry
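Quantitative film analysis of this kind typically converts scanner pixel values to a net optical density before relating them to dose; as a minimal sketch of that standard formula (the pixel values in the example are invented, not data from this study):

```python
import math

def net_optical_density(pv_unexposed, pv_exposed, pv_background=0.0):
    """Net optical density of a scanned radiochromic film,
    netOD = log10((PV_before - PV_bckg) / (PV_after - PV_bckg)).
    A generic film-dosimetry relation; the scanner channel, background
    correction, and calibration curve are study-specific choices."""
    return math.log10((pv_unexposed - pv_background) /
                      (pv_exposed - pv_background))

# Hypothetical 16-bit scanner readings for one region of interest:
net_od = net_optical_density(42000.0, 30000.0)
```

The standard deviation of netOD over the region of interest is then the quantity the abstract reports (0.0058 at 3000 cGy), which is why edge damage inside the region of interest corrupts the dose estimate.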
Procedia PDF Downloads 4947630 Comparison Serum Vitamin D by Geographic between the Highland and Lowland Schizophrenic Patient in the Sumatera Utara
Authors: Novita Linda Akbar, Elmeida Effendy, Mustafa M. Amin
Abstract:
Background: The most common of the psychotic disorders is schizophrenia. Vitamin D is made in the skin from the UVB radiation in sunlight, and vitamin D deficiency is common in people with severe mental illness such as schizophrenia. Schizophrenia is a chronic mental illness characterised by positive and negative symptoms, such as hallucinations and delusions, flat affect, and lack of motivation. Patients with schizophrenia may present several environmental risk factors for the disorder, such as season of birth, latitude, and climate, which have been linked to vitamin D deficiency. There is also a relationship between the risk of schizophrenia and latitude, with an increased incidence rate of schizophrenia seen at higher latitudes. Methods: This was an analytical study, conducted in BLUD RS Jiwa Propinsi Sumatera Utara and RSUD Deli Serdang, starting in May 2016 and ending in June 2016, with a study sample of 60 subjects (20 patients living in the highlands, 20 in the lowlands, and 20 healthy controls). Inclusion criteria were schizophrenic patients, both men and women, aged between 18 and 60 years, in the acute phase without agitation or off antipsychotic drugs for two weeks, living in the highlands or lowlands, and willing to participate in this study. Exclusion criteria were a history of other psychotic disorders, comorbidity with another general medical condition, and a history of substance abuse. Serum vitamin D was measured using the ELFA method. Statistical analysis used an independent t-test for numeric comparisons. Results: The results showed that the average vitamin D level in the group of subjects living in the highland areas was 227.6 ng/mL with a standard deviation of 86.78 ng/mL; the lowest vitamin D level was 138 ng/mL and the highest 482 ng/mL. In the group of subjects living in the lowlands, the mean vitamin D level was higher than in the mountainous area, with an average of 237.8 ng/mL and a standard deviation of 100.16 ng/mL. 
Vitamin D levels in this group ranged from 138 to 585 ng/mL. Conclusion and Suggestion: The analysis using the Mann-Whitney test showed no significant difference between the mean vitamin D levels according to the subjects' place of residence (p = 0.652).Keywords: latitude, schizophrenia, Vitamin D, Sumatera Utara
Procedia PDF Downloads 2547629 Factors Influencing Milk Yield, Quality, and Revenue of Dairy Farms in Southern Vietnam
Authors: Ngoc-Hieu Vu
Abstract:
Dairy production in Vietnam is a relatively new agricultural activity, and milk production has increased remarkably in recent years. Smallholders are still the main drivers of this development, especially in the southern part of the country. However, information on farming practices is very limited. Therefore, this study aimed to determine factors influencing milk yield and quality (milk fat, total solids, solids-not-fat, total number of bacteria, and somatic cell count) and the revenue of dairy farms in Southern Vietnam. Data were collected at the farm level; individual animal records were unavailable. The 539 studied farms were located in the provinces of Lam Dong (N=111 farms), Binh Duong (N=69 farms), Long An (N=174 farms), and Ho Chi Minh City (N=185 farms). The dataset included 9221 monthly test-day records of the farms from January 2013 to May 2015. Seasons were defined as rainy and dry. Farm sizes were classified as small (< 10 milking cows), medium (10 to 19 milking cows) and large (≥ 20 milking cows). The model for each trait contained year-season and farm region-farm size as subclass fixed effects, and individual farm and residual as random effects. Results showed that year-season, region, and farm size were determining sources of variation affecting all studied traits. Milk yield was higher in dry than in rainy seasons (P < 0.05), and it tended to increase from 2013 to 2015. Large farms had higher yields (445.6 kg/cow) than small (396.7 kg/cow) and medium (428.0 kg/cow) farms (P < 0.05). Small farms, in contrast, were superior to large farms in terms of milk fat, total solids, solids-not-fat, total number of bacteria, and somatic cell count (P < 0.05). Revenue per cow was higher in large farms than in medium and small farms. In conclusion, large farms achieved higher milk yields and revenues per cow, while small farms were superior in milk quality. 
Overall, milk yields were low and better training, financial support and marketing opportunities for farmers are needed to improve dairy production and increase farm revenues in Southern Vietnam.Keywords: farm size, milk yield and quality, season, Southern Vietnam
Procedia PDF Downloads 3617628 An Analysis of Privacy and Security for Internet of Things Applications
Authors: Dhananjay Singh, M. Abdullah-Al-Wadud
Abstract:
The Internet of Things is a concept of a large-scale ecosystem of wireless actuators. The actuators are defined as the things in the IoT: those which contribute or produce data in the ecosystem. However, ubiquitous data collection, data security, privacy preservation, large-volume data processing, and intelligent analytics are some of the key challenges for IoT technologies. In order to address the security requirements, challenges, and threats in the IoT, we discuss a message authentication mechanism for IoT applications. Finally, we discuss a data encryption mechanism for message authentication before propagation into IoT networks.Keywords: Internet of Things (IoT), message authentication, privacy, security
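The abstract does not specify the authentication scheme used; as a minimal, hedged sketch, a shared-key HMAC (one standard message-authentication mechanism, not necessarily the authors' design) could look like:

```python
import hmac
import hashlib

def tag_message(key: bytes, payload: bytes) -> bytes:
    """Append an HMAC-SHA256 authentication tag to a device payload."""
    return payload + hmac.new(key, payload, hashlib.sha256).digest()

def verify_message(key: bytes, message: bytes) -> bytes:
    """Check the 32-byte tag in constant time; raise on tampering."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return payload
```

A receiver that verifies the tag before acting on the payload gets integrity and origin authentication with only a shared symmetric key, a common fit for constrained IoT devices; confidentiality would additionally require encryption, as the abstract notes.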
Procedia PDF Downloads 382