Search results for: sensor node data processing
26966 Photo Electrical Response in Graphene Based Resistive Sensor
Authors: H. C. Woo, F. Bouanis, C. S. Cojocaur
Abstract:
Graphene, which consists of a single layer of carbon atoms in a honeycomb lattice, is an interesting potential optoelectronic material because of its high carrier mobility, zero bandgap, and electron–hole symmetry. Graphene can absorb light and convert it into a photocurrent over a wide range of the electromagnetic spectrum, from the ultraviolet to the visible and infrared regimes. Over the last several years, a variety of graphene-based photodetectors have been reported, such as graphene transistors, graphene-semiconductor heterojunction photodetectors, and graphene-based bolometers. Several physical mechanisms enabling photodetection have also been reported: the photovoltaic effect, the photo-thermoelectric effect, the bolometric effect, the photogating effect, and so on. In this work, we report a simple approach to the realization of graphene-based resistive photodetection devices and the measurement of their photoelectrical response. The graphene was synthesized directly on a glass substrate by a novel growth method patented in our lab. Metal electrodes were then deposited on it by thermal evaporation, with an electrode length and width of 1.5 mm and 300 μm respectively, using Co, to fabricate a simple graphene-based resistive photosensor. The measurements show that the graphene resistive devices exhibit a photoresponse to illumination with visible light. The observed resistance response was reproducible and similar after many cycles of on and off operation. This photoelectrical response may be attributed not only to the direct photocurrent process but also to the desorption of oxygen. Our work shows that simple graphene resistive devices have potential in photodetection applications.
Keywords: graphene, resistive sensor, optoelectronics, photoresponse
Procedia PDF Downloads 286
26965 Systematic Literature Review of Therapeutic Use of Autonomous Sensory Meridian Response (ASMR) and Short-Term ASMR Auditory Training Trial
Authors: Christine H. Cubelo
Abstract:
This study consists of two parts: a systematic review of current publications on the therapeutic use of autonomous sensory meridian response (ASMR) and a within-subjects auditory training trial using ASMR videos. The main intent is to explore ASMR as potentially therapeutically beneficial for those with atypical sensory processing. Many hearing-related disorders and mood or anxiety symptoms overlap with symptoms of sensory processing issues. For this reason, the inclusion and exclusion criteria of the systematic review were generated in an effort to produce optimal search outcomes and avoid overly confined criteria that would limit the yielded results. The criteria for inclusion in the review for Part 1 are (1) adult participants diagnosed with hearing loss or atypical sensory processing, (2) inclusion of measures related to ASMR as a treatment method, and (3) publication between 2000 and 2022. A total of 1,088 publications were found in the preliminary search, and 13 articles met the inclusion criteria. A total of 14 participants completed the trial and post-trial questionnaire. Of all responses, 64.29% agreed that the duration of the auditory training sessions was reasonable. In addition, 71.43% agreed that the training improved their perception of music. Lastly, 64.29% agreed that the training improved their perception of a primary talker when other talkers or background noises are present.
Keywords: autonomous sensory meridian response, auditory training, atypical sensory processing, hearing loss, hearing aids
Procedia PDF Downloads 55
26964 Adaptive Anchor Weighting for Improved Localization with Levenberg-Marquardt Optimization
Authors: Basak Can
Abstract:
This paper introduces an iterative and weighted localization method that utilizes a unique cost function formulation to significantly enhance the performance of positioning systems. The system employs locators, such as Gateways (GWs), to estimate and track the position of an End Node (EN). Performance is evaluated relative to the number of locators, whose known locations are determined through calibration. The performance evaluation is presented using low-cost, single-antenna Bluetooth Low Energy (BLE) devices. The proposed approach can be applied to alternative Internet of Things (IoT) modulation schemes, as well as Ultra WideBand (UWB) or millimeter-wave (mmWave) based devices. In non-line-of-sight (NLOS) scenarios, using four or eight locators yields a 95th-percentile localization performance of 2.2 meters and 1.5 meters, respectively, in a 4,305-square-foot indoor area with BLE 5.1 devices. This method outperforms conventional RSSI-based techniques, achieving a 51% improvement with four locators and a 52% improvement with eight locators. Future work involves modeling interference impact and implementing data curation across multiple channels to mitigate such effects.
Keywords: lateration, least squares, Levenberg-Marquardt algorithm, localization, path-loss, RMS error, RSSI, sensors, shadow fading, weighted localization
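The weighted, iterative lateration described in the abstract can be sketched with a minimal Levenberg-Marquardt range solver. The cost function, weights, and geometry below are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def cost(x, anchors, dists, weights):
    """Weighted sum of squared range residuals at candidate position x."""
    return np.sum(weights * (np.linalg.norm(x - anchors, axis=1) - dists) ** 2)

def lm_localize(anchors, dists, weights, x0, n_iter=100, lam=1e-3):
    """Estimate the 2-D End Node position from ranges to known locators
    using damped (Levenberg-Marquardt) least squares."""
    x = np.asarray(x0, dtype=float)
    anchors = np.asarray(anchors, dtype=float)
    for _ in range(n_iter):
        diff = x - anchors
        est = np.linalg.norm(diff, axis=1)           # predicted ranges
        r = est - dists                              # range residuals
        J = diff / est[:, None]                      # Jacobian d(range)/dx
        JW = J * weights[:, None]
        A = JW.T @ J + lam * np.eye(2)               # damped normal matrix
        step = np.linalg.solve(A, JW.T @ r)
        x_new = x - step
        if cost(x_new, anchors, dists, weights) < cost(x, anchors, dists, weights):
            x, lam = x_new, lam * 0.5                # accept step, relax damping
        else:
            lam *= 2.0                               # reject step, raise damping
    return x
```

In practice the per-anchor weights would be derived from link quality (for example RSSI variance), so that unreliable NLOS locators contribute less to the fit.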
Procedia PDF Downloads 25
26963 Optimized Preprocessing for Accurate and Efficient Bioassay Prediction with Machine Learning Algorithms
Authors: Jeff Clarine, Chang-Shyh Peng, Daisy Sang
Abstract:
Bioassay is the measurement of the potency of a chemical substance by its effect on living animal or plant tissue. Bioassay data and chemical structures from pharmacokinetic and drug metabolism screening are mined from, and housed in, multiple databases. Bioassay predictions are then calculated to determine further advancement. This paper proposes a four-step preprocessing of datasets for improving bioassay predictions. The first step is instance selection, in which the dataset is categorized into training, testing, and validation sets. The second step is discretization, which partitions the data in consideration of accuracy vs. precision. The third step is normalization, where data are normalized between 0 and 1 for subsequent machine learning processing. The fourth step is feature selection, where key chemical properties and attributes are generated. The streamlined results are then analyzed for the prediction of effectiveness by various machine learning tools, including Pipeline Pilot, R, Weka, and Excel. Experiments and evaluations reveal the effectiveness of various combinations of preprocessing steps and machine learning algorithms in more consistent and accurate prediction.
Keywords: bioassay, machine learning, preprocessing, virtual screen
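The four preprocessing steps can be sketched as follows. The split ratios, bin counts, and variance-based feature ranking are illustrative assumptions; the paper leaves these choices to the tooling:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(loc=5.0, scale=2.0, size=(100, 6))   # stand-in bioassay features

# Step 1: instance selection -- split into training / testing / validation sets
n = len(X)
train = X[: int(0.6 * n)]
test = X[int(0.6 * n): int(0.8 * n)]
valid = X[int(0.8 * n):]

# Step 2: discretization -- equal-width binning trades precision for robustness
def discretize(data, bins=10):
    out = np.empty_like(data, dtype=int)
    for j in range(data.shape[1]):
        edges = np.linspace(data[:, j].min(), data[:, j].max(), bins + 1)
        out[:, j] = np.clip(np.digitize(data[:, j], edges[1:-1]), 0, bins - 1)
    return out

# Step 3: normalization -- rescale every feature to the [0, 1] interval
def normalize(data):
    lo, hi = data.min(axis=0), data.max(axis=0)
    return (data - lo) / (hi - lo)

# Step 4: feature selection -- keep the k most informative columns
# (here ranked by variance, as a stand-in for chemical-property relevance)
def select_features(data, k=3):
    keep = np.argsort(data.var(axis=0))[::-1][:k]
    return data[:, keep], keep

Xn = normalize(train)
Xs, kept = select_features(Xn, k=3)
```

The streamlined matrix `Xs` is what would then be handed to the downstream learners.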
Procedia PDF Downloads 274
26962 Asynchronous Low Duty Cycle Media Access Control Protocol for Body Area Wireless Sensor Networks
Authors: Yasin Ghasemi-Zadeh, Yousef Kavian
Abstract:
Wireless body area network (WBAN) technology has gained considerable popularity over the last decade, with a wide range of medical applications. This paper presents an asynchronous media access control (MAC) protocol based on the B-MAC protocol, with an application to medical scenarios. WBAN applications face serious problems such as energy, latency, link reliability (quality of the wireless link), and throughput, which are mainly due to the size of the sensor networks and the specifications of the human body. To overcome these problems and improve link reliability, we concentrated on a MAC layer that supports mobility models for medical applications. In the presented protocol, preamble frames are divided into sub-frames considering a threshold level. The main reason for creating shorter preambles is link reliability: due to factors such as water, body signals are affected on some frequency bands, causing fading and shadowing; by increasing the link reliability, these effects are reduced. For the mobility model, we use the MoBAN model and modify it to cover additional areas. The presented asynchronous MAC protocol is modeled in the OMNeT++ simulator. The results demonstrate improved link reliability compared to the B-MAC protocol, with a packet reception ratio (PRR) of 92%, while covering more mobility areas than the MoBAN model.
Keywords: wireless body area networks (WBANs), MAC protocol, link reliability, mobility, biomedical
Procedia PDF Downloads 369
26961 Robustness of MIMO-OFDM Schemes for Future Digital TV to Carrier Frequency Offset
Authors: D. Sankara Reddy, T. Kranthi Kumar, K. Sreevani
Abstract:
This paper investigates the impact of carrier frequency offset (CFO) on the performance of different MIMO-OFDM schemes with high spectral efficiency for the next generation of terrestrial digital TV. We show that all studied MIMO-OFDM schemes are sensitive to CFO when it is greater than 1% of the intercarrier spacing. We also show that the Alamouti scheme is the MIMO scheme most sensitive to CFO.
Keywords: modulation and multiplexing (MIMO-OFDM), signal processing for transmission, carrier frequency offset, future digital TV, imaging and signal processing
Procedia PDF Downloads 487
26960 Iris Cancer Detection System Using Image Processing and Neural Classifier
Authors: Abdulkader Helwan
Abstract:
Iris cancer, also called intraocular melanoma, is a cancer that starts in the iris, the colored part of the eye that surrounds the pupil. There is a need for an accurate and cost-effective iris cancer detection system, since the techniques currently available are still not efficient. The combination of image processing and artificial neural networks is highly effective for the diagnosis and detection of iris cancer. Image processing techniques improve the diagnosis by enhancing the quality of the images, so that physicians can diagnose properly, while neural networks help in deciding whether the eye is cancerous or not. This paper aims to develop an intelligent system that simulates human visual detection of intraocular melanoma, the so-called iris cancer. The suggested system combines both image processing techniques and neural networks. The images are first converted to grayscale, filtered, and then segmented using the Prewitt edge detection algorithm to detect the iris, the sclera circles, and the cancer. Principal component analysis is used to reduce the image size and to extract features. These features are then used as inputs to a neural network that is capable of deciding whether the eye is cancerous or not, based on the experience accumulated over many training iterations with different normal and abnormal eye images during the training phase. Normal images are obtained from a public database available on the internet, “Mile Research”, while the abnormal ones are obtained from another database, “eyecancer”. The experimental results for the proposed system show a high accuracy of 100% in detecting cancer and making the right decision.
Keywords: iris cancer, intraocular melanoma, cancerous, Prewitt edge detection algorithm, sclera
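As an illustration of the edge-detection stage, a plain-NumPy Prewitt gradient can be sketched as follows. This is a generic implementation of the Prewitt operator, not the authors' code:

```python
import numpy as np

# Prewitt kernels: horizontal and vertical intensity-change detectors
KX = np.array([[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]], dtype=float)
KY = KX.T

def prewitt_edges(img):
    """Gradient-magnitude map of a 2-D grayscale image via the Prewitt operator."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")        # replicate border pixels
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(window * KX)      # horizontal gradient
            gy[i, j] = np.sum(window * KY)      # vertical gradient
    return np.hypot(gx, gy)                     # edge strength
```

Thresholding the returned magnitude map yields edge contours from which circular structures such as the iris and sclera boundaries can then be located.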
Procedia PDF Downloads 503
26959 Waste Derived from Refinery and Petrochemical Plants Activities: Processing of Oil Sludge through Thermal Desorption
Authors: Anna Bohers, Emília Hroncová, Juraj Ladomerský
Abstract:
Oil sludge, whose main characteristic is high acidity, is a waste product generated from the operation of refinery and petrochemical plants. A former refinery and petrochemical plant, Petrochema Dubová, is present in Slovakia as well. Its activity was to process crude oil through sulfonation and adsorption technology for the production of lubricating and special oils, synthetic detergents, and special white oils for cosmetic and medical purposes. Seventy years ago, when this historical acid-sludge burden was created, production took precedence over environmental awareness. That is the reason why, as in many countries, a historical environmental burden is still present in Slovakia: 229,211 m3 of oil sludge in the middle of the National Park of the Nízke Tatry mountain chain. None of the treatment methods tried, biological or non-biological, proved suitable for processing or recovery, owing to various factors: strong aggressivity, difficulty of handling because of its sludgy and liquid state, and the like. Incineration was also tested as a potential solution, but it did not prove suitable either, as the concentration of SO2 in the combustion gases was too high and could not be decreased below the acceptable value of 2000 mg.mn-3. For this reason the operation of the incineration plant was terminated, and the acid sludge landfills remain to this day. The objective of this paper is to present a new possibility for processing and valorizing acid sludgy waste. The processing of oil sludge was performed through an effective separation, thermal desorption technology, by which it is possible to split the sludgy material into the matrix (soil, sediments) and the organic contaminants.
In order to boost the efficiency of processing acid sludge through thermal desorption, the work will present the possibility of applying an original technology, the Method of Blowing Decomposition, for recovering organic matter as technological lubricating oil.
Keywords: hazardous waste, oil sludge, remediation, thermal desorption
Procedia PDF Downloads 200
26958 AI Software Algorithms for Drivers Monitoring within Vehicles Traffic - SiaMOTO
Authors: Ioan Corneliu Salisteanu, Valentin Dogaru Ulieru, Mihaita Nicolae Ardeleanu, Alin Pohoata, Bogdan Salisteanu, Stefan Broscareanu
Abstract:
Creating a personalized statistic for an individual within the population using IT systems, based on the searches and intercepted spheres of interest they manifest, is just one 'atom' of the artificial intelligence analysis network. However, the ability to generate statistics based on individual data intercepted from large demographic areas leads to reasoning like that issued by a human mind with global strategic ambitions. The DiaMOTO device is a technical sensory system that allows the interception of car events caused by a driver, positioning them in time and space. The device's connection to the vehicle creates a source of data whose analysis can build psychological and behavioural profiles of the drivers involved. The SiaMOTO system collects data from many vehicles equipped with DiaMOTO, driven by many different drivers, each with a unique fingerprint in their approach to driving. In this paper, we aimed to explain the software infrastructure of the SiaMOTO system, a system designed to monitor and improve driving behaviour, as well as the criteria and algorithms underlying the intelligent analysis process.
Keywords: artificial intelligence, data processing, driver behaviour, driver monitoring, SiaMOTO
Procedia PDF Downloads 91
26957 Image Segmentation Techniques: Review
Authors: Lindani Mbatha, Suvendi Rimer, Mpho Gololo
Abstract:
Image segmentation is the process of dividing an image into several sections, such as the background and the objects in the foreground. It is a critical technique in both image-processing tasks and computer vision. Most image segmentation algorithms have been developed for gray-scale images, and little research and few algorithms have addressed color images. Most image segmentation algorithms or techniques vary based on the input data and the application. Nearly all of the techniques are unsuitable for noisy environments. Much of the work that has been done uses the Markov Random Field (MRF), which involves heavy computation but is said to be robust to noise. In recent years, image segmentation has been applied to problems such as easy processing of an image, interpretation of the contents of an image, and easy analysis of an image. This article reviews and summarizes some of the image segmentation techniques and algorithms that have been developed over the past years. The techniques include convolutional neural networks (CNNs), edge-based techniques, region growing, clustering, thresholding techniques, and so on. The advantages and disadvantages of medical ultrasound image segmentation techniques are also discussed. The article also addresses the applications of image segmentation and potential future developments. This review concludes that no single technique is perfectly suitable for segmenting all the different types of images, but the use of hybrid techniques yields more accurate and efficient results.
Keywords: clustering-based, convolution-network, edge-based, region-growing
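Of the thresholding techniques such a review typically covers, Otsu's method is a representative example: it picks the gray level that maximizes the between-class variance of the resulting background/foreground split. A compact sketch, offered here as a generic illustration rather than any reviewed author's code:

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Return the gray level that maximizes between-class variance."""
    hist = np.bincount(np.asarray(img).ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                     # gray-level probabilities
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w0, w1 = p[:t].sum(), p[t:].sum()     # class weights
        if w0 == 0.0 or w1 == 0.0:
            continue                          # one class empty: skip
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, levels) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2      # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

Segmenting is then a single comparison, `img >= t`, which is exactly why thresholding is cheap but fragile in noisy environments.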
Procedia PDF Downloads 97
26956 Detecting Indigenous Languages: A System for Maya Text Profiling and Machine Learning Classification Techniques
Authors: Alejandro Molina-Villegas, Silvia Fernández-Sabido, Eduardo Mendoza-Vargas, Fátima Miranda-Pestaña
Abstract:
The automatic detection of indigenous languages in digital texts is essential to promote their inclusion in digital media. Underrepresented languages, such as Maya, are often excluded from language detection tools like Google’s language-detection library, LANGDETECT. This study addresses these limitations by developing a hybrid language detection solution that accurately distinguishes Maya (YUA) from Spanish (ES). Two strategies are employed: the first focuses on creating a profile for the Maya language within the LANGDETECT library, while the second involves training a Naive Bayes classification model with two categories, YUA and ES. The process includes comprehensive data preprocessing steps, such as cleaning, normalization, tokenization, and n-gram counting, applied to text samples collected from various sources, including articles from La Jornada Maya, a major newspaper in Mexico and the only media outlet that includes a Maya section. After the training phase, a portion of the data is used to create the YUA profile within LANGDETECT, which achieves an accuracy rate above 95% in identifying the Maya language during testing. Additionally, the Naive Bayes classifier, trained and tested on the same database, achieves an accuracy close to 98% in distinguishing between Maya and Spanish, with further validation through F1 score, recall, and logarithmic scoring, without signs of overfitting. This strategy, which combines the LANGDETECT profile with a Naive Bayes model, highlights an adaptable framework that can be extended to other underrepresented languages in future research. This fills a gap in Natural Language Processing and supports the preservation and revitalization of these languages.
Keywords: indigenous languages, language detection, Maya language, Naive Bayes classifier, natural language processing, low-resource languages
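The Naive Bayes strategy over character n-grams can be sketched in a few lines of pure Python. The toy training phrases below are illustrative stand-ins, not samples from the La Jornada Maya corpus:

```python
import math
from collections import Counter

def char_ngrams(text, n=3):
    """Character n-grams with boundary padding, the usual language-ID features."""
    text = f" {text.lower()} "
    return [text[i:i + n] for i in range(len(text) - n + 1)]

class NgramNaiveBayes:
    """Multinomial Naive Bayes over character trigrams with Laplace smoothing."""

    def fit(self, texts, labels):
        self.counts = {}
        self.priors = Counter(labels)
        for text, label in zip(texts, labels):
            self.counts.setdefault(label, Counter()).update(char_ngrams(text))
        self.vocab = set().union(*self.counts.values())
        return self

    def predict(self, text):
        def log_posterior(label):
            total = sum(self.counts[label].values())
            score = math.log(self.priors[label])
            for gram in char_ngrams(text):
                # Laplace-smoothed trigram likelihood
                score += math.log((self.counts[label][gram] + 1)
                                  / (total + len(self.vocab)))
            return score
        return max(self.counts, key=log_posterior)

clf = NgramNaiveBayes().fit(
    ["ba'ax ka wa'alik teech", "bix a beel in k'aaba'e",          # toy YUA samples
     "hola buenos dias a todos", "como estas tu el dia de hoy"],  # toy ES samples
    ["YUA", "YUA", "ES", "ES"],
)
```

Character trigrams work well here because orthographic cues such as the Maya glottal-stop apostrophe rarely occur in Spanish text.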
Procedia PDF Downloads 16
26955 A Stepwise Approach for Piezoresistive Microcantilever Biosensor Optimization
Authors: Amal E. Ahmed, Levent Trabzon
Abstract:
Due to the low concentration of analytes in biological samples, the use of Biological Microelectromechanical System (Bio-MEMS) biosensors for biomolecule detection results in a minuscule output signal that is not good enough for practical applications. In response to this, a need has arisen for an optimized biosensor capable of giving a high output signal in response to the detection of a few analytes in the sample; the ultimate goal is being able to convert the attachment of a single biomolecule into a measurable quantity. For this purpose, MEMS microcantilever-based biosensors emerged as a promising sensing solution because they are simple, cheap, very sensitive and, more importantly, do not need optical labeling of analytes (label-free). Among the different microcantilever transducing techniques, piezoresistive microcantilever biosensors became more prominent because they work well in liquid environments and have an integrated readout system. However, the design of piezoresistive microcantilevers is not a straightforward problem, due to the coupling between design parameters, constraints, process conditions, and performance. The parameters that can be optimized to enhance the sensitivity of piezoresistive microcantilever-based sensors are: cantilever dimensions, cantilever material, cantilever shape, piezoresistor material, piezoresistor doping level, piezoresistor dimensions, piezoresistor position, and the shape and position of the Stress Concentration Region (SCR). After a systematic analysis of the effect of each design and process parameter on the sensitivity, a step-wise optimization approach was developed in which almost all of these parameters were varied one at each step while fixing the others, to obtain the maximum possible sensitivity at the end. At each step, the goal was to optimize the parameter in a way that maximizes and concentrates the stress in the piezoresistor region for the same applied force, and thus achieve higher sensitivity.
Using this approach, an optimized sensor was obtained that has 73.5 times higher electrical sensitivity (ΔR⁄R) than the starting sensor. In addition, this piezoresistive microcantilever biosensor is more sensitive than similar sensors previously reported in the open literature. The mechanical sensitivity of the final sensor is -1.5×10⁻⁸ (Ω/Ω)⁄pN, which means that for each 1 pN (≈10⁻¹⁰ g) of biomolecules attached to this biosensor, the piezoresistor resistance decreases by a relative 1.5×10⁻⁸. Throughout this work, COMSOL Multiphysics 5.0, a commercial Finite Element Analysis (FEA) tool, was used to simulate the sensor performance.
Keywords: biosensor, microcantilever, piezoresistive, stress concentration region (SCR)
Procedia PDF Downloads 571
26954 Assessing of Social Comfort of the Russian Population with Big Data
Authors: Marina Shakleina, Konstantin Shaklein, Stanislav Yakiro
Abstract:
The digitalization of modern human life over the last decade has facilitated the acquisition, storage, and processing of data, which are used to detect changes in consumer preferences and to improve the internal efficiency of the production process. This emerging trend has attracted academic interest in the use of big data in research. The study focuses on modeling the social comfort of the Russian population for the period 2010-2021 using big data. Big data provides enormous opportunities for understanding human interactions at the scale of society, with plenty of spatial and temporal dynamics. One of the most popular big data sources is Google Trends. The methodology for assessing social comfort using big data involves several steps: 1. 574 words were selected based on the Harvard IV-4 Dictionary, adjusted to fit the reality of everyday Russian life. The set of keywords was further cleansed by excluding queries consisting of verbs and words with several lexical meanings. 2. Search queries were processed to ensure comparability of results: transformation of the data to a 10-point scale, elimination of popularity peaks, detrending, and deseasoning. The proposed methodology for keyword search and Google Trends processing was implemented as a script in the Python programming language. 3. Block and summary integral indicators of social comfort were constructed using the first modified principal component, whose loadings yield the weighting coefficients of the block components. According to the study, social comfort is described by 12 blocks: ‘health’, ‘education’, ‘social support’, ‘financial situation’, ‘employment’, ‘housing’, ‘ethical norms’, ‘security’, ‘political stability’, ‘leisure’, ‘environment’, and ‘infrastructure’. According to the model, the summary integral indicator increased by 54%, to 4.631 points; the average annual rate was 3.6%, which exceeds the rate of economic growth by 2.7 p.p.
The value of the indicator describing social comfort in Russia is determined 26% by ‘social support’, 24% by ‘education’, 12% by ‘infrastructure’, 10% by ‘leisure’, and the remaining 28% by the other blocks. Among the 25% most popular searches, 85% are negative in nature and are mainly related to the blocks ‘security’, ‘political stability’, and ‘health’, for example, ‘crime rate’ and ‘vulnerability’. Among the 25% least popular queries, 99% were positive and mostly related to the blocks ‘ethical norms’, ‘education’, and ‘employment’, for example, ‘social package’ and ‘recycling’. In conclusion, the introduction of the latent category ‘social comfort’ into the scientific vocabulary deepens the theory of the quality of life of the population by studying the involvement of the individual in society and expanding the subjective aspect of the measurement of various indicators. An integral assessment of social comfort demonstrates the overall picture of the development of the phenomenon over time and space and quantitatively evaluates ongoing socio-economic policy. The application of big data to the assessment of latent categories gives stable results, which opens up possibilities for their practical implementation.
Keywords: big data, Google trends, integral indicator, social comfort
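Step 2 of the methodology, making Google-Trends-style series comparable, can be sketched as follows. The clipping quantile and the 12-month seasonality are illustrative assumptions, not the authors' published parameters:

```python
import numpy as np

def preprocess_trends(series, period=12, clip_q=0.95):
    """Rescale a search-query series to a 10-point scale, clip popularity
    peaks, then remove the linear trend and the seasonal means."""
    x = np.asarray(series, dtype=float)
    x = np.minimum(x, np.quantile(x, clip_q))        # eliminate popularity peaks
    x = 10.0 * (x - x.min()) / (x.max() - x.min())   # transform to a 10-point scale
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    x = x - (slope * t + intercept)                  # detrending
    seasonal = np.array([x[phase::period].mean() for phase in range(period)])
    return x - seasonal[t % period]                  # deseasoning
```

Each cleansed series could then feed the principal-component step that produces the weights of the 12 blocks.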
Procedia PDF Downloads 200
26953 Q-Map: Clinical Concept Mining from Clinical Documents
Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala
Abstract:
Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis, image analytics, etc. Most of the data in the field are well structured and available in numerical or categorical formats that can be used for experiments directly. But at the opposite end of the spectrum, there exists a wide expanse of data that is intractable for direct analysis owing to its unstructured nature; it is found in the form of discharge summaries, clinical notes, and procedural notes, which are in human-written narrative format and have neither a relational model nor any standard grammatical structure. An important step in utilizing these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data using information retrieval and data mining techniques. To address this problem, the authors present Q-Map, a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique based on a string matching algorithm indexed on curated knowledge sources, which is both fast and configurable. The authors also briefly examine its comparative performance with MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.
Keywords: information retrieval, unified medical language system, syntax based analysis, natural language processing, medical informatics
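The abstract does not publish Q-Map's internals, but a dictionary-backed string matcher indexed on a curated knowledge source can be sketched like this. The concept phrases and codes below are placeholders, not real UMLS identifiers:

```python
import re

def build_index(concepts):
    """Index curated concept phrases by their first token for fast candidate lookup."""
    index = {}
    for phrase, code in concepts.items():
        tokens = phrase.lower().split()
        index.setdefault(tokens[0], []).append((tokens, code))
    return index

def match_concepts(text, index):
    """Scan a free-text clinical narrative and return (phrase, code) hits."""
    tokens = re.findall(r"[a-z']+", text.lower())
    hits = []
    for i, token in enumerate(tokens):
        # Only phrases sharing the first token are checked, keeping matching fast
        for phrase_tokens, code in index.get(token, []):
            if tokens[i:i + len(phrase_tokens)] == phrase_tokens:
                hits.append((" ".join(phrase_tokens), code))
    return hits

# Toy curated knowledge source (codes are hypothetical placeholders)
index = build_index({
    "myocardial infarction": "C-0001",
    "aspirin": "C-0002",
})
```

The first-token index is what makes such matching both fast and configurable: swapping in a different curated vocabulary changes the behaviour without touching the scan loop.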
Procedia PDF Downloads 133
26952 Using Non-Negative Matrix Factorization Based on Satellite Imagery for the Collection of Agricultural Statistics
Authors: Benyelles Zakaria, Yousfi Djaafar, Karoui Moussa Sofiane
Abstract:
Agriculture is fundamental and remains an important objective in the Algerian economy. Based on traditional techniques and structures, it generally serves consumption. The collection of agricultural statistics in Algeria is done using traditional methods, which consist of investigating land use through surveys and field investigations. These statistics suffer from problems such as poor data quality, the long delay between collection and final availability, and high cost compared to their limited use. The objective of this work is to develop a processing chain for a reliable inventory of agricultural land by trying to develop and implement a new method of extracting information. Indeed, this methodology allowed us to combine remote sensing data and field data to collect statistics on the areas of different land types. The contribution of remote sensing to the improvement of agricultural statistics, in terms of area, has been studied in the wilaya of Sidi Bel Abbes. It is in this context that we applied a method for extracting information from satellite images. This method, non-negative matrix factorization, does not consider the pixel as a single entity, but looks for the components within the pixel itself. The results obtained by applying NMF were compared with field data and with the results obtained by the maximum likelihood method. We observed close agreement between the most important NMF results and the field data. We believe that this method of extracting information from satellite data leads to interesting results for different types of land use.
Keywords: blind source separation, hyper-spectral image, non-negative matrix factorization, remote sensing
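The per-pixel decomposition that NMF performs can be sketched with the classic Lee-Seung multiplicative updates. This is a generic implementation; the paper's exact variant and initialization are not specified:

```python
import numpy as np

def nmf(V, r, n_iter=2000, seed=0, eps=1e-9):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F^2.

    V : (pixels, bands) non-negative hyperspectral data matrix
    r : number of component signatures to look for inside each pixel
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps                  # abundance-like factor
    H = rng.random((r, n)) + eps                  # signature-like factor
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)      # update H, stays non-negative
        W *= (V @ H.T) / (W @ H @ H.T + eps)      # update W, stays non-negative
    return W, H
```

Because both factors stay non-negative, each row of `W` can be read as the mixing proportions of the `r` land-cover components within one pixel, which is exactly the "components within the pixel" view the abstract describes.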
Procedia PDF Downloads 423
26951 Compact Optical Sensors for Harsh Environments
Authors: Branislav Timotijevic, Yves Petremand, Markus Luetzelschwab, Dara Bayat, Laurent Aebi
Abstract:
Miniaturized optical sensors with remote readout are required for monitoring in harsh electromagnetic environments. For example, in turbo and hydro generators, excessively high vibrations of the end-windings can lead to dramatic damage, imposing very high additional service costs. A significant change in generator temperature can also be an indicator of system failure. Continuous monitoring of vibrations, temperature, humidity, and gases is therefore mandatory. The high electromagnetic fields in the generators impose the use of non-conductive devices in order to prevent electromagnetic interference and to electrically isolate the sensing element from the electronic readout. Metal-free sensors are good candidates for such systems, since they are immune to very strong electromagnetic fields and are non-conductive. We have realized miniature optical accelerometer and temperature sensors for remote sensing of harsh environments using the common, inexpensive silicon Micro Electro-Mechanical System (MEMS) platform. Both devices show a highly linear response. The accelerometer deviates less than 1% from the linear fit when tested over a range of 0–40 g. The temperature sensor provides a measurement accuracy better than 1 °C over a range of 20–150 °C. The design of other types of sensors for environments with high electromagnetic interference is also discussed.
Keywords: optical MEMS, temperature sensor, accelerometer, remote sensing, harsh environment
Procedia PDF Downloads 367
26950 Thermoelectric Blanket for Aiding the Treatment of Cerebral Hypoxia and Other Related Conditions
Authors: Sarayu Vanga, Jorge Galeano-Cabral, Kaya Wei
Abstract:
Cerebral hypoxia refers to a condition in which there is a decrease in oxygen supply to the brain. Patients suffering from this condition experience a decrease in their body temperature. While there isn't any cure to treat cerebral hypoxia as of date, certain procedures are utilized to help aid in the treatment of the condition. Regulating the body temperature is an example of one of those procedures. Hypoxia is well known to reduce the body temperature of mammals, although the neural origins of this response remain uncertain. In order to speed recovery from this condition, it is necessary to maintain a stable body temperature. In this study, we present an approach to regulating body temperature for patients who suffer from cerebral hypoxia or other similar conditions. After a thorough literature study, we propose the use of thermoelectric blankets, which are temperature-controlled thermal blankets based on thermoelectric devices. These blankets are capable of heating up and cooling down the patient to stabilize body temperature. This feature is possible through the reversible effect that thermoelectric devices offer while behaving as a thermal sensor, and it is an effective way to stabilize temperature. Thermoelectricity is the direct conversion of thermal to electrical energy and vice versa. This effect is now known as the Seebeck effect, and it is characterized by the Seebeck coefficient. In such a configuration, the device has cooling and heating sides with temperatures that can be interchanged by simply switching the direction of the current input in the system. This design integrates various aspects, including a humidifier, ventilation machine, IV-administered medication, air conditioning, circulation device, and a body temperature regulation system. The proposed design includes thermocouples that will trigger the blanket to increase or decrease a set temperature through a medical temperature sensor. 
Additionally, the proposed design offers an efficient way to control fluctuations in body temperature while remaining cost-friendly, with an expected cost of 150 dollars. We are currently developing a prototype of the design to collect thermal and electrical data under different conditions, and we also intend to perform an optimization analysis to improve the design further. While this proposal was developed for treating cerebral hypoxia, it can also aid in the treatment of other related conditions, as fluctuations in body temperature are a common symptom of many illnesses.
Keywords: body temperature regulation, cerebral hypoxia, thermoelectric, blanket design
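The regulation scheme this abstract describes — a thermocouple reading driving the direction of current through the thermoelectric device, so the blanket heats or cools toward a set temperature — can be sketched as a simple deadband controller. All names, the 37.0 °C setpoint, and the 0.5 °C deadband below are illustrative assumptions, not values from the paper.

```python
def control_current(body_temp_c, setpoint_c=37.0, deadband_c=0.5):
    """Return the current direction for the thermoelectric blanket.

    +1 -> drive current to heat the patient,
    -1 -> reverse current to cool the patient,
     0 -> hold (reading is within the deadband).
    """
    error = setpoint_c - body_temp_c
    if error > deadband_c:
        return +1   # patient too cold: heating side faces the patient
    if error < -deadband_c:
        return -1   # patient too warm: switching current swaps the sides
    return 0        # stable: no drive needed

# Example: a hypoxic patient whose temperature has dropped to 35.2 degrees C
print(control_current(35.2))   # heat
print(control_current(38.1))   # cool
print(control_current(37.2))   # hold
```

Reversing the sign of the drive current is exactly the "reversible effect" the abstract mentions: the Peltier/Seebeck behavior lets the same device act as heater, cooler, and sensor.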
Procedia PDF Downloads 160
26949 A Method to Evaluate and Compare Web Information Extractors
Authors: Patricia Jiménez, Rafael Corchuelo, Hassan A. Sleiman
Abstract:
Web mining is gaining importance at an increasing pace. Currently, there are many complementary research topics under this umbrella. Their common theme is that they all focus on applying knowledge discovery techniques to data gathered from the Web. Sometimes, these data are relatively easy to gather, chiefly when they come from server logs. Unfortunately, there are cases in which the data to be mined are the data displayed on a web document. In such cases, it is necessary to apply a pre-processing step to first extract the information of interest from the web documents. Such pre-processing steps are performed using so-called information extractors, which are software components typically configured by means of rules tailored to extracting the information of interest from a web page and structuring it according to a pre-defined schema. Paramount to getting good mining results is that the technique used to extract the source information is exact, which requires evaluating and comparing the different proposals in the literature from an empirical point of view. According to Google Scholar, about 4,200 papers on information extraction were published during the last decade. Unfortunately, they were not evaluated within a homogeneous framework, which makes it difficult to compare them empirically. In this paper, we report on an original information extraction evaluation method. Our contribution is three-fold: a) this is the first attempt to provide an evaluation method for proposals that work on semi-structured documents; the little existing work on this topic focuses on proposals that work on free text, which has little to do with extracting information from semi-structured documents.
b) It provides a method that relies on statistically sound tests to support the conclusions drawn; the previous work does not provide clear guidelines or recommend statistically sound tests, but rather a survey that collects many features to take into account as well as related work. c) We provide a novel method to compute the performance measures for unsupervised proposals, which would otherwise require the intervention of a user to compute them using the annotations on the evaluation sets and the information extracted. Our contributions will definitely help researchers in this area make sure that they have advanced the state of the art not only conceptually but also from an empirical point of view; they will also help practitioners make informed decisions on which proposal is the most adequate for a particular problem. This conference is a good forum to discuss our ideas so that we can spread them to help improve the evaluation of information extraction proposals and gather valuable feedback from other researchers.
Keywords: web information extractors, information extraction evaluation method, Google Scholar, web
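The performance measures such an evaluation method rests on are typically precision, recall, and F1 over the records an extractor returns versus a gold-standard annotation. The abstract does not spell out its measures, so the sketch below is a hedged illustration of that standard setup; the sample records are invented.

```python
def evaluate_extractor(extracted, gold):
    """Compute precision, recall, and F1 over sets of extracted records."""
    extracted, gold = set(extracted), set(gold)
    tp = len(extracted & gold)                       # correctly extracted records
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Illustrative (attribute, value) records from one web page
gold = {("title", "Web Mining"), ("price", "30"), ("author", "Doe")}
found = {("title", "Web Mining"), ("price", "30"), ("price", "99")}

p, r, f = evaluate_extractor(found, gold)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.67 0.67 0.67
```

The paper's point c) concerns computing exactly these measures for unsupervised extractors without a user supplying the `gold` annotations by hand.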
Procedia PDF Downloads 248
26948 Controlling Drone Flight Missions through Natural Language Processors Using Artificial Intelligence
Authors: Sylvester Akpah, Selasi Vondee
Abstract:
Drones, also known as Unmanned Aerial Vehicles (UAVs), have attracted increasing attention in recent years due to their ubiquitous nature and boundless applications in the areas of communication, surveying, aerial photography, weather forecasting, medical delivery, and surveillance, among others. Operated remotely in real time or pre-programmed, drones can fly autonomously or on pre-defined routes. The application of these aerial vehicles has spread worldwide due to technological evolution, and thus many more businesses are utilizing their capabilities. Unfortunately, while drones offer the benefits stated above, they are riddled with some problems, mainly attributed to the complexity of learning to master drone flights, collision avoidance, and enterprise security. Additional challenges arise because the analysis of flight data recorded by sensors attached to the drone may take time and require expert help to analyse and understand. This paper presents an autonomous drone control system using a chatbot. The system allows for easy control of drones through conversations with the aid of Natural Language Processing, thereby reducing the workload needed to set up, deploy, control, and monitor drone flight missions. The results obtained at the end of the study revealed that the drone connected to the chatbot was able to initiate flight missions with just text and voice commands, hold a conversation, and give real-time feedback on data and requests made to the chatbot. The results further revealed that the system was able to process natural language and produced human-like conversational abilities using Artificial Intelligence (Natural Language Understanding). It is recommended that radio signal adapters be used instead of wireless connections to increase the range of communication with the aerial vehicle.
Keywords: artificial intelligence, chatbot, natural language processing, unmanned aerial vehicle
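The core of such a chatbot interface is mapping a free-form text or transcribed voice command to a drone action. A minimal sketch of that intent-matching step is shown below; the intents, keyword lists, and action names are assumptions for illustration only — the paper's actual system uses a full NLU pipeline rather than keyword lookup.

```python
# Hypothetical intent inventory: each intent lists phrases that trigger it.
INTENTS = {
    "takeoff": {"take off", "takeoff", "launch", "start flight"},
    "land":    {"land", "come down", "return home"},
    "photo":   {"take a picture", "photo", "photograph"},
}

def parse_command(text):
    """Return the first intent whose trigger phrase appears in the command."""
    text = text.lower()
    for intent, keywords in INTENTS.items():
        if any(phrase in text for phrase in keywords):
            return intent
    return "unknown"   # fall back: ask the user to rephrase

print(parse_command("Please take off and hover"))   # takeoff
print(parse_command("Drone, land now"))             # land
print(parse_command("Do a barrel roll"))            # unknown
```

In a deployed system the returned intent would be translated into a flight-controller call, and the "unknown" branch would feed the conversational fallback that gives the chatbot its dialogue ability.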
Procedia PDF Downloads 142
26947 Landslide Hazard Zonation Using Satellite Remote Sensing and GIS Technology
Authors: Ankit Tyagi, Reet Kamal Tiwari, Naveen James
Abstract:
Landslides are the major geo-environmental problem of the Himalaya because of its high ridges, steep slopes, deep valleys, and complex system of streams. They are mainly triggered by rainfall and earthquakes and cause severe damage to life and property. In Uttarakhand, the Tehri reservoir rim area, situated in the lesser Himalaya of the Garhwal hills, was selected for landslide hazard zonation (LHZ). The study utilized different types of data, including geological maps, topographic maps from the Survey of India, Landsat 8, and Cartosat DEM data. This paper presents the use of a weighted overlay method in LHZ using fourteen causative factors. The data layers generated and co-registered were slope, aspect, relative relief, soil cover, intensity of rainfall, seismic ground shaking, seismic amplification at surface level, lithology, land use/land cover (LULC), normalized difference vegetation index (NDVI), topographic wetness index (TWI), stream power index (SPI), drainage buffer, and reservoir buffer. Seismic analysis is performed using peak horizontal acceleration (PHA) intensity and amplification factors in the evaluation of the landslide hazard index (LHI). Several digital image processing techniques, such as topographic correction, NDVI, and supervised classification, were widely used in the process of terrain factor extraction. Lithological features, LULC, drainage pattern, lineaments, and structural features are extracted using digital image processing techniques. Colour, tones, topography, and stream drainage pattern from the imageries are used to analyse geological features. The slope map, aspect map, and relative relief are created using Cartosat DEM data. DEM data are also used for the detailed drainage analysis, which includes TWI, SPI, drainage buffer, and reservoir buffer. In the weighted overlay method, the comparative importance of the several causative factors is obtained from experience.
In this method, after multiplying the influence factor by the corresponding rating of a particular class, the result is reclassified, and the LHZ map is prepared. Further, based on the land-use map developed from remote sensing images, a landslide vulnerability study for the study area is carried out and presented in this paper.
Keywords: weighted overlay method, GIS, landslide hazard zonation, remote sensing
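The weighted overlay computation just described reduces, per map cell, to a sum of factor weight times class rating. The sketch below shows that arithmetic for a single cell with six of the fourteen factors; the weights and ratings are illustrative assumptions, not the study's actual values.

```python
def landslide_hazard_index(ratings, weights):
    """LHI for one cell: sum over factors of (influence weight x class rating)."""
    return sum(weights[f] * ratings[f] for f in weights)

# Hypothetical influence weights (normalized to sum to 1.0)
weights = {"slope": 0.25, "rainfall": 0.20, "lithology": 0.15,
           "lulc": 0.15, "ndvi": 0.10, "pha": 0.15}

# Hypothetical class ratings (1 = least susceptible, 5 = most) for one cell
cell = {"slope": 4, "rainfall": 3, "lithology": 2,
        "lulc": 3, "ndvi": 2, "pha": 4}

lhi = landslide_hazard_index(cell, weights)
print(round(lhi, 2))  # 3.15
```

Reclassifying the resulting LHI values into hazard classes (e.g., low/moderate/high) over every cell yields the LHZ map the abstract refers to.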
Procedia PDF Downloads 133
26946 How to Use Big Data in Logistics Issues
Authors: Mehmet Akif Aslan, Mehmet Simsek, Eyup Sensoy
Abstract:
Big Data stands for today’s cutting-edge technology. As the technology becomes widespread, so does data. Utilizing massive data sets enables companies to gain competitive advantages over their adversaries. Among the many areas of Big Data usage, logistics plays a significant role in both the commercial sector and the military. This paper lays out what Big Data is and how it is used in both military and commercial logistics.
Keywords: big data, logistics, operational efficiency, risk management
Procedia PDF Downloads 641
26945 Detecting Nitrogen Deficiency and Potato Leafhopper (Hemiptera, Cicadellidae) Infestation in Green Bean Using Multispectral Imagery from Unmanned Aerial Vehicle
Authors: Bivek Bhusal, Ana Legrand
Abstract:
Detection of crop stress is one of the major applications of remote sensing in agriculture. Multiple studies have demonstrated the capability of remote sensing using Unmanned Aerial Vehicle (UAV)-based multispectral imagery for the detection of plant stress, but none so far on nitrogen (N) stress and potato leafhopper (PLH) feeding stress in green beans. In view of its wide host range, geographical distribution, and damage potential, the potato leafhopper, Empoasca fabae (Harris), has been emerging as a key pest in several countries. Monitoring methods for PLH damage, as well as the laboratory techniques for detecting nitrogen deficiency, are time-consuming and not always easily affordable. A study was initiated to demonstrate whether a multispectral sensor attached to a drone can detect PLH stress and N deficiency in beans. Small-plot trials were conducted in the summer of 2023, where cages were used to manipulate PLH infestation in green beans (Provider cultivar) at their first-trifoliate stage. Half of the bean plots were introduced with PLH, and the others were kept insect-free. Half of these plots were grown with the recommended amount of N, and the others were grown without N. Canopy reflectance was captured using a five-band multispectral sensor. Our findings indicate that drone imagery can detect stress due to a lack of N and PLH damage in beans.
Keywords: potato leafhopper, nitrogen, remote sensing, spectral reflectance, beans
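A standard index computed from such five-band canopy reflectance is the NDVI, which contrasts the near-infrared and red bands; stressed canopies reflect relatively more red light and score lower. The abstract does not name its index, so this is a hedged sketch of the common approach, with illustrative reflectance values.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    if nir + red == 0:
        return 0.0          # guard against empty/shadow pixels
    return (nir - red) / (nir + red)

# Illustrative band reflectances (fractions of incident light)
healthy = ndvi(0.50, 0.08)   # vigorous canopy: strong NIR, low red
stressed = ndvi(0.30, 0.15)  # N-deficient or PLH-damaged canopy

print(round(healthy, 2), round(stressed, 2))  # 0.72 0.33
```

Comparing per-plot index distributions between the caged/uncaged and fertilized/unfertilized treatments is how such a small-plot trial would separate the two stress signals.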
Procedia PDF Downloads 60
26944 Analysis of Airborne Data Using Range Migration Algorithm for the Spotlight Mode of Synthetic Aperture Radar
Authors: Peter Joseph Basil Morris, Chhabi Nigam, S. Ramakrishnan, P. Radhakrishna
Abstract:
This paper presents the analysis of airborne Synthetic Aperture Radar (SAR) data using the Range Migration Algorithm (RMA) for the spotlight mode of operation. Unlike the polar format algorithm (PFA), RMA mitigates space-variant defocusing and geometric distortion effects, since it does not assume that the illuminating wave-fronts are planar. This facilitates the use of RMA for imaging scenarios involving severe differential range curvatures, enabling the imaging of larger scenes at fine resolution and at shorter ranges with low center frequencies. The RMA algorithm for the spotlight mode of SAR is analyzed in this paper using airborne data. Pre-processing operations, viz. range de-skew and motion compensation to a line, are performed on the raw data before it is fed to the RMA component. The various stages of the RMA, viz. 2D matched filtering, along-track Fourier transform, and Stolt interpolation, are analyzed to find the performance limits and the dependence of the imaging geometry on the resolution of the final image. The ability of RMA to compensate for severe differential range curvatures in the two-dimensional spatial frequency domain is also illustrated in this paper.
Keywords: range migration algorithm, spotlight SAR, synthetic aperture radar, matched filtering, Stolt interpolation
Procedia PDF Downloads 241
26943 Machine Learning Based Smart Beehive Monitoring System Without Internet
Authors: Esra Ece Var
Abstract:
Beekeeping plays an essential role in both agricultural yields and the agricultural economy; bees produce honey, wax, royal jelly, apitoxin, pollen, and propolis. Nowadays, these natural products are increasingly suitable and preferred for nutrition, food supplements, medicine, and industry. However, to produce organic honey, the majority of apiaries are located in remote or distant rural areas where utilities such as electricity and the Internet are not available. Additionally, due to colony failures, world honey production decreases year by year despite the increase in the number of beehives. The objective of this paper is to develop a smart beehive monitoring system for apiaries, including those that do not have access to the Internet. In this context, the temperature and humidity inside the beehive and the ambient temperature were measured with RFID sensors. The control center, where all sensor data is sent and stored, has a GSM module used to warn the beekeeper via SMS when an anomaly is detected. Simultaneously, using the collected data, an unsupervised machine learning algorithm detects anomalies and calibrates the warning system. The results show that the smart beehive monitoring system can detect fatal anomalies up to 4 weeks prior to colony loss.
Keywords: beekeeping, smart systems, machine learning, anomaly detection, apiculture
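The abstract does not specify which unsupervised algorithm triggers the SMS warnings, so the sketch below shows one of the simplest possible versions of the idea: flag a hive-temperature reading that deviates strongly from the recent baseline. The 3-sigma threshold and the sample readings are illustrative assumptions.

```python
from statistics import mean, stdev

def is_anomaly(history, reading, threshold=3.0):
    """Flag a reading more than `threshold` std devs from the history mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return reading != mu
    return abs(reading - mu) / sigma > threshold

# A healthy brood nest holds roughly 34-35 degrees C; a sustained drop is an
# early sign of colony failure.
hive_temps = [34.8, 34.9, 35.0, 34.7, 35.1, 34.9, 35.0, 34.8]

print(is_anomaly(hive_temps, 34.9))   # normal reading -> False
print(is_anomaly(hive_temps, 28.0))   # large deviation -> True, send SMS
```

In the described system this check would run at the control center, and a `True` result would fire the GSM module's SMS warning, keeping the whole loop Internet-free.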
Procedia PDF Downloads 239
26942 Road Condition Monitoring Using Built-in Vehicle Technology Data, Drones, and Deep Learning
Authors: Judith Mwakalonge, Geophrey Mbatta, Saidi Siuhi, Gurcan Comert, Cuthbert Ruseruka
Abstract:
Transportation agencies worldwide continuously monitor their roads' conditions to minimize road maintenance costs and maintain public safety and rideability. Existing methods for carrying out road condition surveys involve manual observation of roads using standard survey forms, done by qualified road condition surveyors or engineers either on foot or by vehicle. Automated road condition survey vehicles exist; however, they are very expensive, since they require special vehicles equipped with sensors for data collection together with data processing and computing devices. The manual methods are expensive, time-consuming, infrequent, and can hardly provide real-time information on road conditions. This study contributes to this arena by utilizing built-in vehicle technologies, drones, and deep learning to automate road condition surveys using low-cost technology. A single model is trained to capture flexible pavement distresses (potholes, rutting, cracking, and raveling), thereby providing a more cost-effective and efficient road condition monitoring approach that can also provide real-time road conditions. Additionally, data fusion is employed to enhance the road condition assessment with data from vehicles and drones.
Keywords: road conditions, built-in vehicle technology, deep learning, drones
Procedia PDF Downloads 124
26941 Development of Concurrent Engineering through the Application of Software Simulations of Metal Production Processing and Analysis of the Effects of Application
Authors: D. M. Eric, D. Milosevic, F. D. Eric
Abstract:
Concurrent engineering technologies are a modern concept in manufacturing engineering. One of the key goals in designing modern technological processes is the further reduction of production costs, both in the prototype and preparatory stages and during serial production. Thanks to the many segments of concurrent engineering, these goals can be accomplished much more easily. In this paper, we give an overview of the advantages of using modern software simulations in relation to the classical aspects of designing technological processes of metal deformation. Significant savings are achieved thanks to electronic simulation and software detection of all possible irregularities in the functional working regime of the technological process. For the expected results to be optimal, it is necessary that the input parameters are very objective and that they reliably represent the values of these parameters in real conditions. Since this concerns metal deformation treatment, the particularly important parameters are the coefficient of internal friction between the working material and the tools, as well as the parameters related to the flow curve of the processed material. The paper also presents the experimental determination of some of these parameters.
Keywords: production technologies, metal processing, software simulations, effects of application
Procedia PDF Downloads 235
26940 Performing Diagnosis in Building with Partially Valid Heterogeneous Tests
Authors: Houda Najeh, Mahendra Pratap Singh, Stéphane Ploix, Antoine Caucheteux, Karim Chabir, Mohamed Naceur Abdelkrim
Abstract:
Building systems are highly vulnerable to different kinds of faults and human misbehavior. Energy efficiency and user comfort are directly affected by abnormalities in building operation. The available fault diagnosis tools and methodologies rely particularly on rules or pure model-based approaches, where it is assumed that a model- or rule-based test can be applied to any situation without taking into account the actual testing context. Contextual tests with a validity domain could greatly reduce the design effort of detection tests. The main objective of this paper is to consider fault validity when validating the test model, taking into account non-modeled events such as occupancy, weather conditions, and door and window openings, and integrating the expert's knowledge of the state of the system. The concept of heterogeneous tests is combined with test validity to generate fault diagnoses. A combination of rule-, range-, and model-based tests, known as heterogeneous tests, is proposed to reduce the modeling complexity. The calculation of logical diagnoses coming from artificial intelligence provides a global explanation consistent with the test results. An application example, an office setting at the Grenoble Institute of Technology, shows the efficiency of the proposed technique.
Keywords: heterogeneous tests, validity, building system, sensor grids, sensor fault, diagnosis, fault detection and isolation
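The "test with a validity domain" concept can be sketched as a detection test that only yields a verdict when the current context (occupancy, window state, and so on) lies inside the domain where its model holds. The test definition, context fields, and thresholds below are assumptions for illustration, not the paper's actual tests.

```python
def run_test(test, context):
    """Apply a contextual test; return None when the context invalidates it."""
    if not test["valid_when"](context):
        return None                        # outside validity domain: no verdict
    return test["check"](context)          # True = consistent, False = fault

# Hypothetical example: a CO2-based occupancy consistency test whose model
# only holds when the windows are closed (otherwise CO2 disperses).
co2_balance_test = {
    "valid_when": lambda c: not c["window_open"],
    "check":      lambda c: c["co2_ppm"] < 1200 or c["occupancy"] > 0,
}

# Window open -> test withheld; window closed + high CO2 + empty room -> fault
print(run_test(co2_balance_test, {"window_open": True,  "co2_ppm": 400,  "occupancy": 0}))
print(run_test(co2_balance_test, {"window_open": False, "co2_ppm": 1500, "occupancy": 0}))
```

Withholding a verdict instead of reporting a spurious pass/fail is exactly what keeps non-modeled events like window openings from polluting the logical diagnoses.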
Procedia PDF Downloads 294
26939 Unlocking the Potential of Short Texts with Semantic Enrichment, Disambiguation Techniques, and Context Fusion
Authors: Mouheb Mehdoui, Amel Fraisse, Mounir Zrigui
Abstract:
This paper explores the potential of short texts through semantic enrichment and disambiguation techniques. By employing context fusion, we aim to enhance the comprehension and utility of concise textual information. The methodologies utilized are grounded in recent advancements in natural language processing, which allow for a deeper understanding of semantics within limited text formats. Specifically, topic classification is employed to understand the context of the sentence and assess the relevance of added expressions. Additionally, word sense disambiguation is used to clarify unclear words, replacing them with more precise terms. The implications of this research extend to various applications, including information retrieval and knowledge representation. Ultimately, this work highlights the importance of refining short text processing techniques to unlock their full potential in real-world applications.
Keywords: information traffic, text summarization, word-sense disambiguation, semantic enrichment, ambiguity resolution, short text enhancement, information retrieval, contextual understanding, natural language processing, ambiguity
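The word sense disambiguation step mentioned above is classically done with gloss-overlap methods such as the Lesk algorithm: pick the sense whose dictionary definition shares the most words with the short text's context. The tiny two-sense inventory below is invented for illustration; a real system would draw glosses from a resource like WordNet.

```python
# Hypothetical sense inventory: word -> {sense label: dictionary gloss}
SENSES = {
    "bank": {
        "financial": "institution that accepts deposits and lends money",
        "river":     "sloping land beside a body of water",
    },
}

def lesk(word, context):
    """Simplified Lesk: return the sense whose gloss best overlaps the context."""
    context_words = set(context.lower().split())
    best, best_overlap = None, -1
    for sense, gloss in SENSES[word].items():
        overlap = len(context_words & set(gloss.split()))
        if overlap > best_overlap:
            best, best_overlap = sense, overlap
    return best

print(lesk("bank", "she sat on the bank of the river near the water"))   # river
print(lesk("bank", "the bank approved the money for deposits"))          # financial
```

Replacing the ambiguous word with the chosen sense's more precise term is the enrichment step the abstract describes for short texts, where little surrounding context is available.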
Procedia PDF Downloads 9
26938 Plasma Technology for Hazardous Biomedical Waste Treatment
Authors: V. E. Messerle, A. L. Mosse, O. A. Lavrichshev, A. N. Nikonchuk, A. B. Ustimenko
Abstract:
One of the most serious environmental problems today is pollution by biomedical waste (BMW), which in most cases has undesirable properties such as toxicity, carcinogenicity, mutagenicity, and flammability. Sanitary and hygienic surveys of typical solid BMW, made in Belarus, Kazakhstan, Russia, and other countries, show that its risk to the environment is significantly higher than that of most chemical wastes. Utilization of toxic BMW requires the use of the most universal methods to ensure disinfection and disposal of any of its components. Such a technology is the plasma processing of BMW. To implement this technology, a thermodynamic analysis of the plasma processing of BMW was performed, and a plasma-box furnace was developed. The studies were conducted on the example of the processing of bone. To perform the thermodynamic calculations, the software package Terra was used. Calculations were carried out in the temperature range 300-3000 K at a pressure of 0.1 MPa. It is shown that the final products do not contain toxic substances. From the organic mass of BMW, synthesis gas containing 77.4-84.6% combustible components was mainly produced, and the mineral part consists mainly of calcium oxide and contains no carbon. The degree of gasification of carbon reaches 100% at 1250 K. The specific power consumption for BMW processing increases with temperature throughout the range and reaches 1 kWh/kg. To realize the plasma processing of BMW, an experimental installation with a 30 kW DC plasma torch was developed. The experiments allowed verifying the thermodynamic calculations. Wastes are packed in boxes weighing 5-7 kg, which are placed in the box furnace. Under the influence of the air plasma flame, the average temperature in the box reaches 1800 °C, the organic part of the waste is gasified, and the inorganic part is melted. The resulting synthesis gas is continuously withdrawn from the unit through the cooling and cleaning system.
The molten mineral part of the waste is removed from the furnace after it has been stopped. The experimental studies allowed determining the operating modes of the plasma box furnace; the exhaust gases were analyzed, and samples of condensed products were collected and their chemical composition determined. Gas at the outlet of the plasma box furnace has the following composition (vol.%): CO - 63.4, H2 - 6.2, N2 - 29.6, S - 0.8. The total concentration of synthesis gas (CO + H2) is 69.6%, which agrees well with the thermodynamic calculation. The experiments confirmed the absence of toxic substances in the final products.
Keywords: biomedical waste, box furnace, plasma torch, processing, synthesis gas
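The reported gas analysis can be checked directly: the combustible fraction of the synthesis gas is simply the sum of the CO and H2 volume percentages, which is what the abstract compares against the thermodynamic prediction of 77.4-84.6% combustibles in the organic-mass product.

```python
# Measured outlet composition from the abstract, in vol.%
composition = {"CO": 63.4, "H2": 6.2, "N2": 29.6, "S": 0.8}

# Combustible (synthesis gas) fraction = CO + H2
combustible = composition["CO"] + composition["H2"]
print(round(combustible, 1))  # 69.6 vol.%, as reported at the furnace outlet
```

The gap between 69.6% measured and the 77.4-84.6% predicted reflects the N2 brought in by the air plasma, which dilutes the product gas in the experiment.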
Procedia PDF Downloads 525
26937 Object Recognition System Operating from Different Type Vehicles Using Raspberry and OpenCV
Authors: Maria Pavlova
Abstract:
Nowadays, it is possible to mount a camera on different vehicles, such as quadcopters, trains, and airplanes. The camera can also be the input sensor in many different systems, which means that object recognition, as an integral part of monitoring and control, can be a key component of most intelligent systems. The aim of this paper is to focus on the object recognition process during vehicle movement. During the vehicle's movement, the camera takes pictures of the environment without storing them in a database. In case the camera detects a special object (for example, a human or an animal), the system saves the picture and sends it to the workstation in real time. This functionality is very useful in emergency or security situations where it is necessary to find a specific object. In another application, the camera can be mounted at a crossroad with few pedestrians: when one or more persons approach the road, the traffic light turns green so that they can cross. This paper presents a system that solves the aforementioned problems. The architecture of the object recognition system includes the camera, the Raspberry platform, a GPS system, a neural network, software, and a database. The camera in the system takes the pictures. The object recognition is done in real time using the OpenCV library and the Raspberry microcontroller. An additional feature is the ability to record the GPS coordinates of a captured object's position. The results of this processing are sent to a remote station, so the location of the specific object is known. With a neural network, the module can learn to solve problems using incoming data and become part of a bigger intelligent system. The present paper focuses on the design and integration of image recognition as part of smart systems.
Keywords: camera, object recognition, OpenCV, Raspberry
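The capture logic the abstract describes — process every frame, but store and forward only those containing a target object, tagged with GPS coordinates — can be sketched as follows. The detector and GPS reader are stubbed here as plain functions; in the real system they would be OpenCV classifiers and a GPS module on the Raspberry platform, and all names below are illustrative assumptions.

```python
def monitor(frames, detect, gps):
    """Return only the frames worth sending: (frame, label, position) tuples."""
    alerts = []
    for frame in frames:
        label = detect(frame)                     # e.g. OpenCV classifier result
        if label in ("human", "animal"):          # target classes from the paper
            alerts.append((frame, label, gps()))  # save + send in real time
        # non-target frames are discarded, never stored in the database
    return alerts

# Stubbed sensors for illustration
fake_detect = lambda frame: {"f2": "human", "f4": "animal"}.get(frame)
fake_gps = lambda: (42.6977, 23.3219)            # hypothetical coordinates

alerts = monitor(["f1", "f2", "f3", "f4"], fake_detect, fake_gps)
print([(f, l) for f, l, _ in alerts])  # [('f2', 'human'), ('f4', 'animal')]
```

Keeping the discard-by-default policy on the device is what makes the scheme viable on a moving vehicle: only detections, not the full video stream, cross the link to the workstation.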
Procedia PDF Downloads 218