Search results for: laser processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4377

2007 Prediction of Vapor Liquid Equilibrium for Dilute Solutions of Components in Ionic Liquid by Neural Networks

Authors: S. Mousavian, A. Abedianpour, A. Khanmohammadi, S. Hematian, Gh. Eidi Veisi

Abstract:

Ionic liquids are finding a wide range of applications, from reaction media to separations and materials processing. Among the thermodynamic properties relevant to these applications, vapor–liquid equilibrium (VLE) is the most important. VLE data for six systems at 353 K and activity coefficients at infinite dilution (γ_i^∞) for various solutes (alkanes, alkenes, cycloalkanes, cycloalkenes, aromatics, alcohols, ketones, esters, ethers, and water) in the ionic liquids 1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide [EMIM][BTI], 1-hexyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide [HMIM][BTI], 1-octyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide [OMIM][BTI], and 1-butyl-1-methylpyrrolidinium bis(trifluoromethylsulfonyl)imide [BMPYR][BTI] were used to train neural networks in the temperature range from 303 to 333 K. The densities of the ionic liquids, the Hildebrand constants of the solutes, and the temperature were selected as inputs to the neural networks. Networks with different hidden-layer configurations were examined; a network with seven neurons in one hidden layer gave the minimum error and good agreement with the experimental data.
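
As an illustration of the architecture described above (three inputs, one hidden layer of seven neurons, one output), the sketch below trains a tiny feed-forward network in pure Python on synthetic data. The target function, weights, and learning rate are illustrative assumptions, not the authors' model or data.

```python
import math, random

random.seed(0)

# Toy 3-7-1 feed-forward network mirroring the architecture in the abstract:
# inputs are (IL density, Hildebrand constant, temperature), the single hidden
# layer has seven tanh neurons, and the output stands in for ln(gamma_inf).
# Trained here on a synthetic placeholder function, NOT real VLE data.
N_IN, N_HID = 3, 7
w1 = [[random.uniform(-0.5, 0.5) for _ in range(N_IN)] for _ in range(N_HID)]
b1 = [0.0] * N_HID
w2 = [random.uniform(-0.5, 0.5) for _ in range(N_HID)]
b2 = 0.0

def forward(x):
    h = [math.tanh(sum(wij * xj for wij, xj in zip(wi, x)) + bi)
         for wi, bi in zip(w1, b1)]
    return sum(vj * hj for vj, hj in zip(w2, h)) + b2, h

def train_step(x, y, lr=0.05):
    """One gradient-descent update on a single (input, target) pair."""
    global b2
    y_hat, h = forward(x)
    err = y_hat - y
    for j in range(N_HID):
        grad_h = err * w2[j] * (1 - h[j] ** 2)   # backprop through tanh
        for i in range(N_IN):
            w1[j][i] -= lr * grad_h * x[i]
        b1[j] -= lr * grad_h
        w2[j] -= lr * err * h[j]
    b2 -= lr * err
    return err ** 2

# Synthetic training data: a smooth placeholder target function.
data = [((d, s, t), 0.3 * d - 0.2 * s + 0.1 * t)
        for d in (0.2, 0.5, 0.8) for s in (0.3, 0.6) for t in (0.1, 0.9)]

first = sum(train_step(x, y) for x, y in data) / len(data)
for _ in range(200):
    last = sum(train_step(x, y) for x, y in data) / len(data)
print(first > last)  # training reduces the mean squared error
```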

Keywords: ionic liquid, neural networks, VLE, dilute solution

Procedia PDF Downloads 285
2006 Using M-Learning to Support Learning of the Concept of the Derivative

Authors: Elena F. Ruiz, Marina Vicario, Chadwick Carreto, Rubén Peredo

Abstract:

One of the main obstacles in Mexico’s engineering programs is math comprehension, especially of the concept of the derivative. We therefore present a case study relating mobile computing and classroom learning at the Escuela Superior de Cómputo, based on the educational model of the Instituto Politécnico Nacional (competency-based work and problem solving), in which we propose apps and activities to teach the concept of the derivative. M-learning is emphasized as one of its lines, as the objective is the use of mobile devices running an app that exploits components such as sensors, screen, camera, and processing power in classroom work. We employed augmented reality (ARRoC), based on the good results this technology has had in the field of learning. The proposal was developed using a qualitative research methodology supported by quantitative research. The methodological instruments used were observation, questionnaires, interviews, and evaluations. We obtained positive results: a 40% improvement using M-learning, compared with a 20% improvement using traditional means.

Keywords: augmented reality, classroom learning, educational research, mobile computing

Procedia PDF Downloads 354
2005 Review of the Software Used for 3D Volumetric Reconstruction of the Liver

Authors: P. Strakos, M. Jaros, T. Karasek, T. Kozubek, P. Vavra, T. Jonszta

Abstract:

In medical imaging, segmentation of different areas of the human body, such as bones, organs, and tissues, is an important issue. Image segmentation isolates the object of interest for further processing, which can lead, for example, to 3D model reconstruction of whole organs. The difficulty of this procedure varies from trivial for bones to quite difficult for organs like the liver. The liver is considered one of the most difficult human organs to segment, mainly because of its complexity, shape versatility, and proximity to other organs and tissues. Due to these facts, substantial user effort usually has to be applied to obtain satisfactory segmentation results, and the process deteriorates from automatic or semi-automatic to a fairly manual one. In this paper, an overview of selected available software applications that can handle semi-automatic image segmentation with subsequent 3D volume reconstruction of the human liver is presented. The applications are evaluated based on their segmentation results on several consecutive DICOM images covering the abdominal area of the human body.
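
The semi-automatic workflows reviewed here typically start from a user-placed seed point. A minimal region-growing sketch of that idea, on a toy 2-D "slice" rather than real DICOM data, might look like this:

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from a user-supplied seed pixel, adding 4-connected
    neighbours whose intensity is within `tol` of the seed intensity."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    mask = [[False] * cols for _ in range(rows)]
    queue = deque([seed])
    mask[seed[0]][seed[1]] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and not mask[nr][nc]
                    and abs(image[nr][nc] - seed_val) <= tol):
                mask[nr][nc] = True
                queue.append((nr, nc))
    return mask

# Toy "slice": a bright organ (~100) on a dark background (~10).
toy = [[10, 10, 10, 10],
       [10, 100, 98, 10],
       [10, 99, 101, 10],
       [10, 10, 10, 10]]
mask = region_grow(toy, (1, 1), tol=5)
print(sum(v for row in mask for v in row))  # 4: the four organ pixels
```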

Keywords: image segmentation, semi-automatic, software, 3D volumetric reconstruction

Procedia PDF Downloads 280
2004 The Effect of Technology on Skin Development and Progress

Authors: Haidy Weliam Megaly Gouda

Abstract:

Dermatology is often a neglected specialty in low-resource settings despite the high morbidity associated with skin disease. This becomes even more significant when associated with HIV infection, as dermatological conditions are more common and aggressive in HIV-positive patients. African countries have the highest HIV infection rates, and skin conditions are frequently misdiagnosed and mismanaged because of a lack of dermatological training and educational material. The frequent lack of diagnostic tests in the African setting renders basic clinical skills all the more vital. This project aimed to improve the diagnosis and treatment of skin disease in the HIV population in a district hospital in Malawi. A basic dermatological clinical tool was developed and produced in collaboration with local staff, based on the available literature and data collected from clinics, with the aim of improving diagnostic accuracy and providing guidance for the treatment of skin disease in HIV-positive patients. A literature search within Embase, Medline, and Google Scholar was performed and supplemented with data obtained from attending five antiretroviral clinics. From the literature, conditions were selected for inclusion in the resource if they were described as specific to, more prevalent in, or more extensive in the HIV population, or as having more adverse outcomes in HIV patients. Resource-appropriate treatment options were decided using Malawian Ministry of Health guidelines and textbooks specific to African dermatology. After the collection of data and discussion with local clinical and pharmacy staff, a list of 15 skin conditions was included, and a booklet was created using a simple layout of a picture, a diagnostic description of the disease, and treatment options.
Clinical photographs were collected from local clinics (with full consent of the patient) or from the book ‘Common Skin Diseases in Africa’ (permission granted if fully acknowledged and used in a not-for-profit capacity). This tool was evaluated by the local staff alongside an educational teaching session on skin disease. This project aimed to reduce uncertainty in diagnosis and provide guidance for appropriate treatment in HIV patients by gathering information into one practical and manageable resource. To further this project, we hope to review the effectiveness of the tool in practice.

Keywords: prevalence and pattern of skin diseases, impact on quality of life, interventions, clinical signs

Procedia PDF Downloads 44
2003 Effect of Kenaf Fibres on Starch-Grafted-Polypropylene Biopolymer Properties

Authors: Amel Hamma, Allesandro Pegoretti

Abstract:

Kenaf fibres with two aspect ratios were melt-compounded with two types of starch-grafted-polypropylene biopolymers, and the blends were then compression molded into plates 1 mm thick. Results showed that processing induced a variation in fibre length, which was quantified by optical microscopy. The Young's modulus, stress at break, and impact resistance of the starch-grafted-polypropylenes were remarkably improved by the kenaf fibres for both matrices, with the best values obtained when G906PJ was used as the matrix. These results attest to good interfacial bonding between matrix and fibres even in the absence of any interfacial modification. The Vicat softening point and storage moduli were also improved due to the reinforcing effect of the fibres. Moreover, short-term tensile creep tests proved that kenaf fibres remarkably improve the creep stability of the composites. The creep behavior of the investigated materials was successfully modeled by the four-parameter Burgers model.
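
The four-parameter Burgers model mentioned above (a Maxwell element in series with a Kelvin-Voigt element) predicts creep strain as the sum of an instantaneous elastic term, a viscous flow term, and a retarded elastic term. A sketch with hypothetical parameter values, not fitted to the paper's composites:

```python
import math

def burgers_strain(t, stress, E_m, eta_m, E_k, eta_k):
    """Creep strain of the four-parameter Burgers model: a Maxwell unit
    (spring E_m + dashpot eta_m) in series with a Kelvin-Voigt unit
    (spring E_k in parallel with dashpot eta_k), under constant stress."""
    elastic = stress / E_m                                  # instantaneous
    viscous = stress * t / eta_m                            # steady flow
    retarded = (stress / E_k) * (1 - math.exp(-E_k * t / eta_k))
    return elastic + viscous + retarded

# Hypothetical parameters for illustration (MPa, MPa*s), not fitted values.
params = dict(stress=5.0, E_m=1000.0, eta_m=5.0e5, E_k=400.0, eta_k=2.0e4)
strains = [burgers_strain(t, **params) for t in (0, 60, 600, 3600)]
print(strains[0])  # 0.005: instantaneous strain equals stress / E_m
print(all(a < b for a, b in zip(strains, strains[1:])))  # True: creep grows
```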

Keywords: creep behaviour, kenaf fibres, mechanical properties, starch-grafted-polypropylene

Procedia PDF Downloads 224
2002 Reproducibility of Dopamine Transporter Density Measured with I-123-N-ω-Fluoropropyl-2β-Carbomethoxy-3β-(4-Iodophenyl)Nortropane SPECT in Phantom Studies and Parkinson’s Disease Patients

Authors: Yasuyuki Takahashi, Genta Hoshi, Kyoko Saito

Abstract:

Objectives: The objective of this study was to evaluate the reproducibility of I-123-N-ω-fluoropropyl-2β-carbomethoxy-3β-(4-iodophenyl)nortropane (I-123 FP-CIT) SPECT using the specific binding ratio (SBR) in phantom studies and Parkinson’s disease (PD) patients. Methods: We developed an original striatum phantom and confirmed its reproducibility. In the phantom studies, the head position and the accumulation of FP-CIT were each varied, and the influence of image processing on SBR was examined in 30 cases. Thirty PD patients underwent SPECT 3 hours after injection of 167 MBq of I-123 FP-CIT. Results: In the phantom studies, SBR decreased with rotation of the head position. In the patient studies, attenuation and scatter correction improved SBR (y = 0.99x + 0.57, r² = 0.83). However, in Stage II patients, dispersion in SBR was observed owing to low accumulation. Conclusion: Compared with the phantom studies, which assumed normal cases, the SPECT images after attenuation and scatter correction showed better reproducibility.
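
SBR is commonly computed as the specific (striatal minus reference) uptake normalized by the reference-region uptake. A minimal sketch with hypothetical count values, not the study's data:

```python
def specific_binding_ratio(striatal_mean, reference_mean):
    """SBR as commonly defined for I-123 FP-CIT SPECT: specific striatal
    uptake normalised by non-specific (reference region) uptake."""
    return (striatal_mean - reference_mean) / reference_mean

# Hypothetical mean counts per voxel for the striatum and a reference region.
sbr = specific_binding_ratio(striatal_mean=9.0, reference_mean=1.5)
print(sbr)  # 5.0
```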

Keywords: 123I-FP-CIT, specific binding ratio, Parkinson’s disease

Procedia PDF Downloads 422
2001 Metareasoning Image Optimization Q-Learning

Authors: Mahasa Zahirnia

Abstract:

The purpose of this paper is to explore new and effective ways of optimizing satellite images using artificial intelligence, implementing reinforcement learning to enhance the quality of the data captured within the image. In our implementation of Bellman's reinforcement learning equations, associated state diagrams, and multi-stage image processing, we were able to enhance image quality and to detect and define objects. Reinforcement learning is a differentiator in the area of artificial intelligence, and Q-learning relies on trial and error to achieve its goals. The reward system embedded in Q-learning allows the agent to self-evaluate its performance and decide on the best possible course of action based on the current and future environment. Results show that within a simulated environment built on commercially available images, the detection rate was 40-90%. Reinforcement learning through the Q-learning algorithm is not just a desirable but a required design criterion for image optimization and enhancement. The proposed methods are a cost-effective way of resolving uncertainty in the data, because reinforcement learning finds ideal policies to manage the process using a smaller sample of images.
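
The Q-learning update relied on above can be illustrated with a toy tabular example; the grid task, learning rates, and reward below are illustrative assumptions, not the authors' satellite-image pipeline:

```python
import random

random.seed(1)

# Tabular Q-learning on a toy 1-D task: the agent starts at cell 0 and is
# rewarded only for reaching cell 4, so it must learn from delayed reward.
N, GOAL = 5, 4
ACTIONS = (1, -1)                        # move right / move left
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2        # learning rate, discount, exploration

for _ in range(300):
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Bellman-style Q-learning update
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

greedy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(greedy)  # [1, 1, 1, 1]: the learned policy moves right toward the goal
```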

Keywords: Q-learning, image optimization, reinforcement learning, Markov decision process

Procedia PDF Downloads 204
2000 Real-Time Lane Marking Detection Using Weighted Filter

Authors: Ayhan Kucukmanisa, Orhan Akbulut, Oguzhan Urhan

Abstract:

Nowadays, advanced driver assistance systems (ADAS) have become popular, since they enable safe driving. Lane detection is a vital step for ADAS, and the performance of the lane detection process is critical to obtaining a high-accuracy lane departure warning system (LDWS). Challenging factors such as road cracks, erosion of lane markings, and weather conditions may affect the performance of a lane detection system. In this paper, a 1-D weighted filter based on row filtering is proposed to detect lane markings. The 2-D input image is filtered row by row with the 1-D weighted filter, considering four pixel values located symmetrically around the candidate pixel. Performance evaluation is carried out with two metrics: true positive rate (TPR) and false positive rate (FPR). Experimental results demonstrate that the proposed approach provides better lane marking detection accuracy than previous methods while maintaining real-time processing performance.
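
The row-wise filtering idea can be sketched as follows; the five-tap kernel (the candidate pixel plus four symmetric neighbours) uses illustrative weights, not the ones tuned in the paper:

```python
# Centre-surround weights: a bright, narrow mark on a darker road scores high.
WEIGHTS = [-1, -1, 4, -1, -1]

def filter_row(row):
    """Apply the 1-D weighted filter along a single image row."""
    half = len(WEIGHTS) // 2
    out = [0] * len(row)
    for c in range(half, len(row) - half):
        out[c] = sum(w * row[c + k - half] for k, w in enumerate(WEIGHTS))
    return out

def filter_image(image):
    """Filter every row of a 2-D grey-level image independently."""
    return [filter_row(row) for row in image]

# Toy grey-level row: dark road (20) with a bright lane marking at column 4.
row = [20, 20, 20, 20, 200, 20, 20, 20, 20]
resp = filter_row(row)
print(resp.index(max(resp)))  # 4: the filter response peaks on the marking
```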

Keywords: lane marking filter, lane detection, ADAS, LDWS

Procedia PDF Downloads 183
1999 Software-Defined Networks in Utility Power Networks

Authors: Ava Salmanpour, Hanieh Saeedi, Payam Rouhi, Elahe Hamzeil, Shima Alimohammadi, Siamak Hossein Khalaj, Mohammad Asadian

Abstract:

A software-defined network (SDN) is a network architecture designed to control the network centrally using software applications. This enables remote control of the whole network regardless of the underlying network technology. In this architecture, network intelligence is separated from the physical infrastructure, meaning that required network components can be implemented virtually using software applications. Today, power networks are characterized by a high degree of complexity, with a large number of intelligent devices processing both huge amounts of data and important information. Therefore, reliable and secure communication networks are required, and SDNs are a strong candidate to meet this need. In this paper, the capabilities and characteristics of SDNs are reviewed and different basic controllers are compared. The importance of using SDNs to increase efficiency and reliability in utility power networks is discussed, and SDN-based power networks are compared with traditional networks.

Keywords: software-defined network, SDN, utility network, OpenFlow, communication, gas and electricity, controller

Procedia PDF Downloads 100
1998 Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data: Impact of Image Format

Authors: Maryam Fallahpoor, Biswajeet Pradhan

Abstract:

Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretations and, thereby, reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out as it obviates the need for feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data, such as Computed Tomography (CT) scans. Medical images predominantly exist in one of two formats: neuroimaging informatics technology initiative (NIfTI) and digital imaging and communications in medicine (DICOM). Purpose: This study aims to employ DL for the classification of COVID-19-infected pulmonary patients and normal cases based on 3D CT scans while investigating the impact of image format. Material and Methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512, although they exhibited varying slice numbers. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images underwent cropping and resampling, resulting in uniform dimensions of 128 × 128 × 60. Resolution uniformity was achieved through resampling to 1 mm × 1 mm × 1 mm, and image intensities were confined to the range of (−1000, 400) Hounsfield units (HU). For classification purposes, positive pulmonary COVID-19 involvement was designated as 1, while normal images were assigned a value of 0. Subsequently, a U-net-based lung segmentation module was applied to obtain 3D segmented lung regions. 
The pre-processing stage included normalization, zero-centering, and shuffling. Four distinct 3D CNN models (ResNet152, ResNet50, DenseNet169, and DenseNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to the potential loss of information during the conversion of original DICOM images to NIfTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest area under the curve (AUC) at 0.932. Regarding sensitivity and specificity, DenseNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection.
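
The intensity pre-processing described above (HU windowing, normalization, zero-centering) can be sketched as follows, on a toy list of voxel values rather than a full 128 × 128 × 60 volume:

```python
# Clip voxel values to the (-1000, 400) HU window used above, normalise the
# result to [0, 1], then zero-centre. Shown on a flat list of toy HU values.
HU_MIN, HU_MAX = -1000.0, 400.0

def preprocess(hu_values):
    clipped = [min(max(v, HU_MIN), HU_MAX) for v in hu_values]
    normalised = [(v - HU_MIN) / (HU_MAX - HU_MIN) for v in clipped]
    mean = sum(normalised) / len(normalised)
    return [v - mean for v in normalised]          # zero-centred values

voxels = [-2000, -1000, -500, 0, 400, 3000]        # toy values incl. outliers
out = preprocess(voxels)
print(abs(sum(out)) < 1e-9)              # True: the result is zero-centred
print(max(out) - min(out) <= 1.0 + 1e-9) # True: values span at most one unit
```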

Keywords: deep learning, COVID-19 detection, NIfTI format, DICOM format

Procedia PDF Downloads 70
1997 Improving the Security of Internet of Things Using Encryption Algorithms

Authors: Amirhossein Safi

Abstract:

The internet of things (IoT) is an advanced information technology that has drawn society’s attention. Sensors and actuators are usually recognized as the smart devices of our environment. At the same time, IoT security brings up new issues. Internet connectivity and the possibility of interaction with smart devices cause those devices to be more involved in human life. Therefore, safety is a fundamental requirement in designing the IoT. The IoT has three remarkable features: overall perception, reliable transmission, and intelligent processing. Because of the IoT's span, the security of conveyed data is an essential factor for system security. Hybrid encryption is a model well suited to the IoT, as it provides strong security at a low computational cost. In this paper, we propose a hybrid encryption algorithm designed to reduce security risks while increasing encryption speed and lowering computational complexity. The purpose of this hybrid algorithm is to provide information integrity, confidentiality, and non-repudiation in data exchange for the IoT. The suggested encryption algorithm was simulated in MATLAB, and its speed and security were evaluated in comparison with a conventional encryption algorithm.
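
The hybrid pattern described above (an expensive asymmetric step protecting a small session key, plus a cheap symmetric cipher for the bulk data) can be illustrated with a deliberately insecure toy: textbook RSA with tiny primes and a SHA-256 counter keystream. This is not the paper's algorithm and must not be used for real security:

```python
import hashlib, random

random.seed(42)

# Textbook RSA with toy parameters: p=61, q=53 => n=3233, phi=3120,
# e=17, d=2753 (17 * 2753 = 1 mod 3120). Illustration only, NOT secure.
N_RSA, E_PUB, D_PRIV = 3233, 17, 2753

def stream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256-based counter keystream (cheap, symmetric)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# Sender: pick a small session key, wrap it with RSA, encrypt the payload.
session_key = random.randrange(2, N_RSA).to_bytes(2, "big")
wrapped_key = pow(int.from_bytes(session_key, "big"), E_PUB, N_RSA)
ciphertext = stream_xor(session_key, b"temperature=21.5C")

# Receiver: unwrap the session key with the private exponent, then decrypt.
recovered = pow(wrapped_key, D_PRIV, N_RSA).to_bytes(2, "big")
plaintext = stream_xor(recovered, ciphertext)
print(plaintext)  # b'temperature=21.5C'
```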

Keywords: internet of things, security, hybrid algorithm, privacy

Procedia PDF Downloads 450
1996 Wireless Based System for Continuous Electrocardiography Monitoring during Surgery

Authors: K. Bensafia, A. Mansour, G. Le Maillot, B. Clement, O. Reynet, P. Ariès, S. Haddab

Abstract:

This paper presents a system designed for the wireless acquisition and recording of electrocardiogram (ECG) signals and for monitoring the heart’s health during surgery. The wireless recording system allows the state of the heart to be visualized and monitored even if the patient is moved from the operating theater to the post-anesthesia care unit. The acquired signal is transmitted via a Bluetooth unit to a PC, where the data are displayed, stored, and processed. To test the reliability of our system, a comparison was made between ECG signals processed by a conventional ECG monitoring system (Datex-Ohmeda) and by our wireless system. The comparison is based on the shape of the ECG signal, the duration of the QRS complex, the P and T waves, and the position of the ST segment with respect to the isoelectric line. The proposed system is presented and discussed; the results confirm that the use of Bluetooth during surgery does not affect the other devices used, and vice versa. Pre- and post-processing steps are briefly discussed, and experimental results are provided.
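
One element of such a comparison, locating R peaks and measuring R-R intervals, can be sketched on a synthetic trace; real comparisons would use clinically validated detectors, and the sampling rate and amplitudes below are illustrative assumptions:

```python
def r_peaks(signal, threshold):
    """Indices of local maxima above a threshold (candidate R peaks)."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > threshold
            and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]]

FS = 100                                 # toy sampling rate, Hz
ecg = [0.0] * 300
for peak_index in (50, 150, 250):        # three synthetic beats, 1 s apart
    ecg[peak_index] = 1.2                # R wave
    ecg[peak_index - 2] = -0.2           # Q wave
    ecg[peak_index + 2] = -0.3           # S wave

peaks = r_peaks(ecg, threshold=0.5)
rr_seconds = [(b - a) / FS for a, b in zip(peaks, peaks[1:])]
print(peaks)       # [50, 150, 250]
print(rr_seconds)  # [1.0, 1.0] -> a heart rate of 60 bpm
```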

Keywords: electrocardiography, monitoring, surgery, wireless system

Procedia PDF Downloads 360
1995 Overview of Resources and Tools to Bridge Language Barriers Provided by the European Union

Authors: Barbara Heinisch, Mikael Snaprud

Abstract:

A common, well-understood language is crucial in critical situations like landing a plane. For e-Government solutions, a clear and common language is needed to allow users to successfully complete transactions online. Misunderstandings here may not risk a safe landing, but they can cause delays and resubmissions and drive up costs. This also holds true for higher education, where misunderstandings can arise from inconsistent use of terminology. Thus, language barriers are a societal challenge that needs to be tackled. The major means of bridging language barriers is translation; however, achieving high-quality translation and making texts understandable and accessible require certain framework conditions. Therefore, the EU and individual projects take (strategic) actions, which include the identification, collection, processing, re-use, and development of language resources. These language resources may be used for the development of machine translation systems and the provision of (public) services, including higher education. This paper outlines some of the existing resources and indicates directions for further development to increase their quality and usage.

Keywords: language resources, machine translation, terminology, translation

Procedia PDF Downloads 308
1994 Would Intra-Individual Variability in Attention Be an Indicator of Senior Adults at Risk of Cognitive Decline? Evidence from the Attention Network Test (ANT)

Authors: Hanna Lu, Sandra S. M. Chan, Linda C. W. Lam

Abstract:

Objectives: Intra-individual variability (IIV) has been considered a biomarker of healthy ageing. However, the composite role of IIV in attention as an early indicator of neurocognitive disorders warrants further exploration. This study aims to investigate IIV and its relationship with attention network functions in adults with neurocognitive disorders (NCD). Methods: 36 adults with NCD due to Alzheimer’s disease (NCD-AD), 31 adults with NCD due to vascular disease (NCD-vascular), and 137 healthy controls were recruited. Intraindividual standard deviations (iSD) and the intraindividual coefficient of variation of reaction time (ICV-RT) were used to evaluate IIV. Results: The NCD groups showed greater IIV (iSD: F = 11.803, p < 0.001; ICV-RT: F = 9.07, p < 0.001). In ROC analyses, the IIV indices could differentiate NCD-AD (iSD: AUC = 0.687, p = 0.001; ICV-RT: AUC = 0.677, p = 0.001) and NCD-vascular (iSD: AUC = 0.631, p = 0.023; ICV-RT: AUC = 0.615, p = 0.045) from healthy controls. Moreover, processing speed could distinguish NCD-AD from NCD-vascular (AUC = 0.647, p = 0.040). Discussion: Intra-individual variability in attention provides a stable measure of cognitive performance and seems to help distinguish senior adults with different cognitive statuses.
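
The two IIV indices used above can be computed directly from a reaction-time series; the values below are hypothetical, not the study's data:

```python
import statistics

def iiv_indices(reaction_times):
    """Intraindividual SD (iSD) and intraindividual coefficient of
    variation of reaction time (ICV-RT = iSD / mean RT)."""
    isd = statistics.stdev(reaction_times)
    icv_rt = isd / statistics.mean(reaction_times)
    return isd, icv_rt

stable_rts = [500, 510, 495, 505, 490]    # hypothetical healthy control (ms)
variable_rts = [480, 620, 510, 700, 450]  # hypothetical NCD participant (ms)

isd_a, icv_a = iiv_indices(stable_rts)
isd_b, icv_b = iiv_indices(variable_rts)
print(icv_a < icv_b)  # True: greater IIV in the more variable series
```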

Keywords: intra-individual variability, attention network, neurocognitive disorders, ageing

Procedia PDF Downloads 464
1993 Residual Modulus of Elasticity of Self-Compacting Concrete Incorporating Unprocessed Waste Fly Ash after Exposure to Elevated Temperature

Authors: Mohammed Abed, Rita Nemes, Salem Nehme

Abstract:

The present study experimentally investigated the impact of incorporating unprocessed waste fly ash (UWFA) on the residual mechanical properties of self-compacting concrete (SCC) after exposure to elevated temperature. Three SCC mixtures were produced by replacing 0%, 15%, and 30% of the cement mass with UWFA. Generally, the fire resistance of SCC was enhanced by replacing up to 15% of the cement with UWFA, especially in the case of the residual modulus of elasticity, which is considered more sensitive than other mechanical properties at elevated temperature. A strong linear relationship was observed between the residual flexural strength and the modulus of elasticity, both of which are significantly affected by crack appearance and propagation resulting from elevated temperature. Sustainable products could be produced by incorporating unprocessed waste powder materials in concrete production, reducing waste materials, CO2 emissions, and the energy needed for processing.

Keywords: self-compacting high-performance concrete, unprocessed waste fly ash, fire resistance, residual modulus of elasticity

Procedia PDF Downloads 127
1992 Characterization and Degradation Analysis of Tapioca Starch Based Biofilms

Authors: R. R. Ali, W. A. W. A. Rahman, R. M. Kasmani, H. Hasbullah, N. Ibrahim, A. N. Sadikin, U. A. Asli

Abstract:

In this study, tapioca starch, which acts as a natural polymer, was added to the blend in order to produce a biodegradable product. Low-density polyethylene (LDPE) and tapioca starch blends were prepared by extrusion, and the test samples by injection moulding. Ethylene vinyl acetate (EVA) was added as a compatibilizer and glycerol as a processing aid. The blends were characterized by melt flow index (MFI), Fourier transform infrared (FTIR) spectroscopy, and water absorption tests. As the starch content increased, the MFI of the blend decreased. Tensile tests showed that the tensile strength and elongation at break decreased while the modulus increased with increasing starch content. For biodegradation, a soil burial test was conducted and the weight loss was studied as the starch content increased. Morphology studies were conducted to show the distribution between LDPE and starch.

Keywords: biopolymers, degradable polymers, starch based polyethylene, injection moulding

Procedia PDF Downloads 277
1991 Efficient Layout-Aware Pretraining for Multimodal Form Understanding

Authors: Armineh Nourbakhsh, Sameena Shah, Carolyn Rose

Abstract:

Layout-aware language models have been used to create multimodal representations for documents that are in image form, achieving relatively high accuracy in document understanding tasks. However, the large number of parameters in the resulting models makes building and using them prohibitive without access to high-performing processing units with large memory capacity. We propose an alternative approach that can create efficient representations without the need for a neural visual backbone. This leads to an 80% reduction in the number of parameters compared to the smallest SOTA model, widely expanding applicability. In addition, our layout embeddings are pre-trained on spatial and visual cues alone and only fused with text embeddings in downstream tasks, which can facilitate applicability to low-resource or multi-lingual domains. Despite using 2.5% of the training data, we show competitive performance on two form understanding tasks: semantic labeling and link prediction.

Keywords: layout understanding, form understanding, multimodal document understanding, bias-augmented attention

Procedia PDF Downloads 137
1990 Effects of Long-Term Exposure of Cadmium to the Ovary of Lithobius forficatus (Myriapoda, Chilopoda)

Authors: Izabela Poprawa, Alina Chachulska-Zymelka, Lukasz Chajec, Grazyna Wilczek, Piotr Wilczek, Sebastian Student, Magdalena Rost-Roszkowska

Abstract:

Heavy metals polluting the environment, especially the soil, have a harmful effect on organisms because they can damage organ structure, disturb organ function, and cause developmental disorders. They can affect not only the somatic tissues but also the germinal tissues. In the natural environment, plants and animals are subject to short- and long-term exposure to these stressors, which have a major influence on the functioning of these organisms. Numerous animals have been treated as bioindicators of the environment; therefore, studies on alterations caused by, e.g., heavy metals are at the center of interest of not only environmental but also medical and biological science. Myriapods are invertebrates which serve as bioindicators of the environment. One species which lives in the upper layers of soil, particularly under stones and rocks, is Lithobius forficatus (Chilopoda), commonly known as the brown centipede or stone centipede, a European species of the family Lithobiidae. Living in the soil, this centipede is exposed to heavy metals such as cadmium, lead, and arsenic. The main goal of our project was to analyze the impact of long-term cadmium exposure on the structure of the ovary, with emphasis on the course of oogenesis. As the material for this analysis, we chose the centipede species L. forficatus. Animals were divided into two experimental groups: C, the control group, cultured in laboratory conditions in horticultural soil; and Cd2, animals cultured for 45 days in horticultural soil supplemented with 80 mg/kg (dry weight) of CdCl2 (long-term exposure). Animals were fed with Acheta and Chironomus larvae maintained in tap water. The analyses were carried out using transmission electron microscopy (TEM), flow cytometry, and laser scanning (confocal) microscopy.
Here we present the results of long-term exposure to cadmium in soil on the organ responsible for female germ cell formation. Transmission electron microscopy revealed changes in the ultrastructure of both the somatic and germ cells in the ovary. Moreover, quantitative analysis revealed a decrease in the percentage of cell viability, an increase in the percentage of cells with depolarized mitochondria, and an increasing number of early apoptotic cells. All these changes were statistically significant compared to the control. Additionally, an increase in the ADP/ATP index was recorded; however, this change was not statistically significant compared to the control. Acknowledgment: The study was financed by the National Science Centre, Poland, grant no. 2017/25/B/NZ4/00420.

Keywords: cadmium, centipede, ovary, ultrastructure

Procedia PDF Downloads 106
1989 Automatic Method for Exudates and Hemorrhages Detection from Fundus Retinal Images

Authors: A. Biran, P. Sobhe Bidari, K. Raahemifar

Abstract:

Diabetic retinopathy (DR) is an eye disease that leads to blindness. The earliest signs of DR are the appearance of red and yellow lesions on the retina, called hemorrhages and exudates. Early diagnosis of DR prevents blindness; hence, many automated algorithms have been proposed to extract hemorrhages and exudates. In this paper, an automated algorithm is presented to extract hemorrhages and exudates separately from retinal fundus images using different image processing techniques, including the Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filtering, and thresholding. Since the optic disc is the same color as the exudates, it is first localized and detected. The presented method has been tested on fundus images from the Structured Analysis of the Retina (STARE) and Digital Retinal Images for Vessel Extraction (DRIVE) databases using MATLAB code. The results show that the method is capable of detecting hard exudates and highly probable soft exudates, as well as detecting hemorrhages and distinguishing them from blood vessels.

Keywords: diabetic retinopathy, fundus, CHT, exudates, hemorrhages

Procedia PDF Downloads 261
1988 Monocular 3D Person Tracking via Demographic Classification and Projective Image Processing

Authors: McClain Thiel

Abstract:

Object detection and localization have historically required two or more sensors, due to the loss of information in projecting 3D onto 2D space; however, most surveillance systems currently deployed in the real world have only one sensor per location. Generally, this is a single low-resolution camera positioned above the area under observation (a mall, a jewelry store, a traffic intersection). This is not sufficient for robust 3D tracking in applications such as security or, of more recent relevance, contact tracing. This paper proposes a lightweight system for 3D person tracking that requires no additional hardware, based on compressed object-detection convolutional networks, facial landmark detection, and projective geometry. The approach classifies the target into a demographic category, makes assumptions about the relative locations of facial landmarks from the demographic information, and from there uses simple projective geometry and known constants to find the target's location in 3D space. Preliminary testing, although limited, suggests reasonable success in 3D tracking under ideal conditions.
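
The projective-geometry step can be sketched with a pinhole camera model: the distance to the target follows from the focal length, an assumed real-world landmark distance chosen per demographic category, and its measured pixel size. All numbers below are illustrative assumptions:

```python
def depth_from_landmarks(focal_px, real_width_m, pixel_width):
    """Pinhole camera model: Z = f * W / w, where f is the focal length in
    pixels, W the real-world landmark distance, and w its size in pixels."""
    return focal_px * real_width_m / pixel_width

FOCAL_PX = 800.0                 # assumed camera focal length, in pixels
IPD_BY_GROUP = {"adult": 0.063,  # assumed mean interpupillary distances (m),
                "child": 0.051}  # selected per demographic category

# Distance to an "adult" face whose pupils are 25.2 px apart in the image.
z = depth_from_landmarks(FOCAL_PX, IPD_BY_GROUP["adult"], pixel_width=25.2)
print(round(z, 3))  # 2.0 metres: 800 * 0.063 / 25.2
```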

Keywords: monocular distancing, computer vision, facial analysis, 3D localization

Procedia PDF Downloads 130
1987 Using a Robot Companion to Detect and Visualize the Indicators of Dementia Progression and Quality of Life of People Aged 65 and Older

Authors: Jeoffrey Oostrom, Robbert James Schlingmann, Hani Alers

Abstract:

This paper describes research into the indicators of dementia progression, the automation of quality-of-life assessments, and their visualization. To this end, the Smart Teddy project was initiated to build a smart companion that both monitors the senior citizen and processes the captured data into an insightful dashboard. With around 50 million diagnoses worldwide, dementia proves again and again to be a heavy strain on the lives of many individuals, their relatives, and society as a whole. In 2015, it was estimated that dementia care cost 818 billion U.S. dollars globally. The Smart Teddy project aims to take away a portion of the burden from caregivers by automating the collection of certain data, such as movement, geolocation, and sound levels. This paper shows that the Smart Teddy has the potential to become a useful tool for caregivers, though not a complete solution. The Smart Teddy still faces some problems in terms of emotional privacy, but its non-intrusive nature, as well as its diversity in usability, can make up for this.

Keywords: dementia care, medical data visualization, quality of life, smart companion

Procedia PDF Downloads 128
1986 Development of a Wind Resource Assessment Framework Using Weather Research and Forecasting (WRF) Model, Python Scripting and Geographic Information Systems

Authors: Jerome T. Tolentino, Ma. Victoria Rejuso, Jara Kaye Villanueva, Loureal Camille Inocencio, Ma. Rosario Concepcion O. Ang

Abstract:

Wind energy is rapidly emerging as a primary source of electricity in the Philippines, although developing an accurate wind resource model is difficult. In this study, the Weather Research and Forecasting (WRF) Model, an open-source mesoscale Numerical Weather Prediction (NWP) model, was used to produce a 1-year atmospheric simulation at 4 km resolution over the Ilocos Region of the Philippines. Annual mean wind speed data were then extracted from the WRF output (netCDF) using a Python-based graphical user interface. Lastly, the wind resource assessment was produced using GIS software. Results of the study showed that Python scripts are more flexible than other post-processing tools for dealing with netCDF files. Using the WRF Model, Python, and Geographic Information Systems, a reliable wind resource map was produced.
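The post-processing step — deriving an annual mean wind speed grid from the simulated wind components — can be sketched as below. The arrays stand in for WRF's hourly 10 m U/V output, which in practice would be read from the netCDF files with a library such as netCDF4 or xarray; the values here are synthetic:

```python
import numpy as np

# synthetic hourly 10 m wind components shaped (time, lat, lon); in practice
# these would be read from the WRF netCDF output (e.g. with netCDF4 or xarray)
rng = np.random.default_rng(42)
u10 = rng.normal(3.0, 1.0, size=(8760, 4, 4))
v10 = rng.normal(1.0, 1.0, size=(8760, 4, 4))

speed = np.hypot(u10, v10)        # instantaneous wind speed per grid cell
annual_mean = speed.mean(axis=0)  # (lat, lon) grid ready for GIS mapping
```

The resulting 2-D array can be exported as a raster and styled in GIS software to produce the wind resource map.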

Keywords: wind resource assessment, weather research and forecasting (WRF) model, python, GIS software

Procedia PDF Downloads 434
1985 Simulation Study of the Microwave Heating of the Hematite and Coal Mixture

Authors: Prasenjit Singha, Sunil Yadav, Soumya Ranjan Mohantry, Ajay Kumar Shukla

Abstract:

Temperature distribution in hematite ore mixed with 7.5% coal was predicted by solving a 1-D heat conduction equation using an implicit finite-difference approach. A square slab of 20 cm x 20 cm was considered, in which the coal was assumed to be uniformly mixed with the hematite ore. The equations were solved using MATLAB 2018a. Convective and radiative boundary conditions for the slab were also considered. The temperature distribution inside the hematite slab was obtained by considering microwave heating time, thermal conductivity, heat capacity, carbon percentage, sample dimensions, and other factors such as the penetration depth, permittivity, and permeability of the coal and hematite ore mixtures. The resulting temperature profile can serve as a guiding tool for optimizing the microwave-assisted carbothermal reduction of hematite. The model was also extended to other slab dimensions, viz., 1 cm x 1 cm, 5 cm x 5 cm, 10 cm x 10 cm, and 20 cm x 20 cm. The model predictions are in good agreement with experimental results.
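A minimal sketch of an implicit (backward Euler) finite-difference scheme for 1-D heat conduction is shown below. It uses a dense solve for clarity (a production code would use a tridiagonal solver), fixed-temperature boundaries, and illustrative material constants; the paper's actual model additionally includes the microwave source term and convective/radiative boundary conditions:

```python
import numpy as np

def implicit_step(T, alpha, dx, dt):
    """One backward-Euler step of dT/dt = alpha * d2T/dx2 with
    fixed-temperature (Dirichlet) ends."""
    n = len(T)
    r = alpha * dt / dx**2
    A = np.zeros((n, n))
    A[0, 0] = A[-1, -1] = 1.0            # boundary rows: keep T fixed
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1.0 + 2.0 * r, -r
    return np.linalg.solve(A, T)

T = np.full(21, 300.0)        # 20 cm slab, 1 cm nodes, initially at 300 K
T[0] = T[-1] = 1200.0         # illustrative heated-surface temperature
for _ in range(200):          # 2000 s of heating in 10 s steps
    T = implicit_step(T, alpha=1e-6, dx=0.01, dt=10.0)
```

Because the scheme is implicit, it remains stable for any time step, unlike an explicit scheme, which would require r ≤ 0.5.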

Keywords: hematite ore, coal, microwave processing, heat transfer, implicit method, temperature distribution

Procedia PDF Downloads 149
1984 Early Detection of Lymphedema in Post-Surgery Oncology Patients

Authors: Sneha Noble, Rahul Krishnan, Uma G., D. K. Vijaykumar

Abstract:

Breast-cancer-related lymphedema is a major problem that affects many women. Lymphedema is the swelling, generally of the arms or legs, caused by the removal of or damage to lymph nodes as a part of cancer treatment. Treating it at the earliest possible stage is the best way to manage the condition and prevent it from leading to pain, recurrent infection, reduced mobility, and impaired function. This project therefore focuses on multi-modal approaches to identify the risk of lymphedema in post-surgical oncology patients and prevent it as early as possible. The Kinect IR sensor is used to capture images of the body, and after image processing, the region of interest is obtained. Voxelization then provides volume measurements in the pre-operative and post-operative periods. A mathematical model will support the comparison of these values. Clinical pathological data of patients will be investigated to assess the factors responsible for the development of lymphedema and its risks.

Keywords: Kinect IR sensor, Lymphedema, voxelization, lymph nodes

Procedia PDF Downloads 123
1983 Getting to Know the Types of Asphalt, Its Manufacturing and Processing Methods and Its Application in Road Construction

Authors: Hamid Fallah

Abstract:

Asphalt is generally a mixture of continuously graded stone materials and a binder, which is usually bitumen. Asphalt is made in different forms according to its use. The most familiar type is hot asphalt, or hot asphalt concrete. Stone materials usually make up more than 90% of the asphalt mixture and therefore have a significant impact on the quality of the resulting asphalt. According to the method of application and mixing, asphalt is divided into three categories: hot asphalt, protective asphalt, and cold asphalt. Cold mix asphalt is a mixture of stone materials and cutback bitumen or bitumen emulsion whose raw materials are mixed at ambient temperature. In some types of cold asphalt, the bitumen may be heated as necessary, but the other materials are mixed with the bitumen without heating. Protective asphalts are used to make the roadbed impermeable, to increase its abrasion and sliding resistance, and to temporarily improve existing asphalt and concrete surfaces. This type of paving is very economical compared to hot asphalt owing to the speed and ease of implementation and the limited need for asphalt machinery and equipment. The present article, prepared as a descriptive library study, introduces asphalt, its types, its characteristics, and its applications.

Keywords: asphalt, type of asphalt, asphalt concrete, sulfur concrete, bitumen in asphalt, sulfur, stone materials

Procedia PDF Downloads 52
1982 Clinical Validation of C-PDR Methodology for Accurate Non-Invasive Detection of Helicobacter pylori Infection

Authors: Suman Som, Abhijit Maity, Sunil B. Daschakraborty, Sujit Chaudhuri, Manik Pradhan

Abstract:

Background: Helicobacter pylori is a common and important human pathogen and the primary cause of peptic ulcer disease and gastric cancer. Currently, H. pylori infection is detected by both invasive and non-invasive methods, but their diagnostic accuracy is not up to the mark. Aim: To establish an optimal diagnostic cut-off value for the 13C-Urea Breath Test (13C-UBT) for detecting H. pylori infection and to evaluate a novel c-PDR methodology that overcomes the inconclusive grey zone. Materials and Methods: All 83 subjects first underwent upper-gastrointestinal endoscopy followed by a rapid urease test and histopathology; based on these results, 49 subjects were classified as H. pylori positive and 34 as negative. After an overnight fast, patients were given 4 g of citric acid in 200 ml of water, and 10 minutes after ingestion of this test meal, a baseline exhaled breath sample was collected. Thereafter, an oral dose of 75 mg of 13C-urea dissolved in 50 ml of water was given, and breath samples were collected up to 90 minutes at 15-minute intervals and analysed by laser-based, high-precision cavity-enhanced spectroscopy. Results: We studied the excretion kinetics of 13C isotope enrichment (expressed as δDOB13C ‰) in the exhaled breath samples and found maximum enrichment at around 30 minutes for H. pylori positive patients, owing to acid-stimulated urease activity with maximal acidification occurring within 30 minutes; no such significant isotopic enrichment was observed for H. pylori negative individuals. Using a Receiver Operating Characteristic (ROC) curve, an optimal diagnostic cut-off value of δDOB13C ‰ = 3.14 was determined at 30 minutes, exhibiting 89.16% accuracy. To overcome the grey-zone problem, we then explored the percentage dose of 13C recovered per hour, 13C-PDR (%/hr), and the cumulative percentage dose of 13C recovered, c-PDR (%), in the exhaled breath samples for the present 13C-UBT.
We further explored the diagnostic accuracy of the 13C-UBT by constructing a ROC curve using c-PDR (%) values; an optimal cut-off value of c-PDR = 1.47% at 60 minutes was estimated, exhibiting 100% diagnostic sensitivity, 100% specificity, and 100% accuracy for the detection of H. pylori infection. We also elucidated the gastric emptying process of the present 13C-UBT for H. pylori positive patients: the maximal emptying rate was found at 36 minutes and the half-emptying time at 45 minutes. Conclusions: The present study demonstrates the importance of the c-PDR methodology in overcoming the grey-zone problem of the 13C-UBT, allowing accurate determination of infection without risk of diagnostic error and making the test a sufficiently robust and novel method for fast, non-invasive diagnosis of H. pylori infection for large-scale screening purposes.
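The c-PDR quantity is the running integral of the 13C recovery rate over time. A minimal sketch of computing c-PDR by cumulative trapezoidal integration and applying the reported 60-minute cut-off (the PDR values below are illustrative, not the study's data):

```python
import numpy as np

# breath-sample times (min) and illustrative 13C recovery rates, PDR (%/hr)
t_min = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])
pdr = np.array([0.0, 1.2, 2.8, 2.1, 1.4, 0.9, 0.6])

# c-PDR(t): cumulative trapezoidal integral of PDR over time (in hours)
t_hr = t_min / 60.0
cpdr = np.concatenate(
    ([0.0], np.cumsum(np.diff(t_hr) * (pdr[1:] + pdr[:-1]) / 2.0))
)

# apply the study's reported 60-minute cut-off of c-PDR = 1.47 %
is_positive = cpdr[t_min == 60.0][0] > 1.47
```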

Keywords: 13C-Urea breath test, c-PDR methodology, grey zone, Helicobacter pylori

Procedia PDF Downloads 297
1981 Diversity in Finance Literature Revealed through the Lens of Machine Learning: A Topic Modeling Approach on Academic Papers

Authors: Oumaima Lahmar

Abstract:

This paper aims to define a structured topography for finance researchers seeking to navigate the body of knowledge in their exploration of finance phenomena. To make sense of the body of knowledge in finance, a probabilistic topic modeling approach is applied to 6,000 abstracts of academic articles published in three top finance journals between 1976 and 2020. This approach combines machine learning techniques and natural language processing to statistically identify the connections between research articles and their shared topics, each described by relevant keywords. The topic modeling analysis reveals 35 coherent topics that depict the finance literature well and provide a comprehensive structure for the ongoing research themes. Comparing the extracted topics to the Journal of Economic Literature (JEL) classification system, a significant similarity was highlighted between the characterizing keywords. On the other hand, we identify other topics that do not match the JEL classification despite being relevant in the finance literature.
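A minimal sketch of the probabilistic topic modeling step, here using scikit-learn's LDA implementation (the paper does not specify its software stack, and the toy abstracts below are invented for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# toy stand-ins for journal abstracts (invented for illustration)
abstracts = [
    "asset pricing and expected stock returns",
    "option pricing models and volatility",
    "corporate governance and board structure",
    "board independence improves firm governance",
]
X = CountVectorizer(stop_words="english").fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)  # per-abstract topic mixture; rows sum to 1
```

The number of topics (35 in the paper, 2 here) is typically chosen by comparing perplexity or coherence scores across candidate models.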

Keywords: finance literature, textual analysis, topic modeling, perplexity

Procedia PDF Downloads 157
1980 A Review on the Adoption and Acculturation of Digital Technologies among Farmers of Haryana State

Authors: Manisha Ohlan, Manju Dahiya

Abstract:

The present study was conducted in the Karnal, Rohtak, and Jhajjar districts of Haryana state, covering 360 respondents. Results showed that 42.78 percent of the respondents had above-average knowledge at the preparation stage, 48.33 percent had high knowledge at the production stage, and 37.22 percent had average knowledge at the processing stage regarding the usage of digital technologies. Nearly half of the respondents (47.50%) agreed with the usage of digital technologies, followed by strongly agreed (19.45%) and strongly disagreed (14.45%). A positive relationship was found between the independent variables and knowledge of digital technologies. For all the dependent variables, including knowledge and attitude, the z values lay between -1.96 and +1.96, i.e., within the acceptance region at the 5 percent level of significance; therefore, the null hypothesis could not be rejected and was accepted.
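The decision rule applied here — retaining the null hypothesis when the z value falls inside the ±1.96 acceptance region of a two-tailed test at the 5 percent level — can be sketched as follows (the sample z value is illustrative):

```python
from statistics import NormalDist

def accept_null(z, alpha=0.05):
    """Two-tailed z-test: H0 is retained when |z| is below the critical
    value (about 1.96 for alpha = 0.05, i.e. the -1.96 to +1.96 region)."""
    z_crit = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    return abs(z) < z_crit

decision = accept_null(1.2)  # a z value inside the acceptance region
```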

Keywords: knowledge, attitude, digital technologies, significant, positive relationship

Procedia PDF Downloads 85
1979 Design of a Controlled BHJ Solar Cell Using Modified Organic Vapor Spray Deposition Technique

Authors: F. Stephen Joe, V. Sathya Narayanan, V. R. Sanal Kumar

Abstract:

A comprehensive review of the literature on photovoltaic cells has been carried out to explore better options for cost-efficient technologies for future solar cell applications. The literature review reveals that Bulk Heterojunction (BHJ) polymer solar cells offer special opportunities as renewable energy resources. It is evident from previous studies that devices fabricated with a TiOx layer show better power conversion efficiency than devices without one. In this paper, the authors design a controlled BHJ solar cell using a modified organic vapor spray deposition technique fitted with a vertically moving gun, named the 'Stephen Joe Technique', to obtain a desirable surface pattern on the substrate and thereby improve its efficiency for industrial applications. We conclude that efficient processing and interface engineering of these solar cells could increase the efficiency to 5-10%.

Keywords: BHJ polymer solar cell, photovoltaic cell, solar cell, Stephen Joe technique

Procedia PDF Downloads 529
1978 Geographic Information System for Simulating Air Traffic By Applying Different Multi-Radar Positioning Techniques

Authors: Amara Rafik, Mostefa Belhadj Aissa

Abstract:

Radar data is one of the many data sources used by Air Traffic Management (ATM) systems. These data come from air navigation radar antennas, which intercept signals emitted by the various aircraft crossing the controlled airspace, calculate the positions of these aircraft, and retransmit them to the ATM system. For greater reliability, the radars are positioned so that their coverage areas overlap; an aircraft will therefore be detected by at least one of them. However, the position coordinates of the same aircraft sent by these different radars are not necessarily identical. The ATM system must therefore calculate a single position (the radar track), which is ultimately sent to the control position and displayed on the air traffic controller's monitor. There are several techniques for calculating the radar track. Furthermore, the geographical nature of the problem requires the use of a Geographic Information System (GIS), i.e., a geographical database on the one hand and geographical processing on the other. The objective of this work is to propose a GIS for traffic simulation that reconstructs the evolution of aircraft positions over time from a multi-source radar data set by applying these different positioning techniques.
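One common track-formation technique, inverse-variance weighted averaging of the overlapping radar plots, can be sketched as follows; the abstract does not name the specific techniques it compares, so the method and all values here are illustrative:

```python
import numpy as np

def fuse_plots(positions, variances):
    """Fuse overlapping radar plots of one aircraft into a single track
    point by inverse-variance weighted averaging."""
    pos = np.asarray(positions, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)  # more accurate = heavier
    return (w[:, None] * pos).sum(axis=0) / w.sum()

# the same aircraft as seen by three overlapping radars (x, y in km) with
# assumed per-radar measurement variances; all values are illustrative
plots = [(100.2, 50.1), (100.6, 49.8), (99.9, 50.3)]
variances = [0.04, 0.16, 0.09]
track = fuse_plots(plots, variances)
```

The fused track always lies within the spread of the individual plots, with the most accurate radar pulling it hardest toward its own measurement.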

Keywords: ATM, GIS, radar data, simulation

Procedia PDF Downloads 101