Search results for: deep convolutional neural networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5320

3250 Internet Use, Social Networks, Loneliness and Quality of Life among Adults Aged 50 and Older: Mediating and Moderating Effects

Authors: Rabia Khaliala, Adi Vitman-Schorr

Abstract:

Background: The increase in people's longevity on the one hand, and on the other the fact that social networks in later life become increasingly narrow, highlight the importance of Internet use to enhance quality of life (QoL). However, whether Internet use increases or decreases social networks, loneliness and quality of life is not clear-cut. Purposes: To explore the direct and/or indirect effects of Internet use on QoL, and to examine whether ethnicity and the time the elderly spend with family moderate the mediation effect of Internet use on quality of life through loneliness. Methods: This descriptive-correlational study was carried out in 2016 by structured interviews with a convenience sample of 502 respondents aged 50 and older living in northern Israel. Bootstrapping with resampling strategies was used to test the mediation model. Results: Use of the Internet was found to be positively associated with QoL. However, this relationship was mediated by loneliness, and moderated by the time the elderly spent with family members. In addition, respondents' ethnicity significantly moderated the mediation effect between Internet use and loneliness. Conclusions: Internet use can enhance the QoL of older adults directly, or indirectly by reducing loneliness. However, these effects are conditional on other variables: the indirect effect is moderated by ethnicity, and the direct effect by the time the elderly spend with their families. Researchers and practitioners should be aware of these interactions, which can impact the loneliness and quality of life of older persons differently.
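A minimal sketch of the bootstrapped mediation test the abstract describes (Internet use → loneliness → QoL). The variable names, synthetic data, and effect sizes are assumptions for illustration, not the authors' data; the logic is the standard percentile-bootstrap test of the indirect effect a·b.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the survey data (n = 502 as in the abstract).
n = 502
internet_use = rng.normal(size=n)
loneliness = -0.4 * internet_use + rng.normal(size=n)              # path a
qol = 0.3 * internet_use - 0.5 * loneliness + rng.normal(size=n)   # paths c', b

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                        # slope of m ~ x
    b = np.linalg.lstsq(np.column_stack([m, x, np.ones_like(x)]), y,
                        rcond=None)[0][0]             # slope of y ~ m, controlling x
    return a * b

# Bootstrap with resampling: mediation is supported when the percentile
# confidence interval of the indirect effect excludes zero.
boot = [indirect_effect(*(v[idx] for v in (internet_use, loneliness, qol)))
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(lo, hi)
```

A CI entirely above zero here mirrors the abstract's finding that loneliness mediates the Internet-use/QoL link.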

Keywords: internet use, loneliness, quality of life, social contacts

Procedia PDF Downloads 187
3249 Improving the Penalty-free Multi-objective Evolutionary Design Optimization of Water Distribution Systems

Authors: Emily Kambalame

Abstract:

Water distribution networks require large investments for construction, prompting researchers to seek cost reductions and efficient design solutions; optimization techniques are employed to address these challenges. In this context, the penalty-free multi-objective evolutionary algorithm (PFMOEA), coupled with pressure-dependent analysis (PDA), was utilized to develop a multi-objective evolutionary search for the optimization of water distribution systems (WDSs). The aim of this research was to determine whether the computational efficiency of the PFMOEA for WDS optimization could be enhanced. This was done by applying a real coding representation and by retaining different percentages of feasible and infeasible solutions close to the Pareto front in the elitism step of the optimization. Two benchmark network problems, the Two-looped and Hanoi networks, were utilized in the study. A comparative analysis was then conducted to assess the performance of the real-coded PFMOEA in relation to other approaches described in the literature. By implementing real coding, the algorithm demonstrated competitive performance on the two benchmark networks: the real-coded PFMOEA achieved new best-known solutions ($419,000 and $6.081 million) and a zero pressure deficit for the two networks, requiring fewer function evaluations than the binary-coded PFMOEA. In previous PFMOEA studies, elitism applied a default retention of 30% of the least-cost feasible solutions while excluding all infeasible solutions. It was found in this study that by replacing 10% and 15% of the feasible solutions with infeasible ones that are close to the Pareto front and have minimal pressure-deficit violations, the computational efficiency of the PFMOEA was significantly enhanced. The configuration of 15% feasible and 15% infeasible solutions outperformed other retention allocations by identifying the optimal solution with the fewest function evaluations.
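The modified elitism step can be sketched as follows: keep a share of the least-cost feasible solutions plus a share of near-feasible (small pressure-deficit) solutions. The solution records and percentages below are illustrative assumptions, not the PFMOEA implementation itself.

```python
# Sketch of elitism retaining both feasible and near-feasible solutions.
# A "solution" is a dict with a cost and a pressure deficit (0 = feasible).
def select_elite(population, pop_size, feasible_frac=0.15, infeasible_frac=0.15):
    feasible = sorted((s for s in population if s["deficit"] == 0),
                      key=lambda s: s["cost"])                 # least cost first
    infeasible = sorted((s for s in population if s["deficit"] > 0),
                        key=lambda s: (s["deficit"], s["cost"]))  # smallest violation first
    n_f = int(feasible_frac * pop_size)
    n_i = int(infeasible_frac * pop_size)
    return feasible[:n_f] + infeasible[:n_i]

# Toy population: (cost in $1000s, pressure deficit).
pop = [{"cost": c, "deficit": d}
       for c, d in [(419, 0), (430, 0), (405, 2), (500, 9), (410, 1), (425, 0)]]
elite = select_elite(pop, pop_size=20)
print(elite)
```

The 15%/15% split shown is the configuration the abstract reports as most efficient; the earlier default would correspond to `feasible_frac=0.30, infeasible_frac=0.0`.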

Keywords: design optimization, multi-objective evolutionary, penalty-free, water distribution systems

Procedia PDF Downloads 63
3248 Ant Lion Optimization in a Fuzzy System for Benchmark Control Problem

Authors: Leticia Cervantes, Edith Garcia, Oscar Castillo

Abstract:

Today there are many control problems in which the main objective is to obtain the best possible control and to decrease the error in the application. Many techniques can be used to address such problems, including neural networks, PID control, fuzzy logic, and optimization techniques. In this work, fuzzy logic, in the form of a fuzzy system, is combined with an optimization technique: Ant Lion Optimization (ALO) is used to optimize a fuzzy system that controls the velocity of a simple treadmill. The main objective is to achieve velocity control in this problem using ALO. First, a simple fuzzy system with two inputs (error and error change) and one output (desired speed) was used to control the velocity of the treadmill; then, to decrease the error, ALO was applied to optimize the fuzzy system. With the optimized system, the simulation was performed, and the results show that with ALO the velocity control was better than with the conventional fuzzy system. This paper describes basic concepts needed to understand the work, the methodology of the investigation (control problem, fuzzy system design, and optimization), and the results obtained. A comparison between the simple fuzzy system and the optimized fuzzy system is presented, showing that the optimization improved the control. The major finding of the study is that ALO is a good alternative for improving control, because it helped to decrease the error in the control application; the selected methodology proved sound, since the control of the treadmill was improved by the optimization technique.
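A minimal sketch of the kind of two-input, one-output fuzzy controller described (error and error change in, speed adjustment out). The triangular membership parameters and rule base below are illustrative assumptions; in the paper, such parameters are exactly what Ant Lion Optimization would tune.

```python
# Minimal Mamdani-style fuzzy controller sketch for the treadmill example.
def tri(x, a, b, c):
    """Triangular membership value of x for triangle (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed(error, d_error):
    # Fuzzify both inputs into negative / zero / positive sets.
    sets = {"neg": (-2.0, -1.0, 0.0), "zero": (-1.0, 0.0, 1.0),
            "pos": (0.0, 1.0, 2.0)}
    mu_e = {k: tri(error, *v) for k, v in sets.items()}
    mu_de = {k: tri(d_error, *v) for k, v in sets.items()}
    # Rule base: one output singleton per label; AND via min, then a
    # weighted-average (centroid-like) defuzzification.
    out = {"neg": -1.0, "zero": 0.0, "pos": 1.0}
    num = den = 0.0
    for ke, me in mu_e.items():
        for kd, md in mu_de.items():
            w = min(me, md)
            num += w * (out[ke] + out[kd]) / 2.0
            den += w
    return num / den if den else 0.0

print(fuzzy_speed(0.0, 0.0), fuzzy_speed(1.0, 1.0))
```

An optimizer such as ALO would search over the triangle vertices and output singletons to minimize tracking error on the simulated treadmill.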

Keywords: ant lion optimization, control problem, fuzzy control, fuzzy system

Procedia PDF Downloads 403
3247 The Effect of Technology on Legal Securities and Privacy Issues

Authors: Nancy Samuel Reyad Farhan

Abstract:

Even though international criminal law has grown considerably in the last decades, it still remains fragmented and lacks doctrinal cohesiveness. Its concept is described in the doctrine as highly disputable, and there is no concrete definition of the term. In the domestic doctrine, the problem of criminal law issues that arise in the international setting, and of international issues that arise within national criminal law, is underdeveloped both theoretically and practically. To the best of the author's knowledge, there are no studies describing the international elements of criminal law in a comprehensive way, taking a more expansive view of the subject. This paper presents the results of a part of the doctoral research, undertaking a theoretical framework of international criminal law. It aims at examining the existing terminology on international aspects of criminal law. It demonstrates the differences among the notions of international criminal law, criminal law international, and law international criminal. It confronts the notion of criminal law with related disciplines and shows their interplay. It specifies the scope of international criminal law. It diagnoses the current legal framework of international aspects of criminal law, referring both to criminal law issues that arise in the international setting and to international issues that arise within the context of national criminal law. Finally, de lege lata postulates are formulated and a direction of changes in international criminal law is proposed. The adopted research hypothesis assumed that the notion of international criminal law is inconsistent and not understood uniformly, that there is no conformity as to its place within the system of law or its objective and subjective scopes, and that the domestic doctrine does not correspond with international requirements and differs from the international doctrine. The applied research methods included, inter alia, a dogmatic and legal method, an analytical method, a comparative approach, and desk research.

Keywords: social networks privacy issues, social networks security issues, social networks privacy precautions measures, social networks security precautions measures

Procedia PDF Downloads 35
3246 A Comparative Study of Twin Delayed Deep Deterministic Policy Gradient and Soft Actor-Critic Algorithms for Robot Exploration and Navigation in Unseen Environments

Authors: Romisaa Ali

Abstract:

This paper presents a comparison between twin-delayed Deep Deterministic Policy Gradient (TD3) and Soft Actor-Critic (SAC) reinforcement learning algorithms in the context of training robust navigation policies for Jackal robots. By leveraging an open-source framework and custom motion control environments, the study evaluates the performance, robustness, and transferability of the trained policies across a range of scenarios. The primary focus of the experiments is to assess the training process, the adaptability of the algorithms, and the robot’s ability to navigate in previously unseen environments. Moreover, the paper examines the influence of varying environmental complexities on the learning process and the generalization capabilities of the resulting policies. The results of this study aim to inform and guide the development of more efficient and practical reinforcement learning-based navigation policies for Jackal robots in real-world scenarios.
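The core algorithmic contrast between the two compared methods can be shown in their critic targets: TD3 uses the clipped double-Q minimum, while SAC adds an entropy bonus to the same minimum. The scalar toy values below are assumptions (no robot simulator involved); the formulas follow the published TD3 and SAC algorithms.

```python
# Illustrative one-step critic targets for TD3 vs. SAC (scalar toy values).
gamma = 0.99

def td3_target(reward, q1_next, q2_next):
    # TD3: clipped double-Q; target-policy smoothing happens upstream
    # when the next action is sampled.
    return reward + gamma * min(q1_next, q2_next)

def sac_target(reward, q1_next, q2_next, log_prob_next, alpha=0.2):
    # SAC: soft value adds an entropy bonus, -alpha * log pi(a'|s').
    return reward + gamma * (min(q1_next, q2_next) - alpha * log_prob_next)

r, q1, q2, logp = 1.0, 5.0, 4.8, -1.2
print(td3_target(r, q1, q2))
print(sac_target(r, q1, q2, logp))
```

The entropy term is what pushes SAC toward more exploratory policies, one of the behavioral differences such a comparison on unseen environments would probe.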

Keywords: Jackal robot environments, reinforcement learning, TD3, SAC, robust navigation, transferability, custom environment

Procedia PDF Downloads 106
3245 End-to-End Pyramid Based Method for Magnetic Resonance Imaging Reconstruction

Authors: Omer Cahana, Ofer Levi, Maya Herman

Abstract:

Magnetic Resonance Imaging (MRI) is a lengthy medical scan, owing to its long acquisition time. This length is mainly due to the traditional sampling theorem, which defines a lower boundary for sampling. However, it is still possible to accelerate the scan by using a different approach such as Compressed Sensing (CS) or Parallel Imaging (PI). These two complementary methods can be combined to achieve a faster scan with high-fidelity imaging. To achieve that, two conditions must be satisfied: i) the signal must be sparse under a known transform domain, and ii) the sampling method must be incoherent. In addition, a nonlinear reconstruction algorithm must be applied to recover the signal. While rapid advances in Deep Learning (DL) have brought tremendous successes in various computer vision tasks, the field of MRI reconstruction is still in its early stages. In this paper, we present an end-to-end method for MRI reconstruction from k-space to image. Our method contains two parts. The first is sensitivity map estimation (SME), a small yet effective network that can easily be extended to a variable number of coils. The second is reconstruction, a top-down architecture with lateral connections developed for building high-level refinement at all scales. Our method achieves state-of-the-art results on the fastMRI benchmark, the largest and most diverse benchmark for MRI reconstruction.
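The top-down pathway with lateral connections can be sketched in a feature-pyramid style: each coarse map is upsampled and merged with the same-scale bottom-up feature. The shapes and the nearest-neighbour merge below are illustrative assumptions, not the paper's trained network.

```python
import numpy as np

def upsample2(x):
    return x.repeat(2, axis=0).repeat(2, axis=1)   # nearest-neighbour x2

def top_down_refine(laterals):
    """laterals: feature maps, coarsest first, each twice the previous size."""
    out = laterals[0]
    refined = [out]
    for lat in laterals[1:]:
        out = upsample2(out) + lat                 # lateral connection
        refined.append(out)
    return refined

# Toy bottom-up features at three scales.
feats = [np.ones((4, 4)), np.ones((8, 8)), np.ones((16, 16))]
maps = top_down_refine(feats)
print([m.shape for m in maps])
```

In the real architecture the upsample and merge would be learned layers, but the information flow, coarse context progressively refined at every scale, is the same.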

Keywords: magnetic resonance imaging, image reconstruction, pyramid network, deep learning

Procedia PDF Downloads 92
3244 A Top-down vs a Bottom-up Approach on Lower Extremity Motor Recovery and Balance Following Acute Stroke: A Randomized Clinical Trial

Authors: Vijaya Kumar, Vidayasagar Pagilla, Abraham Joshua, Rakshith Kedambadi, Prasanna Mithra

Abstract:

Background: Post-stroke rehabilitation is aimed at accelerating optimal sensorimotor recovery and functional gain and at reducing long-term dependency. Intensive physical therapy interventions can enhance this recovery, as experience-dependent neural plastic changes act either directly at cortical neural networks or at the distal peripheral level (muscular components). Neuromuscular electrical stimulation (NMES), a traditional bottom-up approach, and mirror therapy (MT), a relatively new top-down approach, have been found to be effective adjuvant treatment methods for lower extremity motor and functional recovery in stroke rehabilitation. However, there is a scarcity of evidence comparing their therapeutic gain in stroke recovery. Aim: To compare the efficacy of neuromuscular electrical stimulation (NMES) and mirror therapy (MT) in the very early phase of post-stroke rehabilitation, addressed to lower extremity motor recovery and balance. Design: Observer-blinded randomized clinical trial. Setting: Neurorehabilitation Unit, Department of Physical Therapy, tertiary care hospitals. Subjects: 32 acute stroke subjects with a first episode of unilateral stroke with hemiparesis, referred for rehabilitation (onset < 3 weeks), with Brunnstrom lower extremity recovery stage ≥ 3 and MMSE score above 24, were randomized into two groups [Group A: NMES; Group B: MT]. Interventions: Both groups received an eclectic approach to remediate lower extremity recovery, including treatment components of the Rood, Bobath, and motor learning approaches, for 30 minutes a day for 6 days. Following this, Group A (N=16) received 30 minutes of surface NMES training for six major paretic muscle groups (gluteus maximus and medius, quadriceps, hamstrings, tibialis anterior and gastrocnemius). Group B (N=16) was administered 30 minutes of mirror therapy sessions to facilitate lower extremity motor recovery.
Outcome measures: Lower extremity motor recovery, balance, and activities of daily living (ADLs) were measured by the Fugl-Meyer Assessment (FMA-LE), Berg Balance Scale (BBS), and Barthel Index (BI) before and after intervention. Results: Pre-post analysis across time revealed statistically significant improvement (p < 0.001) on all outcome variables in either group. All parameters of the NMES group had greater change scores than the MT group, as follows: FMA-LE (25.12±3.01 vs. 23.31±2.38), BBS (35.12±4.61 vs. 34.68±5.42) and BI (40.00±10.32 vs. 37.18±7.73). Between-group comparison of pre-post values showed no significance for FMA-LE (p=0.09), BBS (p=0.80) or BI (p=0.39). Conclusion: Though both groups improved significantly from pre- to post-intervention, neither was superior to the other in lower extremity motor recovery and balance among acute stroke subjects. We conclude that the eclectic approach is an effective treatment irrespective of whether NMES or MT is used as an adjunct.

Keywords: balance, motor recovery, mirror therapy, neuromuscular electrical stimulation, stroke

Procedia PDF Downloads 283
3243 Examining Predictive Coding in the Hierarchy of Visual Perception in the Autism Spectrum Using Fast Periodic Visual Stimulation

Authors: Min L. Stewart, Patrick Johnston

Abstract:

Predictive coding has been proposed as a general explanatory framework for understanding the neural mechanisms of perception. As such, an underweighting of perceptual priors has been hypothesised to underpin a range of differences in inferential and sensory processing in autism spectrum disorders. However, empirical evidence to support this has not been well established. The present study uses an electroencephalography paradigm involving changes of facial identity and person category (actors etc.) to explore how levels of autistic traits (AT) affect predictive coding at multiple stages in the visual processing hierarchy. The study uses a rapid serial presentation of faces, with hierarchically structured sequences involving both periodic and aperiodic repetitions of different stimulus attributes (i.e., person identity and person category), in order to induce contextual expectations relating to these attributes. It investigates two main predictions: (1) significantly larger and later neural responses to changes of expected visual sequences in high- relative to low-AT, and (2) significantly reduced neural responses to violations of contextually induced expectation in high- relative to low-AT. Preliminary frequency analysis data comparing high and low-AT show greater and later event-related potentials (ERPs) in occipitotemporal and prefrontal areas in high-AT than in low-AT for periodic changes of facial identity and person category, but smaller ERPs over the same areas in response to aperiodic changes of identity and category. The research advances our understanding of how abnormalities in predictive coding might underpin aberrant perceptual experience in the autism spectrum. This is the first stage of a research project that will inform clinical practitioners in developing better diagnostic tests and interventions for people with autism.
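The frequency-analysis idea behind fast periodic visual stimulation can be illustrated with a toy signal: a response locked to a periodic stimulus train shows up as a spectral peak at exactly that presentation rate. The sampling rate, frequencies, and noise level below are assumptions, not the study's recording parameters.

```python
import numpy as np

# Toy frequency-tagging analysis: base stimulation at 6 Hz, a periodic
# "oddball" attribute change embedded at 1.2 Hz, plus noise.
fs, dur = 250.0, 20.0                 # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
f_base, f_odd = 6.0, 1.2
rng = np.random.default_rng(1)
eeg = (np.sin(2 * np.pi * f_base * t) + 0.5 * np.sin(2 * np.pi * f_odd * t)
       + 0.3 * rng.normal(size=t.size))

spec = np.abs(np.fft.rfft(eeg)) / t.size          # amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(spec[1:]) + 1]             # skip the DC bin
print(peak)
```

A 20 s epoch gives 0.05 Hz resolution, so both tagged frequencies fall on exact bins; comparing the oddball-bin amplitude between groups is the essence of such a periodicity-based analysis.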

Keywords: hierarchical visual processing, face processing, perceptual hierarchy, prediction error, predictive coding

Procedia PDF Downloads 112
3242 Implementation of Data Science in Field of Homologation

Authors: Shubham Bhonde, Nekzad Doctor, Shashwat Gawande

Abstract:

For the use and import of keys and ID transmitters, as well as body control modules with radio transmission, homologation is required in many countries. The final deliverables of product homologation are certificates. Across the world of homologation, there are approximately 200 certificates per product, most of them in local languages. It is challenging to manually investigate each certificate and extract relevant data, such as the expiry date and approval date. Accuracy is essential, as inaccurate data may lead to a missed re-homologation of certificates, resulting in non-compliance. There is therefore scope for automating the reading of certificate data in the field of homologation, and we use deep learning as the tool for this automation. We first trained a model by feeding it PDF and JPG files of certificates, together with every country's basic data, through an ETL process; the model needs to be trained only once and yields increasingly accurate results. As an outcome, the expiry date and approval date of a certificate are obtained with a single click. This will eventually help to implement automation features on a broader level in the database where certificates are stored, and will reduce human error to almost negligible levels.
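A hypothetical sketch of the extraction step the abstract describes: pulling approval and expiry dates out of certificate text (e.g., after OCR). The field labels, date formats, and sample certificate below are assumptions; real certificates vary per country and language, which is why the paper resorts to a learned model rather than fixed rules.

```python
import re
from datetime import datetime

DATE_PAT = r"(\d{1,2}[./-]\d{1,2}[./-]\d{4})"

def extract_dates(text):
    """Extract labeled approval/expiry dates from certificate text."""
    fields = {}
    for label, key in ((r"approval\s+date", "approval"),
                       (r"expiry\s+date", "expiry")):
        m = re.search(label + r"\s*[:=]?\s*" + DATE_PAT, text, re.IGNORECASE)
        if m:
            # Normalise separators, then parse day/month/year.
            fields[key] = datetime.strptime(
                re.sub(r"[.-]", "/", m.group(1)), "%d/%m/%Y").date()
    return fields

cert = "Type Approval Certificate\nApproval date: 12.03.2021\nExpiry date: 12/03/2026"
fields = extract_dates(cert)
print(fields)
```

A rule set like this breaks on unseen layouts and languages; a model trained once on many certificate files generalizes where hand-written patterns do not.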

Keywords: homologation, re-homologation, data science, deep learning, machine learning, ETL (extract transform loading)

Procedia PDF Downloads 164
3241 Design and Implementation of a Cross-Network Security Management System

Authors: Zhiyong Shan, Preethi Santhanam, Vinod Namboodiri, Rajiv Bagai

Abstract:

In recent years, emerging network worms and attacks have had distributive characteristics and can spread globally in a very short time. Security management that crosses networks, to defend jointly against network-wide attacks and improve the efficiency of security administration, is urgently needed. We propose a hierarchical distributed network security management system (HD-NSMS), which can integrate security management across multiple networks. First, we describe the system's macrostructure and microstructure; we then discuss three key problems in building an HD-NSMS: the device model, the alert mechanism, and the emergency response mechanism; lastly, we describe the implementation of HD-NSMS. The paper is valuable for implementing an NSMS in that it derives from a practical network security management system (NSMS).

Keywords: network security management, device organization, emergency response, cross-network

Procedia PDF Downloads 169
3240 F-VarNet: Fast Variational Network for MRI Reconstruction

Authors: Omer Cahana, Maya Herman, Ofer Levi

Abstract:

Magnetic resonance imaging (MRI) is a lengthy medical scan, owing to its long acquisition time. This length is mainly due to the traditional sampling theorem, which defines a lower boundary for sampling. However, it is still possible to accelerate the scan by using a different approach, such as compressed sensing (CS) or parallel imaging (PI). These two complementary methods can be combined to achieve a faster scan with high-fidelity imaging. In order to achieve that, two properties must hold: i) the signal must be sparse under a known transform domain, and ii) the sampling method must be incoherent. In addition, a nonlinear reconstruction algorithm needs to be applied to recover the signal. While rapid advances in the deep learning (DL) field have demonstrated tremendous successes in various computer vision tasks, the field of MRI reconstruction is still at an early stage. In this paper, we present an extension of the state-of-the-art model in MRI reconstruction, VarNet. We extend VarNet by using dilated convolutions at different scales, which enlarges the receptive field to capture more contextual information. Moreover, we simplified the sensitivity map estimation (SME) module, which contained many layers unnecessary for this task. These improvements yield significant decreases in computation cost as well as higher accuracy.
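The receptive-field argument behind dilated convolutions can be shown in one dimension: the same 3-tap kernel covers a wider span as the dilation rate grows, with no extra parameters. The kernel, signal, and "valid" padding below are illustrative assumptions, not the F-VarNet layers themselves.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """1-D dilated convolution with 'valid' padding; returns (output, span)."""
    k = len(kernel)
    span = (k - 1) * dilation + 1            # receptive field of one output
    out = np.array([sum(kernel[j] * x[i + j * dilation] for j in range(k))
                    for i in range(len(x) - span + 1)])
    return out, span

x = np.arange(16, dtype=float)
kernel = [1.0, 0.0, -1.0]                    # simple difference filter
for d in (1, 2, 4):
    y, span = dilated_conv1d(x, kernel, d)
    print(d, span, y[0])
```

Stacking such layers at dilations 1, 2, 4, ... grows the receptive field exponentially with depth, which is the contextual-information gain the abstract refers to.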

Keywords: MRI, deep learning, variational network, computer vision, compress sensing

Procedia PDF Downloads 165
3239 Iterative Segmentation and Application of Hausdorff Dilation Distance in Defect Detection

Authors: S. Shankar Bharathi

Abstract:

Inspection of surface defects on metallic components has always been challenging due to their specular property. Defects such as scratches, rust, and pitting are very common on metallic surfaces during the manufacturing process; if unchecked, they can hamper performance and reduce the lifetime of the component. Many conventional image processing algorithms for detecting surface defects involve segmentation techniques based on thresholding, edge detection, watershed segmentation, and textural segmentation, and later employ other suitable algorithms based on morphology, region growing, shape analysis, or neural networks for classification. In this paper the work focuses only on detecting scratches. Global and other thresholding techniques were used to extract the defects, but they proved inaccurate in extracting the defects alone. However, this paper does not focus on a comparison of different segmentation techniques; rather, it describes a novel approach to segmentation combined with the Hausdorff dilation distance. The proposed algorithm is based on the distribution of intensity levels, that is, on whether a certain gray level is concentrated or evenly distributed, and on the extraction of such concentrated pixels. Defective images showed a high concentration of some gray level, whereas in non-defective images the gray levels showed no concentration and were evenly distributed. This formed the basis for detecting the defects in the proposed algorithm. The Hausdorff dilation distance, based on mathematical morphology, was used to strengthen the segmentation of the defects.
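The gray-level concentration cue can be sketched as follows: score each gray-level bin by the spatial spread of its pixels, so a defect (one level bunched in a small region) stands out against evenly distributed levels. The bin count, spread measure, and toy scratch image are assumptions for illustration; the Hausdorff-dilation-distance refinement step is not shown.

```python
import numpy as np

def concentration_scores(img, n_bins=8):
    """Spread of pixel coordinates per gray-level bin (small = concentrated)."""
    bins = (img.astype(float) / 256 * n_bins).astype(int).clip(0, n_bins - 1)
    scores = {}
    for b in range(n_bins):
        ys, xs = np.nonzero(bins == b)
        if len(xs) < 5:
            continue
        # Coordinate spread relative to image size; a concentrated gray
        # level occupies a compact region and scores low.
        scores[b] = (xs.std() + ys.std()) / sum(img.shape)
    return scores

img = np.full((64, 64), 200, dtype=np.uint8)   # bright, uniform surface
img[30:34, 10:40] = 40                          # dark scratch-like streak
scores = concentration_scores(img)
defect_bin = min(scores, key=scores.get)        # most concentrated gray level
print(defect_bin, scores[defect_bin])
```

On real specular images the scoring would be noisier, which is where the morphological Hausdorff dilation distance strengthens the segmentation.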

Keywords: metallic surface, scratches, segmentation, Hausdorff dilation distance, machine vision

Procedia PDF Downloads 429
3238 An Integrated Framework for Seismic Risk Mitigation Decision Making

Authors: Mojtaba Sadeghi, Farshid Baniassadi, Hamed Kashani

Abstract:

One of the challenging issues faced by seismic retrofitting consultants and employers is quick decision-making on the demolition or retrofitting of a structure at the current time or in the future. For this reason, the existing models proposed by researchers have only covered one of the aspects of cost, execution method, and structural vulnerability. Given the effect of each factor on the final decision, it is crucial to devise a new comprehensive model capable of simultaneously covering all the factors. This study attempted to provide an integrated framework that can be utilized to select the most appropriate earthquake risk mitigation solution for buildings. This framework can overcome the limitations of current models by taking into account several factors such as cost, execution method, risk-taking and structural failure. In the newly proposed model, the database and essential information about retrofitting projects are developed based on the historical data on a retrofit project. In the next phase, an analysis is conducted in order to assess the vulnerability of the building under study. Then, artificial neural networks technique is employed to calculate the cost of retrofitting. While calculating the current price of the structure, an economic analysis is conducted to compare demolition versus retrofitting costs. At the next stage, the optimal method is identified. Finally, the implementation of the framework was demonstrated by collecting data concerning 155 previous projects.

Keywords: decision making, demolition, construction management, seismic retrofit

Procedia PDF Downloads 240
3237 A Picture is worth a Billion Bits: Real-Time Image Reconstruction from Dense Binary Pixels

Authors: Tal Remez, Or Litany, Alex Bronstein

Abstract:

The pursuit of smaller pixel sizes at ever increasing resolution in digital image sensors is mainly driven by the stringent price and form-factor requirements of sensors and optics in the cellular phone market. Recently, Eric Fossum proposed a novel concept of an image sensor with dense sub-diffraction limit one-bit pixels (jots), which can be considered a digital emulation of silver halide photographic film. This idea has been recently embodied as the EPFL Gigavision camera. A major bottleneck in the design of such sensors is the image reconstruction process, producing a continuous high dynamic range image from oversampled binary measurements. The extreme quantization of the Poisson statistics is incompatible with the assumptions of most standard image processing and enhancement frameworks. The recently proposed maximum-likelihood (ML) approach addresses this difficulty, but suffers from image artifacts and has impractically high computational complexity. In this work, we study a variant of a sensor with binary threshold pixels and propose a reconstruction algorithm combining an ML data fitting term with a sparse synthesis prior. We also show an efficient hardware-friendly real-time approximation of this inverse operator. Promising results are shown on synthetic data as well as on HDR data emulated using multiple exposures of a regular CMOS sensor.
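The maximum-likelihood step for binary threshold pixels has a closed form worth spelling out: with Poisson photon counts and a unit threshold, a jot fires with probability P = 1 − exp(−λ), so the ML estimate from an observed firing rate p is λ̂ = −ln(1 − p). The jot count and intensity below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def ml_intensity(binary_jots):
    """ML light-intensity estimate from oversampled one-bit (jot) readings."""
    p = binary_jots.mean()
    p = min(p, 1 - 1e-9)                 # guard the log when every jot fires
    return -np.log(1 - p)

lam_true = 0.7                           # mean photons per jot
jots = rng.poisson(lam_true, size=100_000) >= 1   # one-bit threshold pixels
lam_hat = ml_intensity(jots)
print(lam_hat)
```

This per-intensity inversion is the data-fitting term; the paper's contribution is pairing it with a sparse synthesis prior and a hardware-friendly approximation, which a pointwise estimate like this does not capture.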

Keywords: binary pixels, maximum likelihood, neural networks, sparse coding

Procedia PDF Downloads 204
3236 Sentiment Analysis of Fake Health News Using Naive Bayes Classification Models

Authors: Danielle Shackley, Yetunde Folajimi

Abstract:

As more people turn to the internet seeking health-related information, there is more risk of finding false, inaccurate, or dangerous information. Sentiment analysis is a natural language processing technique that assigns a polarity score to text, ranging over positive, neutral, and negative. In this research, we evaluate the weight of a sentiment analysis feature added to fake health news classification models. The dataset consists of existing, reliably labeled health article headlines, supplemented with health information about COVID-19 collected from social media sources. We started with data preprocessing and tested various vectorization methods, such as Count and TF-IDF vectorization. We implemented three Naive Bayes classifier models: Bernoulli, Multinomial, and Complement. To test the weight of the sentiment analysis feature on the dataset, we created benchmark Naive Bayes classification models without sentiment analysis, then reproduced those same models with the feature added, and evaluated them using precision and accuracy scores. The initial Bernoulli model performed with 90% precision and 75.2% accuracy, while the model supplemented with sentiment labels performed with 90.4% precision and accuracy constant at 75.2%. Our results show that the addition of sentiment analysis did not improve model precision by a wide margin, and there was no evidence of improvement in accuracy; the largest gain was a 1.9% improvement in the precision score with the Complement model. Future expansion of this work could include replicating the experiment and substituting a deep learning neural network model for the Naive Bayes classifiers.
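The setup can be sketched with a hand-rolled Bernoulli Naive Bayes over headline tokens, with the sentiment label appended as one extra pseudo-token feature. The tiny corpus, labels, and sentiment tags below are invented for illustration, not the paper's dataset.

```python
import math
from collections import defaultdict

def train(docs):
    """docs: iterable of (token list, label). Returns per-label token counts."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for tokens, label in docs:
        totals[label] += 1
        for t in set(tokens):
            counts[label][t] += 1
    return counts, totals

def predict(counts, totals, tokens):
    vocab = {t for c in counts.values() for t in c}
    n = sum(totals.values())
    best, best_lp = None, -math.inf
    for label in totals:
        lp = math.log(totals[label] / n)
        for t in vocab:  # Bernoulli NB scores presence AND absence
            p = (counts[label][t] + 1) / (totals[label] + 2)  # Laplace smoothing
            lp += math.log(p if t in tokens else 1 - p)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Sentiment appended as a pseudo-token (e.g. SENT_neg) alongside headline words.
docs = [("miracle cure doctors hate SENT_neg".split(), "fake"),
        ("cure cancer overnight SENT_neg".split(), "fake"),
        ("vaccine trial results published SENT_neutral".split(), "real"),
        ("study finds exercise benefits SENT_neutral".split(), "real")]
model = train(docs)
print(predict(*model, set("miracle cure SENT_neg".split())))
```

Comparing the same model with and without the `SENT_*` token mirrors the paper's benchmark-versus-supplemented design.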

Keywords: sentiment analysis, Naive Bayes model, natural language processing, topic analysis, fake health news classification model

Procedia PDF Downloads 99
3235 AI-Based Autonomous Plant Health Monitoring and Control System with Visual Health-Scoring Models

Authors: Uvais Qidwai, Amor Moursi, Mohamed Tahar, Malek Hamad, Hamad Alansi

Abstract:

This paper focuses on the development and implementation of an advanced plant health monitoring system with an AI backbone and IoT sensory network. Our approach involves addressing the critical environmental factors essential for preserving a plant’s well-being, including air temperature, soil moisture, soil temperature, soil conductivity, pH, water levels, and humidity, as well as the presence of essential nutrients like nitrogen, phosphorus, and potassium. Central to our methodology is the utilization of computer vision technology, particularly a night vision camera. The captured data is then compared against a reference database containing different health statuses. This comparative analysis is implemented using an AI deep learning model, which enables us to generate accurate assessments of plant health status. By combining the AI-based decision-making approach, our system aims to provide precise and timely insights into the overall health and well-being of plants, offering a valuable tool for effective plant care and management.

Keywords: deep learning image model, IoT sensing, cloud-based analysis, remote monitoring app, computer vision, fuzzy control

Procedia PDF Downloads 59
3234 Shoring System Selection for Deep Excavation

Authors: Faouzi Ahtchi-Ali, Marcus Vitiello

Abstract:

A study was conducted in the eastern region of the Middle East to assess the constructability of a shoring system for a 12-meter-deep excavation. Several shoring systems were considered, including secant concrete piling, contiguous concrete piling, and sheet piling. The excavation was carried out in very dense sand with the groundwater level located 3 meters below the ground surface. The study included a pilot test for each shoring system listed above. The secant concrete piling involved overlapping concrete piles to a depth of 16 meters, installed by a drilling method with full steel casing; the verticality of the piles was a concern for the overlap. The contiguous concrete piling required the installation of micro-piles to seal the gaps between the concrete piles; this method revealed that the gaps were not fully sealed, as shown by groundwater penetrating the excavation. The sheet-piling method required pre-drilling due to the high blow count of the penetrated layer of saturated sand. The study concluded that the sheet-piling method with pre-drilling was the most cost-effective and is the recommended method for the shoring system.

Keywords: excavation, shoring system, Middle East, drilling method

Procedia PDF Downloads 468
3233 Early Detection of Breast Cancer in Digital Mammograms Based on Image Processing and Artificial Intelligence

Authors: Sehreen Moorat, Mussarat Lakho

Abstract:

A method of artificial intelligence using digital mammogram data is proposed in this paper for the detection of breast cancer. Many researchers have developed techniques for the early detection of breast cancer; early diagnosis helps to save many lives. Detection of breast cancer through mammography is an effective method, as it detects the cancer before it can be felt and so increases the survival rate. In this paper, we have proposed an image processing technique for enhancing the image and detecting the graphical table data and markings. Texture features based on the Gray-Level Co-Occurrence Matrix (GLCM) and intensity-based features are extracted from the selected region. For classification, a neural-network-based supervised classifier system is used that can discriminate between benign and malignant cases. A set of 68 digital mammograms was used to train the classifier. The results obtained show that automated detection of breast cancer is beneficial for early diagnosis and increases the survival rates of breast cancer patients. The proposed system will help radiologists in the better interpretation of breast cancer.
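A minimal, dependency-free sketch of the GLCM texture features described above; the paper does not specify its exact feature set, pixel offset, or quantization levels, so those choices here are illustrative:

```python
import numpy as np

def glcm(region, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset (dx, dy)."""
    # Quantize intensities into `levels` gray levels.
    q = (region.astype(float) / region.max() * (levels - 1)).astype(int)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(region):
    """Common GLCM texture features plus simple intensity-based features."""
    p = glcm(region)
    i, j = np.indices(p.shape)
    return {
        "contrast":    float(((i - j) ** 2 * p).sum()),
        "energy":      float((p ** 2).sum()),
        "homogeneity": float((p / (1.0 + np.abs(i - j))).sum()),
        # intensity-based features from the raw region
        "mean":        float(region.mean()),
        "std":         float(region.std()),
    }

# A perfectly uniform region has zero contrast and maximal energy.
print(texture_features(np.full((4, 4), 7)))
```

The resulting feature dictionary would be the input to the supervised neural network classifier.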

Keywords: medical imaging, cancer, processing, neural network

Procedia PDF Downloads 261
3232 Metal Extraction into Ionic Liquids and Hydrophobic Deep Eutectic Mixtures

Authors: E. E. Tereshatov, M. Yu. Boltoeva, V. Mazan, M. F. Volia, C. M. Folden III

Abstract:

Room temperature ionic liquids (RTILs) are a class of liquid organic salts with melting points below 20 °C that are considered to be environmentally friendly ‘designer’ solvents. Pure hydrophobic ILs are known to extract metallic species from aqueous solutions. The closest analogues of ionic liquids are deep eutectic solvents (DESs), which are eutectic mixtures of at least two compounds with a melting point lower than that of each individual component. DESs are acknowledged to be attractive for organic synthesis and metal processing. Thus, these non-volatile and less toxic compounds are of interest for critical metal extraction. The US Department of Energy and the European Commission consider indium a key metal. Its chemical homologue, thallium, is also an important material for some applications and for environmental safety. The aim of this work is to systematically investigate In and Tl extraction from aqueous solutions into pure fluorinated ILs and hydrophobic DESs. The dependence of the Tl extraction efficiency on the structure and composition of the ionic liquid ions, the metal oxidation state, and the initial metal and aqueous acid concentrations has been studied. The extraction efficiency of the TlX_z^(3−z) anionic species (where X = Cl− and/or Br−) is greater for ionic liquids with more hydrophobic cations. Unexpectedly high distribution ratios (> 10³) of Tl(III) were determined even when applying a pure ionic liquid as the receiving phase. An improved mathematical model based on ion exchange and ion pair formation mechanisms has been developed to describe the co-extraction of two different anionic species, and the relative contributions of each mechanism have been determined. The first evidence of indium extraction into new quaternary ammonium- and menthol-based hydrophobic DESs from hydrochloric and oxalic acid solutions, with distribution ratios up to 10³, will be provided.
The data obtained allow us to interpret the mechanisms of thallium and indium extraction in IL and DES media. Understanding the chemical behavior of Tl and In in these new media is imperative for further improving the separation and purification of these elements.

Keywords: deep eutectic solvents, indium, ionic liquids, thallium

Procedia PDF Downloads 242
3231 Artificial Intelligence in the Design of High-Strength Recycled Concrete

Authors: Hadi Rouhi Belvirdi, Davoud Beheshtizadeh

Abstract:

The increasing demand for sustainable construction materials has led to growing interest in high-strength recycled concrete (HSRC). Utilizing recycled materials not only reduces waste but also minimizes the depletion of natural resources. This study explores the application of artificial intelligence (AI) techniques to model and predict the properties of HSRC. In the past two decades, production levels in various industries, and consequently the amount of waste, have increased significantly; continuing this trend will undoubtedly cause irreparable damage to the environment. For this reason, engineers have been constantly seeking practical solutions for recycling industrial waste in recent years. This research utilized the measured 90-day compressive strengths of high-strength recycled concrete. The recycled concrete was produced by replacing sand with crushed glass and cement with glass powder. A feedforward artificial neural network was then employed to model the 90-day compressive strength results. The regression and error values obtained indicate that this network is suitable for modeling the compressive strength data.
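As a sketch of the modeling step, the following fits a small feedforward network to synthetic mix-design data. The feature set, data-generating formula, and network size are illustrative assumptions, not the paper's actual experiments:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical mix-design features: [crushed-glass fraction, glass-powder
# fraction, water/binder ratio]; target: 90-day compressive strength (MPa).
# This is a synthetic stand-in for the paper's experimental data.
X = rng.uniform([0.0, 0.0, 0.3], [0.5, 0.3, 0.6], size=(200, 3))
y = 80 - 30 * X[:, 2] + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 1, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16, 8), solver="lbfgs",
                     max_iter=5000, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out mixes: {model.score(X_te, y_te):.2f}")
```

The held-out R² plays the role of the regression value the abstract reports as evidence that the network fits the strength data.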

Keywords: high-strength recycled concrete, feedforward artificial neural network, regression, construction materials

Procedia PDF Downloads 17
3230 A Contemporary Advertising Strategy on Social Networking Sites

Authors: M. S. Aparna, Pushparaj Shetty D.

Abstract:

Nowadays, social networking sites have become so popular that producers and sellers see them as one of the best options for targeting the right audience to market their products. Several tools are available to monitor and analyze social networks. Our task is to identify the right community web pages, analyze the behavior of their members using these tools, and formulate an appropriate strategy to market products or services to achieve the set goals. Advertising becomes more effective when information about a product or service comes from a known source; the strategy therefore exploits the strong buying influence that referral marketing has on the audience. Our methodology proceeds with a critical budget analysis and promotes viral influence propagation. In this context, we cover the vital elements of budget evaluation: the optimal number of seed nodes (the primary influential users activated at the onset), an estimate of the coverage spread of nodes, and the maximum distance over which influence propagates from an initial seed to an end node. Our Buyer Prediction mathematical model arises from the need to perform complex analysis when the probability density estimates of the relevant factors are unknown or difficult to calculate. Order statistics and the Buyer Prediction mapping function guarantee the selection of optimal influential users at each level. We exercise an efficient tactic of mining community pages and user behavior to identify product enthusiasts on social networks. Our approach is promising and should be an elementary choice when there is little or no prior knowledge of the distribution of potential buyers on social networks. In this strategy, product news propagates to influential users in the surrounding networks. By applying the same technique, a user can find friends who are capable of giving better advice or referrals if a product interests him.
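The seed-selection step can be sketched with a standard greedy heuristic over independent-cascade simulations. The toy graph, activation probability, and simulation counts are illustrative; the paper's Buyer Prediction mapping function and order-statistics machinery are not reproduced here:

```python
import random

def simulate_cascade(graph, seeds, p=0.2, rng=None):
    """One independent-cascade run: each newly active node tries once to
    activate each inactive neighbour with probability p."""
    rng = rng or random.Random(0)
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for node in frontier:
            for nb in graph.get(node, []):
                if nb not in active and rng.random() < p:
                    active.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return active

def greedy_seeds(graph, budget, runs=200):
    """Pick seed nodes one at a time, each maximising the average spread."""
    seeds = []
    for _ in range(budget):
        best = max(
            (n for n in graph if n not in seeds),
            key=lambda n: sum(
                len(simulate_cascade(graph, seeds + [n], rng=random.Random(r)))
                for r in range(runs)
            ),
        )
        seeds.append(best)
    return seeds

# Toy community graph: node 0 is a hub, nodes 5-6 form an isolated pair.
g = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0], 5: [6], 6: [5]}
print(greedy_seeds(g, budget=1))  # the hub is the best single seed
```

With a larger budget, the greedy loop keeps adding the node with the best marginal spread, which is how budget-constrained seed selection is typically approximated.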

Keywords: viral marketing, social network analysis, community web pages, buyer prediction, influence propagation, budget constraints

Procedia PDF Downloads 263
3229 Artificial Neural Network and Satellite Derived Chlorophyll Indices for Estimation of Wheat Chlorophyll Content under Rainfed Condition

Authors: Muhammad Naveed Tahir, Wang Yingkuan, Huang Wenjiang, Raheel Osman

Abstract:

Numerous models are used in prediction and decision-making, but most of them are linear, while the natural environment is not; linear models reach their limitations with non-linearity in data, so accurate estimation is difficult. Artificial Neural Networks (ANNs) have found extensive acceptance for modeling the complex, non-linear real world, as they have more general and flexible functional forms than traditional statistical methods can effectively deal with. The link between information technology and agriculture will become firmer in the near future. Monitoring crop biophysical properties non-destructively can provide a rapid and accurate understanding of a crop's response to various environmental influences. Crop chlorophyll content is an important indicator of crop health and therefore of crop yield. In recent years, remote sensing has been accepted as a robust tool for site-specific management by detecting crop parameters at both local and large scales. The present research combined an ANN model with satellite-derived chlorophyll indices from LANDSAT 8 imagery for real-time estimation of wheat chlorophyll. Cloud-free LANDSAT 8 scenes were acquired (February-March 2016-17) at the same time as a ground-truthing campaign in which chlorophyll was estimated with a SPAD-502 meter. Different vegetation indices were derived from the LANDSAT 8 imagery using ERDAS Imagine (v. 2014) software for chlorophyll determination: the Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (GNDVI), Chlorophyll Absorption Ratio Index (CARI), Modified Chlorophyll Absorption Ratio Index (MCARI), and Transformed Chlorophyll Absorption Ratio Index (TCARI). For ANN modeling, MATLAB and SPSS (ANN) tools were used; the Multilayer Perceptron (MLP) in MATLAB provided very satisfactory results.
For training the MLP, 61.7% of the data were used; 28.3% were used for validation, and the remaining 10% were used to evaluate and validate the ANN model results. For error evaluation, the sum of squares error and the relative error were used. The ANN model summary showed a sum of squares error of 10.786 and an average overall relative error of 0.099. MCARI and NDVI were the most sensitive indices for assessing wheat chlorophyll content, with the highest coefficients of determination (R² = 0.93 and 0.90, respectively). The results suggest that retrieval of crop chlorophyll content from high-spatial-resolution satellite imagery using an ANN model provides an accurate, reliable assessment of crop health status at larger scales, which can help in managing crop nutrition requirements in real time.
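The first two indices can be computed directly from Landsat 8 band reflectances (band 5 = NIR, band 4 = red, band 3 = green); the pixel values below are illustrative:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    """Green NDVI: substitutes the green band for the red band."""
    nir, green = nir.astype(float), green.astype(float)
    return (nir - green) / (nir + green)

# Toy reflectances for two pixels (Landsat 8: band 5 = NIR, 4 = red, 3 = green).
nir   = np.array([0.45, 0.30])
red   = np.array([0.05, 0.20])
green = np.array([0.10, 0.18])
print(ndvi(nir, red))   # vigorous vegetation gives NDVI close to 1
```

Per-pixel index values like these, paired with the SPAD-502 ground measurements, form the input/target pairs for the MLP.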

Keywords: ANN, chlorophyll content, chlorophyll indices, satellite images, wheat

Procedia PDF Downloads 148
3228 A Hybrid MAC Protocol for Delay Constrained Mobile Wireless Sensor Networks

Authors: Hanefi Cinar, Musa Cibuk, Ismail Erturk, Fikri Aggun, Munip Geylani

Abstract:

Mobile Wireless Sensor Networks (MWSNs) carry heterogeneous data traffic with different urgency and quality of service (QoS) requirements. There are many studies in the literature on energy efficiency, bandwidth, and communication methods, but delay, throughput, and utility parameters are not well considered, and the increasing demand for real-time data transfer makes these parameters more important. In this paper, we design a new delay-constrained MAC protocol that targets improving the delay, utility, and throughput performance of the network and addresses collision and interference problems. The protocol improves QoS by combining TDMA, FDM, and OFDMA in a hybrid, multi-channel communication scheme.
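The general idea of a hybrid TDMA+FDM assignment can be sketched as giving each node a unique (time slot, frequency channel) pair, which is collision-free by construction. This is an illustration of the principle only, not the paper's protocol:

```python
def hybrid_schedule(nodes, n_slots, n_channels):
    """Toy TDMA+FDM scheduler: assigns each node a unique (slot, channel)
    pair, collision-free as long as len(nodes) <= n_slots * n_channels."""
    if len(nodes) > n_slots * n_channels:
        raise ValueError("not enough slot/channel pairs for all nodes")
    schedule = {}
    for i, node in enumerate(nodes):
        schedule[node] = (i % n_slots, i // n_slots)  # (time slot, channel)
    return schedule

# Five sensor nodes, a 4-slot TDMA frame, 2 frequency channels.
print(hybrid_schedule(["s1", "s2", "s3", "s4", "s5"], n_slots=4, n_channels=2))
```

A real delay-constrained protocol would additionally prioritize urgent traffic when assigning slots, which this sketch omits.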

Keywords: MWSN, delay, hybrid MAC, TDMA, FDM, OFDMA

Procedia PDF Downloads 481
3227 Computational Model for Predicting Effective siRNA Sequences Using Whole Stacking Energy (ΔG) for Gene Silencing

Authors: Reena Murali, David Peter S.

Abstract:

The small interfering RNA (siRNA) alters the regulatory role of mRNA during gene expression by translational inhibition. Recent studies show that up-regulation of mRNA causes serious diseases such as cancer, so designing effective siRNAs with good knockdown effects plays an important role in gene silencing. Various siRNA design tools have been developed earlier. In this work, we analyze the existing well-scoring second-generation siRNA prediction tools and optimize the efficiency of siRNA prediction by designing a computational model using an Artificial Neural Network and the whole stacking energy (ΔG), which may help in gene silencing and drug design in cancer therapy. Our model is trained and tested against a large data set of siRNA sequences. Our results are validated by finding the correlation coefficient of experimental versus observed inhibition efficacy of siRNA. We achieved a correlation coefficient of 0.727 in our previous computational model and could improve the correlation coefficient up to 0.753 when the threshold of the whole stacking energy is greater than or equal to −32.5 kcal/mol.
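The validation step, correlating predicted against experimentally observed inhibition efficacy, can be sketched as a Pearson correlation; the efficacy values below are hypothetical:

```python
import numpy as np

def pearson_r(predicted, observed):
    """Pearson correlation coefficient between predicted and
    experimentally observed siRNA inhibition efficacies."""
    p = np.asarray(predicted, float)
    o = np.asarray(observed, float)
    p, o = p - p.mean(), o - o.mean()
    return float((p * o).sum() / np.sqrt((p ** 2).sum() * (o ** 2).sum()))

# Hypothetical inhibition efficacies (%) for five siRNA sequences.
predicted = [82, 55, 91, 40, 70]
observed  = [78, 60, 88, 45, 65]
print(round(pearson_r(predicted, observed), 3))
```

A value near the reported 0.753 over the full test set would indicate comparable predictive quality.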

Keywords: artificial neural network, double stranded RNA, RNA interference, short interfering RNA

Procedia PDF Downloads 526
3226 Depth of Penetration and Nature of Interferential Current in Cutaneous, Subcutaneous and Muscle Tissues

Authors: A. Beatti, L. Chipchase, A. Rayner, T. Souvlis

Abstract:

The aims of this study were to investigate the depth of interferential current (IFC) penetration through soft tissue and the area over which IFC spreads during clinical application. Premodulated IFC and ‘true’ IFC at beat frequencies of 4, 40, and 90 Hz were applied via four electrodes to the distal medial thigh of 15 healthy subjects. The current was measured via three Teflon-coated fine needle electrodes inserted into the superficial layer of skin, into the subcutaneous tissue (≈1 cm deep), and into muscle tissue (≈2 cm deep). The needle electrodes were placed in the middle of the four IFC electrodes, between two channels, and outside the four electrodes. Readings were taken at each tissue depth from each electrode during each treatment frequency, then digitized and stored for analysis. All voltages were greater than baseline at all depths and locations (P < 0.01), and voltages decreased with depth (P = 0.039). The lowest voltages for all currents were recorded in the middle of the four electrodes, with the highest voltage recorded outside the four electrodes at all depths (P < 0.001). For each frequency of ‘true’ IFC, the voltage was higher in the superficial layer outside the electrodes (P ≤ 0.01). Premodulated IFC had higher voltages along the line of one circuit (P ≤ 0.01). Clinically, ‘true’ IFC appears to pass through skin layers to depth and is more efficient than premodulated IFC when targeting muscle tissue.

Keywords: electrotherapy, interferential current, interferential therapy, medium frequency current

Procedia PDF Downloads 348
3225 Task Validity in Neuroimaging Studies: Perspectives from Applied Linguistics

Authors: L. Freeborn

Abstract:

Recent years have seen an increasing number of neuroimaging studies related to language learning, as imaging techniques such as fMRI and EEG have become more widely accessible to researchers. By using a variety of structural and functional neuroimaging techniques, these studies have already made considerable progress in our understanding of the neural networks and processing related to first and second language acquisition. However, the methodological designs employed in neuroimaging studies of language learning have been questioned by applied linguists working within the field of second language acquisition (SLA). One of the major criticisms is that tasks designed to measure language learning gains rarely have a communicative function and seldom assess learners’ ability to use the language in authentic situations, which brings the validity of many neuroimaging tasks into question. The fundamental reason why people learn a language is to communicate, and it is well known that both first and second language proficiency are developed through meaningful social interaction. With this in mind, the SLA field is in agreement that second language acquisition and proficiency should be measured through learners’ ability to communicate in authentic real-life situations. Whilst authenticity is not always possible to achieve in a classroom environment, the importance of task authenticity should be reflected in the design of language assessments, teaching materials, and curricula. Tasks that bear little relation to how language is used in real-life situations can be considered to lack construct validity. This paper first describes the typical tasks used in neuroimaging studies to measure language gains and proficiency, then analyses to what extent these tasks can validly assess these constructs.

Keywords: neuroimaging studies, research design, second language acquisition, task validity

Procedia PDF Downloads 141
3224 Random Access in IoT Using Naïve Bayes Classification

Authors: Alhusein Almahjoub, Dongyu Qiu

Abstract:

This paper deals with the random access procedure in next-generation networks and presents a solution to reduce the total service time (TST), one of the most important performance metrics in current and future Internet of Things (IoT) networks. The proposed solution focuses on calculating the optimal transmission probability, which maximizes the success probability and reduces TST. It uses the number of idle preambles in every time slot to estimate the number of backlogged IoT devices using Naïve Bayes estimation, a type of supervised learning in the machine learning domain. Estimating the backlogged devices is necessary because the optimal transmission probability depends on it and the eNodeB has no information about it. Simulations carried out in MATLAB verify that the proposed solution gives excellent performance.
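One simple way to estimate the backlog from idle preambles is a moment estimate, shown here in place of the paper's Naïve Bayes estimator (which would require trained class priors); the preamble counts are illustrative, with 54 contention preambles as in typical LTE configurations:

```python
import math

def estimate_backlog(idle_preambles, m_preambles):
    """Moment estimate of backlogged devices from the idle-preamble count:
    with n devices each choosing among M preambles uniformly at random,
    E[idle preambles] = M * (1 - 1/M)^n; invert for n."""
    frac = idle_preambles / m_preambles
    if frac <= 0:
        return float("inf")  # every preamble used: heavy backlog
    return math.log(frac) / math.log(1 - 1 / m_preambles)

def optimal_tx_probability(idle_preambles, m_preambles):
    """Access probability maximising success: p* = min(1, M / n_hat)."""
    n_hat = estimate_backlog(idle_preambles, m_preambles)
    return 1.0 if n_hat <= m_preambles else m_preambles / n_hat

# 54 preambles, 20 observed idle -> roughly 53 backlogged devices.
print(round(estimate_backlog(20, 54)))
```

The Naïve Bayes estimator in the paper plays the same role as `estimate_backlog` here: mapping the observed idle-preamble count to a backlog estimate that then sets the transmission probability.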

Keywords: random access, LTE/LTE-A, 5G, machine learning, Naïve Bayes estimation

Procedia PDF Downloads 146
3223 An Ecological Grandeur: Environmental Ethics in Buddhist Perspective

Authors: Merina Islam

Abstract:

There are many environmental problems, and various countermeasures have been taken against them. Philosophy is an important contributor to environmental studies, as it takes a deep interest in the meaning analysis of the concept of environment and other related concepts. The Buddhist frame, which is virtue-ethical, remains a better alternative to the traditional environmental outlook. Granting the unique role of man in moral deliberations, the Buddhist approach nevertheless maintains a holistic concept of ecological harmony. Buddhist environmental ethics is more concerned with the complete moral community, the total ecosystem, than with any particular species within the community. The moral reorientation proposed here resembles the concept of ‘deep ecology’. Given the present-day prominence of virtue ethics, we need to explore the Buddhist virtue theory further, so that a better framework for treating the natural world can be ensured. The environment has turned out to be one of the most widely discussed issues in recent times. Buddhist concepts such as Pratityasamutpadavada, Samvrit Satya, Paramartha Satya, Shunyata, Sanghatvada, Bodhisattva, Santanvada, and others deal with interdependence in terms of both internal and external ecology. Internal ecology aims at mental well-being, whereas external ecology deals with physical well-being. The fundamental Buddhist concepts for dealing with environmental problems hold that the environment has the same value as humans, as derived from the two Buddhist doctrines of the Non-duality of Life and its Environment and Origination in Dependence, and that environmental problems, being evils for people and nature, are inevitably overcome through the practice of the way of the Bodhisattva. Buddhism establishes that there is a relationship among all the constituents of the world; nothing in the world is independent of anything else, and everything is dependent on others.
The realization that everything in the universe is mutually interdependent also shows that man cannot keep himself unaffected by ecology. This paper focuses on how the Buddhist identification of nature with the Dhamma can contribute toward transforming our understanding, attitudes, and actions regarding the care of the earth. Environmental ethics in Buddhism presents a logical and thorough examination of the metaphysical and ethical dimensions of early Buddhist literature. From the Buddhist viewpoint, humans are not in a category that is distinct and separate from other sentient beings, nor are they intrinsically superior. All sentient beings are considered to have the Buddha-nature, that is, the potential to become fully enlightened. Buddhists do not believe in treating non-human sentient beings as objects for human consumption. The significance of the Buddhist theory of interdependence lies in showing that one’s happiness or suffering originates from one’s realization or non-realization, respectively, of the dependent nature of everything. It is obvious, even without emphasis, that in the context of today's deep ecological crisis there is a need to infuse the consciousness of interdependence.

Keywords: Buddhism, deep ecology, environmental problems, Pratityasamutpadavada

Procedia PDF Downloads 316
3222 Wavelength Conversion of Dispersion Managed Solitons at 100 Gbps through Semiconductor Optical Amplifier

Authors: Kadam Bhambri, Neena Gupta

Abstract:

All-optical wavelength conversion is essential in present-day optical networks for transparent interoperability, contention resolution, and wavelength routing. The incorporation of all-optical wavelength converters leads to better utilization of network resources and hence improves the efficiency of optical networks. Wavelength converters that can operate on dispersion-managed (DM) solitons are attractive due to their superior transmission capabilities. In this paper, wavelength conversion of dispersion-managed soliton signals at 100 Gbps is demonstrated using a semiconductor optical amplifier and an optical filter. Wavelength conversion was achieved from a 1550 nm input signal to a 1555 nm output signal. The output signal was evaluated in terms of BER, Q factor, and system margin.
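Under the usual Gaussian-noise assumption, the measured Q factor maps directly to BER, which is a quick way to relate the two reported metrics:

```python
import math

def q_to_ber(q):
    """BER corresponding to a linear Q factor, assuming Gaussian noise:
    BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2))

def q_db(q):
    """Q factor expressed in dB: 20 * log10(Q)."""
    return 20 * math.log10(q)

# Q = 6 corresponds to the commonly quoted BER of about 1e-9.
print(f"Q = 6  ->  BER = {q_to_ber(6):.2e} ({q_db(6):.1f} dB)")
```

The system margin is then typically quoted as the dB difference between the measured Q and the Q required for the target BER.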

Keywords: all-optical wavelength conversion, dispersion-managed solitons, semiconductor optical amplifier, cross-gain modulation

Procedia PDF Downloads 455
3221 Analyzing and Predicting the CL-20 Detonation Reaction Mechanism Based on Artificial Intelligence Algorithm

Authors: Kaining Zhang, Lang Chen, Danyang Liu, Jianying Lu, Kun Yang, Junying Wu

Abstract:

In order to solve the problem of the large amount of simulation required and the limited simulation scale in first-principles molecular dynamics simulation of energetic material detonation reactions, we established an artificial intelligence model for analyzing and predicting the detonation reaction mechanism of CL-20, based on first-principles molecular dynamics simulations using the multiscale shock technique (MSST). We employed principal component analysis to identify the dominant charge features governing molecular reactions, adopted the K-means clustering algorithm to cluster the reaction paths and screen out the key reactions, and introduced a neural network algorithm to construct the mapping between the charge characteristics of the molecular structure and the key reaction characteristics, so as to establish a method for predicting detonation reactions from the charge characteristics of CL-20 and enable rapid analysis of the reaction mechanisms of energetic materials.
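The PCA-then-clustering part of the pipeline can be sketched with scikit-learn on synthetic charge-feature vectors; the descriptors and group labels below are stand-ins for the paper's MSST-derived features, which are not public:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic stand-in for per-frame charge features extracted from MSST
# trajectories; two hypothetical reaction-path families.
group_a = rng.normal(0.0, 0.05, size=(50, 10))   # e.g. one bond-cleavage path
group_b = rng.normal(0.5, 0.05, size=(50, 10))   # e.g. a ring-opening path
features = np.vstack([group_a, group_b])

# Step 1: PCA keeps the charge components that dominate the variance.
reduced = PCA(n_components=2).fit_transform(features)

# Step 2: K-means groups the frames into candidate reaction paths.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reduced)

# Frames from the same synthetic path should share one cluster label.
print(len(set(labels[:50])), len(set(labels[50:])))
```

In the full method, the cluster assignments identify the key reactions whose characteristics the neural network then learns to predict from the charge features.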

Keywords: energetic material detonation reaction, first-principle molecular dynamics simulation of multiscale shock technique, neural network, CL-20

Procedia PDF Downloads 116