Search results for: deep belief net
2213 Emotion Detection in Twitter Messages Using Combination of Long Short-Term Memory and Convolutional Deep Neural Networks
Authors: Bahareh Golchin, Nooshin Riahi
Abstract:
One of the most significant issues to receive attention in recent years is recognizing the sentiments and emotions in social media texts. Sentiment and emotion analysis aims to recognize conceptual information such as the opinions, feelings, attitudes and emotions of people towards products, services, organizations, people, topics, events and features in written text. This indicates the size of the problem space. In the real world, businesses and organizations are always looking for tools to gather people's ideas, emotions, and opinions about their products, services, or related events. This article uses the Twitter social network, one of the most popular social networks with about 420 million active users, to extract data. Using this social network, users can share their information and opinions about personal issues, policies, products, events, etc. It is well suited to classifying emotional states because of the availability of its data. In this study, supervised learning and deep neural network algorithms are used to classify the emotional states of Twitter users. The use of deep learning methods to increase the learning capacity of the model is an advantage due to the large amount of available data. Tweets collected on various topics are classified into four classes using a combination of two Bidirectional Long Short-Term Memory networks and a Convolutional network. The results obtained from this study, with an average accuracy of 93%, show that the proposed framework performs well and improves accuracy compared to previous work.
Keywords: emotion classification, sentiment analysis, social networks, deep neural networks
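For readers who want a concrete picture of the architecture described above, the following is a minimal sketch (in Keras) of two stacked bidirectional LSTM layers followed by a convolutional block for four-class tweet classification; the vocabulary size, sequence length, and layer widths are illustrative assumptions rather than the authors' configuration.

```python
# Minimal sketch (not the authors' exact architecture): two stacked
# bidirectional LSTM layers followed by a 1D convolutional block,
# classifying tweets into four emotion classes.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # assumed vocabulary size
MAX_LEN = 50         # assumed maximum tweet length (tokens)
NUM_CLASSES = 4      # four emotion classes, as in the abstract

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Conv1D(128, kernel_size=3, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, validation_split=0.1, epochs=10, batch_size=64)
```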
2212 Analysis of Surface Hardness, Surface Roughness and near Surface Microstructure of AISI 4140 Steel Worked with Turn-Assisted Deep Cold Rolling Process
Authors: P. R. Prabhu, S. M. Kulkarni, S. S. Sharma, K. Jagannath, Achutha Kini U.
Abstract:
In the present study, response surface methodology has been used to optimize the turn-assisted deep cold rolling process of AISI 4140 steel. A regression model is developed to predict surface hardness and surface roughness using response surface methodology and central composite design. In the development of the predictive model, deep cold rolling force, ball diameter, initial roughness of the workpiece, and number of tool passes are considered as model variables. The rolling force and the ball diameter are the significant factors for surface hardness, while the ball diameter and the number of tool passes are found to be significant for surface roughness. The predicted surface hardness and surface roughness values and the subsequent verification experiments under the optimal operating conditions confirmed the validity of the predicted model. The absolute average error between the experimental and predicted values at the optimal combination of parameter settings is calculated as 0.16% for surface hardness and 1.58% for surface roughness. Using the optimal processing parameters, the hardness is improved from 225 to 306 HV, an increase in the near surface hardness of about 36%, and the surface roughness is improved from 4.84 µm to 0.252 µm, a decrease of about 95%. The depth of compression is found to be more than 300 µm from the microstructure analysis, and this correlates with the results obtained from the microhardness measurements. A Taylor Hobson Talysurf tester, a micro Vickers hardness tester, optical microscopy and an X-ray diffractometer are used to characterize the modified surface layer.
Keywords: hardness, response surface methodology, microstructure, central composite design, deep cold rolling, surface roughness
2211 Robust Barcode Detection with Synthetic-to-Real Data Augmentation
Authors: Xiaoyan Dai, Hsieh Yisan
Abstract:
Barcode processing of captured images is a huge challenge, as different shooting conditions can result in different barcode appearances. This paper proposes a deep learning-based barcode detection method using synthetic-to-real data augmentation. We first augment barcodes themselves; we then augment images containing the barcodes to generate a large variety of data that is close to the actual shooting environments. Comparisons with previous works and evaluations with our original data show that this approach achieves state-of-the-art performance in various real images. In addition, the system uses hybrid resolution for barcode “scan” and is applicable to real-time applications.
Keywords: barcode detection, data augmentation, deep learning, image-based processing
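As an illustration of the augmentation idea, the sketch below applies a few simple image-level transforms (brightness jitter, sensor noise, occlusion) that mimic varied shooting conditions; the specific transforms and parameter ranges are assumptions, not the paper's actual synthetic-to-real pipeline.

```python
# Illustrative sketch of simple image-level augmentations that mimic varied
# shooting conditions (brightness, sensor noise, partial occlusion).
# The transforms and parameter ranges are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray) -> np.ndarray:
    """image: HxW (or HxWxC) uint8 array containing a barcode."""
    img = image.astype(np.float32)
    # random brightness/contrast jitter
    img = img * rng.uniform(0.6, 1.4) + rng.uniform(-30, 30)
    # additive Gaussian noise (sensor noise)
    img = img + rng.normal(0.0, 8.0, size=img.shape)
    # random rectangular occlusion (e.g., glare or a finger over the code)
    h, w = img.shape[:2]
    y, x = rng.integers(0, h // 2), rng.integers(0, w // 2)
    img[y:y + h // 8, x:x + w // 8] = 255.0
    return np.clip(img, 0, 255).astype(np.uint8)

# Example: build several augmented variants of one synthetic barcode image
base = rng.integers(0, 255, size=(256, 256), dtype=np.uint8)
augmented_batch = np.stack([augment(base) for _ in range(8)])
print(augmented_batch.shape)  # (8, 256, 256)
```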
2210 Brain Tumor Detection and Classification Using Pre-Trained Deep Learning Models
Authors: Aditya Karade, Sharada Falane, Dhananjay Deshmukh, Vijaykumar Mantri
Abstract:
Brain tumours pose a significant challenge in healthcare due to their complex nature and impact on patient outcomes. The application of deep learning (DL) algorithms in medical imaging has shown promise in accurate and efficient brain tumour detection. This paper explores the performance of various pre-trained DL models (ResNet50, Xception, InceptionV3, EfficientNetB0, DenseNet121, NASNetMobile, VGG19, VGG16, and MobileNet) on a brain tumour dataset sourced from Figshare. The dataset consists of MRI scans categorizing different types of brain tumours, including meningioma, pituitary, glioma, and no tumour. The study involves a comprehensive evaluation of these models’ accuracy and effectiveness in classifying brain tumour images. Data preprocessing, augmentation, and fine-tuning techniques are employed to optimize model performance. Among the evaluated deep learning models for brain tumour detection, ResNet50 emerges as the top performer with an accuracy of 98.86%. Following closely is Xception, exhibiting a strong accuracy of 97.33%. These models showcase robust capabilities in accurately classifying brain tumour images. On the other end of the spectrum, VGG16 trails with the lowest accuracy at 89.02%.
Keywords: brain tumour, MRI image, detecting and classifying tumour, pre-trained models, transfer learning, image segmentation, data augmentation
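A minimal transfer-learning sketch along the lines described above, using ResNet50 as the example backbone; the input size, classifier head, and training settings are assumptions for illustration.

```python
# Sketch of a transfer-learning setup with a pre-trained ResNet50 backbone.
# Input size, dense-head width, and training settings are assumptions,
# not the paper's exact configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # meningioma, pituitary, glioma, no tumour

base = tf.keras.applications.ResNet50(weights="imagenet",
                                      include_top=False,
                                      input_shape=(224, 224, 3))
base.trainable = False  # freeze pre-trained weights; fine-tune later if needed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```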
2209 Bridging the Internalist-Externalist Divide: A Catholic-Reformed Epistemological Synthesis of the Justification of Christian Beliefs
Authors: Linto Francis Kallukulangara
Abstract:
Although the Catholic and Reformed traditions share a common baptismal heritage, they differ considerably in their epistemic stance on whether a believer can legitimately subscribe to a proposition of Christian Revelation without any evidence. The Catholic tradition, which is essentially rooted in internalist epistemology, posits that a theistic belief must be substantiated by a rational ground that is cognitively accessible to the believer. In contrast, Reformed thinkers have historically maintained a non-evidentialist stance, which has received strong criticism, including allegations of irrationality. However, recent developments in analytic philosophy, particularly the rise of externalist epistemology, have revitalized the non-evidentialist position within the Reformed tradition. The intellectual allure of this movement has led many contemporary thinkers to argue that the Catholic internalist/evidentialist position has not only been significantly challenged but has also been largely silenced by this externalism-based Reformed epistemological stance. Consequently, they argue that the non-cognitive Reformed current has established itself as the dominant, or perhaps the only, epistemological position in the philosophy of religion. This paper counters the prevailing narrative, arguing that despite the ostensible challenge posed by Reformed non-evidentialism, a synthesis is possible. By analyzing various Reformed epistemological movements within the contemporary analytic tradition, we demonstrate that externalist-based Reformed epistemology does not fundamentally undermine Catholic evidentialism. Instead, it offers a new and more promising framework for a Christian epistemology that synthesizes elements from both traditions, offering a more comprehensive and nuanced understanding of the justification of religious belief, incorporating both internalist and externalist perspectives.
Keywords: reformed and catholic epistemology, evidentialism, non-evidentialism, internalism, externalism
2208 Deployment of Attack Helicopters in Conventional Warfare: The Gulf War
Authors: Mehmet Karabekir
Abstract:
Attack helicopters (AHs) are usually deployed in conventional warfare to destroy the armored and mechanized forces of the enemy. In addition, AHs are able to perform various tasks in deep and close operations: intelligence, surveillance, reconnaissance, air assault operations, and search and rescue operations. Apache helicopters were properly employed in the Gulf Wars and contributed to the success of the campaign by destroying a large number of armored and mechanized vehicles of the Iraqi Army. The purpose of this article is to discuss the deployment of AHs in conventional warfare in the light of the Gulf Wars. First, the employment of AHs in deep and close operations will be addressed with regard to doctrine. Second, the doctrinal and tactical usage of the US armed forces' AH-64 in the 1st and 2nd Gulf Wars will be discussed.
Keywords: attack helicopter, conventional warfare, gulf wars
2207 Next Generation Radiation Risk Assessment and Prediction Tools Generation Applying AI-Machine (Deep) Learning Algorithms
Authors: Selim M. Khan
Abstract:
Indoor air quality is strongly influenced by the presence of radioactive radon (222Rn) gas. Indeed, exposure to high 222Rn concentrations is unequivocally linked to DNA damage and lung cancer and is a worsening issue in North American and European built environments, having increased over time within newer housing stocks as a function of as yet unclear variables. Indoor air radon concentration can be influenced by a wide range of environmental, structural, and behavioral factors. As some of these factors are quantitative while others are qualitative, no single statistical model can determine indoor radon levels precisely while simultaneously considering all these variables across a complex and highly diverse dataset. The ability of AI-machine (deep) learning to simultaneously analyze multiple quantitative and qualitative features makes it suitable to predict radon with a high degree of precision. Using Canadian and Swedish long-term indoor air radon exposure data, we are using artificial deep neural network models with random weights and polynomial statistical models in MATLAB to assess and predict radon health risk to humans as a function of geospatial, human behavioral, and built environmental metrics. Our initial artificial neural network with random weights, run with sigmoid activation, tested different combinations of variables and showed the highest prediction accuracy (>96%) within a reasonable number of iterations. Here, we present details of these emerging methods and discuss their strengths and weaknesses compared to the traditional artificial neural network and statistical methods commonly used to predict indoor air quality in different countries. We propose an artificial deep neural network with random weights as a highly effective method for assessing and predicting indoor radon.
Keywords: radon, radiation protection, lung cancer, AI-machine (deep) learning, risk assessment, risk prediction, Europe, North America
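The following sketch shows one way to combine quantitative and one-hot-encoded qualitative features in a feed-forward network with sigmoid (logistic) activation; it uses a standard MLP as a stand-in rather than the authors' random-weights network, and all feature names and data are hypothetical.

```python
# Minimal sketch of a feed-forward regressor with logistic (sigmoid) activation
# for predicting indoor radon concentration from mixed feature types.
# Feature names, data, and network size are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPRegressor

# hypothetical feature table: geospatial, behavioral and built-environment metrics
X = pd.DataFrame({
    "latitude": np.random.uniform(45, 60, 200),
    "floor_area_m2": np.random.uniform(50, 300, 200),
    "ventilation_hours": np.random.uniform(0, 12, 200),
    "foundation_type": np.random.choice(["slab", "basement", "crawlspace"], 200),
})
y = np.random.uniform(10, 400, 200)  # placeholder radon concentrations (Bq/m3)

pre = ColumnTransformer([
    ("num", StandardScaler(), ["latitude", "floor_area_m2", "ventilation_hours"]),
    ("cat", OneHotEncoder(), ["foundation_type"]),
])
model = make_pipeline(pre, MLPRegressor(hidden_layer_sizes=(64, 64),
                                        activation="logistic",  # sigmoid units
                                        max_iter=2000, random_state=0))
model.fit(X, y)
print(model.score(X, y))  # R^2 on the training data (illustration only)
```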
2206 Deepfake Detection for Compressed Media
Authors: Sushil Kumar Gupta, Atharva Joshi, Ayush Sonawale, Sachin Naik, Rajshree Khande
Abstract:
The use of artificially created videos and audio generated by deep learning is a major problem in the current media landscape, as it fuels misinformation and distrust. The objective of this work is to generate a reliable deepfake detection model using deep learning that will help detect forged videos accurately. In this work, CelebDF v1, one of the largest deepfake benchmark datasets in the literature, is adopted to train and test the proposed models. The data include authentic and synthetic videos of high quality, therefore allowing an assessment of the model’s performance against realistic distortions.
Keywords: deepfake detection, CelebDF v1, convolutional neural network (CNN), xception model, data augmentation, media manipulation
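A hedged sketch of a frame-level real-versus-fake classifier built on the Xception backbone named in the keywords; frame extraction, input size, and training settings are assumptions rather than the exact setup used.

```python
# Sketch of a frame-level real-vs-fake classifier built on an Xception
# backbone. Input size and training settings are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.Xception(weights="imagenet",
                                      include_top=False,
                                      input_shape=(299, 299, 3))
base.trainable = False  # start from frozen ImageNet weights

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.4),
    layers.Dense(1, activation="sigmoid"),  # 1 = fake, 0 = real
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
# Frames would be sampled from CelebDF v1 videos, augmented, and fed as
# (299, 299, 3) crops of the detected face region.
# model.fit(train_frames, train_labels, validation_split=0.1, epochs=15)
```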
2205 Optimizing Bridge Deck Construction: A Deep Neural Network Approach for Limiting Exterior Girder Rotation
Authors: Li Hui, Riyadh Hindi
Abstract:
In the United States, bridge construction often employs overhang brackets to support the deck overhang, the weight of fresh concrete, and loads from construction equipment. This approach, however, can lead to significant torsional moments on the exterior girders, potentially causing excessive girder rotation. Such rotations can result in various safety and maintenance issues, including thinning of the deck, reduced concrete cover, and cracking during service. Traditionally, these issues are addressed by installing temporary lateral bracing systems and conducting comprehensive torsional analysis through detailed finite element analysis for the construction of bridge deck overhang. However, this process is often intricate and time-intensive, with the spacing between temporary lateral bracing systems usually relying on the field engineers’ expertise. In this study, a deep neural network model is introduced to limit exterior girder rotation during bridge deck construction. The model predicts the optimal spacing between temporary bracing systems. To train this model, over 10,000 finite element models were generated in SAP2000, incorporating varying parameters such as girder dimensions, span length, and types and spacing of lateral bracing systems. The findings demonstrate that the deep neural network provides an effective and efficient alternative for limiting the exterior girder rotation for bridge deck construction. By reducing dependence on extensive finite element analyses, this approach stands out as a significant advancement in improving safety and maintenance effectiveness in the construction of bridge decks.
Keywords: bridge deck construction, exterior girder rotation, deep learning, finite element analysis
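The surrogate idea can be sketched as a small feed-forward regression network trained on finite-element results; the feature set, network size, and data below are placeholders, not the study's actual model.

```python
# Minimal sketch of a surrogate model: a feed-forward network trained on
# finite-element results to predict the required temporary bracing spacing
# from girder and span parameters. Features, units and data are placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# hypothetical design matrix: [girder depth, flange width, span length,
# overhang width, bracing type id]; target: admissible bracing spacing (m)
X = np.random.rand(10000, 5).astype("float32")
y = np.random.rand(10000, 1).astype("float32")

model = models.Sequential([
    layers.Input(shape=(5,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),  # predicted bracing spacing
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, validation_split=0.1, epochs=5, batch_size=256, verbose=0)
print(model.predict(X[:3]))
```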
2204 Phytoplankton Community Structure in the Moroccan Coast of the Mediterranean Sea: Case Study of Saiidia, Three Forks Cape
Authors: H. Idmoussi, L. Somoue, O. Ettahiri, A. Makaoui, S. Charib, A. Agouzouk, A. Ben Mhamed, K. Hilmi, A. Errhif
Abstract:
The study on the composition, abundance, and distribution of phytoplankton was conducted along the Moroccan coast of the Mediterranean Sea (Saiidia - Three Forks Cape) in April 2018. Samples were collected at thirteen stations using Niskin bottles within two layers (surface and deep layers). The identification and enumeration of phytoplankton were carried out according to the Utermöhl method (1958). A total of 54 phytoplankton species were identified over the entire survey area. Thirty-six species could be found in both the surface and the deep layers, while eleven species were observed only in the surface layer and seven only in the deep layer. The phytoplankton throughout the study area was dominated by diatoms, represented mainly by Nitzschia sp., Pseudonitzschia sp., Chaetoceros sp., Cylindrotheca closterium, Leptocylindrus minimus, Leptocylindrus danicus, and Dactyliosolen fragilissimus. Dinoflagellates were dominated by Gymnodinium sp., Scrippsiella sp., Gyrodinium spirale, Noctiluca sp., and Prorocentrum micans. Euglenophyceae, Silicoflagellates and Raphidophyceae were present in low numbers. Most of the phytoplankton were concentrated in the surface layer, particularly towards the Three Forks Cape (25200 cells·l⁻¹). Shannon species diversity (ranging from 2 to 4 bits) and the evenness index (broadly > 0.7) suggested that the phytoplankton community is generally diversified and structured in the studied area.
Keywords: abundance, diversity, Mediterranean Sea, phytoplankton
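For reference, the two community-structure metrics cited above can be computed as in the sketch below (Shannon diversity in bits and Pielou's evenness); the example counts are invented.

```python
# Sketch of the Shannon diversity index (log base 2, i.e. bits) and Pielou's
# evenness, computed from species counts. Example counts are invented.
import numpy as np

def shannon_diversity(counts, base=2):
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * (np.log(p) / np.log(base))).sum())

def evenness(counts, base=2):
    counts = np.asarray(counts, dtype=float)
    s = (counts > 0).sum()              # number of species present
    return shannon_diversity(counts, base) / (np.log(s) / np.log(base))

# hypothetical cell counts (cells per litre) for the species at one station
station_counts = [12000, 5200, 3100, 900, 400, 250, 150]
print(round(shannon_diversity(station_counts), 2), "bits")
print(round(evenness(station_counts), 2))
```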
2203 Forward Conditional Restricted Boltzmann Machines for the Generation of Music
Authors: Johan Loeckx, Joeri Bultheel
Abstract:
Recently, the application of deep learning to music has gained popularity. Its true potential, however, has been largely unexplored. In this paper, a new idea for representing the dynamic behavior of music is proposed. A “forward” conditional RBM takes into account not only preceding but also future samples during training. Though this may sound controversial at first sight, it will be shown that it makes sense from a musical and neuro-cognitive perspective. The model is applied to reconstruct music based upon the first notes and to improvise in the musical style of a composer. Contrary to expectations, reconstruction accuracy with respect to a regular CRBM of the same order was not significantly improved. More research is needed to test the performance on unseen data.
Keywords: deep learning, restricted Boltzmann machine, music generation, conditional restricted Boltzmann machine (CRBM)
2202 Reflective Thinking and Experiential Learning – A Quasi-Experimental Quanti-Quali Response to Greater Diversification of Activities, Greater Integration of Student Profiles
Authors: Paulo Sérgio Ribeiro de Araújo Bogas
Abstract:
Although several studies have assumed (at least implicitly) that learners' approaches to learning develop into deeper approaches in higher education, there appears to be no clear theoretical basis for this assumption and no empirical evidence. As a scientific contribution to this discussion, a pedagogical intervention of a quasi-experimental nature was developed, with a mixed methodology, evaluating the intervention within a single curricular unit of Marketing, using cases based on real challenges of brands, business simulation, and customer projects. Primary and secondary experiences were incorporated in the intervention: the primary experiences are the experiential activities themselves; the secondary experiences result from the primary experience, such as reflection and discussion in work teams. A diversified learning relationship was encouraged through the various connections between the different members of the learning community. The present study concludes that, in the same context, the students' responses can be described as those who reinforce the initial deep approach, those who maintain the initial deep approach level, and others who change from an emphasis on the deep approach to one closer to superficial. This typology did not always confirm studies reported in the literature, namely on whether the initial level of deep processing influences the superficial level and vice versa. The results of this investigation point to the inclusion of pedagogical and didactic activities that integrate different motivations and initial strategies, leading to the possible adoption of deep approaches to learning, since statistically significant differences were revealed in the deep/superficial approach scores and in the experiential level. In the case of real challenges, the categories of “attribution of meaning to what is studied” and the possibility of “contact with an aspirational context” for the students' professional future stand out. In this category, the dimensions of autonomy that will be required of them were also revealed when comparing the classroom context of real cases with the future professional context and the impact they may have on the world. Regarding the simulated practice, two categories of response stand out: on the one hand, the motivation associated with the possibility of measuring the results of the decisions taken and an awareness of oneself, and, on the other hand, the additional effort that this practice required of some of the students.
Keywords: experiential learning, higher education, mixed methods, reflective learning, marketing
2201 Quantification and Thermal Behavior of Rice Bran Oil, Sunflower Oil and Their Model Blends
Authors: Harish Kumar Sharma, Garima Sengar
Abstract:
Rice bran oil is considered nutritionally superior to many other fats/oils. Therefore, model blends prepared from pure rice bran oil (RBO) and sunflower oil (SFO) were explored for changes in the different physicochemical parameters. A repeated deep fat frying process was carried out using dried potato in order to study the thermal behaviour of pure rice bran oil, sunflower oil and their model blends. Pure rice bran oil and sunflower oil showed good thermal stability during the repeated deep fat frying cycles, although the model blend constituting 60% RBO + 40% SFO showed better suitability during repeated deep fat frying than the remaining blended oils. The quantification of pure rice bran oil in the blended oils, physically refined rice bran oil (PRBO): SnF (sunflower oil), was carried out by different methods. The study revealed that regression equations based on the oryzanol content, palmitic acid composition and iodine value can be used for the quantification. Rice bran oil can easily be quantified in the blended oils based on the oryzanol content by HPLC even at the 1% level. The palmitic acid content in blended oils can also be used as an indicator to quantify rice bran oil at or above the 20% level, whereas the method based on ultrasonic velocity, acoustic impedance and relative association showed initial promise in the quantification.
Keywords: rice bran oil, sunflower oil, frying, quantification
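The regression-based quantification can be illustrated with a simple calibration line between RBO fraction and oryzanol content, inverted to estimate the RBO percentage of an unknown blend; the calibration points below are invented, not the study's data.

```python
# Sketch of a calibration-line quantification: fit oryzanol content against
# the blend's rice bran oil fraction, then invert the line for unknown blends.
# The calibration points are invented for illustration only.
import numpy as np

rbo_fraction = np.array([0, 20, 40, 60, 80, 100])        # % RBO in the blend
oryzanol = np.array([0.0, 0.3, 0.6, 0.9, 1.2, 1.5])      # hypothetical % w/w

slope, intercept = np.polyfit(rbo_fraction, oryzanol, deg=1)

def estimate_rbo(oryzanol_measured):
    """Invert the calibration line to estimate % RBO in an unknown blend."""
    return (oryzanol_measured - intercept) / slope

print(f"calibration: oryzanol = {slope:.4f} * RBO% + {intercept:.4f}")
print(f"estimated RBO content: {estimate_rbo(0.45):.1f} %")
```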
2200 An Anthropological Insight into Cultural Beliefs, Perceptions and Taboos Associated with Reproductive Tract Infections among Women of Junga Village, Himachal Pradesh, India
Authors: A. Ratika Thakur, B. A. K. Sinha , C. R. K. Pathak
Abstract:
Reproductive Tract Infections (RTIs) are recognized as a serious global health problem with a direct impact on women. In developing countries, the prevalence of RTIs is much higher relative to other health problems. Women of the reproductive age group are socially, mentally and physically more vulnerable to infections. Also, it is a well-established fact that RTIs have more prolonged complications in women than in men, causing ectopic pregnancy, pelvic inflammatory disease, miscarriage and infertility in the long run. Women's perspectives on infections are less studied. In this view, the study was carried out with the aim of determining the knowledge, perceptions and beliefs of married women towards reproductive tract infections. The study was conducted in Junga village, District Shimla, Himachal Pradesh, India. Forty-eight women were interviewed regarding awareness, beliefs and taboos related to reproductive tract infections. Other aspects like fertility history were also taken into account. The data were collected through interviews with the help of an interview schedule and an interview guide. Data were recorded in the form of narratives and case studies. The analysis was done using quantitative and qualitative methods. It was found that a majority of women were not aware of the causes of infection. Moreover, cultural beliefs, perceptions and taboos made them more vulnerable and exposed to RTIs. Economic dependency on men and lack of control over barrier methods were some of the factors that contributed to delayed treatment of women. It was found that a majority of women suffering from RTIs were silently bearing the burden and sought treatment only when the condition was no longer in their hands.
Keywords: belief, infection, perception, taboo, women
2199 Confess Your Sins to One Another: An Exploration of the Biblical Validity and the Psychological Efficacy of the Sacrament of Reconciliation in the Catholic Church
Authors: M. B. Peter
Abstract:
The Sacrament of Penance and Reconciliation has long been upheld by the Catholic Church as one of the Sacraments of healing, mainly due to the sense of peace, tranquility and psychological quiescence it accords the penitent upon receiving Sacramental absolution of sin through the action of the priest. This paper explores the Sacramental character of this practice and the psychological benefits of the celebration of the Sacrament. This is achieved in two parts: firstly, by the intellectual engagement of Sacred Scripture and the consolidated Sacred Tradition that the Catholic magisterium protects and, secondly, via a broad survey of the works of Carl Gustav Jung and Orval Hobart Mowrer regarding confession and forgiveness. The former will serve to demonstrate the Catholic belief of the divine institution of the Sacrament whilst the latter will demonstrate how this belief, coupled with the existing benefit of confessing guilt, collectively bolsters the Sacrament’s overall psychological efficacy. Fundamentally, the analysis of Jung and Mowrer’s works demonstrates that man, as a naturally religious being, has an inherent need for the confession of his wrong that he might be alleviated of psychological guilt in obtaining forgiveness of a (divinely ordained) minister who is sanctioned to absolve, i.e. the priest. The paper also presents the curative effect of the celebration of this Sacrament, illustrating how, without the act of confession, man remains in moral isolation from God and man; and, that with it, man is relieved of the mysterious feeling of guilt which lies at the root of his disquiet of mind and disturbance of will. Thus, the paper penultimately establishes how the Sacrament of Reconciliation is positioned in that place where psychology and theology meet: man’s sense of guilt. It is Jung’s views on confession and forgiveness that ultimately bridge the chasm between psychology and Christianity.
Keywords: Catholic, confession, Jung, Mowrer, penance, psychology, Sacrament of Reconciliation
2198 Deep Learning-Based Liver 3D Slicer for Image-Guided Therapy: Segmentation and Needle Aspiration
Authors: Ahmedou Moulaye Idriss, Tfeil Yahya, Tamas Ungi, Gabor Fichtinger
Abstract:
Image-guided therapy (IGT) plays a crucial role in minimally invasive procedures for liver interventions. Accurate segmentation of the liver and precise needle placement are essential for successful interventions such as needle aspiration. In this study, we propose a deep learning-based liver 3D slicer designed to enhance segmentation accuracy and facilitate needle aspiration procedures. The developed 3D slicer leverages state-of-the-art convolutional neural networks (CNNs) for automatic liver segmentation in medical images. The CNN model is trained on a diverse dataset of liver images obtained from various imaging modalities, including computed tomography (CT) and magnetic resonance imaging (MRI). The trained model demonstrates robust performance in accurately delineating liver boundaries, even in cases with anatomical variations and pathological conditions. Furthermore, the 3D slicer integrates advanced image registration techniques to ensure accurate alignment of preoperative images with real-time interventional imaging. This alignment enhances the precision of needle placement during aspiration procedures, minimizing the risk of complications and improving overall intervention outcomes. To validate the efficacy of the proposed deep learning-based 3D slicer, a comprehensive evaluation is conducted using a dataset of clinical cases. Quantitative metrics, including the Dice similarity coefficient and Hausdorff distance, are employed to assess the accuracy of liver segmentation. Additionally, the performance of the 3D slicer in guiding needle aspiration procedures is evaluated through simulated and clinical interventions. Preliminary results demonstrate the effectiveness of the developed 3D slicer in achieving accurate liver segmentation and guiding needle aspiration procedures with high precision. The integration of deep learning techniques into the IGT workflow shows great promise for enhancing the efficiency and safety of liver interventions, ultimately contributing to improved patient outcomes.
Keywords: deep learning, liver segmentation, 3D slicer, image guided therapy, needle aspiration
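The two evaluation metrics mentioned above can be computed as in this sketch on binary liver masks; the masks are simple placeholders for predicted and ground-truth segmentations.

```python
# Sketch of the Dice similarity coefficient and the (symmetric) Hausdorff
# distance computed on binary masks. The masks below are placeholders for
# predicted and ground-truth liver segmentations.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hausdorff(pred: np.ndarray, gt: np.ndarray) -> float:
    p = np.argwhere(pred)   # pixel/voxel coordinates inside each mask
    g = np.argwhere(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

pred = np.zeros((128, 128), dtype=np.uint8); pred[30:90, 40:100] = 1
gt   = np.zeros((128, 128), dtype=np.uint8); gt[35:95, 38:98]  = 1
print(f"Dice = {dice(pred, gt):.3f}, Hausdorff = {hausdorff(pred, gt):.1f} px")
```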
2197 Hyper Parameter Optimization of Deep Convolutional Neural Networks for Pavement Distress Classification
Authors: Oumaima Khlifati, Khadija Baba
Abstract:
Pavement distress is the main factor responsible for the deterioration of road structural durability, damage to vehicles, and reduced driver comfort. Transportation agencies spend a high proportion of their funds on pavement monitoring and maintenance. Auscultation of pavement distress has been based on manual surveys, which are extremely time-consuming, labor-intensive, and require domain expertise. Therefore, automatic distress detection is needed to reduce the cost of manual inspection and avoid more serious damage by implementing the appropriate remediation actions at the right time. Inspired by recent deep learning applications, this paper proposes an algorithm for automatic road distress detection and classification using a Deep Convolutional Neural Network (DCNN). In this study, the types of pavement distress are classified as transverse or longitudinal cracking, alligator, pothole, and intact pavement. The dataset used in this work is composed of public asphalt pavement images. In order to learn the structure of the different types of distress, the DCNN models are trained and tested as a multi-label classification task. In addition, to get the highest accuracy for our model, we adjust structural hyperparameters such as the number of convolution and max-pooling layers, the number and size of filters, loss functions, activation functions, and the optimizer, as well as fine-tuning hyperparameters, which include batch size and learning rate. The optimization of the model is executed by checking all feasible combinations and selecting the best performing one. After the model is optimized, performance metrics are calculated, which describe the training and validation accuracies, precision, recall, and F1 score.
Keywords: pavement distress, hyperparameters, automatic classification, deep learning
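The "check all feasible combinations" strategy can be illustrated with a small grid search over a few structural and training hyperparameters of a CNN; the grid values, image size, and class count below are assumptions, not the study's search space.

```python
# Illustrative grid search over a few CNN hyperparameters (filters, kernel
# size, learning rate, batch size). Grid values, image size and class count
# are assumptions for illustration only.
import itertools
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # transverse, longitudinal, alligator, pothole, intact

def build_cnn(n_filters, kernel_size, learning_rate):
    model = models.Sequential([
        layers.Input(shape=(128, 128, 3)),
        layers.Conv2D(n_filters, kernel_size, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(2 * n_filters, kernel_size, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

grid = itertools.product([16, 32], [3, 5], [1e-3, 1e-4], [16, 32])
best = None
for n_filters, kernel, lr, batch in grid:
    model = build_cnn(n_filters, kernel, lr)
    # history = model.fit(train_images, train_labels, batch_size=batch,
    #                     validation_split=0.1, epochs=10, verbose=0)
    # val_acc = max(history.history["val_accuracy"])
    val_acc = 0.0  # placeholder: replace with the real validation accuracy
    if best is None or val_acc > best[0]:
        best = (val_acc, n_filters, kernel, lr, batch)
print("best configuration:", best)
```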
2196 Performance Evaluation and Plugging Characteristics of Controllable Self-Aggregating Colloidal Particle Profile Control Agent
Authors: Zhiguo Yang, Xiangan Yue, Minglu Shao, Yue Yang, Rongjie Yan
Abstract:
It is difficult to realize deep profile control because of the small pore-throats and easy water channeling in low-permeability heterogeneous reservoirs, and traditional polymer microspheres suffer from a contradiction between injectability and plugging ability. In order to solve this contradiction, controllable self-aggregating colloidal particles (CSA) containing amide groups on the microsphere surface were prepared based on emulsion polymerization of styrene and acrylamide. The dispersed solution of CSA colloidal particles, whose particle size is much smaller than the diameter of the pore-throats, was injected into the reservoir. When the microspheres migrated to the deep part of the reservoir, these CSA colloidal particles could automatically self-aggregate into large particle clusters under the action of the shielding agent and the control agent, so as to plug the water channels. In this paper, the morphology, temperature resistance and self-aggregation properties of CSA microspheres were studied by transmission electron microscopy (TEM) and bottle tests. The results showed that CSA microspheres exhibited a heterogeneous core-shell structure, good dispersion, and outstanding thermal stability. The microspheres remain regular and uniform spheres at 100℃ after aging for 35 days. With the increase of the cation concentration, the self-aggregation time of CSA was gradually shortened, and the influence of bivalent cations was greater than that of monovalent cations. Core flooding experiments showed that CSA polymer microspheres have good injection properties, and CSA particle clusters can effectively plug the water channels and migrate to the deep part of the reservoir for profile control.
Keywords: heterogeneous reservoir, deep profile control, emulsion polymerization, colloidal particles, plugging characteristic
2195 Deep Reinforcement Learning Approach for Trading Automation in The Stock Market
Authors: Taylan Kabbani, Ekrem Duman
Abstract:
The design of adaptive systems that take advantage of financial markets while reducing risk can bring more stagnant wealth into the global market. However, most efforts made to generate successful deals in trading financial assets rely on Supervised Learning (SL), which suffers from various limitations. Deep Reinforcement Learning (DRL) offers to solve these drawbacks of SL approaches by combining the financial asset price "prediction" step and the portfolio "allocation" step in one unified process to produce fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions with each time step (dynamically re-allocate investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports managing a portfolio with several assets instead of a single one. This work presents a novel DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, or what is referred to as the agent environment, as a Partially Observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in the stock market environment. From the point of view of stock market forecasting and the intelligent decision-making mechanism, this paper demonstrates the superiority of deep reinforcement learning in financial markets over other types of machine learning such as supervised learning, and shows its credibility and advantages for strategic decision-making.
Keywords: the stock market, deep reinforcement learning, MDP, twin delayed deep deterministic policy gradient, sentiment analysis, technical indicators, autonomous agent
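The state-augmentation step can be illustrated by deriving technical indicators from raw prices before passing them to the agent; only two indicators are shown below, and the window lengths and column names are assumptions.

```python
# Sketch of the state-augmentation step: derive technical indicators from raw
# prices to enrich the observation given to the trading agent. Only a simple
# moving average and RSI are shown; window lengths are assumptions.
import numpy as np
import pandas as pd

prices = pd.Series(100 + np.random.randn(300).cumsum(), name="close")

def sma(series: pd.Series, window: int = 20) -> pd.Series:
    return series.rolling(window).mean()

def rsi(series: pd.Series, window: int = 14) -> pd.Series:
    delta = series.diff()
    gain = delta.clip(lower=0).rolling(window).mean()
    loss = (-delta.clip(upper=0)).rolling(window).mean()
    rs = gain / loss
    return 100 - 100 / (1 + rs)

state = pd.DataFrame({
    "close": prices,
    "sma_20": sma(prices),
    "rsi_14": rsi(prices),
    # ... further indicators and a per-stock news-sentiment score would be
    # appended here before being passed to the TD3 agent's observation.
}).dropna()
print(state.tail())
```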
2194 Classification of Generative Adversarial Network Generated Multivariate Time Series Data Featuring Transformer-Based Deep Learning Architecture
Authors: Thrivikraman Aswathi, S. Advaith
Abstract:
In cases where the use of real data is limited, such as when it is hard to get access to a large volume of real data, synthetic data generation is needed. This produces high-quality synthetic data while maintaining the statistical properties of a specific dataset. In the present work, a generative adversarial network (GAN) is trained to produce multivariate time series (MTS) data, since MTS data are now being gathered more often in various real-world systems. Furthermore, the GAN-generated MTS data are fed into a transformer-based deep learning architecture that carries out the data categorization into predefined classes. Further, the model is evaluated across various distinct domains by generating corresponding MTS data.
Keywords: GAN, transformer, classification, multivariate time series
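A minimal sketch of a transformer-style encoder for multivariate time series classification is given below; the model dimensions, head count, and class count are illustrative assumptions, not the evaluated architecture.

```python
# Minimal transformer-style encoder for classifying multivariate time series
# (e.g., GAN-generated MTS). Dimensions and class count are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, N_FEATURES, NUM_CLASSES = 100, 8, 3

inputs = layers.Input(shape=(SEQ_LEN, N_FEATURES))
x = layers.Dense(64)(inputs)                      # project features to model dim
attn = layers.MultiHeadAttention(num_heads=4, key_dim=16)(x, x)
x = layers.LayerNormalization()(x + attn)         # residual + norm
ff = layers.Dense(128, activation="relu")(x)
ff = layers.Dense(64)(ff)
x = layers.LayerNormalization()(x + ff)           # feed-forward block
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(gan_generated_mts, class_labels, epochs=20, batch_size=32)
```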
2193 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea
Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim
Abstract:
Over the last few decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms. These blooms have been accelerated by eutrophication from human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict ocean algae concentration with bio-optical algorithms applied to satellite color images. However, accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. Therefore, this study suggests a new method to identify the concentration of red tide algal blooms from images of the geostationary ocean color imager (GOCI), which represent the water environment of the sea in Korea. The method employed GOCI images, which record the water-leaving radiances centered at 443 nm, 490 nm and 660 nm, as well as observed weather data (i.e., humidity, temperature and atmospheric pressure) as the database to capture the optical characteristics of algae and train the deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and then an artificial neural network (ANN) was used to estimate the concentration of algae from the extracted features. For training the deep learning model, a backpropagation learning strategy was developed. The established methods were tested and compared with the performance of the GOCI data processing system (GDPS), which is based on standard image processing and optical algorithms. The model performed better in estimating algae concentration than the GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration in spite of the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing. Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
Keywords: deep learning, algae concentration, remote sensing, satellite
2192 Application of Self-Efficacy Theory in Counseling Deaf and Hard of Hearing Students
Authors: Nancy A. Delich, Stephen D. Roberts
Abstract:
This case study explores using self-efficacy theory in counseling deaf and hard of hearing students in one California school district. Self-efficacy is described as the confidence a student has for performing a set of skills required to succeed at a specific task. When students need to learn a skill, self-efficacy can be a major factor in influencing behavioral change. Self-efficacy is domain specific, meaning that students can have high confidence in their abilities to accomplish a task in one domain, while at the same time having low confidence in their abilities to accomplish another task in a different domain. The communication isolation experienced by deaf and hard of hearing children and adolescents can negatively impact their belief about their ability to navigate life challenges. There is a need to address issues that impact deaf and hard of hearing students’ social-emotional development. Failure to address these needs may result in depression, suicidal ideation, and anxiety among other mental health concerns. Self-efficacy training can be used to address these socio-emotional developmental issues with this population. Four sources of experiences are applied during an intervention: (a) enactive mastery experience, (b) vicarious experience, (c) verbal persuasion, and (d) physiological and affective states. This case study describes the use of self-efficacy training with a coed group of 12 deaf and hard of hearing high school students who experienced bullying at school. Beginning with enactive mastery experience, the counselor introduced the topic of bullying to the group. The counselor educated the students about the different types of bullying while teaching them the terminology, signs and their meanings. The most effective way to increase self-efficacy is through extensive practice. To better understand these concepts, the students practiced through role-playing with the goal of developing self-advocacy skills. Vicarious experience is the perception that students have about their capabilities. Viewing other students advocating for themselves, cognitively rehearsing what actions they will and will not take, and teaching each other how to stand up against bullying can strengthen their belief in successfully overcoming bullying. The third source of self-efficacy beliefs is verbal persuasion. It occurs when others express belief in the capabilities of the student. Didactic training and pedagogic materials on bullying were employed as part of the group counseling sessions. The fourth source of self-efficacy appraisals is physiological and affective states. Students expect positive emotions to be associated with successful skilled performance. When students practice new skills, the counselor can apply several strategies to enhance self-efficacy while reducing and controlling emotional and physical states. The intervention plan incorporated all four sources of self-efficacy training during several interactive group sessions regarding bullying. There was an increased understanding around the issues of bullying, resulting in the students’ belief of their ability to perform protective behaviors and deter future occurrences. The outcome of the intervention plan resulted in a reduction of reported bullying incidents. 
In conclusion, self-efficacy training can be an effective counseling and teaching strategy in addressing and enhancing the social-emotional functioning of deaf and hard of hearing adolescents.
Keywords: counseling, self-efficacy, bullying, social-emotional development, mental health, deaf and hard of hearing students
2191 Signal Integrity Performance Analysis in Capacitive and Inductively Coupled Very Large Scale Integration Interconnect Models
Authors: Mudavath Raju, Bhaskar Gugulothu, B. Rajendra Naik
Abstract:
The rapid advances in Very Large Scale Integration (VLSI) technology have resulted in the reduction of minimum feature size to sub-quarter microns and switching times of tens of picoseconds or even less. As a result, high-speed digital circuits degrade due to signal integrity issues such as coupling effects, clock feedthrough, crosstalk noise and delay uncertainty noise. Crosstalk noise in VLSI interconnects is a major concern, and its reduction has become more important for high-speed digital circuits. It is most significant in Deep Sub Micron (DSM) and Ultra Deep Sub Micron (UDSM) technology. Increasing the spacing between the aggressor and victim lines is one technique to reduce crosstalk. Guard trace or shield insertion between the aggressor and victim is also one of the prominent options for minimizing crosstalk. In this paper, far-end crosstalk noise is estimated with a mutual-inductance and capacitance (RLC) interconnect model. The extent of crosstalk in capacitively and inductively coupled interconnects is also investigated in order to minimize it through the shield insertion technique.
Keywords: VLSI, interconnects, signal integrity, crosstalk, shield insertion, guard trace, deep sub micron
2190 Hydrothermal Energy Application Technology Using Dam Deep Water
Authors: Yooseo Pang, Jongwoong Choi, Yong Cho, Yongchae Jeong
Abstract:
The climate crisis and environmental problems related to energy supply have become emerging issues, so the use of renewable energy is essential to solve these problems, which are mainly governed by the Paris Agreement, the international treaty on climate change. The government of the Republic of Korea announced that the key long-term goal of its low-carbon strategy is “Carbon neutrality by 2050”. Attention is focused on the role of internet data centers (IDC), in which large amounts of data, such as artificial intelligence (AI) and big data arising from the 4th industrial revolution, are managed. The demand in the cooling system market for IDC was about 9 billion US dollars in 2020, and 15.6% growth a year is expected in Korea. It is important to control the temperature in an IDC with an efficient air conditioning system, so hydrothermal energy is one of the best options for saving energy in the cooling system. In order to save energy and optimize the operating conditions, applying a dam deep water air conditioning system has been considered. Deep water drawn from a specific level of the dam can supply a constant water temperature year-round. The amount of energy saving will be tested and analyzed with a pilot plant that has a 100 RT cooling capacity. Also, a target of this project is a PUE (Power Usage Effectiveness) of 1.2, which is the key parameter for checking the efficiency of the cooling system.
Keywords: hydrothermal energy, HVAC, internet data center, free-cooling
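As a quick illustration of the 1.2 PUE target, PUE is total facility energy divided by IT equipment energy, so cooling and other overheads may consume at most 20% of the IT load; the numbers in the sketch are invented.

```python
# PUE = total facility energy / IT equipment energy. A value of 1.2 means
# cooling and other overheads add at most 20% on top of the IT load.
# The example figures are invented for illustration.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

it_load = 1000.0                 # kWh consumed by servers
cooling_and_overhead = 180.0     # kWh for cooling, lighting, losses
print(pue(it_load + cooling_and_overhead, it_load))  # 1.18, below the 1.2 target
```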
2189 Deep Groundwater Potential and Chemical Analysis Based on Well Logging Analysis at Kapuk-Cengkareng, West Jakarta, DKI Jakarta, Indonesia
Authors: Josua Sihotang
Abstract:
Jakarta Capital Special Region is a densely populated province with rapidly growing infrastructure but little attention to environmental conditions. This creates social problems such as a lack of clean water supply. Since shallow groundwater and river water are contaminated, exploration of the deep water-bearing layer (aquifer) should be carried out. This research aims to give people, particularly those of Kapuk-Cengkareng, insight into the deep groundwater potential and to determine the depth, location, and quality at which the aquifer can be found in the Jakarta area. This research was conducted using a geophysical method, namely well logging analysis. Well logging is a geophysical method for determining the subsurface lithology from its physical characteristics. The observations in this research area were conducted with several well logging devices, namely the Spontaneous Potential Log (SP Log), Resistivity Log, and Gamma Ray Log (GR Log). The first device is the SP log, which works by measuring the electrical potential difference between electrodes on the surface and electrodes in the borehole and rock formations. The second is the Resistivity Log, used to determine both the hydrocarbon and water zones based on their porosity and permeability properties. The last is the GR Log, which works by identifying the radioactivity levels of rocks containing the elements thorium, uranium, or potassium. The observation results are curves that describe the type of lithological layering in the subsurface. From the research results, it can be interpreted that there are four deep groundwater layer zones of differing quality. Good groundwater layers can be found in layers with good porosity and permeability. By analyzing the curves, it can be seen that most of the layers found in this wellbore are claystone with low resistivity and high gamma radiation. The resistivity value of the claystone layers is about 2-4 ohm-meter with 65-80 cps gamma radiation. There are several layers with high resistivity values and low gamma radiation (sandstone) that are potential aquifers. This is reinforced by the sand layers having right-leaning SP log curves, proving that these layers are permeable. These layers have resistivity values of 4-9 ohm-meter with 40-65 cps gamma radiation. They are mostly found as freshwater aquifers.
Keywords: aquifer, deep groundwater potential, well devices, well logging analysis
2188 Classification of Cochannel Signals Using Cyclostationary Signal Processing and Deep Learning
Authors: Bryan Crompton, Daniel Giger, Tanay Mehta, Apurva Mody
Abstract:
The task of classifying radio frequency (RF) signals has seen recent success in employing deep neural network models. In this work, we present a combined signal processing and machine learning approach to signal classification for cochannel anomalous signals. The power spectral density and cyclostationary signal processing features of a captured signal are computed and fed into a neural net to produce a classification decision. Our combined signal preprocessing and machine learning approach allows for simpler neural networks with fast training times and small computational resource requirements for inference, at the cost of longer preprocessing time.
Keywords: signal processing, machine learning, cyclostationary signal processing, signal classification
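The feature-then-classifier pipeline can be sketched as below with a Welch power spectral density estimate feeding a small neural network; a full cyclostationary (spectral correlation) feature set is more involved, and the sample rate and sizes are assumptions.

```python
# Sketch of the feature-then-classifier pipeline: a Welch PSD estimate of a
# captured signal feeds a small neural network. Cyclostationary (spectral
# correlation) features would be computed similarly but are omitted here.
# Sample rate, capture length and labels are placeholders.
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

FS = 1_000_000  # assumed sample rate (Hz)

def psd_features(iq: np.ndarray, nperseg: int = 256) -> np.ndarray:
    _, pxx = welch(iq, fs=FS, nperseg=nperseg, return_onesided=False)
    return 10 * np.log10(pxx + 1e-12)  # log-power spectrum as the feature vector

# placeholder dataset: two classes of random complex baseband captures
rng = np.random.default_rng(0)
X = np.array([psd_features(rng.normal(size=4096) + 1j * rng.normal(size=4096))
              for _ in range(100)])
y = rng.integers(0, 2, size=100)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```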
2187 Deep Eutectic Solvent/ Polyimide Blended Membranes for Anaerobic Digestion Gas Separation
Authors: Glemarie C. Hermosa, Sheng-Jie You, Chien Chih Hu
Abstract:
Efficient separation technologies are required for the removal of carbon dioxide from natural gas streams. Membrane-based natural gas separation has emerged as one of the fastest growing technologies, due to its compactness, higher energy efficiency and the economic advantages that can be reaped. The removal of carbon dioxide from gas streams using membrane technology also has the advantage of being an environmentally friendly process compared to the other technologies used in gas separation. In this study, polyimide membranes, which are widely used in gas separation, are blended with a new kind of solvent: deep eutectic solvents, or simply DES. The three types of DES used are choline chloride-based mixtures with three different hydrogen bond donors: lactic acid, N-methylurea and urea. Blending the DESs into polyimide gave high permeability performance. The gas separation performance of all the membranes was low for CO2/CH4, while for CO2/N2 it surpassed the performance reported in some studies. Among the three types of DES used, the choline chloride/lactic acid solvent exhibited the highest performance for both gas separation applications, with selectivities of 10.5 for CO2/CH4 and 60.5 for CO2/N2. The separation results for CO2/CH4 may be due to the viscosity of the DESs affecting the morphology of the fabricated membrane, which also impacts the performance. The fabricated DES-blended polyimide membranes are novel and have the potential for low-cost and environmentally friendly application in gas separation.
Keywords: deep eutectic solvents, gas separation, polyimide blends, polyimide membranes
2186 Application of Supervised Deep Learning-based Machine Learning to Manage Smart Homes
Authors: Ahmed Al-Adaileh
Abstract:
Renewable energy sources, domestic storage systems, controllable loads and machine learning technologies will be key components of future smart home management systems. An energy management scheme is presented that uses a Deep Learning (DL) approach to support smart home management systems consisting of a standalone photovoltaic system, a storage unit, a heating, ventilation and air-conditioning system and a set of conventional and smart appliances. The objective of the proposed scheme is to apply DL-based machine learning to predict various running parameters within a smart home's environment to achieve maximum comfort levels for occupants, reduced electricity bills, and less dependency on the public grid. The problem is addressed using reinforcement learning, where decisions are taken by applying a continuous-time Markov decision process. The main contribution of this research is the proposed framework, which applies DL to enhance the system's supervised dataset and thereby offer ample opportunities to effectively support smart home systems. A case study involving a set of conventional and smart appliances with dedicated processing units in an inhabited building demonstrates the validity of the proposed framework. A visualization graph can show "before" and "after" results.
Keywords: smart homes systems, machine learning, deep learning, Markov Decision Process
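The decision layer can be illustrated with a toy tabular Q-learning loop for a single controllable appliance; the state discretisation, rewards, and tariff values are invented for illustration and stand in for the paper's continuous-time MDP formulation.

```python
# Toy tabular Q-learning sketch for one controllable appliance: the state is
# (hour-of-day, PV surplus available) and the action is defer / run.
# The environment model, rewards and tariffs are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
HOURS, PV_LEVELS, ACTIONS = 24, 2, 2        # actions: 0 = defer, 1 = run
Q = np.zeros((HOURS, PV_LEVELS, ACTIONS))
alpha, gamma, eps = 0.1, 0.95, 0.1

def reward(hour, pv, action):
    # running on PV surplus is cheap; otherwise pay a time-of-use tariff
    price = 0.30 if 17 <= hour <= 21 else 0.15
    return 0.0 if action == 0 else (0.05 if pv else -price)

for episode in range(5000):
    hour, pv = rng.integers(24), rng.integers(2)
    for _ in range(24):
        a = rng.integers(ACTIONS) if rng.random() < eps else Q[hour, pv].argmax()
        r = reward(hour, pv, a)
        nh, npv = (hour + 1) % 24, rng.integers(2)
        Q[hour, pv, a] += alpha * (r + gamma * Q[nh, npv].max() - Q[hour, pv, a])
        hour, pv = nh, npv

print("run appliance at noon with PV surplus:", bool(Q[12, 1].argmax()))
```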
2185 Review on Rainfall Prediction Using Machine Learning Technique
Authors: Prachi Desai, Ankita Gandhi, Mitali Acharya
Abstract:
Rainfall forecasting is mainly used to predict rainfall in a specified area and to determine its future rainfall conditions. Rainfall is always a global issue as it affects all major aspects of one's life. The agricultural, fisheries, forestry and tourism industries, among others, are widely affected by these conditions. Studies have pointed to insufficient availability of water resources and an increase in water demand in the near future. We already have a new forecast system that uses a deep Convolutional Neural Network (CNN) to forecast monthly rainfall and climate changes. We have also compared the CNN against Artificial Neural Networks (ANN). Machine learning techniques that are used in rainfall prediction include the ARIMA model, ANN, LR, SVM, etc. The dataset on which we are experimenting was gathered online over the years 1901 to 2018. Test results have suggested more realistic improvements than conventional rainfall forecasts.
Keywords: ANN, CNN, supervised learning, machine learning, deep learning
2184 Developing Environmental Engineering Alternatives for Deep Desulphurization of Transportation Fuels
Authors: Nalinee B. Suryawanshi, Vinay M. Bhandari, Laxmi Gayatri Sorokhaibam, Vivek V. Ranade
Abstract:
Deep desulphurization of transportation fuels is a major environmental concern all over the world, and recently prescribed norms for sulphur content require sulphur concentrations below 10 ppm in fuels such as diesel and gasoline. The existing technologies, largely based on catalytic processes such as hydrodesulphurization and oxidation, require newer catalysts and incur a high cost for deep desulphurization, whereas adsorption-based processes have limitations due to their lower sulphur removal capacity. The present work is an attempt to provide alternatives to the existing methodologies using a newer non-catalytic process based on hydrodynamic cavitation. The developed process requires appropriate combining of the organic and aqueous phases under ambient conditions and passing them through a cavitating device such as an orifice, venturi or vortex diode. The implosion of vapour cavities formed in the cavitating device generates (in situ) oxidizing species which react with the sulphur moiety, resulting in the removal of sulphur from the organic phase. In this work, an orifice was used as the cavitating device, and deep desulphurization was demonstrated for the removal of thiophene as a model sulphur compound from a synthetic fuel of n-octane, toluene and n-octanol. The effects of sulphur concentration (up to 300 ppm), the nature of the organic phase and pressure drop (0.5 to 10 bar) are discussed. A very high removal of sulphur content of more than 90% was demonstrated. The process is easy to operate, essentially works at ambient conditions, and the ratio of aqueous to organic phase can be easily adjusted to maximise sulphur removal. Experimental studies were also carried out using commercial diesel as a solvent, and the results substantiate similarly high sulphur removal. A comparison of the two cavitating devices, one with a linear flow and one using vortex flow for effecting pressure drop and cavitation, indicates similar trends in terms of sulphur removal behaviour. The developed process is expected to provide an attractive environmental engineering alternative for deep desulphurization of transportation fuels.
Keywords: cavitation, petroleum, separation, sulphur removal