Search results for: artificial neuron network
2021 A Hybrid System of Hidden Markov Models and Recurrent Neural Networks for Learning Deterministic Finite State Automata
Authors: Pavan K. Rallabandi, Kailash C. Patidar
Abstract:
In this paper, we present a learning algorithm based on a hybrid architecture that combines two of the most popular sequence recognition models, Recurrent Neural Networks (RNNs) and Hidden Markov Models (HMMs). To improve sequence and pattern recognition/classification performance through a hybrid neural-symbolic approach, a gradient descent learning algorithm is developed using the Real-Time Recurrent Learning algorithm of RNNs for processing the knowledge represented in trained HMMs. The developed hybrid algorithm is implemented on automata theory as a sample test bed, and its performance is demonstrated and evaluated on learning deterministic finite state automata.
Keywords: hybrid systems, hidden Markov models, recurrent neural networks, deterministic finite state automata
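As context for how a trained HMM scores a sequence before its knowledge is transferred to the RNN, the standard forward recursion can be sketched as follows. The two-state model and its parameters are illustrative assumptions, not the trained HMMs used in the paper.

```python
# Toy forward algorithm: P(observation sequence) under an HMM.

def hmm_forward(obs, start, trans, emit):
    """Return P(obs) via the forward recursion."""
    n_states = len(start)
    # Initialization: alpha_0(i) = pi_i * b_i(o_0)
    alpha = [start[i] * emit[i][obs[0]] for i in range(n_states)]
    # Induction over the remaining observations
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * trans[i][j] for i in range(n_states)) * emit[j][o]
            for j in range(n_states)
        ]
    return sum(alpha)

# Hypothetical two-state HMM over the binary alphabet {0, 1}
start = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit  = [[0.9, 0.1], [0.2, 0.8]]

p = hmm_forward([0, 1, 0], start, trans, emit)
```

Since the probabilities of all length-3 sequences must sum to one, the recursion is easy to sanity-check.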
Procedia PDF Downloads 388
2020 Deep Learning-Based Approach to Automatic Abstractive Summarization of Patent Documents
Authors: Sakshi V. Tantak, Vishap K. Malik, Neelanjney Pilarisetty
Abstract:
A patent is an exclusive right granted for an invention. It can be a product or a process that provides an innovative way of doing something or offers a new technical solution to a problem. A patent is obtained by making the technical information and details about the invention publicly available. The patent owner has the exclusive right to prevent or stop anyone from using the patented invention commercially; any commercial usage, distribution, import or export of a patented invention or product requires the patent owner’s consent. It has been observed that the central and important parts of patents are written in idiosyncratic and complex linguistic structures that can be difficult to read, comprehend or interpret for the masses. The abstracts of these patents tend to obfuscate the precise nature of the patent rather than clarifying it via direct and simple linguistic constructs. This makes it necessary to provide efficient access to this knowledge via concise and transparent summaries. However, due to these complex and repetitive linguistic constructs and extremely long sentences, common extraction-oriented automatic text summarization methods cannot be expected to perform well when applied to patent documents. More content-oriented, abstractive summarization techniques are able to perform much better and generate more concise summaries. This paper proposes an efficient summarization system for patents that uses artificial intelligence, natural language processing and deep learning techniques to condense the essential information of a patent document into a single summary that is easier to understand, free of redundant formatting and difficult jargon.
Keywords: abstractive summarization, deep learning, natural language processing, patent document
Procedia PDF Downloads 123
2019 The Sustainable Governance of Aquifer Injection Using Treated Coal Seam Gas Water in Queensland, Australia: Lessons for Integrated Water Resource Management
Authors: Jacqui Robertson
Abstract:
The sustainable governance of groundwater is of the utmost importance in an arid country like Australia. Groundwater has been relied on by agricultural and pastoral communities since the State was settled by European colonists. Nevertheless, the rapid establishment of a coal seam gas (CSG) industry in Queensland, Australia, has had extensive impacts on pre-existing groundwater users. Managed aquifer recharge of important aquifers using treated CSG produced water has been used to reduce the impacts of CSG development in Queensland; however, the process has not been widely adopted. Negative environmental outcomes are now acknowledged not only as engineering, scientific or technical problems to be solved but also as the result of governance failures. An analysis of the regulatory context for aquifer injection using treated CSG water in Queensland, using Ostrom’s Common Pool Resource (CPR) theory and a ‘heat map’ designed by the author, highlights the importance of governance arrangements. The analysis reveals the costs and benefits for relevant stakeholders of artificial recharge of groundwater resources in this context. The research also reveals missed opportunities to further active management of the aquifer and to resolve existing conflicts between users. It illustrates the importance of strategically and holistically evaluating innovations in technology that affect water resources, in order to reveal the incentives that shape resource users’ behaviour. The paper presents a proactive step that can be adapted to support integrated water resource management and sustainable groundwater development.
Keywords: managed aquifer recharge, groundwater regulation, common-pool resources, integrated water resource management, Australia
Procedia PDF Downloads 237
2018 Modeling the Impact of Time Pressure on Activity-Travel Rescheduling Heuristics
Authors: Jingsi Li, Neil S. Ferguson
Abstract:
Time pressure can influence productivity, the quality of decision making, and the efficiency of problem solving. This insight stems mostly from cognitive research and the psychological literature, whereas discussion in transport-adjacent fields remains conspicuously scarce. It is conceivable that in many activity-travel contexts time pressure is a potentially important factor, since an excessive amount of decision time may incur the risk of late arrival at the next activity. Activity-travel rescheduling behavior is commonly explained by the costs and benefits of factors such as activity engagements, personal intentions, and social requirements. This paper hypothesizes that an additional factor, perceived time pressure, could affect travelers’ rescheduling behavior and thereby travel demand management. Time pressure may arise in different ways and is assumed here to be incurred essentially because travelers plan their schedules without anticipating unforeseen elements, e.g., transport disruption. In addition to a linear-additive utility-maximization model, computationally simpler non-compensatory heuristic models are considered as an alternative way to simulate travelers’ responses. The paper contributes to travel behavior modeling research by investigating the following questions: How can time pressure be measured properly in an activity-travel day-plan context? How do travelers reschedule their plans to cope with time pressure? How does the importance of an activity affect travelers’ rescheduling behavior? Which behavioral model best describes the process of making activity-travel rescheduling decisions? How do the identified coping strategies affect the transport network? In this paper, a Mixed Heuristic Model (MHM) is employed to identify the presence of different choice heuristics through a latent class approach.
Data on travelers’ activity-travel rescheduling behavior are collected via a web-based interactive survey in which a fictitious scenario comprising multiple uncertain events affecting activities or travel is created. The experiments are conducted to obtain a realistic picture of activity-travel rescheduling under time pressure. The identified behavioral models are then integrated into a multi-agent transport simulation model to investigate the effect of the rescheduling strategies on the transport network. The results show that an increased proportion of travelers use simpler, non-compensatory choice strategies instead of compensatory methods to cope with time pressure. Specifically, satisficing, one of the heuristic decision-making strategies, is commonly adopted, since travelers tend to abandon the less important activities and keep the important ones. Furthermore, the importance of the activity is found to increase the weight of negative information when making trip-related decisions, especially route choices. When the identified non-compensatory decision-making heuristic models are incorporated into the agent-based transport model, the simulation results imply that neglecting the effect of perceived time pressure may result in inaccurate forecasts of choice probabilities and overestimate travelers’ responsiveness to policy changes.
Keywords: activity-travel rescheduling, decision making under uncertainty, mixed heuristic model, perceived time pressure, travel demand management
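The contrast between a compensatory (weighted-sum) choice rule and a satisficing heuristic can be sketched in a few lines. The alternatives, attribute values, weights, and aspiration levels below are illustrative assumptions, not data from the survey.

```python
# Compensatory choice: maximize the weighted sum over all attributes.
# Satisficing: accept the first alternative meeting every aspiration level.

def compensatory_choice(alternatives, weights):
    """Pick the alternative maximizing the weighted sum of its attributes."""
    def utility(attrs):
        return sum(w * a for w, a in zip(weights, attrs))
    return max(alternatives, key=lambda alt: utility(alt[1]))[0]

def satisficing_choice(alternatives, aspiration):
    """Pick the first alternative whose every attribute meets its aspiration
    level; fall back to the last alternative if none qualifies."""
    for name, attrs in alternatives:
        if all(a >= level for a, level in zip(attrs, aspiration)):
            return name
    return alternatives[-1][0]

# (name, (importance of activity, schedule flexibility)) for three options
alts = [("keep plan", (0.9, 0.2)),
        ("drop minor activity", (0.6, 0.8)),
        ("reroute", (0.7, 0.6))]

best = compensatory_choice(alts, weights=(0.8, 0.2))
quick = satisficing_choice(alts, aspiration=(0.5, 0.5))
```

With these numbers the two rules disagree: the compensatory rule keeps the original plan, while the satisficing traveler settles for the first acceptable option, illustrating how time pressure can shift the predicted choice.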
Procedia PDF Downloads 112
2017 Application of Generalized Autoregressive Score Model to Stock Returns
Authors: Katleho Daniel Makatjane, Diteboho Lawrence Xaba, Ntebogang Dinah Moroke
Abstract:
The current study investigates the behaviour of time-varying parameters that are based on the score function of the predictive model density at time t. The mechanism used to update the parameters over time is the scaled score of the likelihood function. The results revealed high persistence of the time-varying parameters, as the location parameter is high and the skewness parameter implies a departure of the scale parameter from normality, with an unconditional parameter of 1.5. The results also revealed persistent leptokurtic behaviour in the stock returns, implying that the returns are heavy-tailed. Prior to model estimation, the White Neural Network test showed that the stock price can be modelled by a GAS model. Finally, we propose further research, specifically to model the time-varying parameters with a more detailed model that accounts for the heavy-tailed distribution of the series and computes the risk measures associated with the returns.
Keywords: generalized autoregressive score model, South Africa, stock returns, time-varying
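The score-driven update at the heart of a GAS model can be illustrated with the simplest case: a time-varying location under a Gaussian density with unit variance, where the scaled score reduces to the prediction error. The parameter values below are illustrative assumptions, not the estimates from the study.

```python
# Minimal GAS(1,1) filter for a time-varying location f_t:
#   f_{t+1} = omega + A * s_t + B * f_t,  s_t = y_t - f_t  (scaled score
# of N(f_t, 1)). Each new observation nudges the location toward the data.

def gas_location_filter(y, omega=0.0, A=0.3, B=0.7, f0=0.0):
    """Return the one-step-ahead location path f_1..f_T."""
    f, path = f0, []
    for obs in y:
        path.append(f)
        score = obs - f          # scaled score for a unit-variance Gaussian
        f = omega + A * score + B * f
    return path

path = gas_location_filter([1.0, 1.0, 1.0, 1.0])
```

Fed a constant series, the filtered location rises monotonically toward its fixed point, which is the qualitative behaviour one expects from a persistent score-driven parameter.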
Procedia PDF Downloads 500
2016 When Ideological Intervention Backfires: The Case of the Iranian Clerical System’s Intervention in the Pandemic-Era Elementary Education
Authors: Hasti Ebrahimi
Abstract:
This study sheds light on the challenges caused by the Iranian clerical system’s intervention in the country’s school education during the COVID-19 pandemic, when schools remained closed for almost two years. The pandemic brought Iranian elementary school education to a standstill for almost six months before the country developed a nationwide learning platform, a customized television network. While the initiative seemed to be welcomed by the majority of Iranian parents, it was resented by some of the more traditional strata of society, including the influential Friday Prayer Leaders, who found the televised version of elementary education ‘less spiritual’ and more ‘material’ or science-based. That prompted the Iranian Channel of Education, the specialized television network that had been chosen to serve as a nationally televised school during the pandemic, to try to redefine much of its elementary school educational content within the religious ideology of the Islamic Republic of Iran. As a result, young clergies appeared on the television screen as preachers of Islamic morality and religious themes, and even as teachers of sociology, history, and arts. The present research delves into the consequences of this intervention, how it may have impacted the infrastructure of Iranian elementary education, and whether the new ideology-infused curricula would withstand the opposition of students and mainstream teachers. The main methodology used in this study is Critical Discourse Analysis with a cognitive approach. It systematically finds and analyzes the alternative ideological structures of discourse on the Iranian Channel of Education from September 2021 to July 2022, when clergy ‘teachers’ replaced ‘regular’ history and arts teachers on the television screen for the first time.
It aims to assess how the various uses of the alternative ideological discourse in elementary school content have influenced the processes of learning: the acquisition of knowledge, beliefs, opinions, attitudes, and abilities, and the other cognitive and emotional changes that are the goals of institutional education. This study has been an effort to understand, and perhaps clarify, the relationships between the traditional textual structures and processing on the one hand and the socio-cultural contexts created by the clergy teachers on the other. The analysis shows how the clerical portion of elementary education on the Channel of Education, which seemed to have dominated the entire televised teaching and learning process, faded away as the pandemic was contained and mainstream classes were restored. It nevertheless reflects the deep ideological rifts between the clerical approach to school education and the mainstream teaching process in Iranian schools. The semantic macrostructures of social content in current Iranian elementary school education, this study suggests, have remained intact despite the temporary ideological intervention of the ruling clerical elite in their formulation and presentation. Finally, using thematic and schematic frameworks, the essay suggests that the ‘clerical’ social content taught on the Channel of Education during the pandemic was not accepted cognitively by the channel’s target audience, including students and mainstream teachers.
Keywords: televised elementary school learning, COVID-19, critical discourse analysis, Iranian clerical ideology
Procedia PDF Downloads 54
2015 Secret Sharing in Visual Cryptography Using NVSS and Data Hiding Techniques
Authors: Misha Alexander, S. B. Waykar
Abstract:
Visual Cryptography is an unbreakable encryption technique that splits a secret image into shares of random noisy pixels. These shares are transmitted over the network, and their noisy texture attracts hackers. To address this issue, a Natural Visual Secret Sharing (NVSS) scheme was introduced that uses natural shares, in either digital or printed form, to generate the noisy secret share. This scheme greatly reduces the transmission risk but causes distortion in the retrieved secret image through variations in the settings and properties of the digital devices used to capture the natural image during the encryption/decryption phase. This paper proposes a new NVSS scheme that extracts the secret key from multiple randomly selected, unaltered natural images. To further improve the security of the shares, data hiding techniques such as steganography and alpha channel watermarking are proposed.
Keywords: decryption, encryption, natural visual secret sharing, natural images, noisy share, pixel swapping
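The share-and-reconstruct idea behind visual secret sharing can be sketched with a simplified (2, 2) XOR scheme over byte-valued pixels: one share is pure noise, the other is the secret XOR-ed with that noise, and combining both recovers the secret exactly. This is a stand-in sketch, not the NVSS scheme itself, which derives one share from an unaltered natural image rather than from a random generator.

```python
import random

def make_shares(secret_pixels, rng=random.Random(0)):
    """Split a pixel sequence into two noisy shares (XOR secret sharing)."""
    share1 = [rng.randrange(256) for _ in secret_pixels]   # pure noise
    share2 = [s ^ r for s, r in zip(secret_pixels, share1)]  # secret XOR noise
    return share1, share2

def reconstruct(share1, share2):
    """XOR the two shares to recover the secret pixels losslessly."""
    return [a ^ b for a, b in zip(share1, share2)]

secret = [12, 200, 7, 255, 0]
s1, s2 = make_shares(secret)
recovered = reconstruct(s1, s2)
```

Either share alone is statistically independent of the secret, which is exactly the property that makes a lone intercepted share useless, and also why its noisy appearance flags it to attackers.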
Procedia PDF Downloads 404
2014 Ag and Au Nanoparticles Fabrication in Cross-Linked Polymer Microgels for Their Comparative Catalytic Study
Authors: Luqman Ali Shah, Murtaza Sayed, Mohammad Siddiq
Abstract:
Three-dimensional cross-linked polymer microgels with temperature-responsive N-isopropyl acrylamide (NIPAM) and pH-sensitive methacrylic acid (MAA) were successfully synthesized by free radical emulsion polymerization with different amounts of MAA. Silver and gold nanoparticles, with sizes of 6.5 and 3.5 nm (±0.5 nm) respectively, were homogeneously reduced inside these materials by a chemical reduction method at pH 2.78 and 8.36 for the preparation of hybrid materials. The samples were characterized by FTIR, DLS and TEM techniques. The catalytic activity of the hybrid materials was investigated for the reduction of 4-nitrophenol (4-NP), using NaBH4 as the reducing agent, monitored by UV-visible spectroscopy. The hybrid polymer network synthesized at pH 8.36 shows enhanced catalytic efficiency compared to the catalyst synthesized at pH 2.78. This study shows that catalyst activity strongly depends on the amount of MAA, the synthesis pH, and the type of metal nanoparticles entrapped.
Keywords: cross-linked polymer microgels, free radical polymerization, metal nanoparticles, catalytic activity, comparative study
Procedia PDF Downloads 324
2013 MAGNI Dynamics: A Vision-Based Kinematic and Dynamic Upper-Limb Model for Intelligent Robotic Rehabilitation
Authors: Alexandros Lioulemes, Michail Theofanidis, Varun Kanal, Konstantinos Tsiakas, Maher Abujelala, Chris Collander, William B. Townsend, Angie Boisselle, Fillia Makedon
Abstract:
This paper presents a home-based robot-rehabilitation instrument, called “MAGNI Dynamics”, that utilizes a vision-based kinematic/dynamic module and an adaptive haptic feedback controller. The system is expected to provide personalized rehabilitation by adjusting its resistive and supportive behavior according to a fuzzy intelligence controller that acts as an inference system, correlating the user’s performance to different stiffness factors. The vision module uses the Kinect’s skeletal tracking to monitor the user’s effort in an unobtrusive and safe way by estimating the torque that affects the user’s arm. The system’s torque estimations are validated by capturing electromyographic data from primitive hand motions (shoulder abduction and shoulder forward flexion). Moreover, we present and analyze how the Barrett WAM generates a force field with a haptic controller to support or challenge the users. Experiments show that shifting the proportional value, which corresponds to different stiffness factors of the haptic path, can potentially help the user to improve his/her motor skills. Finally, potential areas for future research are discussed that address how a rehabilitation robotic framework may include multisensing data to improve the user’s recovery process.
Keywords: human-robot interaction, Kinect, kinematics, dynamics, haptic control, rehabilitation robotics, artificial intelligence
Procedia PDF Downloads 329
2012 A Framework for Auditing Multilevel Models Using Explainability Methods
Authors: Debarati Bhaumik, Diptish Dey
Abstract:
Multilevel models, increasingly deployed in industries such as insurance, food production, and entertainment, within functions such as marketing and supply chain management, need to be transparent and ethical. Applications usually result in binary classification within groups or hierarchies based on a set of input features. Using open-source datasets, we demonstrate that popular explainability methods, such as SHAP and LIME, consistently underperform in accuracy when interpreting these models. They fail to predict the order of feature importance, the magnitudes, and occasionally even the nature of the feature contribution (negative versus positive contribution to the outcome). Besides accuracy, the computational intractability of SHAP for binomial classification is a cause of concern. For transparent and ethical applications of these hierarchical statistical models, sound audit frameworks need to be developed. In this paper, we propose an audit framework for the technical assessment of multilevel regression models focusing on three aspects: (i) model assumptions and statistical properties, (ii) model transparency using different explainability methods, and (iii) discrimination assessment. To this end, we undertake a quantitative approach and compare intrinsic model methods with SHAP and LIME. The framework comprises a shortlist of KPIs for each of these three aspects, such as PoCE (Percentage of Correct Explanations) and MDG (Mean Discriminatory Gap) per feature. A traffic-light risk assessment method is furthermore coupled to these KPIs. The audit framework will assist regulatory bodies in performing conformity assessments of AI systems that use multilevel binomial classification models at businesses. It will also help businesses deploying multilevel models to be future-proof and aligned with the European Commission’s proposed Regulation on Artificial Intelligence.
Keywords: audit, multilevel model, model transparency, model explainability, discrimination, ethics
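A KPI like PoCE can be made concrete with a toy computation. The abstract does not define PoCE precisely, so the definition used here, an explanation counts as correct when it ranks the features in the same importance order as the reference attribution, is an assumption for illustration only, as are the attribution values.

```python
# Hypothetical sketch of a PoCE (Percentage of Correct Explanations) KPI:
# compare each explainer attribution's feature ranking against a reference
# (e.g., intrinsic-model) attribution and report the share of matches.

def ranking(attributions):
    """Feature indices ordered by descending absolute attribution."""
    return sorted(range(len(attributions)),
                  key=lambda i: -abs(attributions[i]))

def poce(reference_attrs, explained_attrs):
    """Percentage of instances where the explainer reproduces the
    reference feature-importance order."""
    correct = sum(
        ranking(ref) == ranking(exp)
        for ref, exp in zip(reference_attrs, explained_attrs)
    )
    return 100.0 * correct / len(reference_attrs)

# Two instances, three features each (illustrative numbers)
reference = [[0.5, -0.2, 0.1], [0.4, 0.3, -0.6]]
explainer = [[0.45, -0.25, 0.05], [0.1, 0.5, -0.2]]  # 2nd ordering differs
score = poce(reference, explainer)
```

Here the explainer reproduces the reference ranking on one of two instances, so the KPI reads 50%, the kind of value a traffic-light scheme could then map to a risk colour.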
Procedia PDF Downloads 94
2011 A Case Study of Assessing the Impact of Electronic Payment System on the Service Delivery of Banks in Nigeria
Authors: Idris Lawal
Abstract:
An electronic payment system is simply a payment or monetary transaction made over the internet or a network of computers. This study was carried out to assess how electronic payment systems have impacted banks’ service delivery, to examine the efficiency of electronic payment systems in Nigeria, and to determine the level of customer satisfaction as a direct result of the deployment of electronic payment systems. It is an empirical study conducted using a structured questionnaire distributed to officials and customers of Access Bank plc. The chi-square (χ²) test was adopted for the purpose of data analysis. The results of the study showed that the development of electronic payment systems offers great benefits to bank customers, including improved service, reduced turnaround time, ease of banking transactions, and significant cost savings. The study recommends that customer protection laws be properly put in place to safeguard the interests of end users of e-payment instruments.
Keywords: bank, electronic payment systems, service delivery, customer satisfaction
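The chi-square statistic used in such survey analyses compares observed counts in a contingency table with the counts expected under independence. A minimal sketch, with a purely hypothetical satisfaction-by-period table (the study's actual data are not reproduced here):

```python
# Chi-square statistic for a contingency table:
#   sum over cells of (observed - expected)^2 / expected,
# where expected = row total * column total / grand total.

def chi_square(table):
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = rows[i] * cols[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: satisfied / not satisfied, before vs. after
# e-payment deployment
observed = [[30, 20],
            [45, 5]]
x2 = chi_square(observed)
```

For a 2×2 table (1 degree of freedom) the 5% critical value is about 3.84, so a statistic of 12.0 as here would indicate a significant association.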
Procedia PDF Downloads 399
2010 Parameters Identification and Sensitivity Study for Abrasive WaterJet Milling Model
Authors: Didier Auroux, Vladimir Groza
Abstract:
This work is part of STEEP Marie-Curie ITN project, and it focuses on the identification of unknown parameters of the proposed generic Abrasive WaterJet Milling (AWJM) PDE model, that appears as an ill-posed inverse problem. The necessity of studying this problem comes from the industrial milling applications where the possibility to predict and model the final surface with high accuracy is one of the primary tasks in the absence of any knowledge of the model parameters that should be used. In this framework, we propose the identification of model parameters by minimizing a cost function, measuring the difference between experimental and numerical solutions. The adjoint approach based on corresponding Lagrangian gives the opportunity to find out the unknowns of the AWJM model and their optimal values that could be used to reproduce the required trench profile. Due to the complexity of the nonlinear problem and a large number of model parameters, we use an automatic differentiation software tool (TAPENADE) for the adjoint computations. By adding noise to the artificial data, we show that in fact the parameter identification problem is highly unstable and strictly depends on input measurements. Regularization terms could be effectively used to deal with the presence of data noise and to improve the identification correctness. Based on this approach we present results in 2D and 3D of the identification of the model parameters and of the surface prediction both with self-generated data and measurements obtained from the real production. Considering different types of model and measurement errors allows us to obtain acceptable results for manufacturing and to expect the proper identification of unknowns. 
This approach also gives us the ability to extend the research to more complex cases and to consider different types of model and measurement errors, as well as a 3D time-dependent model with variations of the jet feed speed.
Keywords: abrasive waterjet milling, inverse problem, model parameter identification, regularization
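The cost-minimization setup can be illustrated with a scalar toy problem: fit one unknown model parameter to synthetic measurements by gradient descent on a least-squares cost. The real AWJM model is a nonlinear PDE whose gradients are obtained via the adjoint approach with TAPENADE; the profile shape and parameter values below are illustrative assumptions, and in one dimension the gradient is simply analytic.

```python
import math

def model(a, xs):
    # Hypothetical trench profile: depth a * exp(-x^2) across the trench
    return [a * math.exp(-x * x) for x in xs]

def identify(xs, measurements, a0=0.0, lr=0.1, steps=200):
    """Recover the parameter a by descending the least-squares cost
    J(a) = 0.5 * sum_i (model(a)_i - measurement_i)^2."""
    a = a0
    for _ in range(steps):
        residuals = [m - d for m, d in zip(model(a, xs), measurements)]
        # Analytic gradient dJ/da = sum_i residual_i * exp(-x_i^2)
        grad = sum(r * math.exp(-x * x) for r, x in zip(residuals, xs))
        a -= lr * grad
    return a

xs = [-1.0, -0.5, 0.0, 0.5, 1.0]
data = model(2.5, xs)          # synthetic noise-free data, true a = 2.5
a_hat = identify(xs, data)
```

With noise-free data the iteration recovers the true parameter; adding noise to `data` reproduces, in miniature, the instability the paper addresses with regularization terms.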
Procedia PDF Downloads 316
2009 Developed CNN Model with Various Input Scale Data Evaluation for Bearing Faults Prognostics
Authors: Anas H. Aljemely, Jianping Xuan
Abstract:
Rolling bearing fault diagnosis is a pivotal issue in the rotating machinery of modern manufacturing. In this research, an improved deep learning method for bearing fault diagnosis operating on raw vibration signals is proposed. Multiple dimensional scales of the raw vibration signals are selected to evaluate the condition monitoring system, and the deep learning process has shown its effectiveness in fault diagnosis. The proposed method employs an Exponential Linear Unit (ELU) layer in a convolutional neural network (CNN), which acts as the identity on positive inputs and applies an exponential nonlinearity to negative inputs, together with a particular convolutional operation to extract valuable features. The identification results show that the improved method achieves the highest accuracy with a 100-dimensional scale and increases the training and testing speed.
Keywords: bearing fault prognostics, developed CNN model, multiple-scale evaluation, deep learning features
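The ELU nonlinearity mentioned above is compact enough to state directly: the identity on positive inputs and a saturating exponential on negative inputs. A minimal sketch (α = 1 is the common default):

```python
import math

def elu(x, alpha=1.0):
    """Exponential Linear Unit: x for x > 0, alpha * (exp(x) - 1) otherwise.
    Negative outputs saturate at -alpha instead of being clipped to zero."""
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

values = [elu(x) for x in (-2.0, 0.0, 2.0)]
```

Unlike ReLU, ELU keeps a nonzero gradient for negative inputs, which is the property the abstract credits for better feature extraction.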
Procedia PDF Downloads 210
2008 Neural Networks and Genetic Algorithms Approach for Word Correction and Prediction
Authors: Rodrigo S. Fonseca, Antônio C. P. Veiga
Abstract:
Aiming at helping people with movement limitations that make typing and communication difficult, there is a need for an assistive tool with a learning environment that helps the user optimize text input by identifying errors and providing corrections and choices in the Portuguese language. This work presents an orthographic and grammatical system that can be incorporated into writing environments, improving and facilitating the use of an alphanumeric keyboard. A prototype built around a genetic algorithm carries out the correction, while prediction, which can be based on the quantity and position of the inserted letters and even their placement in the sentence, is performed by a Long Short-Term Memory (LSTM) neural network to ensure the sequence of ideas. The prototype optimizes data entry as a component of assistive technology for textual formulation, detecting errors, seeking solutions, and informing the user of accurate predictions quickly and effectively through machine learning.
Keywords: genetic algorithm, neural networks, word prediction, machine learning
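The prediction component, suggesting the next word from what has been typed so far, can be illustrated with a frequency-based bigram model. The paper itself uses an LSTM; this is only a minimal stand-in for the same interface, and the tiny Portuguese corpus is an assumption.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it in the corpus."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows

def predict_next(follows, word, k=2):
    """Top-k most frequent continuations of `word`."""
    return [w for w, _ in follows[word.lower()].most_common(k)]

corpus = ["bom dia para todos",
          "bom dia de novo",
          "bom trabalho para todos"]
model = train_bigrams(corpus)
suggestions = predict_next(model, "bom")
```

An LSTM replaces the raw counts with a learned context representation, so suggestions can depend on the whole sentence rather than only the previous word.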
Procedia PDF Downloads 194
2007 Towards an Enhanced Compartmental Model for Profiling Malware Dynamics
Authors: Jessemyn Modiini, Timothy Lynar, Elena Sitnikova
Abstract:
We present a novel enhanced compartmental model for malware spread analysis in cyber security. This paper applies cyber security data features to epidemiological compartmental models in order to model the infectious potential of malware. Compartmental models are among the most efficient tools for calculating the infectious potential of a disease. In this paper, we discuss and profile epidemiologically relevant data features from a Domain Name System (DNS) dataset and then apply these features to compartmental models of network traffic. The paper demonstrates how epidemiological principles can be applied to the novel analysis of key cybersecurity behaviours and trends, and it provides insight into threat modelling beyond that of kill-chain analysis. In applying deterministic compartmental models to a cyber security use case, the authors analyse their deficiencies and provide an enhanced stochastic model for cyber epidemiology. This enhanced compartmental model (the SUEICRN model) is contrasted with the traditional SEIR model to demonstrate its efficacy.
Keywords: cybersecurity, epidemiology, cyber epidemiology, malware
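The baseline SEIR dynamics that the SUEICRN model extends can be sketched with a simple Euler integration; hosts flow from Susceptible to Exposed to Infectious to Removed at rates β, σ, and γ. The rates, population, and step size below are illustrative assumptions, not fitted malware parameters.

```python
# One Euler step of the deterministic SEIR compartmental model.

def seir_step(state, beta, sigma, gamma, dt=0.1):
    s, e, i, r = state
    n = s + e + i + r
    new_exposed    = beta * s * i / n * dt   # S -> E (contact with infectious)
    new_infectious = sigma * e * dt          # E -> I (incubation ends)
    new_removed    = gamma * i * dt          # I -> R (cleanup / patching)
    return (s - new_exposed,
            e + new_exposed - new_infectious,
            i + new_infectious - new_removed,
            r + new_removed)

# 1000 hosts, 10 initially infectious
state = (990.0, 0.0, 10.0, 0.0)
for _ in range(1000):
    state = seir_step(state, beta=0.4, sigma=0.2, gamma=0.1)
```

The flows only move hosts between compartments, so the total population is conserved, a useful invariant when checking any extended compartmental model.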
Procedia PDF Downloads 107
2006 Low-Cost IoT System for Monitoring Ground Propagation Waves due to Construction and Traffic Activities to Nearby Construction
Authors: Lan Nguyen, Kien Le Tan, Bao Nguyen Pham Gia
Abstract:
Due to their high cost, specialized dynamic measurement devices are difficult for many colleges to acquire for hands-on teaching. This study connects a dynamic measurement sensor and receiver using an inexpensive Raspberry Pi 4 board, 24-bit ADC circuits, a geophone vibration sensor, and embedded open-source Python programming to gather and analyze signals for dynamic measurement, ground vibration monitoring, and structural vibration monitoring. The system can wirelessly communicate data to a computer and is set up as a network of communication nodes, enabling real-time monitoring of background vibrations at various locations. The device can be utilized for a variety of dynamic measurement and monitoring tasks, including monitoring earthquake vibrations, ground vibrations from construction operations and traffic, and vibrations of building structures.
Keywords: sensors, FFT, signal processing, real-time data monitoring, ground propagation wave, Python, Raspberry Pi 4
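The FFT-based processing step, finding the dominant frequency of a sampled vibration trace, can be sketched in pure Python with a naive DFT. A 10 Hz sine sampled at 100 Hz stands in for a real geophone signal; in practice one would use an FFT library, since the naive DFT below is O(n²).

```python
import cmath, math

def dft_magnitudes(samples):
    """Magnitude spectrum of a real signal (non-negative frequencies only)."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

fs = 100                                      # sampling rate, Hz
# One second of a synthetic 10 Hz ground-vibration tone
samples = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
mags = dft_magnitudes(samples)
# Bin index times frequency resolution (fs / n) gives the frequency in Hz
dominant_hz = max(range(len(mags)), key=mags.__getitem__) * fs / len(samples)
```

With 100 samples the frequency resolution is 1 Hz, so the peak bin lands exactly on the 10 Hz tone.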
Procedia PDF Downloads 103
2005 Mexico's Steam Connections Across the Pacific (1867-1910)
Authors: Ruth Mandujano Lopez
Abstract:
During the second half of the 19th century, in the transition from sail to steam navigation, the transpacific space underwent a major transformation. This paper examines the role that steamship companies operating between Mexico, the rest of North America, and Asia played in that process. Based on primary sources found in Mexico, California, London, and Hong Kong, it argues that these companies actively participated in the redefinition of the Pacific space as they opened new routes, transported thousands of people, and influenced regional geopolitics. To demonstrate this, the text presents the cases of a handful of companies that emerged between 1867 and 1910 and of some of their passengers. By looking at the way Mexican ports were incorporated into the transpacific steam maritime network, this work contributes to a better understanding of the role that Latin American ports have played in the formation of a global order. From a theoretical point of view, it proposes the conceptualization of space in the form of transnational networks as a point of departure for conceiving a history that is truly global.
Keywords: Mexico, steamships, transpacific, maritime companies
Procedia PDF Downloads 48
2004 Predicting Global Solar Radiation Using Recurrent Neural Networks and Climatological Parameters
Authors: Rami El-Hajj Mohamad, Mahmoud Skafi, Ali Massoud Haidar
Abstract:
Several meteorological parameters were used for the prediction of monthly average daily global solar radiation on a horizontal surface using recurrent neural networks (RNNs). Climatological data and measurements, mainly air temperature, humidity, sunshine duration, and wind speed between 1995 and 2007, were used to design and validate feed-forward and recurrent neural network-based prediction systems. In this paper we present our reference system, based on a feed-forward multilayer perceptron (MLP), as well as the proposed approach based on an RNN model. The obtained results were promising and comparable to those obtained by other existing empirical and neural models. The experimental results showed the advantage of RNNs over simple MLPs when dealing with time-series solar radiation predictions based on daily climatological data.
Keywords: recurrent neural networks, global solar radiation, multilayer perceptron, gradient, root mean square error
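The root mean square error named in the keywords is the metric by which the MLP and RNN forecasts would be compared. A minimal sketch; the two short series below are illustrative, not the study's measurements.

```python
import math

def rmse(predicted, observed):
    """Root mean square error between two equal-length series."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(observed))

# Hypothetical monthly-average daily radiation values (e.g., kWh/m^2/day)
observed  = [5.1, 5.6, 6.0, 5.8]
predicted = [5.0, 5.8, 5.9, 5.9]
error = rmse(predicted, observed)
```

A lower RMSE on held-out data is what would justify preferring the RNN over the reference MLP.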
Procedia PDF Downloads 444
2003 An Industrial Scada System Remote Control Using Mobile Phones
Authors: Ahmidah Elgali
Abstract:
SCADA is the abbreviation for “Supervisory Control And Data Acquisition.” SCADA systems are widely used in industry for supervisory control and data acquisition of industrial processes. Conventional SCADA systems use a PC, notebook, thin client, or PDA as the client. In this paper, a Java-enabled mobile phone has been used as the client in a sample SCADA application to display and control the position of a sample model crane. The paper presents a real implementation of online control of the model crane through a mobile phone. The wireless communication between the mobile phone and the SCADA server is performed through a base station via the General Packet Radio Service (GPRS) and the Wireless Application Protocol (WAP). This application can be used in industrial sites in areas likely to be exposed to security emergencies (such as terrorist attacks) that cause the sudden exit of the operators, leaving no time to perform the shutdown procedures for the plant. The application allows units and equipment to be shut down remotely by mobile phone, thus avoiding damage and losses.
Keywords: control, industrial, mobile, network, remote, SCADA
Procedia PDF Downloads 78
2002 Morphometric Parameters and Evaluation of Persian Fallow Deer Semen in Dashenaz Refuge in Iran
Authors: Behrang Ekrami, Amin Tamadon
Abstract:
The Persian fallow deer (Dama dama mesopotamica) belongs to the family Cervidae and is only found in a few protected areas in the northwest, north, and southwest of Iran. The aims of this study were to analyze inbreeding and morphometric parameters of semen in male Persian fallow deer in order to investigate the cause of reduced fertility of this endangered species in Dasht-e-Naz National Refuge, Sari, Iran. Semen was collected randomly from four adult bucks, dehorned and horned, during the breeding and non-breeding seasons using an artificial vagina. Twelve blood samples were taken from Persian fallow deer; mitochondrial DNA was extracted, amplified, and sequenced, and then considered for genetic analysis. Persian fallow deer semen, with both normal and abnormal spermatozoa, is similar to that of domestic ruminants but much smaller and difficult to observe at primary observation. The ejaculates collected after the mating season contained abnormal spermatozoa, debris, and accessory gland secretions in horned bucks, and accessory gland secretions free of any spermatozoa in dehorned or early velvet-budding bucks. Microscopic evaluation in all four bucks during the mating season showed a mean concentration of 9×10⁶ spermatozoa/ml. The mean±SD of age, testis length, and testis width was 4.60±1.52 years, 3.58±0.32 cm, and 1.86±0.09 cm, respectively. The results identified 1120 loci (assuming each nucleotide as a locus), of which 377 were polymorphic. In conclusion, the reduced fertility of male Persian fallow deer may be caused by inbreeding of the protected herd in the limited area of Dasht-e-Naz National Refuge.
Keywords: Persian fallow deer, spermatozoa, reproductive characteristics, morphometric parameters
Procedia PDF Downloads 577
2001 A Query Optimization Strategy for Autonomous Distributed Database Systems
Authors: Dina K. Badawy, Dina M. Ibrahim, Alsayed A. Sallam
Abstract:
A distributed database is a collection of logically related databases that cooperate in a transparent manner. Query processing, which uses a communication network for transmitting data between sites, is one of the challenges in the database world. The development of sophisticated query optimization technology is a reason for the commercial success of database systems, whose complexity and cost increase with the number of relations in a query. Mariposa, query trading, and query trading with processing-task trading are strategies developed for autonomous distributed database systems, but they incur high optimization cost because all nodes are involved in generating an optimal plan. In this paper, we propose a modification of the autonomous strategy K-QTPT in which the seller nodes with the lowest cost are gradually given higher priorities, in order to reduce the optimization time. We implemented the proposed strategy and present the results and analysis based on them.
Keywords: autonomous strategies, distributed database systems, high priority, query optimization
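The abstract does not spell out the K-QTPT modification in detail; the sketch below only illustrates the stated idea of giving the lowest-cost seller nodes higher priority, so that plan generation need not involve every node in the network. The node names and bid costs are hypothetical.

```python
# Hypothetical seller bids: node name -> estimated cost of executing a subquery.
bids = {"node_a": 120.0, "node_b": 45.0, "node_c": 80.0, "node_d": 200.0}

def prioritize_sellers(bids, top_k=2):
    """Rank seller nodes by ascending bid cost and keep only the top_k
    cheapest, so that optimization does not consult every node."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1])
    return ranked[:top_k]

shortlist = prioritize_sellers(bids)
winner, winning_cost = shortlist[0]
```

Restricting plan generation to a short list of cheap sellers is what cuts the optimization time relative to strategies that involve all nodes.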
Procedia PDF Downloads 524
2000 Mourning Motivations for Celebrities in Instagram: A Case Study of Mohammadreza Shajarian's Death
Authors: Zahra Afshordi
Abstract:
Instagram, as an everyday-life social network, hosts everything from ultrasound images of unborn fetuses to pictures of newly placed gravestones and funerals. It is a platform that allows its users to create a second identity, independent from and at the same time related to their real-space identity. The motives behind this identification are the subject of this article, which studies the motivations of Instagram users mourning for celebrities, with a focus on the death of MohammadReza Shajarian. Shajarian's death resonated widely among Persian-speaking Instagram users. The purpose of this qualitative survey is to comprehend and study the users' motivations for posting mourning and memorializing content. The methodology of the essay is a hybrid one, combining content analysis and open-ended interviews. The results highlight that users' motives go beyond simple sympathy and include political protest, gaining cultural capital, reaching social status, and escaping solitude.
Keywords: case study, celebrity, identity, Instagram, mourning, qualitative survey
Procedia PDF Downloads 156
1999 Quality of Ram Semen in Relation to Scrotal Biometry
Authors: M. M. Islam, S. Sharmin, M. Shah Newaz, N. S. Juyena, M. M. Rahman, P. K. Jha, F. Y. Bari
Abstract:
The aim of the present study was to select high-quality rams by measuring scrotal biometry, which has an effect on semen parameters. Ten rams were selected for the study. Eight ejaculates were collected from each ram using the artificial vagina method. Scrotal circumference was measured before and after semen collection on a weekly basis using a scrotal tape. Biometrics of the scrotum (scrotal length and scrotal volume) were calculated. Semen was evaluated for macroscopic and microscopic characteristics. The average estimated scrotal circumference (cm) and scrotal volume (cm³) in the 8 age groups were 17.16±0.05 cm and 61.30±0.70 cm³, 17.17±0.62 cm and 63.67±4.49 cm³, 17.22±0.52 cm and 64.90±4.21 cm³, 17.72±0.37 cm and 67.10±4.20 cm³, 18.41±0.35 cm and 69.52±4.12 cm³, 18.45±0.36 cm and 77.17±3.81 cm³, 18.55±0.41 cm and 78.72±4.90 cm³, and 19.10±0.30 cm and 87.35±5.45 cm³, respectively. Body weight, scrotal circumference, and scrotal volume increased with age (p < 0.05). Body weight in the 381-410 day age group (13.62±1.48 kg) was significantly higher than in the 169-200 day (10.17±0.05 kg) and 201-230 day (10.42±1.18 kg) groups (p < 0.05). Scrotal circumference (SC) in the 381-410 day group (19.10±0.30 cm) was significantly higher (p < 0.05) than in the other groups. In the 381-410 day group, scrotal volume (SCV) (87.35±5.45 cm³) was significantly higher than in the first five groups (p < 0.05). Development of both scrotal circumference and scrotal volume was positively correlated with increasing body weight (R2 = 0.51). Semen volume increased with age, varying from 0.35±0.00 ml to 1.15±0.26 ml. Semen volume in the 381-410 day group (1.15±0.26 ml) was significantly higher than in the other age groups (p < 0.05), except the 351-380 day group (p > 0.05). Mass activity in the different age groups varied from 2.75±0.35 to 4.25±0.29 on a scale of 1-5.
Sperm concentration and progressive motility (%) improved progressively with age, but significant changes in these parameters were seen when the animals reached the age of 291 days or more (p < 0.05). However, the percentage of normal spermatozoa improved significantly from the age of 261 days. Mass activity was positively correlated with sperm concentration (R2 = 0.568) and progressive motility (%) (R2 = 0.616). The relationships of semen volume with body weight, scrotal measurements, and sperm concentration indicate that they are useful in evaluating rams for breeding soundness and genetic improvement for fertility in indigenous rams.
Keywords: breeding soundness, ram, semen quality, scrotal biometry
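The reported relationships (e.g. R2 = 0.51 between body weight and the scrotal measures) are coefficients of determination from simple regressions. The sketch below shows how such an R2 can be computed with NumPy; the paired observations are invented for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical paired observations (illustrative, not the study's data):
body_weight = np.array([10.2, 10.4, 11.0, 11.8, 12.3, 12.9, 13.3, 13.6])   # kg
scrotal_circ = np.array([17.2, 17.2, 17.2, 17.7, 18.4, 18.5, 18.6, 19.1])  # cm

def r_squared(x, y):
    """Coefficient of determination of a simple least-squares line y ~ a + b*x."""
    slope, intercept = np.polyfit(x, y, 1)       # highest-degree coefficient first
    y_hat = intercept + slope * x
    ss_res = np.sum((y - y_hat) ** 2)            # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)         # total sum of squares
    return 1.0 - ss_res / ss_tot

r2 = float(r_squared(body_weight, scrotal_circ))
```

An R2 near 1 indicates that scrotal growth tracks body weight closely; the study's value of 0.51 indicates a moderate positive relationship.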
Procedia PDF Downloads 366
1998 Uncertainty Estimation in Neural Networks through Transfer Learning
Authors: Ashish James, Anusha James
Abstract:
The impressive predictive performance of deep learning techniques on a wide range of tasks has led to their widespread use. Estimating the confidence of these predictions is paramount for improving the safety and reliability of such systems. However, the uncertainty estimates provided by neural networks (NNs) tend to be overconfident and unreasonable. Ensembles of NNs typically produce good predictions, but their uncertainty estimates tend to be inconsistent. Inspired by these observations, this paper presents a framework that can quantitatively estimate uncertainties by leveraging advances in transfer learning, through slight modifications to existing training pipelines. This promising algorithm is developed with the intention of deployment in real-world problems that already boast good predictive performance, by reusing pretrained models. The idea is to capture the behavior of the NNs trained for the base task by augmenting them with uncertainty estimates from a supplementary network. A series of experiments with known and unknown distributions shows that the proposed approach produces well-calibrated uncertainty estimates with high-quality predictions.
Keywords: uncertainty estimation, neural networks, transfer learning, regression
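As a toy illustration of the idea of augmenting a frozen base predictor with a supplementary uncertainty model, the sketch below fits a simple model to the base predictor's absolute residuals instead of retraining the base. The functional forms and data are assumptions for illustration; the paper's actual supplementary network and training pipeline are not described in this abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "pretrained" base model for y = 2x, with frozen weights (assumed).
def base_predict(x):
    return 2.0 * x

# Heteroscedastic data: noise grows with |x|, so uncertainty should too.
x = rng.uniform(-1, 1, size=500)
y = 2.0 * x + rng.normal(scale=0.05 + 0.5 * np.abs(x))

# Supplementary "network": here a tiny model sigma(x) = w*|x| + b, fitted on
# the frozen base model's absolute residuals rather than retraining the base.
residuals = np.abs(y - base_predict(x))
w, b = np.polyfit(np.abs(x), residuals, 1)

def predict_with_uncertainty(x_new):
    """Return (prediction, estimated uncertainty) for a new input."""
    return base_predict(x_new), w * abs(x_new) + b

pred, sigma_small = predict_with_uncertainty(0.05)
_, sigma_large = predict_with_uncertainty(0.9)
```

The point of the pattern is that the base predictor's behavior is preserved while a cheap auxiliary model supplies calibrated uncertainty where the noise is larger.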
Procedia PDF Downloads 135
1997 Quantifying Stability of Online Communities and Its Impact on Disinformation
Authors: Victor Chomel, Maziyar Panahi, David Chavalarias
Abstract:
Misinformation has taken an increasingly worrying place on social media. Propagation patterns are closely linked to the structure of communities. This study proposes a method of community analysis based on a combination of centrality indicators for the network and its main communities. The objective is to establish a link between the stability of communities over time, the internal social ascension of their members, and the propagation of information within the community. To this end, data from the debates about global warming and from political communities on Twitter were collected, and several tens of millions of tweets and retweets have helped us better understand the structure of these communities. Quantifying this stability allows the study of the propagation of information of any kind, including disinformation. Our results indicate that the most stable communities over time are the ones that enable the establishment of nodes capturing a large part of the information and broadcasting their opinions. Conversely, communities with high turnover and social ascendancy stabilize strongly only in the face of adversity and external events, but seem to offer a greater diversity of opinions most of the time.
Keywords: community analysis, disinformation, misinformation, Twitter
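The abstract does not state the exact stability metric used; one common and simple choice is the Jaccard overlap of a community's member set between consecutive snapshots, sketched below with hypothetical user IDs.

```python
def jaccard_stability(members_t1, members_t2):
    """Share of members retained between two snapshots of the same community:
    1.0 means perfectly stable membership, 0.0 means complete turnover."""
    a, b = set(members_t1), set(members_t2)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical snapshots of one community's membership at two dates
stable = jaccard_stability({"u1", "u2", "u3", "u4"}, {"u1", "u2", "u3", "u5"})
churned = jaccard_stability({"u1", "u2"}, {"u8", "u9"})
```

Tracking this score across snapshots separates stable communities from high-turnover ones, which can then be related to how information propagates through each.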
Procedia PDF Downloads 140
1996 Space Weather and Earthquakes: A Case Study of Solar Flare X9.3 Class on September 6, 2017
Authors: Viktor Novikov, Yuri Ruzhin
Abstract:
The studies completed to date on the relation between the Earth's seismicity and solar processes have provided fuzzy and contradictory results. To verify the idea that solar flares can trigger earthquakes, we analyzed a powerful surge of solar flare activity in early September 2017, which occurred as the 24th solar cycle approached its minimum and was accompanied by significant space weather disturbances. On September 6, 2017, a group of sunspots, AR2673, generated a large solar flare of class X9.3, the strongest flare of the past twelve years. Its explosion produced a coronal mass ejection partially directed towards the Earth. We carried out a statistical analysis of the USGS and EMSC earthquake catalogs to determine the effect of solar flares on global seismic activity. New evidence of earthquake triggering due to the Sun-Earth interaction is demonstrated by a simple comparison of the behavior of the Earth's seismicity before and after the strong solar flare. The global number of earthquakes with magnitudes of 2.5 to 5.5 within 11 days after the solar flare increased by 30 to 100%. The possibility of electric/electromagnetic triggering of earthquakes due to space weather disturbances is supported by the results of field and laboratory studies, where earthquakes (both natural and laboratory) were initiated by the injection of electric current into the Earth's crust. For the specific case of artificial electric earthquake triggering, the current density at the depth of earthquake sources is comparable with estimates of the density of telluric currents induced by variations of space weather conditions due to solar flares. Acknowledgment: The work was supported by RFBR grant No. 18-05-00255.
Keywords: solar flare, earthquake activity, earthquake triggering, solar-terrestrial relations
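The headline comparison, a 30 to 100% increase in event counts after the flare, is a simple relative change between the before and after windows. A minimal sketch, with invented counts rather than the actual USGS/EMSC catalog values:

```python
def percent_change(before, after):
    """Relative change in event counts after the flare, in percent."""
    return 100.0 * (after - before) / before

# Hypothetical global counts of magnitude 2.5-5.5 events in the 11-day
# windows before and after a flare (illustrative, not catalog values).
events_before = 200
events_after = 290

change = percent_change(events_before, events_after)  # 45.0% increase
```

The study's claim amounts to this statistic landing between +30% and +100% for the windows around the September 6, 2017 flare.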
Procedia PDF Downloads 143
1995 Optimal Load Control Strategy in the Presence of Stochastically Dependent Renewable Energy Sources
Authors: Mahmoud M. Othman, Almoataz Y. Abdelaziz, Yasser G. Hegazy
Abstract:
This paper presents a load control strategy based on a modification of the Big Bang-Big Crunch optimization method. The proposed strategy aims to determine the optimal load to be controlled and the corresponding time of control in order to minimize the energy purchased from the substation. The presented strategy helps the distribution network operator rely on renewable energy sources in supplying the system demand. The renewable energy sources used in the presented study are modeled using the diagonal band copula method and sequential Monte Carlo simulation in order to accurately consider the multivariate stochastic dependence between wind power, photovoltaic power, and the system demand. The proposed algorithms are implemented in the MATLAB environment and tested on the IEEE 37-node feeder. Several case studies are presented, and the subsequent discussion shows the effectiveness of the proposed algorithm.
Keywords: big bang big crunch, distributed generation, load control, optimization, planning
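For readers unfamiliar with Big Bang-Big Crunch optimization: each iteration collapses the candidate population to its fitness-weighted centre of mass (the "crunch") and then scatters new candidates around that centre with a shrinking radius (the "bang"). The sketch below applies this to a toy one-dimensional cost function standing in for the purchased energy; it is illustrative only, not the paper's modified algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def cost(x):
    # Toy stand-in for "energy purchased from the substation"; minimum at x = 3.
    return (x - 3.0) ** 2

def big_bang_big_crunch(n_candidates=30, n_iterations=40, span=10.0):
    population = rng.uniform(-span, span, n_candidates)
    for k in range(1, n_iterations + 1):
        fitness = 1.0 / (cost(population) + 1e-9)
        # Big Crunch: collapse to the fitness-weighted centre of mass.
        centre = np.sum(fitness * population) / np.sum(fitness)
        # Big Bang: scatter new candidates around the centre, radius ~ 1/k.
        population = centre + rng.normal(size=n_candidates) * span / k
    return centre

best = float(big_bang_big_crunch())
```

In the paper's setting, each candidate would encode a load/time pair and the cost would be the resulting purchased energy under the copula-sampled renewable scenarios.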
Procedia PDF Downloads 344
1994 Accounting Management Information System for Convenient Shop in Bangkok Thailand
Authors: Anocha Rojanapanich
Abstract:
The purpose of this research is to develop and design an accounting management information system for convenience shops in Bangkok, Thailand. The study applied the System Development Life Cycle (SDLC), beginning with a study and analysis of current data, including the existing system. The system was then designed and developed to meet users' requirements over the Internet, using application software such as MySQL for database management, Apache HTTP Server as the web server, and PHP Hypertext Preprocessor as the interface between the web server, the database, and users. The system was designed as two subsystems: the main system for the head office and the branch system for branch shops. These consisted of three parts, classified by user management: shop management, inventory management, and Point of Sale (POS) management, as well as cost information for decision making.
Keywords: accounting management information system, convenient shop, cost information for decision making system, development life cycle
Procedia PDF Downloads 420
1993 Degradation Model for UK Railway Drainage System
Authors: Yiqi Wu, Simon Tait, Andrew Nichols
Abstract:
Management of UK railway drainage assets is challenging due to the large number of historical assets with long life cycles. A major concern for asset managers is to maintain the required performance economically and efficiently while complying with the relevant regulation and legislation. As the majority of drainage assets are buried underground and are often difficult or costly to examine, it is important for asset managers to understand and model the degradation process in order to foresee upcoming reductions in asset performance and conduct proactive maintenance accordingly. In this research, a Markov chain approach is used to model the deterioration process of rail drainage assets. The study is based on historical condition scores and characteristics of drainage assets across the whole railway network in England, Scotland, and Wales. The model is used to examine the effect of various characteristics on the probabilities of degradation, for example, regional differences in the probabilities of degradation, and how material and shape can influence the deterioration process for chambers, channels, and pipes.
Keywords: deterioration, degradation, Markov models, probability, railway drainage
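A Markov chain degradation model of this kind can be summarized by a one-year transition matrix over condition grades, which is then raised to a power to project conditions several years ahead. The grades and probabilities below are hypothetical, not fitted to the network's condition scores.

```python
import numpy as np

# Hypothetical one-year transition matrix over condition grades 1 (good),
# 2 (fair), 3 (poor); assets can only stay the same or deteriorate.
P = np.array([
    [0.90, 0.08, 0.02],
    [0.00, 0.85, 0.15],
    [0.00, 0.00, 1.00],
])

def condition_distribution(initial, years):
    """Probability of each condition grade after `years` annual transitions."""
    return initial @ np.linalg.matrix_power(P, years)

new_asset = np.array([1.0, 0.0, 0.0])   # asset starting in grade 1
after_10 = condition_distribution(new_asset, 10)
p_poor_10 = float(after_10[2])          # probability of grade 3 after 10 years
```

In the study, separate matrices (or covariate-dependent transition probabilities) would capture the effects of region, material, and shape on the deterioration rate.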
Procedia PDF Downloads 221
1992 A Comprehensive Study of Camouflaged Object Detection Using Deep Learning
Authors: Khalak Bin Khair, Saqib Jahir, Mohammed Ibrahim, Fahad Bin, Debajyoti Karmaker
Abstract:
Object detection is a computer technology that deals with searching through digital images and videos for occurrences of semantic elements of a particular class. It is associated with image processing and computer vision. Building on object detection, we detect camouflaged objects within images using deep learning techniques. Deep learning is a subset of machine learning based on neural networks with multiple layers. Over 6,500 images that possess camouflage properties were gathered from various internet sources and divided into four categories to compare the results. The images were labeled and then trained and tested with the VGG16 architecture in a Jupyter notebook on the TensorFlow platform. The architecture was further customized using transfer learning, which provides methods for transferring information from one or more source tasks to improve learning in a related target task. The purpose of these transfer learning methodologies is to aid the evolution of machine learning to the point where it is as efficient as human learning.
Keywords: deep learning, transfer learning, TensorFlow, camouflage, object detection, architecture, accuracy, model, VGG16
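The paper fine-tunes VGG16 in TensorFlow; as a framework-free illustration of the transfer learning pattern it relies on (freeze a pretrained feature extractor, train only a small task-specific head), the sketch below uses a fixed random "pretrained" extractor and a logistic head on toy two-class data. Everything here is an assumption for illustration, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

# Frozen "pretrained" feature extractor (a stand-in for VGG16's conv base);
# its weights are never updated during head training.
W_frozen = rng.normal(size=(20, 8))
def extract_features(x):
    return np.maximum(0.0, x @ W_frozen)   # ReLU features

# Toy two-class data ("camouflaged" vs "not"), separable in the raw space.
x0 = rng.normal(loc=-1.0, size=(100, 20))
x1 = rng.normal(loc=1.0, size=(100, 20))
X = np.vstack([x0, x1])
y = np.array([0] * 100 + [1] * 100)

# Transfer learning step: train only a small logistic head on the features.
feats = extract_features(X)
w = np.zeros(8)
b = 0.0
for _ in range(300):                       # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    grad = p - y
    w -= 0.01 * feats.T @ grad / len(y)
    b -= 0.01 * grad.mean()

accuracy = float(np.mean(((feats @ w + b) > 0) == (y == 1)))
```

Training only the head is what makes transfer learning cheap: the expensive representation is reused, and only a few parameters are fitted to the new task.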
Procedia PDF Downloads 149