Search results for: computer networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4852

1732 Stress Solitary Waves Generated by a Second-Order Polynomial Constitutive Equation

Authors: Tsun-Hui Huang, Shyue-Cheng Yang, Chiou-Fen Shieha

Abstract:

In this paper, a nonlinear constitutive law and curve fitting of two relationships, the stress–strain and shear stress–strain relations for sandstone, were used to obtain a second-order polynomial constitutive equation. Based on the established polynomial constitutive equation and Newton’s second law, a mathematical model in the form of a non-homogeneous nonlinear wave equation under external pressure was derived. The external pressure can be assumed to be an impulse function to simulate a real earthquake source. The displacement response of the nonlinear two-dimensional wave equation was determined by a numerical method and computer-aided software. The results show that a suitable pressure in the sandstone generates the phenomenon of stress solitary waves.
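
As a hedged illustration of the kind of model described (a generic form only; the coefficients, density and forcing term below are assumptions, not the authors' fitted equation):

```latex
% Generic second-order polynomial constitutive law and the resulting
% non-homogeneous nonlinear wave equation (illustrative notation only).
\begin{align}
  \sigma(\varepsilon) &= E_1\,\varepsilon + E_2\,\varepsilon^{2},\\
  \rho\,\frac{\partial^{2} u}{\partial t^{2}} &=
      \frac{\partial}{\partial x}\,\sigma\!\left(\frac{\partial u}{\partial x}\right) + f(x,t),
\end{align}
```

where f(x,t) stands for the impulse-type external pressure approximating an earthquake source.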

Keywords: polynomial constitutive equation, solitary, stress solitary waves, nonlinear constitutive law

Procedia PDF Downloads 483
1731 Simulation-Based Unmanned Surface Vehicle Design Using PX4 and Robot Operating System With Kubernetes and Cloud-Native Tooling

Authors: Norbert Szulc, Jakub Wilk, Franciszek Górski

Abstract:

This paper presents an approach for simulating and testing robotic systems based on PX4, using a local Kubernetes cluster. The approach leverages modern cloud-native tools and runs on single-board computers. Additionally, this solution enables the creation of datasets for computer vision and the end-to-end evaluation of control system algorithms. The paper compares this approach to the commonly used Docker-based approach. The approach was used to develop a simulation environment for an unmanned surface vehicle (USV) for RoboBoat 2023 by running a containerized configuration of the PX4 open-source autopilot connected to ROS and the Gazebo simulation environment.

Keywords: cloud computing, Kubernetes, single board computers, simulation, ROS

Procedia PDF Downloads 63
1730 Dynamic Web-Based 2D Medical Image Visualization and Processing Software

Authors: Abdelhalim N. Mohammed, Mohammed Y. Esmail

Abstract:

In recent decades, medical imaging has been dominated by the use of costly film media for the review and archiving of medical investigations. Owing to developments in network technologies and the common acceptance of the Digital Imaging and Communications in Medicine (DICOM) standard, another approach based on the World Wide Web has emerged. Web technologies have been used successfully in telemedicine applications, and the combination of web technologies with DICOM was used here to design a web-based, open-source DICOM viewer. The web server allows the query and retrieval of images, and the images are viewed and manipulated inside a web browser without the need to preinstall any software. The dynamic site for medical image visualization and processing was created using JavaScript and HTML5. The XAMPP Apache server is used to create a local web server for testing and deployment of the dynamic site. The web-based viewer is connected to multiple devices through a local area network (LAN) to distribute images inside healthcare facilities. The system offers several advantages over ordinary picture archiving and communication systems (PACS): it is easy to install, maintain and run on independent platforms that allow images to be displayed and manipulated efficiently, and it is user-friendly and easy to integrate with existing systems that already make use of web technologies. A wavelet-based image compression technique is used, in which a 2-D discrete wavelet transform decomposes the image; the wavelet coefficients are then thresholded and transmitted with entropy encoding to decrease transmission time, storage cost and capacity. Compression performance was estimated using image quality metrics such as the mean square error (MSE), peak signal-to-noise ratio (PSNR) and compression ratio (CR), the latter reaching 83.86% when the ‘coif3’ wavelet filter is used.
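
As a hedged sketch of the compression step described above (PyWavelets is assumed; the decomposition level, threshold value and test image are illustrative choices, not the authors' settings):

```python
# Sketch: 2-D DWT compression with hard thresholding, plus MSE/PSNR metrics.
# The 'coif3' filter matches the abstract; everything else is an assumption.
import numpy as np
import pywt

def compress_and_reconstruct(image, wavelet="coif3", level=2, threshold=20.0):
    # Decompose the image into wavelet coefficients.
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    # Hard-threshold the detail coefficients (small values become zero, which is
    # what makes the subsequent entropy coding effective; CR can be estimated
    # from the fraction of coefficients zeroed).
    thresholded = [coeffs[0]] + [
        tuple(pywt.threshold(band, threshold, mode="hard") for band in detail)
        for detail in coeffs[1:]
    ]
    # Reconstruct the (lossy) image from the remaining coefficients.
    return pywt.waverec2(thresholded, wavelet=wavelet)

def mse_psnr(original, reconstructed, max_val=255.0):
    mse = float(np.mean((original.astype(float) - reconstructed.astype(float)) ** 2))
    psnr = 10.0 * np.log10(max_val ** 2 / mse) if mse > 0 else float("inf")
    return mse, psnr

if __name__ == "__main__":
    img = np.random.randint(0, 256, (256, 256)).astype(float)  # stand-in for an image slice
    rec = compress_and_reconstruct(img)
    print(mse_psnr(img, rec[: img.shape[0], : img.shape[1]]))
```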

Keywords: DICOM, discrete wavelet transform, PACS, HIS, LAN

Procedia PDF Downloads 150
1729 Redox-Mediated Supramolecular Radical Gel

Authors: Sonam Chorol, Sharvan Kumar, Pritam Mukhopadhyay

Abstract:

In biology, supramolecular systems require the use of chemical fuels to stay in sustained nonequilibrium steady states, termed dissipative self-assembly, in contrast to synthetic self-assembly. Biomimicking these natural dynamic systems, some studies have demonstrated artificial self-assembly under nonequilibrium conditions utilizing various forms of energy (fuel) such as chemical, redox, and pH. Naphthalene diimides (NDIs) are well-known organic molecules in supramolecular architectures with high electron affinity and have applications in controlled electron transfer (ET) reactions. Herein, we report the endergonic ET from tetraphenylborate to the highly electron-deficient phosphonium NDI²+ dication to generate the NDI•+ radical. The formation of radicals was confirmed by UV-Vis-NIR absorption spectroscopy. Electron-donor and electron-acceptor energy levels were calculated from experimental electrochemistry and theoretical DFT analysis. The HOMO of the electron donor lies below the LUMO of the electron acceptor, which indicates that the electron transfer is endergonic (ΔE°ET is negative). The endergonic ET from NaBPh₄ to the NDI²+ dication was achieved thermodynamically through the formation of a coupled biphenyl product, confirmed by GC-MS analysis. An NDI molecule bearing octyl phosphonium groups at the core and H-bond-forming imide moieties at the axial positions forms a gel. The rheological properties of the purified radical ion NDI•+ gels were evaluated. Atomic force microscopy studies reveal the formation of large branching-type networks with a maximum height of 70-80 nm. The endergonic ET from NaBPh₄ to the NDI²+ dication was used to design an assembly and disassembly redox reaction cycle using a reducing agent (NaBPh₄) and an oxidizing agent (Br₂) as chemical fuels. Part of the NaBPh₄ drives assembly, while a fraction of the NaBPh₄ is dissipated by forming a useful product. The system returns to the disassembled NDI²+ dication state upon the addition of Br₂. We believe bioinspired dissipative self-assembly is the best approach to developing future lifelike materials with autonomous behavior.

Keywords: Ionic-gel, redox-cycle, self-assembly, useful product

Procedia PDF Downloads 68
1728 Plantation Forests Height Mapping Using Unmanned Aerial System

Authors: Shiming Li, Qingwang Liu, Honggan Wu, Jianbing Zhang

Abstract:

Plantation forests are useful for timber production, recreation, environmental protection and social development. Stands height is an important parameter for the estimation of forest volume and carbon stocks. Although lidar is suitable technology for the vertical parameters extraction of forests, but high costs make it not suitable for operational inventory. With the development of computer vision and photogrammetry, aerial photos from unmanned aerial system can be used as an alternative solution for height mapping. Structure-from-motion (SfM) photogrammetry technique can be used to extract DSM and DEM information. Canopy height model (CHM) can be achieved by subtraction DEM from DSM. Our result shows that overlapping aerial photos is a potential solution for plantation forests height mapping.
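
A minimal sketch of the canopy height model step (rasterio is assumed, and the raster file names below are hypothetical SfM outputs on a common grid):

```python
# Sketch: derive a canopy height model (CHM) by subtracting the DEM from the DSM.
# 'dsm.tif' and 'dem.tif' are hypothetical co-registered rasters from the SfM step.
import numpy as np
import rasterio

with rasterio.open("dsm.tif") as dsm_src, rasterio.open("dem.tif") as dem_src:
    dsm = dsm_src.read(1).astype("float32")
    dem = dem_src.read(1).astype("float32")
    profile = dsm_src.profile

chm = dsm - dem
chm[chm < 0] = 0.0  # clamp small negative differences caused by noise

profile.update(dtype="float32")
with rasterio.open("chm.tif", "w", **profile) as dst:
    dst.write(chm, 1)

print("Mean stand height (m):", float(np.nanmean(chm)))
```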

Keywords: forest height mapping, plantation forests, structure-from-motion photogrammetry, UAS

Procedia PDF Downloads 269
1727 Radar Fault Diagnosis Strategy Based on Deep Learning

Authors: Bin Feng, Zhulin Zong

Abstract:

Radar systems are critical to modern military, aviation, and maritime operations, and their proper functioning is essential for the success of these operations. However, due to the complexity and sensitivity of radar systems, they are susceptible to various faults that can significantly affect their performance. Traditional radar fault diagnosis strategies rely on expert knowledge and rule-based approaches, which are often limited in effectiveness and require considerable time and resources. Deep learning has recently emerged as a promising approach for fault diagnosis due to its ability to learn features and patterns from large amounts of data automatically. In this paper, we propose a radar fault diagnosis strategy based on deep learning that can accurately identify and classify faults in radar systems. Our approach uses convolutional neural networks (CNNs) to extract features from radar signals and classify faults from those features. The proposed strategy is trained and validated on a dataset of measured radar signals with various types of faults, and the results show that it achieves high accuracy in fault diagnosis. To further evaluate its effectiveness, we compare it with traditional rule-based approaches and other machine learning methods, including decision trees, support vector machines (SVMs), and random forests. The results demonstrate that our deep learning-based approach outperforms the traditional approaches in terms of accuracy and efficiency. Finally, we discuss the potential applications and limitations of the proposed strategy, as well as future research directions. Our study highlights the importance and potential of deep learning for radar fault diagnosis and suggests that it can be a valuable tool for improving the performance and reliability of radar systems. In summary, this paper presents a radar fault diagnosis strategy based on deep learning that achieves high accuracy and efficiency in identifying and classifying faults in radar systems. The proposed strategy has significant potential for practical applications and can pave the way for further research.
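
A hedged sketch of the kind of CNN classifier described (Keras is assumed; the signal length, number of fault classes and layer sizes are illustrative, not the authors' architecture):

```python
# Sketch: 1-D CNN that maps a radar signal segment to one of several fault classes.
# Shapes and hyperparameters are assumptions for illustration only.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

SIGNAL_LEN = 1024   # samples per radar signal segment (assumed)
NUM_FAULTS = 5      # number of fault classes, including "healthy" (assumed)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SIGNAL_LEN, 1)),
    layers.Conv1D(32, kernel_size=7, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_FAULTS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in data; the paper uses measured radar signals with labelled faults.
x = np.random.randn(256, SIGNAL_LEN, 1).astype("float32")
y = np.random.randint(0, NUM_FAULTS, size=256)
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
```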

Keywords: radar system, fault diagnosis, deep learning, radar fault

Procedia PDF Downloads 72
1726 [Keynote Speech]: Curiosity, Innovation and Technological Advancements Shaping the Future of Science, Technology, Engineering and Mathematics Education

Authors: Ana Hol

Abstract:

We live in a constantly changing environment where technology has become an integral component of our day-to-day life. We rely heavily on mobile devices, we search for data via the web, we utilise smart home sensors to create the most suitable ambiences, and we utilise applications to shop, research, communicate and share data. This heavy reliance on technology is creating new connections between the STEM (Science, Technology, Engineering and Mathematics) fields, which in turn raises the question of what the STEM education of the future should be like. This study was based on the reviews of six Australian Information Systems students who undertook an international study tour to India, where they were given an opportunity to network, communicate and meet local students, staff and business representatives and learn from them about local business implementations, customs and regulations. The research identifies that if we are to continue to implement and utilise electronic devices on a global scale, for example smart cars that can smoothly cross borders, we will need a workforce with knowledge of the cars themselves, their parts, roads and transport networks, road rules, road sensors, road monitoring technologies, graphical user interfaces, movement detection systems, as well as the day-to-day operations, legal rules and regulations of each region and country, insurance policies, policing and processes, so that the wide array of sensors can be controlled across country borders. In conclusion, allowing students to learn about the local conditions, roads, operations, business processes, customs and values in different countries gives students a cutting-edge advantage, as such knowledge cannot be transferred via electronic sources alone. Once an understanding of each problem or project is established, multidisciplinary innovative STEM projects can be conducted smoothly.

Keywords: STEM, curiosity, innovation, advancements

Procedia PDF Downloads 185
1725 A Survey on Smart Security Mechanism Using Graphical Passwords

Authors: Aboli Dhanavade, Shweta Bhimnath, Rutuja Jumale, Ajay Nadargi

Abstract:

Security for our personal information is a basic need, and standard human–computer interaction approaches cannot be applied to authentication directly. An important usability goal for an authentication system is to support users in selecting strong passwords. Users often select text passwords that are easy to remember, but such passwords are easier for attackers to guess. The human brain is better at remembering pictures than textual characters, so graphical passwords have been designed as an alternative. However, graphical passwords are still immature. Conventional password schemes are also vulnerable to shoulder-surfing attacks, and many shoulder-surfing-resistant graphical password schemes have been proposed. We analyse the security and usability of the proposed scheme and show its resistance to shoulder surfing and to accidental logins.

Keywords: shoulder-surfing, security, authentication, text-passwords

Procedia PDF Downloads 352
1724 Spatial and Temporal Evaluations of Disinfection By-Products Formation in Coastal City Distribution Systems of Turkey

Authors: Vedat Uyak

Abstract:

Seasonal variations of trihalomethane (THM) and haloacetic acid (HAA) concentrations were investigated within three distribution systems of the coastal city of Istanbul, Turkey; total trihalomethane and other organics concentrations were also analyzed. The investigation was based on an intensive 16-month (2009-2010) sampling program undertaken during the spring, summer, fall and winter seasons. Four THM species (chloroform, dichlorobromomethane, chlorodibromomethane, bromoform) and nine HAA species (the most commonly occurring being dichloroacetic acid (DCAA) and trichloroacetic acid (TCAA); the other compounds are monochloroacetic acid (MCAA), monobromoacetic acid (MBAA), dibromoacetic acid (DBAA), tribromoacetic acid (TBAA), bromochloroacetic acid (BCAA), bromodichloroacetic acid (BDCAA) and chlorodibromoacetic acid (CDBAA)), together with other water quality and operational parameters, were monitored at points along the distribution system between the treatment plant and the system’s extremity. The effects of coastal water sources and of seasonal and spatial variation were examined. The results showed that THM and HAA concentrations vary significantly between treated water and water in the distribution networks. When the water temperature exceeds 26°C in summer, THM and HAA levels are 0.8 – 1.1 and 0.4 – 0.9 times higher than in treated water, respectively, whereas when the water temperature is below 12°C in winter, the THM and HAA concentrations measured at the system’s extremity very rarely exceeded 100 μg/L and 60 μg/L, respectively. The highest THM concentrations occurred in the Buyukcekmece distribution system, with an average total THM concentration of 92 μg/L, while the lowest THM levels were observed in the Omerli distribution network, with a mean concentration of 7 μg/L. For HAAs, the maximum concentrations were again observed in the Buyukcekmece distribution system, with an average total HAA concentration of 57 μg/L. The high spatial and seasonal variation of disinfection by-products in the drinking water of Istanbul was attributed to illegal wastewater discharges into the water supplies of the city.

Keywords: disinfection byproducts, drinking water, trihalomethanes, haloacetic acids, seasonal variation

Procedia PDF Downloads 136
1723 Lineup Optimization Model of Basketball Players Based on the Prediction of Recursive Neural Networks

Authors: Wang Yichen, Haruka Yamashita

Abstract:

In recent years, in the field of sports, decision making, such as selecting game members and game strategy based on the analysis of accumulated sports data, has been widely attempted. In the NBA, the basketball league where the world's highest-level players gather, teams analyze data using various statistical techniques in order to win games. However, it is difficult to analyze game data for each play, such as ball tracking or the motion of players, because the situation of the game changes rapidly and the structure of the data is complicated; an analysis method for real-time game play data is therefore needed. In this research, we propose an analytical model for determining the optimal lineup composition using real-time play data, a task that is considered difficult for all coaches. Because replacing the entire lineup is too complicated, the practical questions for player replacement are whether or not the lineup should be changed and whether or not a Small Ball lineup should be adopted; we therefore propose an analytical model for the optimal player selection problem based on Small Ball lineups. In basketball, scoring data, which indicates a player's contribution to the game, can be accumulated for each play and treated as a time series. In order to compare the importance of players in different situations and lineups, we combine an RNN (Recurrent Neural Network) model, which can analyze time-series data, with an NN (Neural Network) model, which can analyze the situation on the court, to build a score prediction model. This model is capable of identifying the current optimal lineup for different situations. In this research, we collected accumulated NBA data from the 2019-2020 season and applied the method to actual basketball play data to verify the reliability of the proposed model.
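
A hedged sketch of how a recurrent branch (the play-by-play scoring sequence) and a feed-forward branch (current situation and lineup features) could be combined into one score predictor, assuming Keras; all dimensions and layer sizes below are illustrative, not the authors' model:

```python
# Sketch: RNN over time-series scoring data combined with an NN over situation features.
import tensorflow as tf
from tensorflow.keras import layers

SEQ_LEN, SEQ_FEATURES = 20, 8     # last 20 plays, 8 per-play features (assumed)
SITUATION_FEATURES = 12           # lineup / game-situation descriptors (assumed)

seq_in = tf.keras.Input(shape=(SEQ_LEN, SEQ_FEATURES), name="play_sequence")
situation_in = tf.keras.Input(shape=(SITUATION_FEATURES,), name="situation")

h_seq = layers.SimpleRNN(32)(seq_in)                        # recurrent branch
h_sit = layers.Dense(16, activation="relu")(situation_in)   # feed-forward branch

merged = layers.concatenate([h_seq, h_sit])
score_pred = layers.Dense(1, name="predicted_score")(merged)

model = tf.keras.Model(inputs=[seq_in, situation_in], outputs=score_pred)
model.compile(optimizer="adam", loss="mse")
# Candidate lineups (e.g. the current lineup vs. a Small Ball lineup) could then be
# compared by feeding their situation vectors through the model and ranking the
# predicted scores.
```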

Keywords: recurrent neural network, players lineup, basketball data, decision making model

Procedia PDF Downloads 118
1722 Calibration and Validation of ArcSWAT Model for Estimation of Surface Runoff and Sediment Yield from Dhangaon Watershed

Authors: M. P. Tripathi, Priti Tiwari

Abstract:

The Soil and Water Assessment Tool (SWAT) is a distributed-parameter continuous-time model and was tested on daily and fortnightly bases for a small agricultural watershed (Dhangaon) of Chhattisgarh state in India. The SWAT model has recently been interfaced with ArcGIS and is called ArcSWAT. The watershed and sub-watershed boundaries, drainage networks, slope and texture maps were generated in the ArcGIS environment of ArcSWAT. A supervised classification method was used for land use/cover classification from satellite imagery of the years 2009 and 2012. Manning's roughness coefficient 'n' for overland and channel flow and the Fraction of Field Capacity (FFC) were calibrated for the monsoon seasons of 2009 and 2010. The model was validated on a daily basis for the years 2011 and 2012 using observed daily rainfall and temperature data. Calibration and validation results revealed that the model predicted daily surface runoff and sediment yield satisfactorily. Sensitivity analysis showed that the annual sediment yield was inversely proportional to the overland and channel 'n' values, whereas annual runoff and sediment yields were directly proportional to the FFC. The model was also tested (calibrated and validated) for fortnightly runoff and sediment yield for the years 2009-10 and 2011-12, respectively. Simulated values of fortnightly runoff and sediment yield for the calibration and validation years compared well with their observed counterparts. The calibration and validation results revealed that the ArcSWAT model could be used for identification of critical sub-watersheds and for developing management scenarios for the Dhangaon watershed. Further, the model should be tested for simulating surface runoff and sediment yield using generated rainfall and temperature data before applying it to develop management scenarios for the critical or priority sub-watersheds.

Keywords: watershed, hydrologic and water quality, ArcSWAT model, remote sensing, GIS, runoff and sediment yield

Procedia PDF Downloads 362
1721 Establishing the Optimum Location of a Single Tower Crane Using a Smart Mathematical Model

Authors: Yasser Abo El-Magd, Wael Fawzy Mohamed

Abstract:

Due to the great development of the construction and building field, many projects and large works have appeared that consume large quantities of construction materials. This makes handling with traditional transportation means (ordinary cranes) difficult due to their limited capacity, so there is an urgent need to use high-capacity cranes such as tower cranes. However, given their high expense, the selection of the type of crane to be utilized must be considered carefully, a problem that has been discussed by many researchers. In this research, a technique is proposed to select the suitable type of crane and the best place for crane erection, in addition to the minimum radius for the requested crane, in order to minimize cost. To fulfil that target, a computer program is designed to enumerate these problems, and a worked example demonstrates how to apply the program; the result yields the best location.

Keywords: tower crane, jib length, operating time, location, feasible area

Procedia PDF Downloads 205
1720 The Role of Social Influences and Cultural Beliefs on Perceptions of Postpartum Depression among Mexican Origin Mothers in San Diego

Authors: Mireya Mateo Gomez

Abstract:

The purpose of this study was to examine the perceptions that first-generation Mexican origin mothers living in San Diego have of postpartum depression (PPD), with a special focus on the social influences and cultural beliefs behind those meanings. This study also aimed to examine possible PPD help-seeking behaviors that first-generation Mexican origin mothers may perform. The Health Belief Model (HBM) and Social Ecological Model (SEM) were the guiding theoretical frameworks. Data were collected from three focus groups, four in-depth interviews, and the distribution of an acculturation survey (ARSMA II). There were 15 participants in total; their mean age was 45, and their mean age at migration to the United States was 22. Most participants identified as married, were born in Southern or Western Mexico, and showed a strong Mexican identity on the ARSMA survey. Participants identified four salient PPD perceptions corresponding to the interpersonal level of the SEM: 1) PPD affecting the identity of motherhood; 2) PPD being a natural part of a mother's experience but mitigated by networks; 3) PPD being a U.S. phenomenon due to family and community breakdown; and 4) natural remedies as a preferred PPD treatment. With regard to help-seeking behaviors, participants identified seven: 1) seeking help from immediate family members; 2) practicing home remedies; 3) seeking help from a medical professional; 4) obtaining help from a clinic or organization; 5) seeking help from God; 6) participating in PPD support groups; and 7) talking to a friend. It was evident in this study that postpartum depression is not a widely discussed topic within the Mexican immigrant population. Regarding the role culture and social influences have on PPD perceptions, most participants shared hearing or learning about PPD from family members or friends, and stated that they would seek help from family members and home remedies if diagnosed with PPD. This study also provides suggestions for increasing awareness of PPD among the Mexican immigrant community.

Keywords: cultural beliefs, health belief model, Mexican origin mothers, perceptions, postpartum depression, social ecological model

Procedia PDF Downloads 138
1719 Optimization of Hydraulic Fracturing for Horizontal Wells in Enhanced Geothermal Reservoirs

Authors: Qudratullah Muradi

Abstract:

Geothermal energy is a renewable energy source that can be found in abundance on our planet. Only a small fraction of it is currently converted to electrical power, though in recent years installed geothermal capacity has increased considerably all over the world. In this paper, we set up a model for the design of an Enhanced Geothermal System (EGS). We used the Computer Modelling Group (CMG) reservoir simulation software to create a typical Hot Dry Rock (HDR) reservoir. Two wells, one injecting cold water and one producing hot water, are included in the model. Several hydraulic fractures are created with the mentioned software, and cold water is injected in order to produce energy from the reservoir. The results of injecting cold water into the reservoir and extracting geothermal energy are presented in graphs at the end of this paper, and the energy production is quantified over a period of 10 years.

Keywords: geothermal energy, EGS, HDR, hydraulic fracturing

Procedia PDF Downloads 182
1718 Cybercrime: International Police Cooperation with Europol

Authors: Daniel Suarez Alonso

Abstract:

Cybercrime is a growing international threat and a challenge for law enforcement agencies and judicial systems worldwide. International cooperation is necessary to address this problem because cybercrime knows no borders, often involves multiple jurisdictions and is frequently related to organised crime. The purpose of this article is to analyse international cooperation in the investigation and prosecution of cybercrime, focusing on the framework of the Regulation of the European Union Agency for Law Enforcement Cooperation (EUROPOL) and on the cooperation that takes place between police authorities from different countries. It examines the legal and operational mechanisms in place in Europe to facilitate international cooperation in this area and assesses their effectiveness in the fight against cybercrime. In addition, a Spanish investigation in which cooperation with EUROPOL took place is examined, analysing how international cooperation was carried out to investigate and track down the criminals. Lessons learned from this case are discussed, and recommendations for improving international cooperation in the fight against cybercrime are proposed.

Keywords: Europol, international cooperation, cybercrime, computer crime, law

Procedia PDF Downloads 56
1717 20 Definitions in 20 Years: Exploring the Evolution of Blended Learning Definitions from 2003-2022

Authors: Damian Gordon, Paul Doyle, Anna Becevel, Tina Baloh

Abstract:

The goal of this research is to explore the evolution of the concept of “blended learning” over a twenty-year period, to see whether the conceptualization has remained consistent or has become either more specific or more general. To achieve this goal, the term “blended learning” (and variations) was searched for in various bibliographical repositories for each year from 2003 to 2022 to locate a highly cited paper that is not behind a paywall, yielding one unique definition per year that is freely available to all academics. Each of the twenty unique definitions is explored to identify how it categorizes both the classroom component and the computer component of blended learning, as well as which discipline and which country the definition originates from, to see if there are any significant geographical variations. Based on this analysis, trends that appear in the definitions are noted, along with an overall interpretation of the notion of “blended learning.”

Keywords: blended learning, definitions of blended learning, e-learning, thematic searches

Procedia PDF Downloads 114
1716 Amrita Bose-Einstein Condensate Solution Formed by Gold Nanoparticles Laser Fusion and Atmospheric Water Generation

Authors: Montree Bunruanses, Preecha Yupapin

Abstract:

In this work, the quantum material called Amrita (elixir) is made by processing gold top-down into nanometer particles, fusing 99% gold with a laser and mixing it with drinking water produced by an atmospheric water generation (AWG) system, which makes water from air. The high laser power breaks the four natural force bindings, the gravitational, weak, electromagnetic and strong coupling forces, leaving purified Bose-Einstein condensate (BEC) states. With this method, gold atoms in the form of spherical single crystals with a diameter of 30-50 nanometers are obtained and used. They are modulated (activated) with a frequency generator into various matrix structures mixed with AWG water to be used in the upstream conversion (quantum reversible) process, which can be applied to humans both internally and externally by drinking it or applying it to treated surfaces. Working on both space (body) and time (mind) returns them to their origin, starting again from the coupling of space-time on both sides of time at fusion (strong coupling force) and pushing out (Big Bang) at the equilibrium point (singularity), which occurs as strings and DNA with neutrinos as the coupling energy. There is no distortion (purification), which is the point where time and space have not yet been determined and there is infinite energy; therefore, the upstream conversion is performed, reforming the DNA to purify it. The use of Amrita is a method intended for people who cannot meditate (quantum meditation). It was applied in various cases, and the results show that Amrita can return the body and the mind to their pure origins and begin the downstream process with the Big Bang movement, quantum communication in all dimensions, DNA reformation, frequency filtering, crystal body forming, broadband quantum communication networks, black hole forming, quantum consciousness, body and mind healing, etc.

Keywords: quantum materials, quantum meditation, quantum reversible, Bose-Einstein condensate

Procedia PDF Downloads 57
1715 Image Processing-Based Maize Disease Detection Using Mobile Application

Authors: Nathenal Thomas

Abstract:

Corn, also known as maize (scientific name Zea mays), is a widely produced agricultural product used in the food chain and in many other agricultural products. Corn is highly adaptable: it comes in many different types, is employed in many different industrial processes, and adapts to different agro-climatic situations. In Ethiopia, maize is among the most widely grown crops, and small-scale corn farming may be a household's only source of food in developing nations like Ethiopia. These facts demonstrate that the country's requirement for this crop is very high while, conversely, the crop's productivity is very low for a variety of reasons. The most damaging factor that greatly contributes to this imbalance between the crop's supply and demand is corn disease, and the failure to diagnose diseases in maize plants until it is too late is one of the most important factors influencing crop output in Ethiopia. This study will aid in the early detection of such diseases and support farmers during the cultivation process, directly affecting the amount of maize produced. Diseases in maize plants, such as northern leaf blight and cercospora leaf spot, have distinct, visible symptoms. This study aims to detect the most frequent and damaging maize diseases using deep learning, an efficient and widely used subset of machine learning, applied to image processing. Deep learning uses networks that can be trained from unlabeled data without supervision (unsupervised), a feature that resembles the processes the human brain goes through when digesting data; its applications include speech recognition, language translation, object classification and decision-making. The Convolutional Neural Network (CNN), also known as a ConvNet, is a deep learning class widely used for image classification, image detection, face recognition and similar problems. This research uses this algorithm as the state of the art to detect maize diseases by photographing maize leaves with a mobile phone.
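
A hedged sketch of a CNN image classifier of the kind described, using transfer learning suited to mobile deployment (MobileNetV2 via Keras is an assumption, as the abstract does not name a backbone; the class names and image size are illustrative):

```python
# Sketch: transfer-learning CNN for maize leaf disease classification.
# MobileNetV2 is chosen here only because it is lightweight for phones;
# the paper does not specify this architecture.
import tensorflow as tf
from tensorflow.keras import layers

CLASSES = ["healthy", "northern_leaf_blight", "cercospora_leaf_spot"]  # illustrative
IMG_SIZE = (224, 224)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # keep pretrained features, train only the classifier head

model = tf.keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(...) would be run on a labelled dataset of maize leaf photographs, and the
# trained model exported (e.g. to TensorFlow Lite) for use inside the mobile application.
```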

Keywords: CNN, zea mays subsp, leaf blight, cercospora leaf spot

Procedia PDF Downloads 62
1714 Environmental Performance Measurement for Network-Level Pavement Management

Authors: Jessica Achebe, Susan Tighe

Abstract:

The recent Canadian infrastructure report card reveals the unhealthy state of municipal infrastructure and the intensified challenge faced by municipalities to maintain adequate infrastructure performance thresholds and meet users' required service levels. For a road agency, the huge funding gap is inflated by growing concerns about the environmental repercussions of road construction, operation and maintenance activities. Reducing material consumption and greenhouse gas emissions when maintaining and rehabilitating road networks can deliver added benefits, including improved life-cycle performance of pavements, reduced climate change impacts and human health effects due to less air pollution, improved productivity due to optimal allocation of resources, and reduced road user costs. Incorporating environmental sustainability measures into pavement management is a solution widely cited and studied; however, measuring the environmental performance of a road network is still far from common practice in road network management, and an ostensive agency-wide environmental sustainability or sustainable maintenance specification is missing. To address this challenge, the present research focuses on the environmental sustainability performance of network-level pavement management. The ultimate goal is to develop a framework to incorporate environmental sustainability into pavement management systems for network-level maintenance programming. In order to achieve this goal, this study reviewed previous work that employed environmental performance measures, as well as the suitability of environmental performance indicators for evaluating the sustainability of network-level pavement maintenance strategies. Through an industry practice survey, the paper provides a brief overview of pavement managers' motivations and barriers to making more sustainable decisions, and of the data needed to support network-level environmental sustainability. Trends in network-level sustainable pavement management are also presented, existing gaps are highlighted, and ideas are proposed for sustainable network-level pavement management.

Keywords: pavement management, sustainability, network-level evaluation, environment measures

Procedia PDF Downloads 199
1713 Robustness of the Deep Chroma Extractor and Locally-Normalized Quarter Tone Filters in Automatic Chord Estimation under Reverberant Conditions

Authors: Luis Alvarado, Victor Poblete, Isaac Gonzalez, Yetzabeth Gonzalez

Abstract:

In MIREX 2016 (http://www.music-ir.org/mirex), the deep neural network (DNN)-based Deep Chroma Extractor, proposed by Korzeniowski and Wiedmer, reached the highest score in an audio chord recognition task. In the present paper, this tool is assessed under acoustically reverberant environments and distinct source-microphone distances. The evaluation dataset comprises The Beatles and Queen datasets. These datasets are sequentially re-recorded with a single microphone in a real reverberant chamber at four reverberation times (0 -anechoic-, 1, 2, and 3 s, approximately), as well as four source-microphone distances (32, 64, 128, and 256 cm). It is expected that the performance of the trained DNN will decrease dramatically under these acoustic conditions, with signals degraded by room reverberation and distance to the source. Recently, the effect of the bio-inspired Locally-Normalized Cepstral Coefficients (LNCC) has been assessed in a text-independent speaker verification task using speech signals degraded by additive noise at different signal-to-noise ratios with variations of recording distance, and it has also been assessed under reverberant conditions with variations of recording distance. LNCC showed performance as high as the state-of-the-art Mel Frequency Cepstral Coefficient filters. Based on these results, this paper proposes a variation of locally-normalized triangular filters called Locally-Normalized Quarter Tone (LNQT) filters. By using the LNQT spectrogram, robustness improvements of the trained Deep Chroma Extractor are expected compared with classical triangular filters, thus compensating for the music signal degradation and improving the accuracy of the chord recognition system.
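
A rough sketch of how a quarter-tone triangular filter bank with local normalization might be constructed; this is one interpretation of the idea for illustration only, not the authors' implementation, and the frequency range, neighbourhood size and normalization choice are assumptions:

```python
# Sketch: quarter-tone (24 bands/octave) triangular filter bank applied to a
# magnitude spectrogram, with each band normalized by the energy of a small
# neighbourhood of adjacent bands ("local" normalization). Illustrative only.
import numpy as np

def quarter_tone_centers(f_min=65.4, f_max=2093.0):
    # Quarter-tone spaced center frequencies: 24 steps per octave (assumed range).
    n = int(np.floor(24 * np.log2(f_max / f_min))) + 1
    return f_min * 2.0 ** (np.arange(n) / 24.0)

def triangular_filterbank(freqs, centers):
    # One triangular filter per center, spanning its two neighbouring centers.
    fb = np.zeros((len(centers) - 2, len(freqs)))
    for i in range(1, len(centers) - 1):
        lo, c, hi = centers[i - 1], centers[i], centers[i + 1]
        rising = (freqs - lo) / (c - lo)
        falling = (hi - freqs) / (hi - c)
        fb[i - 1] = np.clip(np.minimum(rising, falling), 0.0, None)
    return fb

def lnqt(spectrogram, sample_rate, n_fft, neighbourhood=3, eps=1e-8):
    freqs = np.linspace(0.0, sample_rate / 2.0, n_fft // 2 + 1)
    fb = triangular_filterbank(freqs, quarter_tone_centers())
    band_energy = fb @ spectrogram            # shape: (bands, frames)
    # One possible reading of "local" normalization: divide each band by the
    # summed energy of the adjacent bands around it.
    norm = np.zeros_like(band_energy)
    for b in range(band_energy.shape[0]):
        lo_b = max(0, b - neighbourhood)
        hi_b = min(band_energy.shape[0], b + neighbourhood + 1)
        norm[b] = band_energy[lo_b:hi_b].sum(axis=0)
    return band_energy / (norm + eps)
```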

Keywords: chord recognition, deep neural networks, feature extraction, music information retrieval

Procedia PDF Downloads 222
1712 A Case Study: Social Network Analysis of Construction Design Teams

Authors: Elif D. Oguz Erkal, David Krackhardt, Erica Cochran-Hameen

Abstract:

Even though social network analysis (SNA) is an abundantly studied concept for many organizations and industries, a clear SNA approach to project teams has not yet been adopted by the construction industry. The main challenges for performing SNA in construction, and the apparent reasons for this gap, are the unique and complex structure of each construction project, the comparatively high turnover of project team members and contributing parties, and the variety of problems specific to each project. Additionally, stakeholders from a variety of professional backgrounds collaborate in a high-stress environment fueled by time and cost constraints. Within this case study on Project RE, a design-build project performed at the Urban Design Build Studio of Carnegie Mellon University, a social network analysis of the project design team is performed with the main goal of applying social network theory to construction project environments. The research objective is to determine a correlation between the network of how individuals relate to each other, their perceptions of their own professional strengths and weaknesses, the communication patterns within the team, and the group dynamics. Data are collected through a survey performed over four rounds conducted monthly, detailed follow-up interviews, and constant observation to assess the natural alteration of the network over time. The collected data are processed by means of network analytics and in light of the qualitative data gathered through observations and individual interviews. This paper presents the full ethnography of this construction design team of fourteen architecture students based on an elaborate social network data analysis over time. This study is expected to serve as an initial step toward refined, targeted and large-scale social network data collection in construction projects in order to deduce the impacts of social networks on project performance and suggest better collaboration structures for construction project teams henceforth.
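
As an illustrative sketch of the kind of network analytics described (networkx is assumed; the members and advice-seeking ties below are hypothetical, not the study's data):

```python
# Sketch: build a directed "who goes to whom for advice" network from one survey
# round and compute simple centrality measures. Names and ties are hypothetical.
import networkx as nx

round_1_ties = [
    ("ana", "ben"), ("ana", "cleo"), ("ben", "cleo"),
    ("dana", "cleo"), ("eli", "ana"), ("eli", "cleo"),
]

g = nx.DiGraph()
g.add_edges_from(round_1_ties)

in_degree = nx.in_degree_centrality(g)       # who is sought out most
betweenness = nx.betweenness_centrality(g)   # who bridges otherwise separate members

for member in sorted(g.nodes):
    print(f"{member:>5}  in-degree={in_degree[member]:.2f}  "
          f"betweenness={betweenness[member]:.2f}")
# Repeating this per monthly survey round and comparing the rankings against the
# interview and observation data shows how the team's communication structure shifts
# over time.
```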

Keywords: construction design teams, construction project management, social network analysis, team collaboration, network analytics

Procedia PDF Downloads 188
1711 Effects of Artificial Nectar Feeders on Bird Distribution and Erica Visitation Rate in the Cape Fynbos

Authors: Monique Du Plessis, Anina Coetzee, Colleen L. Seymour, Claire N. Spottiswoode

Abstract:

Artificial nectar feeders are used to attract nectarivorous birds to gardens and are increasing in popularity. The costs and benefits of these feeders remain controversial, however. Nectar feeders may have positive effects by attracting nectarivorous birds towards suburbia, facilitating their urban adaptation, and supplementing bird diets when floral resources are scarce. However, this may come at the cost of luring them away from the plants they pollinate in neighboring indigenous vegetation. This study investigated the effect of nectar feeders on an African pollinator-plant mutualism. Given that birds are important pollinators to many fynbos plant species, this study was conducted in gardens and natural vegetation along the urban edge of the Cape Peninsula. Feeding experiments were carried out to compare relative bird abundance and local distribution patterns for nectarivorous birds (i.e., sunbirds and sugarbirds) between feeder and control treatments. Resultant changes in their visitation rates to Erica flowers in the natural vegetation were tested by inspection of their anther ring status. Nectar feeders attracted higher densities of nectarivores to gardens relative to natural vegetation and decreased their densities in the neighboring fynbos, even when floral abundance in the neighboring vegetation was high. The consequent changes to their distribution patterns and foraging behavior decreased their visitation to at least Erica plukenetii flowers (but not to Erica abietina). This study provides evidence that nectar feeders may have positive effects for birds themselves by reducing their urban sensitivity but also highlights the unintended negative effects feeders may have on the surrounding fynbos ecosystem. Given that nectar feeders appear to compete with the flowers of Erica plukenetii, and perhaps those of other Erica species, artificial feeding may inadvertently threaten bird-plant pollination networks.

Keywords: avian nectarivores, bird feeders, bird pollination, indirect effects in human-wildlife interactions, sugar water feeders, supplementary feeding

Procedia PDF Downloads 141
1710 Machine Learning in Agriculture: A Brief Review

Authors: Aishi Kundu, Elhan Raza

Abstract:

"Necessity is the mother of invention" - Rapid increase in the global human population has directed the agricultural domain toward machine learning. The basic need of human beings is considered to be food which can be satisfied through farming. Farming is one of the major revenue generators for the Indian economy. Agriculture is not only considered a source of employment but also fulfils humans’ basic needs. So, agriculture is considered to be the source of employment and a pillar of the economy in developing countries like India. This paper provides a brief review of the progress made in implementing Machine Learning in the agricultural sector. Accurate predictions are necessary at the right time to boost production and to aid the timely and systematic distribution of agricultural commodities to make their availability in the market faster and more effective. This paper includes a thorough analysis of various machine learning algorithms applied in different aspects of agriculture (crop management, soil management, water management, yield tracking, livestock management, etc.).Due to climate changes, crop production is affected. Machine learning can analyse the changing patterns and come up with a suitable approach to minimize loss and maximize yield. Machine Learning algorithms/ models (regression, support vector machines, bayesian models, artificial neural networks, decision trees, etc.) are used in smart agriculture to analyze and predict specific outcomes which can be vital in increasing the productivity of the Agricultural Food Industry. It is to demonstrate vividly agricultural works under machine learning to sensor data. Machine Learning is the ongoing technology benefitting farmers to improve gains in agriculture and minimize losses. This paper discusses how the irrigation and farming management systems evolve in real-time efficiently. Artificial Intelligence (AI) enabled programs to emerge with rich apprehension for the support of farmers with an immense examination of data.

Keywords: machine learning, artificial intelligence, crop management, precision farming, smart farming, pre-harvesting, harvesting, post-harvesting

Procedia PDF Downloads 91
1709 Mathematical Modeling of a Sub-Wet Bulb Temperature Evaporative Cooling Using Porous Ceramic Materials

Authors: Meryem Kanzari, Rabah Boukhanouf, Hatem G. Ibrahim

Abstract:

The indirect evaporative cooling process has the advantage of supplying cool air at constant moisture content. However, such a system can only supply air at temperatures above the wet bulb temperature. This paper presents a mathematical model for a sub-wet bulb temperature indirect evaporative cooling arrangement that can overcome this limitation and supply cool air at temperatures approaching the dew point without increasing its moisture content. In addition, the use of porous ceramics as the wet media material offers the advantage of integration into building elements. Results of the computer model show that the proposed design is capable of cooling air to temperatures lower than the ambient wet bulb temperature, achieving a wet bulb effectiveness of about 1.17.
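
As a brief illustration of the effectiveness figure quoted above, wet bulb effectiveness is commonly defined as the achieved dry-bulb temperature drop divided by the inlet wet-bulb depression; a minimal sketch follows, with made-up example temperatures:

```python
# Sketch: wet-bulb effectiveness of an indirect evaporative cooler.
# eps_wb = (T_in_dry_bulb - T_out) / (T_in_dry_bulb - T_in_wet_bulb)
# A value above 1 means the outlet air is cooled below the inlet wet bulb temperature.
def wet_bulb_effectiveness(t_in_db, t_out, t_in_wb):
    return (t_in_db - t_out) / (t_in_db - t_in_wb)

# Made-up example temperatures (degrees C) chosen only to illustrate eps_wb > 1:
print(wet_bulb_effectiveness(t_in_db=38.0, t_out=20.0, t_in_wb=22.6))  # ~1.17
```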

Keywords: indirect evaporative cooling, porous ceramic, sub-wet bulb temperature, mathematical modeling

Procedia PDF Downloads 281
1708 3D Modeling of Flow and Sediment Transport in Tanks with the Influence of Cavity

Authors: A. Terfous, Y. Liu, A. Ghenaim, P. A. Garambois

Abstract:

With increasing urbanization worldwide, it is crucial to sustainably manage sediment flows in urban networks and especially in stormwater detention basins. One key aspect is to propose optimized designs for detention tanks in order to best reduce flood peak flows while at the same time settling particles; it is therefore necessary to understand the complex flow patterns and sediment deposition conditions in stormwater detention basins. The aim of this paper is to study the flow structure and particle deposition pattern for a given tank geometry with a view to controlling and maximizing sediment deposition. Both numerical simulation and experimental work were carried out to investigate the flow and sediment distribution in a storm tank with a cavity. The settling distribution of particles in a rectangular tank is mainly determined by the flow patterns and the bed shear stress. The flow patterns in a rectangular tank differ with geometry, entrance flow rate and water depth, and as the flow patterns change, the bed shear stress changes accordingly, which also influences particle settling. The accumulation of particles on the bed changes the conditions at the bottom; this is usually ignored in investigations, but its influence on sedimentation deserves much more attention. The approach presented here is based on the resolution of the Reynolds-averaged Navier-Stokes equations to account for turbulent effects, together with a passive particle transport model. An analysis of particle deposition conditions is presented in terms of flow velocities and turbulence patterns, and sediment deposition zones are then identified through modeling with a particle tracking method. It is shown that two recirculation zones seem to significantly influence sediment deposition. Due to the possible overestimation of particle trap efficiency with standard wall functions and stick conditions, further investigation of basal boundary conditions based on turbulent kinetic energy and shear stress seems required. These observations are confirmed by experimental investigations carried out in the laboratory.
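
For reference, the Reynolds-averaged Navier-Stokes momentum balance referred to above can be written in its standard incompressible form (generic notation, not taken from the paper; the sediment itself is handled with a Lagrangian particle tracking model, which is not written out here):

```latex
% Incompressible RANS momentum equation; overbars denote Reynolds averaging.
\begin{equation}
  \frac{\partial \bar{u}_i}{\partial t}
  + \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j}
  = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
  + \frac{\partial}{\partial x_j}\!\left( \nu \frac{\partial \bar{u}_i}{\partial x_j}
    - \overline{u_i' u_j'} \right) + g_i ,
\end{equation}
```

where the Reynolds stresses $\overline{u_i' u_j'}$ are closed by a turbulence model.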

Keywords: storm sewers, sediment deposition, numerical simulation, experimental investigation

Procedia PDF Downloads 310
1707 Video Games Technologies Approach for Their Use in the Classroom

Authors: Daniel Vargas-Herrera, Ivette Caldelas, Fernando Brambila-Paz, Rodrigo Montufar-Chaveznava

Abstract:

In this paper, we present the advances corresponding to the implementation of a set of educational materials based on video game technologies. Essentially, these materials correspond to projects developed, or under development, as bachelor theses of Computer Engineering students of the Engineering School. All materials are based on the Unity SDK and integrate devices such as the Kinect, Leap Motion, Oculus Rift, data gloves and Google Cardboard. In detail, we present a virtual reality application for neuroscience students (suitable for neural rehabilitation) and virtual scenes for the Google Cardboard, which will be used by psychology students for phobia treatment. The objective is for these materials to be located on a server so that they are available to all students, in the classroom or in the cloud, considering that smartphone use is widespread among students.

Keywords: virtual reality, interactive technologies, video games, educational materials

Procedia PDF Downloads 642
1706 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows

Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican

Abstract:

This paper outlines the design of a simulator to allow for the optimisation of clinical workflows through a pathology laboratory and to improve the laboratory’s efficiency in the processing, testing, and analysis of specimens. Often pathologists have difficulty in pinpointing and anticipating issues in the clinical workflow until tests are running late or in error. It can be difficult to pinpoint the cause and even more difficult to predict any issues which may arise. For example, they often have no indication of how many samples are going to be delivered to the laboratory that day or at a given hour. If we could model scenarios using past information and known variables, it would be possible for pathology laboratories to initiate resource preparations, e.g. the printing of specimen labels or to activate a sufficient number of technicians. This would expedite the clinical workload, clinical processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e. the clinical tests being ordered, the specimens arriving, current tests being performed, results being validated and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic light colour-coding system will be used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow). This would allow pathologists to clearly see where there are issues and bottlenecks in the process. Graphs would also be used to indicate the status of specimens at each stage of the process. For example, a graph could show the percentage of specimen tests that are on time, potentially late, running late and in error. Clicking on potentially late samples will display more detailed information about those samples, the tests that still need to be performed on them and their urgency level. This would allow any issues to be resolved quickly. In the case of potentially late samples, this could help to ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory. JavaScript will be used to program the logic, animate the movement of samples through each of the stages and to generate the status graphs in real time. This live information will be extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes. ‘Bots’ would be used to control the flow of specimens through each step of the process. Like existing software agents technology, these bots would be configurable in order to simulate different situations, which may arise in a laboratory such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at that step of the process, for example validating test results.
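
Although the paper's simulator is a JavaScript single-page application, the underlying workflow model it visualises can be sketched as a discrete-event simulation; a minimal, hedged Python analogue using simpy follows, where the arrival rate, stage durations and staffing level are invented for illustration:

```python
# Sketch: specimens flow through reception -> testing -> validation, competing for
# a limited pool of technicians. All rates and durations are illustrative.
import random
import simpy

ARRIVAL_INTERVAL = 4.0     # mean minutes between specimen arrivals (assumed)
STAGE_TIMES = {"reception": 2.0, "testing": 10.0, "validation": 3.0}  # mean minutes

def specimen(env, name, technicians, log):
    for stage, mean_t in STAGE_TIMES.items():
        with technicians.request() as req:   # wait for a free technician
            yield req
            yield env.timeout(random.expovariate(1.0 / mean_t))
    log.append((name, env.now))  # completion time, e.g. to flag "running late"

def arrivals(env, technicians, log):
    i = 0
    while True:
        yield env.timeout(random.expovariate(1.0 / ARRIVAL_INTERVAL))
        i += 1
        env.process(specimen(env, f"specimen-{i}", technicians, log))

random.seed(7)
env = simpy.Environment()
technicians = simpy.Resource(env, capacity=3)
completed = []
env.process(arrivals(env, technicians, completed))
env.run(until=8 * 60)  # one 8-hour shift
print(f"Specimens completed in the shift: {len(completed)}")
```

Varying the arrival rate or the technician capacity in such a model is the kind of "what if" scenario the configurable bots described above would drive.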

Keywords: laboratory-process, optimization, pathology, computer simulation, workflow

Procedia PDF Downloads 277
1705 Two-Level Graph Causality to Detect and Predict Random Cyber-Attacks

Authors: Van Trieu, Shouhuai Xu, Yusheng Feng

Abstract:

Tracking attack trajectories can be difficult given the limited information available about the nature of an attack, and even more difficult when attack information is collected by Intrusion Detection Systems (IDSs), because current IDSs have limitations in identifying malicious and anomalous traffic. Moreover, IDSs only point out suspicious events; they do not show how the events relate to each other or which event possibly caused another event to happen. It is therefore important to investigate new methods capable of tracking attack trajectories quickly, with less attack information and less dependency on IDSs, in order to prioritize actions during incident response. This paper proposes a two-level graph causality framework for tracking attack trajectories in internet networks by leveraging observable malicious behaviors to detect the most probable attack events that can cause another event to occur in the system. Technically, given a time series of malicious events, the framework extracts events with useful features, such as attack time and port number, and applies conditional independence tests to detect relationships between attack events. Using academic datasets collected by IDSs, experimental results show that the framework can quickly detect causal pairs that offer meaningful insights into the nature of the internet network, given only reasonable restrictions on network size and structure. Without the framework's guidance, these insights could not be discovered by existing tools such as IDSs, and finding them would cost expert human analysts significant time, if it were possible at all. The computational results from the proposed two-level graph network model reveal clear patterns and trends: more than 85% of causal pairs have an average time difference between the causal and effect events, in both computed and observed data, within 5 minutes. This result can be used as a preventive measure against future attacks. Although the forecast window may be short, from 0.24 seconds to 5 minutes, it is long enough to be used to design a prevention protocol to block those attacks.
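
A hedged sketch of the kind of pairwise test that could feed such a framework, checking whether events of type B follow events of type A within a short window more often than a permutation baseline would suggest; this is an illustration of the general idea only, not the paper's conditional-independence procedure, and the event timestamps are made up:

```python
# Sketch: does event type B follow event type A within `window` seconds more often
# than chance? Timestamps are in seconds; the data below are invented.
import numpy as np

def follow_rate(a_times, b_times, window):
    b_times = np.sort(np.asarray(b_times, dtype=float))
    hits = 0
    for t in a_times:
        # Is there any B event in (t, t + window]?
        idx = np.searchsorted(b_times, t, side="right")
        if idx < len(b_times) and b_times[idx] - t <= window:
            hits += 1
    return hits / max(len(a_times), 1)

def permutation_pvalue(a_times, b_times, window, span, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    observed = follow_rate(a_times, b_times, window)
    null = [follow_rate(rng.uniform(0, span, size=len(a_times)), b_times, window)
            for _ in range(n_perm)]
    return observed, float(np.mean([r >= observed for r in null]))

# Hypothetical event times (seconds) for two alert types on one host:
scans = [10, 130, 300, 610, 900]
logins = [40, 150, 320, 640, 930, 1500]
print(permutation_pvalue(scans, logins, window=60, span=1800))
```

Pairs that pass such a test (and survive conditioning on other event types) would become candidate edges in the causal graph.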

Keywords: causality, multilevel graph, cyber-attacks, prediction

Procedia PDF Downloads 148
1704 The Paralinguistic Function of Emojis in Twitter Communication

Authors: Yasmin Tantawi, Mary Beth Rosson

Abstract:

In response to the dearth of information about emoji use for different purposes in different settings, this paper investigates the paralinguistic function of emojis within Twitter communication in the United States. To conduct this investigation, Twitter feeds from 16 population centers spread throughout the United States were collected from the Twitter public API. One hundred tweets were collected from each population center, totaling 1,600 tweets. Tweets containing emojis were then extracted using the “emot” Python package and analyzed via the IBM Watson Natural Language Understanding API to identify the topics discussed. A manual content analysis was then conducted to ascertain the paralinguistic and emotional features of the emojis used in these tweets. We present our characterization of emoji usage on Twitter and discuss implications for the design of Twitter and other text-based communication tools.
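
A minimal sketch of the extraction step; note that the paper relies on the "emot" package and the IBM Watson NLU API, whereas the self-contained example below uses a plain Unicode-range scan and made-up tweet text instead:

```python
# Sketch: pull emojis out of tweet text and count them. A simple Unicode range check
# stands in for the 'emot' package used in the paper; the tweets are invented.
import re
from collections import Counter

# Common emoji code-point blocks (not exhaustive).
EMOJI_PATTERN = re.compile(
    "[\U0001F300-\U0001F5FF\U0001F600-\U0001F64F"
    "\U0001F680-\U0001F6FF\U00002600-\U000027BF]"
)

def extract_emojis(text):
    return EMOJI_PATTERN.findall(text)

tweets = [
    "Game day!! 🏈🔥",
    "So done with this weather 😩🌧",
    "Coffee first ☕ then everything else",
]

counts = Counter(e for t in tweets for e in extract_emojis(t))
print(counts.most_common())
```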

Keywords: computer-mediated communication, content analysis, paralinguistics, sociology

Procedia PDF Downloads 152
1703 Self-Assembled Tin Particles Made by Plasma-Induced Dewetting

Authors: Han Joo Choe, Soon-Ho Kwon, Jung-Joong Lee

Abstract:

Tin particles of various sizes and distributions were self-assembled by plasma treating tin films deposited on silicon oxide substrates. Plasma treatment was conducted using an inductively coupled plasma (ICP) source. A range of ICP powers and topographically templated substrates were evaluated to observe changes in particle size and particle distribution. Scanning electron microscopy images of the particles were analyzed using computer software. The dewetting of the tin film into particles initiated from hole nucleation at grain boundaries. Increasing the ICP power during plasma treatment produced a larger number of particles per area, a smaller particle size and a narrower particle-size distribution. Topographic templates were also effective in positioning and controlling the size of the particles. By combining the effects of ICP power and topographic templates, particles of similar size and well-ordered distribution were obtained.

Keywords: dewetting, particles, plasma, tin

Procedia PDF Downloads 245