Search results for: memory network

2238 Classification of Foliar Nitrogen in Common Bean (Phaseolus vulgaris L.) Using Deep Learning Models and Images

Authors: Marcos Silva Tavares, Jamile Raquel Regazzo, Edson José de Souza Sardinha, Murilo Mesquita Baesso

Abstract:

Common beans are a widely cultivated and consumed legume globally, serving as a staple food, especially in developing countries, due to their nutritional characteristics. Nitrogen (N) is the most limiting nutrient for productivity, and foliar analysis is crucial to ensure balanced nitrogen fertilization. Excessive N applications can cause, in isolation or cumulatively, soil and water contamination and plant toxicity, and can increase plants' susceptibility to diseases and pests. However, the quantification of N using conventional methods is time-consuming and costly, demanding new technologies to optimize the adequate supply of N to plants. It is therefore necessary to establish constant monitoring of the foliar content of this macronutrient, mainly at the V4 stage, aiming at precision management of nitrogen fertilization. In this work, the objective was to evaluate the performance of a deep learning model, ResNet-50, in the classification of foliar nitrogen in common beans using RGB images. The BRS Estilo cultivar was sown in a greenhouse in a completely randomized design with four nitrogen doses (T1 = 0 kg N ha⁻¹, T2 = 25 kg N ha⁻¹, T3 = 75 kg N ha⁻¹, and T4 = 100 kg N ha⁻¹) and 12 replications. Pots of 5 L capacity were used with a substrate composed of 43% soil (Neossolo Quartzarênico), 28.5% crushed sugarcane bagasse, and 28.5% cured bovine manure. Plants were supplied with 5 mm of water per day. The application of urea (45% N) and the acquisition of images occurred 14 and 32 days after sowing, respectively. A code developed in Matlab R2022b was used to cut the original images into smaller blocks, creating an image bank composed of four folders representing the four classes, labeled T1, T2, T3, and T4, each containing 500 images of 224×224 pixels obtained from plants cultivated under the different N doses. Matlab R2022b was also used for the implementation and performance analysis of the model. Efficiency was evaluated by a set of metrics including accuracy (AC), F1-score (F1), specificity (SP), area under the curve (AUC), and precision (P). ResNet-50 showed high performance in the classification of foliar N levels in common beans, with an AC of 85.6%. The F1 for classes T1, T2, T3, and T4 was 76%, 72%, 74%, and 77%, respectively. This study revealed that RGB images combined with deep learning can be a promising alternative to slow laboratory analyses, capable of optimizing the estimation of foliar N. This can allow rapid intervention by the producer to achieve higher productivity and less fertilizer waste. Future work is encouraged to develop mobile devices capable of handling images using deep learning for in situ classification of the nutritional status of plants.
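
As a rough illustration of the classification step described above, the following Python/Keras sketch fine-tunes an ImageNet-pretrained ResNet-50 head on a four-class folder layout (T1-T4). The authors worked in Matlab; the directory path, head architecture, and training settings here are assumptions, not their pipeline.

```python
# Hedged sketch: transfer learning with ResNet-50 on T1-T4 class folders.
# "leaf_blocks/" and all hyperparameters are assumed, not from the paper.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50

train_ds = tf.keras.utils.image_dataset_from_directory(
    "leaf_blocks/",                  # contains subfolders T1, T2, T3, T4
    image_size=(224, 224), batch_size=32)

base = ResNet50(weights="imagenet", include_top=False, pooling="avg")
base.trainable = False               # start by training only the new head

inp = layers.Input(shape=(224, 224, 3))
x = tf.keras.applications.resnet50.preprocess_input(inp)
out = layers.Dense(4, activation="softmax")(base(x))   # four N-dose classes

model = Model(inp, out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```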

Keywords: convolutional neural network, residual network 50, nutritional status, artificial intelligence

Procedia PDF Downloads 25
2237 Four-Electron Auger Process for Hollow Ions

Authors: Shahin A. Abdel-Naby, James P. Colgan, Michael S. Pindzola

Abstract:

A time-dependent close-coupling method is developed to calculate total, double, and triple autoionization rates for hollow atomic ions of four-electron systems. This work was motivated by recent observations of the four-electron Auger process in near K-edge photoionization of C⁺ ions. The time-dependent close-coupled equations are solved using lattice techniques to obtain a discrete representation of the radial wave functions and all operators on a four-dimensional grid with uniform spacing. Initial excited states are obtained by relaxation of the Schrödinger equation in imaginary time, using a Schmidt orthogonalization method involving interior subshells. The radial wave function grids are partitioned over the cores of a massively parallel computer, which is essential due to the large memory required to store the coupled wave functions and the long run times needed to reach convergence of the ionization process. Total, double, and triple autoionization rates are obtained by propagating the time-dependent close-coupled equations in real time, using integration over bound and continuum single-particle states. These states are generated by matrix diagonalization of one-electron Hamiltonians. The total autoionization rate for each L excited state is found to be slightly above the single autoionization rate for the excited configuration obtained using configuration-average distorted-wave theory. As expected, we find the double and triple autoionization rates to be much smaller than the total autoionization rates. Future work can extend this approach to study electron-impact triple ionization of atoms or ions. The work was supported in part by grants from the American University of Sharjah and the US Department of Energy. Computational work was carried out at the National Energy Research Scientific Computing Center (NERSC) in Berkeley, California, USA.
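
The imaginary-time relaxation used here to prepare initial states can be illustrated on a toy one-dimensional grid. The NumPy sketch below repeatedly applies psi ← psi − dτ·Hψ and renormalizes, which damps excited components and converges toward the ground state; the soft-core potential, grid size, and step sizes are arbitrary stand-ins for the paper's four-dimensional multi-electron lattice.

```python
# Hedged sketch: imaginary-time relaxation on a 1D uniform grid (atomic units).
import numpy as np

n, dx, dtau = 400, 0.05, 1e-4
x = (np.arange(n) - n // 2) * dx
V = -1.0 / np.sqrt(x**2 + 1.0)         # soft-core Coulomb potential (assumed)

def apply_H(psi):
    """H = -0.5 d2/dx2 + V via second-order finite differences."""
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2
    return -0.5 * lap + V * psi

# Each Euler step in imaginary time damps excited components faster than the
# ground state; renormalizing keeps the wave function from shrinking to zero.
psi = np.exp(-x**2)                    # arbitrary initial guess
for _ in range(50000):
    psi = psi - dtau * apply_H(psi)
    psi /= np.sqrt(np.sum(psi**2) * dx)

E = np.sum(psi * apply_H(psi)) * dx    # ground-state energy estimate
print(f"relaxed ground-state energy ~ {E:.4f} a.u.")
```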

Keywords: hollow atoms, autoionization, Auger rates, time-dependent close-coupling method

Procedia PDF Downloads 154
2236 Terrain Classification for Ground Robots Based on Acoustic Features

Authors: Bernd Kiefer, Abraham Gebru Tesfay, Dietrich Klakow

Abstract:

The motivation of our work is to detect the terrain type traversed by a robot based on acoustic data from the robot-terrain interaction. Different acoustic features and classifiers were investigated: Mel-frequency cepstral coefficients (MFCCs) and gammatone frequency cepstral coefficients (GFCCs) for feature extraction, and a Gaussian mixture model and a feed-forward neural network for classification. We analyze the system’s performance by comparing our proposed techniques with features surveyed from related work. We achieve precision and recall values between 87% and 100% per class, and an average accuracy of 95.2%. We also study the effect of varying the audio chunk size in the application phase of the models and find only a mild impact on performance.
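
A minimal sketch of the MFCC-plus-GMM variant of this pipeline, assuming librosa for feature extraction and scikit-learn for the mixture models; the synthetic "recordings", sampling rate, and component count are illustrative stand-ins for real robot audio.

```python
# Hedged sketch: one GMM per terrain class over frame-level MFCCs.
import numpy as np
import librosa                               # assumed available for MFCCs
from sklearn.mixture import GaussianMixture

def mfcc_features(y, sr=16000, n_mfcc=13):
    """Frame-level MFCCs for one audio chunk; rows = frames."""
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

rng = np.random.default_rng(0)
signals = {                                   # stand-in 3 s "recordings"
    "gravel": rng.normal(0, 1, 16000 * 3),
    "grass": np.convolve(rng.normal(0, 1, 16000 * 3),
                         np.ones(20) / 20, "same"),   # smoother spectrum
}
models = {label: GaussianMixture(n_components=4, random_state=0)
                 .fit(mfcc_features(y)) for label, y in signals.items()}

def classify(y):
    """Pick the terrain whose GMM gives the highest average log-likelihood."""
    frames = mfcc_features(y)
    return max(models, key=lambda label: models[label].score(frames))

test = np.convolve(rng.normal(0, 1, 16000), np.ones(20) / 20, "same")
print(classify(test))                         # expected: "grass"
```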

Keywords: acoustic features, autonomous robots, feature extraction, terrain classification

Procedia PDF Downloads 372
2235 A Mathematical Optimization Model for Locating and Fortifying Capacitated Warehouses under Risk of Failure

Authors: Tareq Oshan

Abstract:

Facility location and size decisions are important to any company because they affect profitability and success. However, warehouses are exposed to various risks of failure that affect their activity. This paper presents a mixed-integer non-linear mathematical model that can be used to determine optimal warehouse locations and sizes, which warehouses to fortify, and which branches should be assigned to which warehouses when there is a risk of warehouse failure. Every branch is assigned either to a fortified primary warehouse, or to a non-fortified primary warehouse together with a fortified backup warehouse. Two approaches were used to linearize the model: the standard method and a newly introduced method based on average probabilities. A Canadian case study is used to demonstrate the developed mathematical model, followed by a sensitivity analysis.
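
A toy, fully linear relative of this model can be sketched with PuLP. To stay linear and short, the sketch requires a fortified backup for every branch instead of the paper's conditional rule, and it omits failure probabilities and warehouse sizing; all data are illustrative.

```python
# Hedged sketch: simplified location/fortification/assignment MILP with PuLP.
import pulp

W = ["W1", "W2", "W3"]
B = ["B1", "B2", "B3", "B4"]
open_cost = {"W1": 100, "W2": 120, "W3": 90}
fort_cost = {"W1": 40, "W2": 35, "W3": 50}
ship = {(w, b): 10 + 3 * (i + j) for i, w in enumerate(W) for j, b in enumerate(B)}
cap = {"W1": 3, "W2": 3, "W3": 2}              # branches a warehouse can serve

m = pulp.LpProblem("fortified_location", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", W, cat="Binary")
f = pulp.LpVariable.dicts("fortify", W, cat="Binary")
x = pulp.LpVariable.dicts("primary", (W, B), cat="Binary")
z = pulp.LpVariable.dicts("backup", (W, B), cat="Binary")

m += (pulp.lpSum(open_cost[w] * y[w] + fort_cost[w] * f[w] for w in W)
      + pulp.lpSum(ship[w, b] * (x[w][b] + 0.5 * z[w][b]) for w in W for b in B))

for b in B:
    m += pulp.lpSum(x[w][b] for w in W) == 1   # exactly one primary
    m += pulp.lpSum(z[w][b] for w in W) == 1   # exactly one backup
for w in W:
    m += f[w] <= y[w]                          # fortify only open warehouses
    m += pulp.lpSum(x[w][b] for b in B) <= cap[w] * y[w]   # capacity
    for b in B:
        m += z[w][b] <= f[w]                   # backups must be fortified
        m += x[w][b] + z[w][b] <= 1            # backup differs from primary

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("open:", [w for w in W if y[w].value() == 1],
      "fortified:", [w for w in W if f[w].value() == 1])
```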

Keywords: supply chain network design, fortified warehouse, mixed-integer mathematical model, warehouse failure risk

Procedia PDF Downloads 245
2234 Dual Band Antenna Design with Compact Radiator for 2.4/5.2/5.8 GHz WLAN Application Using Genetic Algorithm

Authors: Ramnath Narhete, Saket Pandey, Puran Gour

Abstract:

This paper presents a dual-band planar antenna with a compact radiator for 2.4/5.2/5.8 GHz WLAN applications, designed by optimizing its resonant frequency, operating bandwidth, and radiation characteristics using a genetic algorithm. The antenna consists of an L-shaped and an E-shaped radiating element that generate two resonant modes for dual-band operation; these techniques have been used successfully in many applications. The radiator is compact, only 8 mm wide and 11.3 mm long. The antenna can be used for various applications in the field of communication. The genetic algorithm is used to design both the antenna and its impedance-matching network.
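
The genetic-algorithm loop itself can be sketched generically. In the stand-in below, a placeholder function maps radiator dimensions to resonant frequencies; a real design would score each candidate with an electromagnetic solver, and the targets, bounds, and GA settings are all assumptions.

```python
# Hedged sketch: GA tuning of antenna dimensions toward target resonances.
import random

random.seed(1)
TARGETS = [2.4, 5.2, 5.8]                     # GHz
BOUNDS = [(5.0, 15.0), (5.0, 15.0)]           # e.g., radiator length/width in mm

def resonances(genes):
    """Placeholder model mapping dimensions to two resonant frequencies."""
    length, width = genes
    return [150.0 / (length + width), 75.0 / width]

def fitness(genes):
    """Negative total distance of the resonances from the nearest targets."""
    return -sum(min(abs(f - t) for t in TARGETS) for f in resonances(genes))

def mutate(genes, rate=0.2):
    return [min(hi, max(lo, g + random.gauss(0, 0.5))) if random.random() < rate
            else g for g, (lo, hi) in zip(genes, BOUNDS)]

pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(40)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                           # keep the best designs
    # refill with mutated uniform crossovers of elite parents
    pop = elite + [mutate([random.choice(pair)
                           for pair in zip(*random.sample(elite, 2))])
                   for _ in range(30)]

best = max(pop, key=fitness)
print("best dimensions (mm):", [round(g, 2) for g in best],
      "-> resonances (GHz):", [round(f, 2) for f in resonances(best)])
```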

Keywords: genetic algorithm, dual-band E, dual-band L, WLAN, compact radiator

Procedia PDF Downloads 582
2233 Effect of Classroom Acoustic Factors on Language and Cognition in Bilinguals and Children with Mild to Moderate Hearing Loss

Authors: Douglas MacCutcheon, Florian Pausch, Robert Ljung, Lorna Halliday, Stuart Rosen

Abstract:

Contemporary classrooms are increasingly inclusive of children with mild to moderate disabilities and children from different language backgrounds (bilinguals, multilinguals), but classroom environments and standards have not yet been adapted adequately to meet the challenges brought about by this inclusivity. Additionally, classrooms are becoming noisier as a learner-centered, as opposed to teacher-centered, teaching paradigm is adopted, which prioritizes group work and peer-to-peer learning. Challenging listening conditions with distracting sound sources and background noise are known to have potentially negative effects on children, particularly those prone to struggle with speech perception in noise. This research therefore investigates two groups vulnerable to these environmental effects, namely children with mild to moderate hearing loss (MMHL) and sequential bilinguals learning in their second language. In the MMHL study, this group was assessed on speech-in-noise perception and a number of receptive language and cognitive measures (auditory working memory, auditory attention), and the correlations between these were evaluated. Speech reception thresholds were found to be predictive of language and cognitive ability, and the nature of the correlations is discussed. In the bilinguals study, sequential bilingual children’s listening comprehension, speech-in-noise perception, listening effort, and release from masking were evaluated under a number of ecologically valid acoustic scenarios in order to pinpoint the extent of the ‘native language benefit’ for Swedish children learning in English, their second language. Scene manipulations included target-to-distractor ratios and spatially separated noise. This research will contribute to the body of findings from which educational institutions can draw when designing or adapting educational environments in inclusive schools.

Keywords: sequential bilinguals, classroom acoustics, mild to moderate hearing loss, speech-in-noise, release from masking

Procedia PDF Downloads 329
2232 Talking Back to Hollywood: Museum Representation in Popular Culture as a Gateway to Understanding Public Perception

Authors: Jessica BrodeFrank, Beka Bryer, Lacey Wilson, Sierra Van Ryck deGroot

Abstract:

Museums are enjoying quite a moment in pop culture. From discussions of labor in Bob’s Burgers to the introduction of cultural repatriation in Black Panther, various museum issues are making their way into popular media. “Talking Back to Hollywood” analyzes the impact museums have on movies and television. The paper highlights a series of cultural cameos and discusses what each reveals about critical themes in museums: repatriation, labor, obfuscated histories, institutional legacies, artificial intelligence, and holograms. Using a mixed-methods approach including surveys, descriptive research, thematic analysis, and content analysis, the authors explore how we, as museum staff, might begin to cite museums and movies together as texts. Drawing on their experience working in museums and public history, this contingent of mid-career professionals examines the didactic lessons these portrayals can provide back to cultural heritage professionals. Tackling critical themes such as repatriation, labor conditions and inequities, obfuscated histories, curatorial choice and control, and institutional legacies, the paper is grounded in the cultural zeitgeist of the 2000s and the message these media portrayals send to the public and the cultural heritage sector. In particular, the paper examines how portrayals of AI, holograms, and other technology can be used as entry points for necessary discussions with the public on mistrust, misinformation, and emerging technologies. The paper not only exposes the legacy and cultural understanding of the museum field within popular culture but also discusses actionable ways that public historians can use these portrayals as entry points for discussions with the public, citing literature reviews and quantitative and qualitative analysis of survey results. As Hollywood talks about museums, museums can use that conversation to better connect with the audiences who feel comfortable at the cinema but excluded from the museum.

Keywords: museums, public memory, representation, popular culture

Procedia PDF Downloads 86
2231 Interferometric Demodulation Scheme Using a Mode-Locked Fiber Laser

Authors: Liang Zhang, Yuanfu Lu, Yuming Dong, Guohua Jiao, Wei Chen, Jiancheng Lv

Abstract:

We demonstrate an interferometric demodulation scheme using a mode-locked fiber laser. The mode-locked fiber laser is launched into a two-beam interferometer. When the ratio between the fiber path imbalance of the interferometer and the laser cavity length is close to an integer, an interferometric fringe emerges as a result of the vernier effect, and the phase shift of the interferometer can then be demodulated. The mode-locked fiber laser provides a large bandwidth and reduces the cost of wavelength division multiplexing (WDM). The proposed interferometric demodulation scheme can further be applied in multi-point sensing systems, such as fiber-optic hydrophone arrays and seismic wave detection networks, with high sensitivity and low cost.
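
The vernier condition can be checked numerically by modelling the mode-locked laser as a comb of equally spaced lines and computing the field correlation between the two interferometer arms versus path imbalance (for equal arm intensities, fringe visibility follows this correlation); the cavity length and mode numbers below are arbitrary choices.

```python
# Hedged sketch: fringe visibility peaks when the path imbalance is an
# integer multiple of the laser cavity (round-trip) length.
import numpy as np

c = 3e8
L_cav = 2.0                        # round-trip cavity length (m), assumed
f_rep = c / L_cav                  # repetition rate = comb line spacing
modes = f_rep * np.arange(1_000_000, 1_000_200)   # 200 comb lines

def visibility(delta_L):
    """|sum over comb lines of exp(i*omega*tau)| / N, tau = delta_L / c."""
    tau = delta_L / c
    return abs(np.sum(np.exp(2j * np.pi * modes * tau))) / len(modes)

for ratio in [0.0, 0.5, 0.999, 1.0, 2.0]:
    print(f"imbalance = {ratio:5.3f} x cavity length ->"
          f" visibility {visibility(ratio * L_cav):.3f}")
```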

Keywords: fiber sensing, interferometric demodulation, mode-locked fiber laser, vernier effect

Procedia PDF Downloads 334
2230 An Evaluation of the Artificial Neural Network and Adaptive Neuro Fuzzy Inference System Predictive Models for the Remediation of Crude Oil-Contaminated Soil Using Vermicompost

Authors: Precious Ehiomogue, Ifechukwude Israel Ahuchaogu, Isiguzo Edwin Ahaneku

Abstract:

Vermicompost is the product of a decomposition process that uses various species of worms to create a mixture of decomposing vegetable or food waste, bedding materials, and vermicast. This process is called vermicomposting, while the rearing of worms for this purpose is called vermiculture. Several works have verified the adsorption of toxic metals by vermicompost, but its application for the retention of organic compounds is still scarce. This research demonstrates the effectiveness of earthworm waste (vermicompost) for the remediation of crude-oil-contaminated soils. The remediation methods adopted in this study were two soil-washing methods, namely the batch and column processes, which represent laboratory and in-situ remediation, respectively. Characterization of the vermicompost and the crude-oil-contaminated soil was performed before and after soil washing using Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), X-ray fluorescence (XRF), X-ray diffraction (XRD), and atomic absorption spectrometry (AAS). Optimization of the washing parameters, using response surface methodology (RSM) based on a Box-Behnken design, was performed on the laboratory experimental results. This study also investigated the application of two machine learning models, an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS), which were evaluated using the coefficient of determination (R²) and the mean square error (MSE). Removal efficiency obtained from the Box-Behnken design experiment ranged from 29% to 98.9% for batch-process remediation. Optimization of the experimental factors, carried out using the desirability function method of RSM, produced the highest removal efficiency of 98.9% at an adsorbent dosage of 34.53 g, an adsorbate concentration of 69.11 g/ml, a contact time of 25.96 min, and a pH value of 7.71. Removal efficiency obtained from the multilevel general factorial design experiment ranged from 56% to 92% for column-process remediation. The coefficient of determination (R²) for the ANN was 0.9974 and 0.9852 for the batch and column processes, respectively, showing agreement between experimental and predicted results. For the batch and column processes, respectively, the R² for RSM was 0.9712 and 0.9614, which also demonstrates agreement between experimental and predicted findings. For the batch and column processes, the ANFIS coefficients of determination were 0.7115 and 0.9978, respectively. It can be concluded that machine learning models can predict the removal of crude oil from polluted soil using vermicompost, and their use for this purpose is therefore recommended.
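
A minimal sketch of the ANN evaluation described above, fitting a small feed-forward network to (dosage, concentration, contact time, pH) → removal efficiency and reporting R² and MSE; the data below are random placeholders, not the study's measurements.

```python
# Hedged sketch: scikit-learn MLP regression with R2/MSE evaluation.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
# columns: dosage (g), concentration (g/ml), time (min), pH -- ranges assumed
X = rng.uniform([5, 10, 5, 4], [50, 100, 40, 10], size=(120, 4))
y = 30 + 0.9 * X[:, 0] + 0.2 * X[:, 2] + rng.normal(0, 2, 120)  # synthetic %

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)

pred = ann.predict(X_te)
print(f"R2  = {r2_score(y_te, pred):.4f}")
print(f"MSE = {mean_squared_error(y_te, pred):.4f}")
```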

Keywords: ANFIS, ANN, crude oil, contaminated soil, remediation, vermicompost

Procedia PDF Downloads 113
2229 Synthesis and Electromagnetic Property of Li₀.₃₅Zn₀.₃Fe₂.₃₅O₄ Grafted with Polyaniline Fibers

Authors: Jintang Zhou, Zhengjun Yao, Tiantian Yao

Abstract:

Li₀.₃₅Zn₀.₃Fe₂.₃₅O₄ (LZFO) grafted with polyaniline (PANI) fibers was synthesized by in situ polymerization. FTIR, XRD, SEM, and a vector network analyzer were used to investigate the chemical composition, micro-morphology, electromagnetic properties, and microwave absorbing properties of the composite. The results show that PANI fibers were grafted onto the surfaces of the LZFO particles. The reflection loss exceeds 10 dB in the frequency ranges from 2.5 to 5 GHz and from 15 to 17 GHz, and the maximum reflection loss reaches −33 dB at 15.9 GHz. The enhanced microwave absorption of the LZFO/PANI-fiber composite is mainly ascribed to the combined effect of dielectric loss and magnetic loss, together with improved impedance matching.

Keywords: Li₀.₃₅Zn₀.₃Fe₂.₃₅O₄, polyaniline, electromagnetic properties, microwave absorbing properties

Procedia PDF Downloads 434
2228 Electric Propulsion System Development for High Floor Trolley Bus

Authors: Asep Andi Suryandi, Katri Yulianto, Dewi Rianti Mandasari

Abstract:

The development of environmentally friendly vehicles has increasingly attracted the attention of almost all countries in the world, including Indonesia. There are various types of environmentally friendly vehicles, such as electric, hybrid, and gas-fueled vehicles. Electric vehicles, both private and public, have been developed in Indonesia. However, most electric vehicles have been developed using a battery as the power source, and battery technology for electric vehicles still faces constraints in capacity, in the dimensions of the battery itself, and in the charging system. The trolleybus is an electric bus whose main power source is a catenary/overhead line, with a trolley pole as the point of contact. This paper discusses the design and manufacture of the electrical system of a trolleybus.

Keywords: trolley bus, electric propulsion system, design, manufacture, electric vehicle

Procedia PDF Downloads 360
2227 On the Use of Analytical Performance Models to Design a High-Performance Active Queue Management Scheme

Authors: Shahram Jamali, Samira Hamed

Abstract:

One of the open issues in the Random Early Detection (RED) algorithm is how to set its parameters to reach high performance under dynamic network conditions. Although the original RED uses fixed values for its parameters, this paper follows a model-based approach to upgrade the performance of the RED algorithm. It models the router’s queue behavior using a Markov model and uses this model to predict future conditions of the queue. This prediction helps the proposed algorithm make tunings over RED’s parameters and provide efficiency and better performance. Extensive packet-level simulations confirm that the proposed algorithm, called Markov-RED, outperforms RED and FARED in terms of queue stability, bottleneck utilization, and dropped-packet count.
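
The RED drop decision that Markov-RED retunes can be sketched compactly. The parameter values below are illustrative, and the Markov-model-driven adaptation itself is only indicated by a comment.

```python
# Hedged sketch: classic RED drop decision with fixed parameters.
import random

w_q, min_th, max_th, max_p = 0.002, 5.0, 15.0, 0.1   # fixed in original RED;
avg = 0.0                                            # Markov-RED retunes these

def red_drop(queue_len):
    """Return True if an arriving packet should be dropped."""
    global avg
    avg = (1 - w_q) * avg + w_q * queue_len   # EWMA of instantaneous queue
    if avg < min_th:
        return False
    if avg >= max_th:
        return True
    p = max_p * (avg - min_th) / (max_th - min_th)   # linear ramp between thresholds
    return random.random() < p

# toy arrival/departure loop
random.seed(0)
q = 0
for _ in range(1000):
    for _ in range(random.choice([0, 1, 2])):   # arrivals this tick
        if not red_drop(q):
            q += 1
    q = max(0, q - 1)                           # one departure per tick
print(f"final queue: {q}, average-queue estimate: {avg:.2f}")
```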

Keywords: active queue management, RED, Markov model, random early detection algorithm

Procedia PDF Downloads 542
2226 Survival Analysis after a First Ischaemic Stroke Event: A Case-Control Study in the Adult Population of England

Authors: Padma Chutoo, Elena Kulinskaya, Ilyas Bakbergenuly, Nicholas Steel, Dmitri Pchejetski

Abstract:

Stroke is associated with a significant risk of morbidity and mortality. There is a scarcity of research on long-term survival after first-ever ischaemic stroke (IS) events in England with regard to the effects of different medical therapies and comorbidities. The objective of this study was to model all-cause mortality after an IS diagnosis in the adult population of England. Using a retrospective case-control design, we extracted the electronic medical records of patients born in or before 1960 in England with a first-ever ischaemic stroke diagnosis from January 1986 to January 2017 from The Health Improvement Network (THIN) database. Participants with a history of ischaemic stroke were matched to three controls by sex, age at diagnosis, and general practice. The primary outcome was all-cause mortality. The hazards of all-cause mortality were estimated using a Weibull-Cox survival model that included both scale and shape effects and a shared random effect of general practice. The model included sex, birth cohort, socio-economic status, comorbidities, and medical therapies. 20,250 patients with a history of IS (cases) and 55,519 controls were followed up to 30 years. From 2008 to 2015, the one-year all-cause mortality for IS patients declined with an absolute change of −0.5%. Preventive treatment of cases increased considerably over time, including prescriptions of statins and antihypertensives; however, prescriptions of antiplatelet drugs decreased in routine general practice since 2010. The survival model revealed a survival benefit of antiplatelet treatment for stroke survivors, with a hazard ratio (HR) of 0.92 (0.90–0.94). IS diagnosis had significant interactions with gender, age at entry, and hypertension diagnosis. IS diagnosis was associated with a high risk of all-cause mortality, with HR = 3.39 (3.05–3.72) for cases compared to controls. Hypertension was associated with poor survival, with HR = 4.79 (4.49–5.09) for hypertensive cases relative to non-hypertensive controls, though the detrimental effect of hypertension did not reach significance for hypertensive controls, HR = 1.19 (0.82–1.56). This study of English primary care data showed that between 2008 and 2015, the rates of prescription of stroke-preventive treatments increased and short-term all-cause mortality after IS declined. However, stroke resulted in poor long-term survival. Hypertension, a modifiable risk factor, was found to be associated with poor survival outcomes in IS patients, while antiplatelet drugs were found to be protective. Better efforts are required to reduce the burden of stroke through health service development and primary prevention.
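
A minimal survival-model sketch in the spirit of the Weibull model described above, using the lifelines library's Weibull AFT fitter with covariates on both the scale and shape parameters (the shared random effect of general practice is omitted); the dataframe is synthetic and all column names and coefficients are assumptions.

```python
# Hedged sketch: Weibull survival regression with scale and shape covariates.
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "stroke": rng.integers(0, 2, n),        # 1 = IS case, 0 = matched control
    "hypertension": rng.integers(0, 2, n),
    "antiplatelet": rng.integers(0, 2, n),
})
# synthetic survival times: cases die faster, antiplatelet is protective
scale = np.exp(3.0 - 1.2 * df["stroke"] - 0.8 * df["hypertension"]
               + 0.1 * df["antiplatelet"])
t = rng.weibull(1.3, n) * scale
df["E"] = (t < 30).astype(int)              # event observed within follow-up
df["T"] = np.clip(t, None, 30)              # censor at 30 years

aft = WeibullAFTFitter()
aft.fit(df, duration_col="T", event_col="E", ancillary=True)  # shape effects too
print(aft.summary[["coef", "exp(coef)"]])
```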

Keywords: general practice, hazard ratio, The Health Improvement Network (THIN), ischaemic stroke, multiple imputation, Weibull-Cox model

Procedia PDF Downloads 189
2225 Cognitive Emotion Regulation Strategies in 9–14-Year-Old Hungarian Children with Neurotypical Development in the Light of the Hungarian Version of Cognitive Emotion Regulation Questionnaire for Children

Authors: Dorottya Horváth, Andras Lang, Diana Varro-Horvath

Abstract:

This study is part of a major research effort to gain an integrative, neuropsychological, and personality-psychological understanding of Attention Deficit Hyperactivity Disorder (ADHD) and thus improve the specification of diagnostic and therapeutic care. Previously, the neuropsychological arm of this research investigated working memory, executive function, attention, and behavioural manifestations in children. Currently, we are looking for personality-psychological protective factors for ADHD and its symptomatic exacerbation. We hypothesise that secure attachment, adaptive emotion regulation, and high resilience are protective factors. The aim of this study is to measure and report the results of a Hungarian sample on the Cognitive Emotion Regulation Questionnaire for Children (CERQ-k), because before studying groups with developmental differences, it is essential to know the average scores of groups with neurotypical development. Until now, there was no Hungarian version of this test, so we used our own translation. The questionnaire was developed to assess children's thoughts after experiencing negative life events. It consists of four items per subscale, for a total of 36 items. The response categories for each item range from 1 (almost never) to 5 (almost always). The subscales are self-blame, blaming others, acceptance, planning, positive refocusing, rumination or thought-focusing, positive reappraisal, putting into perspective, and catastrophizing. The data for this study were collected from 120 children aged 9-14 years and analysed using descriptive statistics, with means and standard deviations computed for each age group; Cronbach's alpha was used to test the reliability of the questionnaire. The results showed that the questionnaire is a reliable and valid measuring instrument in a Hungarian sample as well. These results make a Hungarian version of the Cognitive Emotion Regulation Questionnaire for Children available and pave the way for the study of groups with different developmental profiles, such as children with learning disabilities and/or ADHD.

Keywords: neurotypical development, emotion regulation, negative life events, CERQ-k, Hungarian average scores

Procedia PDF Downloads 78
2224 Discovering the Effects of Meteorological Variables on the Air Quality of Bogota, Colombia, by Data Mining Techniques

Authors: Fabiana Franceschi, Martha Cobo, Manuel Figueredo

Abstract:

Bogotá, the capital of Colombia, is its largest city and one of the most polluted in Latin America due to fast economic growth over the last ten years. Bogotá has been affected by high pollution events leading to high concentrations of PM10 and NO2, exceeding the local 24-hour legal limits (100 and 150 µg/m³, respectively). The most important pollutants in the city are PM10 and PM2.5 (which are associated with respiratory and cardiovascular problems), and it is known that their concentrations in the atmosphere depend on local meteorological factors. Therefore, it is necessary to establish the relationships between the meteorological variables and the concentrations of atmospheric pollutants such as PM10, PM2.5, CO, SO2, NO2, and O3. This study aims to determine the interrelations between meteorological variables and air pollutants in Bogotá using data mining techniques. Data from 13 monitoring stations were collected from the Bogotá Air Quality Monitoring Network for the period 2010-2015. The Principal Component Analysis (PCA) algorithm was applied to obtain preliminary relations between all the parameters, and afterwards, the K-means clustering technique was implemented to corroborate the relations found previously and to find patterns in the data. PCA was also applied on a per-shift basis (morning, afternoon, night, and early morning) to check for variation of the previous trends, and on a per-year basis to verify that the identified trends persisted throughout the study period. Results demonstrated that wind speed, wind direction, temperature, and NO2 are the factors with the most influence on PM10 concentrations. Furthermore, it was confirmed that high-humidity episodes increased PM2.5 levels. It was also found that O3 levels are directly proportional to wind speed and radiation, while there is an inverse relationship between O3 levels and humidity. Concentrations of SO2 increase with the presence of PM10 and decrease with wind speed and wind direction. The results also showed a decreasing trend in pollutant concentrations over the last five years, and that in rainy periods (March-June and September-December) some trends related to precipitation were stronger. Results obtained with K-means demonstrated that it was possible to find patterns in the data, and showed similar conditions and data distributions among the Carvajal, Tunal, and Puente Aranda stations, as well as between Parque Simón Bolívar and Las Ferias. Applying the same technique per year verified that the aforementioned trends prevailed during the study period. It was concluded that the PCA algorithm is useful for establishing preliminary relationships among variables, and K-means clustering for finding patterns in the data and understanding their distribution. The discovery of patterns in the data allows these clusters to be used as input to an artificial neural network prediction model.
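
A minimal sketch of the PCA-plus-K-means pipeline on a synthetic stand-in for the station data; the column names, component count, and cluster count are assumptions, not the study's configuration.

```python
# Hedged sketch: standardize, extract principal components, then cluster.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
cols = ["PM10", "PM2.5", "NO2", "O3", "wind_speed", "temperature", "humidity"]
df = pd.DataFrame(rng.normal(size=(500, len(cols))), columns=cols)

X = StandardScaler().fit_transform(df)   # meteorology and pollutants on one scale

pca = PCA(n_components=3).fit(X)
print("explained variance ratio:", pca.explained_variance_ratio_.round(2))
# loadings show which variables move together on each component
loadings = pd.DataFrame(pca.components_.T, index=cols,
                        columns=["PC1", "PC2", "PC3"])
print(loadings.round(2))

# corroborate the PCA relations by clustering the observations
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(df.assign(cluster=labels).groupby("cluster").mean().round(2))
```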

Keywords: air pollution, air quality modelling, data mining, particulate matter

Procedia PDF Downloads 259
2223 Mechanical Properties and Microstructure of Ultra-High Performance Concrete Containing Fly Ash and Silica Fume

Authors: Jisong Zhang, Yinghua Zhao

Abstract:

The present study investigated the mechanical properties and microstructure of Ultra-High Performance Concrete (UHPC) containing supplementary cementitious materials (SCMs), such as fly ash (FA) and silica fume (SF), and verified the synergistic effect in the ternary system. On the basis of 30% fly ash replacement, the incorporation of either 10% SF or 20% SF shows better performance compared to the reference sample. The efficiency factor (k-value) was calculated as a measure of the synergistic effect to predict the compressive strength of UHPC with these SCMs. The SEM micrographs and the pore volume obtained by the BJH method show a high correlation with compressive strength. Further, an artificial neural network model was constructed for prediction of the compressive strength of UHPC containing these SCMs.
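
The ANN prediction step reduces to a small regression problem: mix proportions in, compressive strength out. A hedged scikit-learn sketch on placeholder data (the input columns, ranges, and network size are assumptions, not the paper's mix designs):

```python
# Hedged sketch: cross-validated MLP regression of UHPC strength.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# columns: cement, fly ash, silica fume (kg/m3), w/b ratio, SP dosage -- assumed
X = rng.uniform([600, 0, 0, 0.16, 5], [900, 300, 200, 0.22, 15], size=(80, 5))
y = 120 + 0.05 * X[:, 0] + 0.08 * X[:, 2] - 200 * X[:, 3] \
    + rng.normal(0, 3, 80)                      # synthetic strength (MPa)

model = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=8000, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R2:", scores.round(3), "mean:", scores.mean().round(3))
```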

Keywords: artificial neural network, fly ash, mechanical properties, ultra-high performance concrete

Procedia PDF Downloads 417
2222 Firm's Growth Leading Dimensions of Blockchain Empowered Information Management System: An Empirical Study

Authors: Umang Varshney, Amit Karamchandani, Rohit Kapoor

Abstract:

Practitioners and researchers have realized that blockchain is not limited to currency. Blockchain as a distributed ledger can ensure a transparent and traceable supply chain. Thanks to blockchain-enabled IoT, a firm’s information management system can now take inputs from other supply chain partners in real time. This study aims to provide empirical evidence of the dimensions responsible for the growth of firms that have implemented blockchain, and to highlight how the sector (manufacturing or service), the state's regulatory environment, and the choice of blockchain network affect the blockchain's usefulness. This post-adoption study seeks to validate the findings of pre-adoption studies of blockchain. Data will be collected through a survey of managers working in firms that have implemented blockchain and analyzed through PLS-SEM.

Keywords: blockchain, information management system, PLS-SEM, firm's growth

Procedia PDF Downloads 127
2221 Research on the Updating Strategy of Public Space in Small Towns in Zhejiang Province under the Background of New-Style Urbanization

Authors: Chen Yao, Wang Ke

Abstract:

Small towns are the most basic administrative units in China, connecting cities and rural areas. They play an important role in promoting local urban and rural economic development, providing the main public services, and maintaining social stability. With the vigorous development of small towns and the transformation of the industrial structure, changes in social structure, spatial structure, and lifestyle have lagged behind, so that the spatial form and landscape style belong to neither city nor countryside, seriously affecting the quality of people's living space and environment. The rural economy of Zhejiang Province has taken off, and its society and population are developing in relative stability. In September 2016, Zhejiang Province issued the 'Technical Guidelines for Comprehensive Environmental Remediation of Small Towns in Zhejiang Province' in order to comprehensively implement environmental remediation of small towns, with the main content of strengthening the leading role of planning and design and regulating environmental sanitation, urban order, and town appearance. In November 2016, Huzhou City started the comprehensive environmental improvement of its small towns, striving within three years to significantly improve 115 small towns and to create a number of high-quality, distinctive, and beautiful towns that are 'clean and livable, with rational layout, industrial development, and a poetry-and-painting style'. This paper takes Meixi Town, Zhangwu Town, and Sanchuan Village in Huzhou City as empirical cases and analyzes small-town public space by applying actor-network theory and space syntax. It analyzes the spatial composition of actor and social-structure elements and explores the relationship between actors' spatial practice and public open space through actor-network theory. The paper introduces the relevant theories and methods of space syntax and carries out research and design-planning analyses of small-town spaces from a quantitative perspective. It then proposes effective updating strategies for the existing problems in public space. Through planning and design at the building level, the dissonant factors produced by various combinations of spatial elements, and between landscape design and urban texture, during small-town development can be resolved, the quality of life of inhabitants promoted, and the vitality of town development increased.

Keywords: small towns, urbanization, public space, updating

Procedia PDF Downloads 231
2220 Adaptive Data Approximations Codec (ADAC) for AI/ML-based Cyber-Physical Systems

Authors: Yong-Kyu Jung

Abstract:

The fast growth in information technology has led to demands to access and process data. CPSs heavily depend on the timing of hardware/software operations and on communication over the network (i.e., real-time/parallel operations in CPSs, e.g., autonomous vehicles). Data processing is an important means of overcoming the issues confronting data management and of reducing the gap between technological growth on the one hand and data complexity and channel bandwidth on the other. An adaptive perpetual data approximation method is introduced to manage the actual entropy of the digital spectrum. An ADAC, implemented as an accelerator and/or as apps for servers and smart connected devices, adaptively rescales digital contents (by 62.8% on average) as well as data processing/access time and energy and encryption/decryption overheads in AI/ML applications (e.g., facial ID/recognition).

Keywords: adaptive codec, AI, ML, HPC, cyber-physical, cybersecurity

Procedia PDF Downloads 81
2219 The Management Information System for Convenience Stores: Case Study in 7 Eleven Shop in Bangkok

Authors: Supattra Kanchanopast

Abstract:

The purpose of this research is to develop and design a management information system for 7-Eleven shops in Bangkok. The system was designed and developed to meet users’ requirements over the internet, using application software such as MySQL for database management, Apache HTTP Server as the web server, and PHP (Hypertext Preprocessor) as the interface between the web server, the database, and users. The system was designed as two subsystems: the main system, for the head office, and the branch system, for branch shops. Each consists of three parts, classified by user role: shop management, inventory management, and point-of-sale (POS) management. The implementation of the MIS for the mini-mart shop can lessen the amount of paperwork and reduce repetitive tasks, so it may decrease the costs of the business and support the extension of branches in the future as well.

Keywords: convenience store, management information system, inventory management, 7-Eleven shop

Procedia PDF Downloads 487
2218 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments

Authors: Skyler Kim

Abstract:

An early diagnosis of leukemia has always been a challenge to doctors and hematologists. Worldwide, approximately 350,000 new cases were reported in 2012, and diagnosing leukemia is time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnostic tools has increased and a large volume of high-quality data has been produced, there is an urgent need for more advanced data analysis methods. One of these is the AI approach, which has become a major trend in recent years, and several research groups have been working on developing such diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger datasets remains complex. Leukemia is a major hematological malignancy that causes mortality and morbidity across different ages. We selected acute lymphocytic leukemia to develop our diagnostic system, since it is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia; the results from this work can be applied to all other types of leukemia. To develop our model, a Kaggle dataset was used, consisting of 15,135 images in total, of which 8,491 are images of abnormal cells and 5,398 are normal. In this paper, we design and implement a leukemia diagnostic system for a real clinical environment based on deep learning approaches with larger datasets. The proposed diagnostic system detects and classifies leukemia. Unlike other AI approaches, we explore hybrid architectures to improve on current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50. Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, the features fused from specific abstraction layers can be regarded as auxiliary features and lead to further improvement of the classification accuracy. Features extracted from lower levels are combined into higher-dimensional feature maps to help improve the discriminative capability of intermediate features and to overcome the problem of vanishing or exploding gradients. Comparing VGG19, ResNet50, and the proposed hybrid model, we concluded that the hybrid model has a significant advantage in accuracy. The detailed results of each model’s performance, along with their pros and cons, will be presented at the conference.
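
A minimal sketch of the VGG19/ResNet50 feature-fusion idea using Keras transfer learning; the input size, fusion point, head layers, and binary output are illustrative choices, not the authors' exact architecture, and per-backbone preprocessing is omitted for brevity.

```python
# Hedged sketch: concatenate pooled features from two frozen backbones.
from tensorflow.keras import Input, Model, layers
from tensorflow.keras.applications import VGG19, ResNet50

inp = Input(shape=(224, 224, 3))
vgg = VGG19(weights="imagenet", include_top=False, pooling="avg")
res = ResNet50(weights="imagenet", include_top=False, pooling="avg")
vgg.trainable = False                 # transfer learning: freeze both backbones
res.trainable = False

fused = layers.Concatenate()([vgg(inp), res(inp)])   # fuse the feature vectors
x = layers.Dense(256, activation="relu")(fused)
x = layers.Dropout(0.5)(x)
out = layers.Dense(1, activation="sigmoid")(x)       # abnormal vs normal cell

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
# training would follow, e.g.: model.fit(train_ds, validation_data=val_ds)
```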

Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning

Procedia PDF Downloads 188
2217 Hypolipidemic and Antioxidant Effects of Mycelial Polysaccharides from Calocybe indica in Hyperlipidemic Rats Induced by High-Fat Diet

Authors: Govindan Sudha, Mathumitha Subramaniam, Alamelu Govindasamy, Sasikala Gunasekaran

Abstract:

The aim of this study was to investigate the protective effect of Hypsizygus ulmarius polysaccharides (HUP) in reducing oxidative stress, cognitive impairment, and neurotoxicity in D-galactose-induced aging mice. Mice were subcutaneously injected with D-galactose (150 mg/kg per day) for 6 weeks and were administered HUP simultaneously. Aged mice receiving vitamin E (100 mg/kg) served as the positive control. Chronic administration of D-galactose significantly impaired cognitive performance, oxidative defence, and mitochondrial enzyme activities compared to the control group. The results showed that HUP treatment (200 and 400 mg/kg) significantly improved learning and memory ability in the Morris water maze test. Biochemical examination revealed that HUP significantly increased the decreased activities of superoxide dismutase (SOD), catalase (CAT), glutathione peroxidase (GPx), glutathione reductase (GR), glutathione-S-transferase (GST), the mitochondrial enzymes NADH dehydrogenase, malate dehydrogenase (MDH), and isocitrate dehydrogenase (ICDH), and Na⁺/K⁺-, Ca²⁺-, and Mg²⁺-ATPase activities; elevated the lowered total anti-oxidation capability (TAOC), glutathione (GSH), and vitamin C; and decreased the raised acetylcholinesterase (AChE) activity and malondialdehyde (MDA), hydroperoxide (HPO), protein carbonyl (PCO), and advanced oxidation protein product (AOPP) levels in the brains of D-galactose-induced aging mice, in a dose-dependent manner. In conclusion, the present study highlights the potential role of HUP against D-galactose-induced cognitive impairment and biochemical and mitochondrial dysfunction in mice. In vitro studies of the effect of HUP on scavenging DPPH, ABTS, DMPD, and OH radicals, reducing power, β-carotene bleaching, and lipid peroxidation inhibition confirmed the free-radical-scavenging and antioxidant activity of HUP. The results suggest that HUP possesses anti-aging efficacy and may have potential in the treatment of neurodegenerative diseases.

Keywords: aging, antioxidants, mushroom, neurotoxicity

Procedia PDF Downloads 532
2216 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series

Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold

Abstract:

To address the global challenges of climate and environmental change, there is a need to quantify and reduce uncertainties in environmental data, including observations of carbon, water, and energy. The global eddy covariance flux tower network (FLUXNET) and its regional counterparts (e.g., OzFlux, AmeriFlux, ChinaFLUX) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance to validate process modelling analyses, field surveys, and remote sensing assessments, there are serious concerns regarding the challenges associated with the technique, e.g., data gaps and uncertainties. To address these concerns, this research developed an ensemble model to fill the data gaps in CO₂ flux, avoiding the limitations of a single algorithm and therefore providing lower error and reduced uncertainty in the gap-filling process. In this study, data from five towers in the OzFlux network (Alice Springs Mulga, Calperum, Gingin, Howard Springs, and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNNs) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The FFNNs provided the primary estimations in the first layer, while XGB used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB used individually, with overall RMSEs of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹, respectively (3.54 being that of the best FFNN). The most significant improvement was in the estimation of extreme diurnal values (during midday and sunrise) and nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling. The towers, as well as seasonality, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. Moreover, the performance difference between the ensemble model and its individual components was more significant during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than during the cold season (Apr, May, Jun, Jul, Aug, and Sep), due to the higher amount of plant photosynthesis, which leads to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy and robustness of CO₂ flux gap-filling. Ensemble machine learning models are therefore potentially capable of improving data estimation and regression outcomes when there seems to be no more room for improvement with a single algorithm.
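
A minimal sketch of the two-layer ensemble: five differently shaped feed-forward networks produce first-layer estimates, and an XGBoost model (assuming the xgboost package) maps those estimates to the final flux. The data, network shapes, and train/meta split below are placeholders, not the OzFlux configuration.

```python
# Hedged sketch: stacked FFNN -> XGB gap-filling ensemble on synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))     # met drivers, e.g. Rg, Ta, VPD, SWC, u*, hour
y = X[:, 0] * 2 + np.sin(X[:, 1]) + rng.normal(0, 0.3, 2000)  # synthetic flux

shapes = [(8,), (16,), (32,), (16, 8), (32, 16)]
ffnns = [MLPRegressor(hidden_layer_sizes=s, max_iter=3000, random_state=i)
         for i, s in enumerate(shapes)]

train = slice(0, 1500)             # fit the first layer on one split...
meta = slice(1500, 2000)           # ...and the XGB layer on held-out data
first_layer = np.column_stack([f.fit(X[train], y[train]).predict(X[meta])
                               for f in ffnns])

xgb = XGBRegressor(n_estimators=200, max_depth=4).fit(first_layer, y[meta])

def gap_fill(X_new):
    """Final estimate: FFNN outputs stacked, then passed through XGB."""
    stacked = np.column_stack([f.predict(X_new) for f in ffnns])
    return xgb.predict(stacked)

print(gap_fill(X[:5]).round(2))
```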

Keywords: carbon flux, Eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network

Procedia PDF Downloads 142
2215 ANN Modeling for Cadmium Biosorption from Potable Water Using a Packed-Bed Column Process

Authors: Dariush Jafari, Seyed Ali Jafari

Abstract:

The recommended limit for cadmium concentration in potable water is less than 0.005 mg/L. A continuous biosorption process using the indigenous red seaweed Gracilaria corticata was performed to remove cadmium from potable water. The process was conducted under fixed conditions, and breakthrough curves were obtained for three consecutive sorption-desorption cycles. A model based on an artificial neural network (ANN) was employed to fit the experimental breakthrough data. In addition, the simplified semi-empirical Thomas model was employed for the same purpose. It was found that the ANN described the experimental data well (R² > 0.99), while the Thomas predictions were slightly less successful (R² > 0.97). The design parameters adjusted using the nonlinear form of the Thomas model were in good agreement with the experimentally obtained ones. The results confirm the capability of an ANN to predict the cadmium concentration in potable water.
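
Fitting the nonlinear Thomas model, C/C₀(t) = 1 / (1 + exp(k_Th(q₀m − C₀Qt)/Q)), to a breakthrough curve is a one-call job with SciPy. In the sketch below the "data" are generated from the model itself with added noise, and all operating values are illustrative, not the study's.

```python
# Hedged sketch: nonlinear least-squares fit of the Thomas model.
import numpy as np
from scipy.optimize import curve_fit

C0, Q, m = 0.5, 10.0, 5.0   # feed conc (mg/L), flow (mL/min), sorbent mass (g)

def thomas(t, k_th, q0):
    """Breakthrough C/C0 as a function of time (min)."""
    return 1.0 / (1.0 + np.exp(k_th * (q0 * m - C0 * Q * t) / Q))

t = np.linspace(0, 600, 40)
ct_c0 = thomas(t, 0.05, 200.0) \
        + np.random.default_rng(0).normal(0, 0.01, t.size)  # noisy "data"

(k_th, q0), _ = curve_fit(thomas, t, ct_c0, p0=[0.02, 150.0])
print(f"k_Th = {k_th:.4g} mL/(min*mg), q0 = {q0:.4g} mg/g")
```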

Keywords: ANN, biosorption, cadmium, packed-bed, potable water

Procedia PDF Downloads 434
2214 Aerobic Bioprocess Control Using Artificial Intelligence Techniques

Authors: M. Caramihai, Irina Severin

Abstract:

This paper deals with the design of an intelligent control structure for a bioprocess of Hansenula polymorpha yeast cultivation. The objective of the process control is to produce biomass in a desired physiological state. The work demonstrates that the designed hybrid control techniques (HCT) are able to recognize specific bioprocess evolution trajectories using neural networks trained specifically for this purpose, in order to estimate the model parameters and to adjust the overall bioprocess evolution through an expert system and a fuzzy structure. The design of the control algorithm, as well as its tuning through realistic simulations, is presented. Taking into consideration the synergism of different paradigms, such as fuzzy logic, neural networks, and symbolic artificial intelligence (AI), we present a complete intelligent control architecture with application in bioprocess control.

Keywords: bioprocess, intelligent control, neural nets, fuzzy structure, hybrid techniques

Procedia PDF Downloads 425
2213 Mathematical Modeling and Algorithms for the Capacitated Facility Location and Allocation Problem with Emission Restriction

Authors: Sagar Hedaoo, Fazle Baki, Ahmed Azab

Abstract:

In supply chain management, network design for scalable manufacturing facilities is an emerging field of research. Facility location-allocation assigns facilities to customers to optimize the overall cost of the supply chain. To further optimize costs, the capacities of these facilities can be changed in accordance with customer demands. A mathematical model is formulated to fully express the problem at hand and to solve small-to-mid-range instances. A dedicated constraint has been developed to restrict emissions in line with the Kyoto Protocol. The problem is NP-hard; hence, a simulated annealing metaheuristic has been developed to solve larger instances. A case study on the USA-Canada cross-border crossing is used.
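
A minimal simulated-annealing sketch for a capacitated location-allocation problem of this kind; the emission constraint is omitted, and all sizes, costs, and cooling settings are illustrative.

```python
# Hedged sketch: SA over customer->facility assignments with capacity limits.
import math, random

random.seed(0)
n_fac, n_cust = 5, 20
open_cost = [random.uniform(50, 100) for _ in range(n_fac)]
assign_cost = [[random.uniform(1, 20) for _ in range(n_fac)]
               for _ in range(n_cust)]
cap = [6] * n_fac                              # max customers per facility

def cost(assign):
    used = set(assign)
    return (sum(open_cost[f] for f in used)
            + sum(assign_cost[c][assign[c]] for c in range(n_cust)))

def feasible(assign):
    return all(assign.count(f) <= cap[f] for f in range(n_fac))

cur = [random.randrange(n_fac) for _ in range(n_cust)]
while not feasible(cur):                       # random feasible start
    cur = [random.randrange(n_fac) for _ in range(n_cust)]
best, T = list(cur), 50.0

for _ in range(20000):
    cand = list(cur)
    cand[random.randrange(n_cust)] = random.randrange(n_fac)  # move one customer
    if feasible(cand):
        delta = cost(cand) - cost(cur)
        if delta < 0 or random.random() < math.exp(-delta / T):  # Metropolis rule
            cur = cand
            if cost(cur) < cost(best):
                best = list(cur)
    T *= 0.9995                                # geometric cooling schedule

print(f"best cost found: {cost(best):.1f}")
```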

Keywords: emission, mixed integer linear programming, metaheuristic, simulated annealing

Procedia PDF Downloads 312
2212 Pollutant Dispersion in Coastal Waters

Authors: Sonia Ben Hamza, Sabra Habli, Nejla Mahjoub Saïd, Hervé Bournot, Georges Le Palec

Abstract:

This paper spotlights the effect of point-source pollution on streams, stemming from intentional or inadvertent releases. The consequences of such contamination for ecosystems are very serious. Accordingly, effective tools are in high demand: tools that enable us to track the progress of a pollutant accurately and to anticipate the measures to be applied in order to limit the degradation of the surrounding environment. In this context, we model the dispersion of a pollutant in a free-surface flow ejected by the outfall sewer of an urban sewerage network into coastal water, taking into account the influence of climatic parameters on the spread of the pollutant. Numerical results showed that pollutant dispersion is mainly due to the presence of vortices and turbulence. Hence, the pollutant spread in seawater is strongly correlated with the climatic conditions in this region.

Keywords: coastal waters, numerical simulation, pollutant dispersion, turbulent flows

Procedia PDF Downloads 515
2211 Black-Box-Base Generic Perturbation Generation Method under Salient Graphs

Authors: Dingyang Hu, Dan Liu

Abstract:

DNN (deep neural network) models are widely used in classification, prediction, and other task scenarios. To address the difficulty of generating generic adversarial perturbations for deep learning models under black-box conditions, a generic adversarial perturbation generation method based on a saliency map (CJsp) is proposed. Salient image regions are obtained by accounting for how strongly the input features of an image influence the output results. The method can be understood as a saliency map attack algorithm that obtains false classification results by reducing the weights of salient feature points. Experiments also demonstrate that the method achieves a high success rate in transfer attacks and works as a batch adversarial sample generation method.
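
A generic saliency-guided perturbation can be sketched in a few lines of PyTorch: rank pixels by input-gradient magnitude, then perturb only the most salient ones. This illustrates the general idea, not the paper's CJsp algorithm; in particular, the sketch uses white-box gradients for brevity, whereas the paper targets black-box conditions, and the model, budget, and fraction of perturbed pixels are arbitrary.

```python
# Hedged sketch: perturb the top-5% most salient pixels against the top class.
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
x = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in input image

logits = model(x)
score = logits[0, logits.argmax()]                   # current top-class score
score.backward()

saliency = x.grad.abs().sum(dim=1, keepdim=True)     # per-pixel influence map
k = int(0.05 * saliency.numel())                     # keep the top 5% of pixels
thresh = saliency.flatten().topk(k).values.min()
mask = (saliency >= thresh).float()                  # broadcast over channels

eps = 0.03
x_adv = (x - eps * mask * x.grad.sign()).clamp(0, 1).detach()  # lower the score

with torch.no_grad():
    print("original class:", logits.argmax().item(),
          "adversarial class:", model(x_adv).argmax().item())
```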

Keywords: adversarial sample, gradient, probability, black box

Procedia PDF Downloads 107
2210 Three Issues for Integrating Artificial Intelligence into Legal Reasoning

Authors: Fausto Morais

Abstract:

Artificial intelligence has been widely used in law. Programs are able to classify suits, identify decision-making patterns, predict outcomes, and formalize legal arguments. In Brazil, the artificial intelligence system Victor has been classifying cases according to the Supreme Court’s standards. When such programs perform these tasks, they simulate a kind of legal decision and legal argument, raising doubts about how artificial intelligence can be integrated into legal reasoning. Taking this into account, the following three issues are identified: the problem of hypernormatization, the argument of legal anthropocentrism, and artificial legal principles. Hypernormatization can be seen in the Brazilian legal context in the Supreme Court’s usage of the Victor program. This program generated efficiency and consistency; on the other hand, there is a real risk of over-standardizing factual and normative legal features. Legal clerks and programmers should therefore work together to develop an adequate way to model legal language in computational code. If this is possible, intelligent programs may enact legal decisions in easy cases automatically, and, in this picture, the legal anthropocentrism argument takes place. This argument holds that only human beings should enact legal decisions, because human beings have a conscience, free will, and self-unity. In spite of that, it is possible to argue against the anthropocentrism argument and to show how intelligent programs may work around human shortcomings such as misleading cognition, emotions, and lack of memory. In this way, intelligent machines could pass legal decisions automatically by classification, as Victor does in Brazil, because they are bound by legal patterns and should not deviate from them. Notwithstanding, artificial intelligence programs can be helpful beyond easy cases. In hard cases, they are able to identify legal standards and legal arguments using machine learning, for which a dataset of legal decisions regarding a particular matter must be available, which is a reality in the Brazilian judiciary. With such a procedure, artificial intelligence programs can support a human decision in hard cases by providing legal standards and arguments based on empirical evidence. Those legal features carry argumentative weight in legal reasoning and should serve as references for judges when they must decide whether to maintain or overcome a legal standard.

Keywords: artificial intelligence, artificial legal principles, hypernormatization, legal anthropocentrism argument, legal reasoning

Procedia PDF Downloads 148
2209 Causal Relation Identification Using Convolutional Neural Networks and Knowledge Based Features

Authors: Tharini N. de Silva, Xiao Zhibo, Zhao Rui, Mao Kezhi

Abstract:

Causal relation identification is a crucial task in information extraction and knowledge discovery. In this work, we present two approaches to causal relation identification. The first is a classification model trained on a set of knowledge-based features. The second is a deep learning approach that trains a convolutional neural network to classify causal relations. We experiment with several different convolutional neural network (CNN) models based on previous work on relation extraction as well as our own research. Our models are able to identify both explicit and implicit causal relations, as well as the direction of the causal relation. The results of our experiments show higher accuracy than previously achieved for causal relation identification tasks.
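
A minimal text-CNN sketch of the kind used for such relation classification; the vocabulary size, embedding dimension, filter widths, and class count are illustrative choices, and the authors' exact architecture is not reproduced here.

```python
# Hedged sketch: 1-D convolutions over token embeddings, max-pooled per filter.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab=5000, emb=100, n_classes=3, widths=(2, 3, 4), nf=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        # one 1-D conv per filter width, sliding over token positions
        self.convs = nn.ModuleList([nn.Conv1d(emb, nf, w) for w in widths])
        # classes could be: cause->effect, effect->cause, no causal relation
        self.fc = nn.Linear(nf * len(widths), n_classes)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)    # -> (batch, emb, seq_len)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))

model = TextCNN()
batch = torch.randint(0, 5000, (8, 40))         # 8 sentences of 40 token ids
print(model(batch).shape)                       # torch.Size([8, 3])
```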

Keywords: causal relation extraction, relation extraction, convolutional neural network, text representation

Procedia PDF Downloads 740