Search results for: whale optimization algorithm
724 Lead-Free Inorganic Cesium Tin-Germanium Triiodide Perovskites for Photovoltaic Application
Authors: Seyedeh Mozhgan Seyed-Talebi, Javad Beheshtian
Abstract:
The toxicity of lead associated with the lifecycle of perovskite solar cells (PSCs) is a serious concern which may prove to be a major hurdle on the path toward their commercialization. The currently proposed lead-free PSCs, including those based on the low-toxicity cations Ag(I), Bi(III), Sb(III), Ti(IV), Ge(II), and Sn(II), are still plagued by the critical issues of poor stability and low efficiency, mainly because of their chemical instability. In the present research, the use of all-inorganic CsSnGeI3-based materials offers the advantages of enhancing the device's resistance to degradation, reducing the cost of cells, and minimizing carrier recombination. The presence of the inorganic halide perovskite improves the photovoltaic parameters of PSCs via improved surface coverage and stability. The inverted structure of the simulated devices, modeled using a 1D simulator, the solar cell capacitance simulator (SCAPS) version 3.3.08, involves a TCO/HTL/Perovskite/ETL/Au contact layer. PEDOT:PSS, PCBM, and CsSnGeI3 are used as the hole transporting layer (HTL), electron transporting layer (ETL), and perovskite absorber layer in the inverted structure for the first time. Holes are injected from the highly stable and air-tolerant Sn0.5Ge0.5I3 perovskite composition into the HTL and electrons from the perovskite into the ETL. Simulation results revealed a strong dependence of the power conversion efficiency (PCE) on the thickness and defect density of the perovskite layer. Here, the effect of an increase in operating temperature from 300 K to 400 K on the performance of CsSnGeI3-based perovskite devices is investigated. Comparison between the simulated CsSnGeI3-based PSCs and similar real, tested devices with spiro-OMeTAD as the HTL showed that the extraction of carriers at the interfaces of the perovskite absorber depends on the energy level mismatches between the perovskite and the HTL/ETL. We believe that the optimization results reported here represent a critical avenue for fabricating stable, low-cost, efficient, and eco-friendly all-inorganic Cs-Sn-Ge-based lead-free perovskite devices.
Keywords: hole transporting layer, lead-free, perovskite solar cell, SCAPS-1D, Sn-Ge based
Procedia PDF Downloads 155
723 Relative Entropy Used to Determine the Divergence of Cells in Single Cell RNA Sequence Data Analysis
Authors: An Chengrui, Yin Zi, Wu Bingbing, Ma Yuanzhu, Jin Kaixiu, Chen Xiao, Ouyang Hongwei
Abstract:
Single cell RNA sequencing (scRNA-seq) is one of the most effective tools for studying the transcriptomics of biological processes. Currently, the similarity between cells is usually measured with Euclidean distance or its derivatives. However, the process of scRNA-seq follows a multivariate Bernoulli event model, so we hypothesize that it would be more efficient to value the divergence between cells with relative entropy than with Euclidean distance. In this study, we compared the performance of Euclidean distance, Spearman correlation distance, and relative entropy using scRNA-seq data from the early, medial, and late stages of limb development generated in our lab. Relative entropy outperformed the other methods according to a cluster potential test. Furthermore, we developed KL-SNE, an algorithm that modifies t-SNE by changing its definition of divergence between cells from Euclidean distance to Kullback–Leibler divergence. Results showed that KL-SNE was more effective than t-SNE at dissecting cell heterogeneity, indicating the better performance of relative entropy over Euclidean distance. Specifically, the chondrocytes expressing Comp were clustered together by KL-SNE but not by t-SNE. Surprisingly, cells in the early stage were surrounded by cells in the medial stage in the KL-SNE embedding, while medial-stage cells neighbored late-stage cells in the t-SNE embedding. These results parallel the heatmap, which showed that cells in the medial stage were more heterogeneous than cells in the other stages. In addition, we found that the results of KL-SNE tend to follow a Gaussian distribution compared with those of t-SNE, which could also be verified with the analysis of scRNA-seq data from another study on human embryo development. Therefore, it is also an effective way to convert a non-Gaussian distribution to a Gaussian distribution and facilitate subsequent statistical processes. Thus, relative entropy is potentially a better way to determine the divergence of cells in scRNA-seq data analysis.
Keywords: single cell RNA sequence, similarity measurement, relative entropy, KL-SNE, t-SNE
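As a minimal illustration of the idea above (not the authors' implementation), the sketch below treats each cell's expression counts as a probability distribution and computes the Kullback–Leibler divergence (relative entropy) between two cells; the count matrix is hypothetical.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # Relative entropy D_KL(p || q) between two expression profiles,
    # each normalized to a probability distribution over genes.
    p = np.clip(p / p.sum(), eps, None)
    q = np.clip(q / q.sum(), eps, None)
    return float(np.sum(p * np.log(p / q)))

# Hypothetical count matrix: rows are cells, columns are genes.
counts = np.random.poisson(lam=2.0, size=(3, 100)).astype(float) + 1.0
print(kl_divergence(counts[0], counts[1]))
```

Because D_KL is asymmetric, a symmetrized form such as 0.5*(D_KL(p||q) + D_KL(q||p)) is one common choice when a distance-like quantity is needed.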
Procedia PDF Downloads 340
722 Optimizing Parallel Computing Systems: A Java-Based Approach to Modeling and Performance Analysis
Authors: Maher Ali Rusho, Sudipta Halder
Abstract:
The purpose of the study is to develop optimal solutions for models of parallel computing systems using the Java language. During the study, programmes were written for the examined models of parallel computing systems. The result of the parallel sorting code is the output of a sorted array of random numbers. When processing data in parallel, the time spent on processing and the first elements of the list of squared numbers are displayed. When processing requests asynchronously, processing completion messages are displayed for each task with a slight delay. The main results include the development of optimisation methods for algorithms and processes, such as the division of tasks into subtasks, the use of non-blocking algorithms, effective memory management, and load balancing, as well as the construction of diagrams and comparison of these methods by characteristics, including descriptions, implementation examples, and advantages. In addition, various specialised libraries were analysed to improve the performance and scalability of the models. The results of the work performed showed a substantial improvement in response time, bandwidth, and resource efficiency in parallel computing systems. Scalability and load analysis assessments were conducted, demonstrating how the system responds to an increase in data volume or the number of threads. Profiling tools were used to analyse performance in detail and identify bottlenecks in models, which improved the architecture and implementation of parallel computing systems. The obtained results emphasise the importance of choosing the right methods and tools for optimising parallel computing systems, which can substantially improve their performance and efficiency.
Keywords: algorithm optimisation, memory management, load balancing, performance profiling, asynchronous programming
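The study's code is written in Java; purely as a language-neutral sketch of the task-division pattern described (splitting data into subtasks and processing them in a worker pool), here is a hypothetical Python analogue:

```python
from concurrent.futures import ProcessPoolExecutor
import random

def square_chunk(chunk):
    # Subtask: square every number in one slice of the data.
    return [x * x for x in chunk]

if __name__ == "__main__":
    data = [random.randint(0, 999) for _ in range(10_000)]
    n_workers = 4
    chunks = [data[i::n_workers] for i in range(n_workers)]  # divide task into subtasks
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        parts = pool.map(square_chunk, chunks)               # process in parallel
    squared = [y for part in parts for y in part]
    print(squared[:5])  # first elements of the list of squared numbers
```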
Procedia PDF Downloads 12
721 In-silico DFT Study, Molecular Docking, ADMET Predictions, and DMS of Isoxazolidine and Isoxazoline Analogs with Anticancer Properties
Authors: Moulay Driss Mellaoui, Khadija Zaki, Khalid Abbiche, Abdallah Imjjad, Rachid Boutiddar, Abdelouahid Sbai, Aaziz Jmiai, Souad El Issami, Al Mokhtar Lamsabhi, Hanane Zejli
Abstract:
This study presents a comprehensive analysis of six isoxazolidine and isoxazoline derivatives, leveraging a multifaceted approach that combines Density Functional Theory (DFT), AdmetSAR analysis, and molecular docking simulations to explore their electronic, pharmacokinetic, and anticancer properties. Through DFT analysis, using the B3LYP-D3BJ functional and the 6-311++G(d,p) basis set, we optimized molecular geometries, analyzed vibrational frequencies, and mapped Molecular Electrostatic Potentials (MEP), identifying key sites for electrophilic attack and hydrogen bonding. Frontier Molecular Orbital (FMO) analysis and Density of States (DOS) plots revealed varying stability levels among the compounds, with 1b, 2b, and 3b showing slightly higher stability. Chemical potential assessments indicated differences in binding affinities, suggesting stronger potential interactions for compounds 1b and 2b. AdmetSAR analysis predicted favorable human intestinal absorption (HIA) rates for all compounds, highlighting compound 3b's superior oral effectiveness. Molecular docking and molecular dynamics simulations were conducted on the isoxazolidine and 4-isoxazoline derivatives targeting the EGFR receptor (PDB: 1JU6). Molecular docking simulations confirmed the high affinity of these compounds for the target protein 1JU6; among the isoxazolidine derivatives, compound 3b exhibited the most favorable binding energy, with a g-score of -8.50 kcal/mol. Molecular dynamics simulations over 100 nanoseconds demonstrated the stability and potential of compound 3b as a superior candidate for anticancer applications, further supported by structural analyses including RMSD, RMSF, Rg, and SASA values. This study underscores the promising role of compound 3b in anticancer treatments, providing a solid foundation for future drug development and optimization efforts.
Keywords: isoxazolines, DFT, molecular docking, molecular dynamics, ADMET, drugs
Procedia PDF Downloads 47
720 Use of Multivariate Statistical Techniques for Water Quality Monitoring Network Assessment, Case of Study: Jequetepeque River Basin
Authors: Jose Flores, Nadia Gamboa
Abstract:
Proper water quality management requires the establishment of a monitoring network. Evaluation of the efficiency of water quality monitoring networks is therefore needed to ensure high-quality data collection of critical chemical quality parameters. Unfortunately, in some Latin American countries water quality monitoring programs are not sustainable in terms of recording historical data or environmentally representative sites, wasting time, money, and valuable information. In this study, multivariate statistical techniques, such as principal components analysis (PCA) and hierarchical cluster analysis (HCA), are applied to identify the most significant monitoring sites as well as critical water quality parameters in the monitoring network of the Jequetepeque River basin in northern Peru. The Jequetepeque River basin, like others in Peru, shows socio-environmental conflicts due to the economic activities developed in the area. Water pollution by trace elements in the upper part of the basin is mainly related to mining activity, while agricultural land loss due to salinization is caused by the extensive use of groundwater in the lower part of the basin. Since the 1980s, the water quality in the basin has been assessed non-continuously by public and private organizations, and recently the National Water Authority has established permanent water quality networks in 45 basins in Peru. Although many countries use multivariate statistical techniques for assessing water quality monitoring networks, those instruments have never been applied for that purpose in Peru. For this reason, the main contribution of this study is to demonstrate that the application of multivariate statistical techniques can serve as an instrument for optimizing monitoring networks, using the smallest number of monitoring sites and the most significant water quality parameters, which would reduce costs and improve water quality management in Peru. The main socio-economic activities and the principal stakeholders related to water management in the basin are also identified. Finally, water quality management programs are discussed in terms of their efficiency and sustainability.
Keywords: PCA, HCA, Jequetepeque, multivariate statistical
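As a hedged sketch of how PCA and HCA can flag significant parameters and representative sites (the data here are synthetic stand-ins, not the Jequetepeque measurements):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical matrix: rows = monitoring sites, columns = quality parameters.
rng = np.random.default_rng(0)
X = rng.normal(size=(45, 12))

Xs = StandardScaler().fit_transform(X)   # put parameters on a common scale
pca = PCA(n_components=0.9).fit(Xs)      # keep components explaining 90% of variance
loadings = pca.components_               # large |loading| -> significant parameter

Z = linkage(Xs, method="ward")                         # HCA over sites
site_groups = fcluster(Z, t=4, criterion="maxclust")   # e.g., 4 representative groups
print(site_groups)
```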
Procedia PDF Downloads 355
719 Multi-Objective Multi-Period Allocation of Temporary Earthquake Disaster Response Facilities with Multi-Commodities
Authors: Abolghasem Yousefi-Babadi, Ali Bozorgi-Amiri, Aida Kazempour, Reza Tavakkoli-Moghaddam, Maryam Irani
Abstract:
All over the world, natural disasters (e.g., earthquakes, floods, volcanoes, and hurricanes) cause many deaths. Earthquakes are catastrophic events, caused by unusual phenomena, that lead to great losses around the world. Such disasters strongly demand long-term help and relief, which can be hard to manage. Supplies and facilities are critical challenges after any earthquake and should be prepared for the disaster regions to satisfy the demands of the people suffering from the earthquake. This paper proposes a disaster response facility allocation problem for disaster relief operations as a mathematical programming model. Earthquake victims need not only consumable commodities (e.g., food and water) but also non-consumable commodities (e.g., clothes) to protect themselves. It is therefore necessary to pay attention to disaster points and people's demands. To address this, both consumable and non-consumable commodities are considered in the presented model. This paper presents a multi-objective, multi-period mathematical programming model that simultaneously minimizes the average of the weighted response times and the total operational cost, including the penalty costs of unmet demand and unused commodities. Furthermore, a Chebycheff multi-objective solution procedure is applied as a powerful solution algorithm to solve the proposed model. Finally, to illustrate the model's applicability, a case study of the Tehran earthquake is presented, and a sensitivity analysis is carried out to show the model's validity.
Keywords: facility location, multi-objective model, disaster response, commodity
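The paper applies a Chebycheff multi-objective procedure; as a generic, hedged sketch of one common form of this idea, weighted Chebyshev scalarization picks the candidate minimizing the worst weighted deviation from each objective's ideal value (candidate allocations and numbers below are hypothetical):

```python
def chebyshev_scalarize(objectives, ideals, weights):
    # Worst weighted deviation from the ideal value of each objective.
    return max(w * abs(f - z) for f, z, w in zip(objectives, ideals, weights))

# Hypothetical candidates evaluated on (weighted response time, total cost).
candidates = {"A": (4.2, 130.0), "B": (3.1, 170.0), "C": (5.0, 110.0)}
ideals = (3.1, 110.0)   # best value of each objective over the candidates
weights = (0.5, 0.5)
best = min(candidates,
           key=lambda k: chebyshev_scalarize(candidates[k], ideals, weights))
print(best)
```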
Procedia PDF Downloads 257
718 Malate Dehydrogenase Enabled ZnO Nanowires as an Optical Tool for Malic Acid Detection in Horticultural Products
Authors: Rana Tabassum, Ravi Kant, Banshi D. Gupta
Abstract:
Malic acid is an organic acid extensively distributed in numerous horticultural products in minute amounts, and it contributes significantly to taste determination by balancing the sugar and acid fractions. An elevated concentration of malic acid is used as an indicator of fruit maturity. In addition, malic acid is a crucial constituent of several cosmetic and pharmaceutical products. An efficient detection and quantification protocol for malic acid is thus in high demand. In this study, we report a novel detection scheme for malic acid that synergistically combines fiber optic surface plasmon resonance (FOSPR) with the distinctive features of nanomaterials favorable for sensing applications. The design blueprint involves the deposition of an assembly of malate dehydrogenase enzyme entrapped in ZnO nanowires, forming the sensing route over the silver-coated central unclad core region of an optical fiber. The formation and subsequent decomposition of the enzyme-analyte complex on exposure of the sensing layer to malic acid solutions of diverse concentrations results in modification of the dielectric function of the sensing layer, which is manifested as a shift in the resonance wavelength. Experimental variables such as the enzyme concentration entrapped in the ZnO nanowires, the dip time of the probe for deposition of the sensing layer, and the working pH range of the sensing probe have been optimized through SPR measurements. The optimized sensing probe displays high sensitivity, a broad working range, and a low limit of detection, and has been successfully tested for malic acid determination in real samples of fruit juices. The current work presents a novel perspective on malic acid determination, as the unique and cooperative combination of FOSPR and nanomaterials provides myriad advantages such as enhanced sensitivity, specificity, and compactness, together with the possibility of online monitoring and remote sensing.
Keywords: surface plasmon resonance, optical fiber, sensor, malic acid
Procedia PDF Downloads 380
717 High Fidelity Interactive Video Segmentation Using Tensor Decomposition, Boundary Loss, Convolutional Tessellations, and Context-Aware Skip Connections
Authors: Anthony D. Rhodes, Manan Goel
Abstract:
We provide a high fidelity deep learning algorithm (HyperSeg) for interactive video segmentation tasks using a dense convolutional network with context-aware skip connections and compressed, 'hypercolumn' image features combined with a convolutional tessellation procedure. In order to maintain high output fidelity, our model crucially processes and renders all image features in high resolution, without utilizing downsampling or pooling procedures. We maintain this consistent, high-grade fidelity efficiently in our model chiefly through two means: (1) we use a statistically principled tensor decomposition procedure to modulate the number of hypercolumn features, and (2) we render these features in their native resolution using a convolutional tessellation technique. For improved pixel-level segmentation results, we introduce a boundary loss function; for improved temporal coherence in video data, we include temporal image information in our model. Through experiments, we demonstrate the improved accuracy of our model against baseline models for interactive segmentation tasks using high resolution video data. We also introduce a benchmark video segmentation dataset, the VFX Segmentation Dataset, which contains 27,046 high resolution video frames, including green screen and various composited scenes with corresponding, hand-crafted, pixel-level segmentations. Our work improves on the state of the art in segmentation fidelity with high resolution data and can be used across a broad range of application domains, including VFX pipelines and medical imaging disciplines.
Keywords: computer vision, object segmentation, interactive segmentation, model compression
Procedia PDF Downloads 120
716 A Step Magnitude Haptic Feedback Device and Platform for Better Way to Review Kinesthetic Vibrotactile 3D Design in Professional Training
Authors: Biki Sarmah, Priyanko Raj Mudiar
Abstract:
In the modern world of remotely interactive virtual reality-based learning and teaching, including professional skill-building training and acquisition practices as well as data acquisition and robotic systems, the application of field-programmable neurostimulator aids and first-hand interactive sensitisation techniques in 3D holographic audio-visual platforms has been a coveted dream of many scholars, professionals, scientists, and students. The integration of 'kinaesthetic vibrotactile haptic perception' along with actuated step magnitude contact profiloscopy in augmented reality-based learning platforms and professional training can be implemented by using carefully calculated and well-coordinated image telemetry, including remote data mining and control techniques. A real-time, computer-aided (PLC-SCADA) field calibration algorithm must be designed for this purpose. Most importantly, in order to actually realise and 'interact' with 3D holographic models displayed over a remote screen using remote laser image telemetry and control, all spatio-physical parameters, such as cardinal alignment, gyroscopic compensation, surface profile, and thermal composition, must be implemented using zero-order type 1 actuators (or transducers), because they provide zero hysteresis, zero backlash, and low dead time, as well as linear, absolutely controllable, intrinsically observable, and smooth performance with the least amount of error compensation, while ensuring the best ergonomic comfort possible for the users.
Keywords: haptic feedback, kinaesthetic vibrotactile 3D design, medical simulation training, piezo diaphragm based actuator
Procedia PDF Downloads 166
715 Synthesis and Two-Photon Polymerization of a Cytocompatible Tyramine-Functionalized Hyaluronic Acid Hydrogel That Mimics the Chemical, Mechanical, and Structural Characteristics of Spinal Cord Tissue
Authors: James Britton, Vijaya Krishna, Manus Biggs, Abhay Pandit
Abstract:
Regeneration of the spinal cord after injury remains a great challenge due to the complexity of this organ. Inflammation and gliosis at the injury site hinder the outgrowth of axons and hence prevent synaptic reconnection and reinnervation. Hyaluronic acid (HA) is the main component of the spinal cord extracellular matrix and plays a vital role in cell proliferation and axonal guidance. In this study, we have synthesized and characterized a photo-cross-linkable HA-tyramine (HA-tyr) hydrogel from a chemical, mechanical, electrical, biological, and structural perspective. From our experimentation, we found that HA-tyr can be synthesized with a controllable degree of tyramine substitution using click chemistry. The complex modulus (G*) of HA-tyr can be tuned to mimic the mechanical properties of the native spinal cord via optimization of the photo-initiator concentration and UV exposure. We examined the degree of tyramine-tyramine covalent bonding (polymerization) as a function of UV exposure and photo-initiator use via photo- and nuclear magnetic resonance spectroscopy. Both swelling and enzymatic degradation assays were conducted to examine the resilience of our 3D-printed hydrogel constructs in vitro. Using a femtosecond 780 nm laser, the two-photon polymerization of the HA-tyr hydrogel in the presence of a riboflavin photoinitiator was optimized. A laser power of 50 mW and a scan speed of 30,000 μm/s produced high-resolution spatial patterning within the hydrogel with sustained mechanical integrity. Using dorsal root ganglion explants, the cytocompatibility of photo-crosslinked HA-tyr was assessed. Using potentiometry, the electrical conductivity of photo-crosslinked HA-tyr was assessed and compared to that of native spinal cord tissue as a function of frequency. In conclusion, we have developed a biocompatible hydrogel that can be used for photolithographic 3D printing to fabricate tissue-engineered constructs for neural tissue regeneration applications.
Keywords: 3D printing, hyaluronic acid, photolithography, spinal cord injury
Procedia PDF Downloads 152
714 Development of a Systematic Approach to Assess the Applicability of Silver Coated Conductive Yarn
Authors: Y. T. Chui, W. M. Au, L. Li
Abstract:
Wearable electronic textiles have recently been emerging in today's market and have developed rapidly because, besides the need for clothing for leisure, fashion, and personal protection, there is also high demand for clothing capable of functioning in this electronic age, with features such as interactive interfaces, sensual being and tangible touch, social fabric, material witness, and so on. As wearable electronic textiles are required to be more comfortable, adorable, and easy to care for, conductive yarn has become one of the most important fundamental elements within wearable electronic textiles for interconnecting different functional units or creating a functional unit. The properties of conductive yarns from different companies can vary to a large extent. There are vitally important criteria for selecting conductive yarns, which may directly affect the optimization, prospects, applicability, and performance of the final garment. However, according to the literature review, few studies of commercially available conductive yarns focus on assessment methods for the scientific, systematic selection of material under different conditions. Therefore, this study gives direction for selecting high-quality conductive yarns. The approach is to test the stability and reliability of the conductive yarns according to the problems industrialists would experience with the yarns during each manufacturing process. The assessment system is classified into four stages: 1) yarn stage, 2) fabric stage, 3) apparel stage, and 4) end-user stage. Several tests with clear experimental procedures and parameters are suggested for each stage. This assessment method suggests that optimal conducting yarns should be stable in their properties and resistant to various forms of corrosion at every production stage and during use. It is expected that this demonstration of the assessment method can serve as a pilot study that assesses the stability of Ag/nylon yarns systematically under various conditions, i.e., during mass production with textile industry procedures and from the consumer perspective. It aims to assist industrialists in understanding the qualities and properties of conductive yarns and suggests a few important parameters that they should keep in mind for a higher level of suitability, precision, and controllability.
Keywords: applicability, assessment method, conductive yarn, wearable electronics
Procedia PDF Downloads 535
713 A Comparative Study of the Techno-Economic Performance of the Linear Fresnel Reflector Using Direct and Indirect Steam Generation: A Case Study under High Direct Normal Irradiance
Authors: Ahmed Aljudaya, Derek Ingham, Lin Ma, Kevin Hughes, Mohammed Pourkashanian
Abstract:
Researchers, power companies, and state politicians have given concentrated solar power (CSP) much attention due to its capacity to generate large amounts of electricity while overcoming the intermittent nature of solar resources. The Linear Fresnel Reflector (LFR) is a well-known CSP technology type, known for being inexpensive, having a low land use factor, and suffering from low optical efficiency. The LFR is considered a cost-effective alternative to the Parabolic Trough Collector (PTC) because of its simple design, which often outweighs its lower efficiency. The LFR has been found to be a promising option for directly producing steam for a thermal cycle in order to generate low-cost electricity, but it has also been shown to be promising for indirect steam generation. The purpose of this analysis is to compare the annual performance of Direct Steam Generation (DSG) and Indirect Steam Generation (ISG) LFR power plants using molten salt and other Heat Transfer Fluids (HTF) to investigate their technical and economic effects. A 50 MWe solar-only system is examined as a case study for both steam production methods under extreme weather conditions. In addition, a parametric analysis is carried out to determine the optimal solar field size that provides the lowest Levelized Cost of Electricity (LCOE) while achieving the highest technical performance. As a result of optimizing the solar field size, the solar multiple (SM) is found to be between 1.2 and 1.5 in order to achieve an LCOE as low as 9 cents/kWh for the direct steam generation LFR. In addition, the power plant is capable of producing around 141 GWh annually and a capacity factor of up to 36%, whereas the ISG produces less energy at a higher cost. The optimization results show that the DSG outperforms the ISG, producing around 3% more annual energy with a 2% lower LCOE and 28% less capital cost.
Keywords: concentrated solar power, levelized cost of electricity, linear Fresnel reflectors, steam generation
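For readers unfamiliar with the metric, a minimal LCOE sketch (simplified discounting; the inputs are hypothetical, not the paper's):

```python
def lcoe(capex, annual_opex, annual_energy_kwh, discount_rate, lifetime_years):
    # Simplified LCOE: discounted lifetime cost divided by discounted
    # lifetime energy (costs in $, energy in kWh).
    pv_factor = sum(1.0 / (1.0 + discount_rate) ** t
                    for t in range(1, lifetime_years + 1))
    total_cost = capex + annual_opex * pv_factor
    total_energy_kwh = annual_energy_kwh * pv_factor
    return total_cost / total_energy_kwh

# Hypothetical 50 MWe plant figures for illustration only.
print(lcoe(capex=150e6, annual_opex=3e6, annual_energy_kwh=141e6,
           discount_rate=0.07, lifetime_years=25))
```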
Procedia PDF Downloads 111
712 Monitoring of Cannabis Cultivation with High-Resolution Images
Authors: Levent Basayigit, Sinan Demir, Burhan Kara, Yusuf Ucar
Abstract:
Cannabis is mostly used for drug production. In some countries, an excessive amount of illegal cannabis is cultivated and sold. Most illegal cannabis cultivation occurs on lands far from settlements. On farmland, it is cultivated together with other crops: in this method, cannabis is surrounded by tall plants like corn and sunflower, or it is cultivated with tall crops as a mixed culture. The common method for determining illegal cultivation areas is to investigate information obtained from people. This method is not sufficient for detecting illegal cultivation in remote areas, so more effective methods are needed. Remote sensing is one of the most important technologies for monitoring plant growth on the land. The aim of this study is to monitor cannabis cultivation areas using satellite imagery, and its main purpose is to develop an applicable method for monitoring cannabis cultivation. For this purpose, cannabis was grown in plots either alone or surrounded by corn and sunflower. The morphological characteristics of cannabis were recorded twice per month during the vegetation period. A spectral signature library was created with a spectroradiometer. The parcels were monitored with high-resolution satellite imagery, and by processing the satellite imagery, the cultivation areas of cannabis were classified. To separate the cannabis plots from the other plants, the multiresolution segmentation algorithm was found to be the most successful for classification, and the WorldView Improved Vegetative Index (WV-VI) classification was the most accurate method for monitoring plant density. As a result, an object-based classification method and vegetation indices were sufficient for monitoring cannabis cultivation in multi-temporal WorldView images.
Keywords: Cannabis, drug, remote sensing, object-based classification
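Vegetation indices such as NDVI are simple band ratios; a minimal sketch follows (synthetic rasters and an illustrative threshold — the paper's WV-VI is a related WorldView-specific index):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    # Normalized Difference Vegetation Index from NIR and red reflectance.
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Hypothetical reflectance rasters (rows x cols), values in [0, 1].
nir = np.random.rand(128, 128)
red = np.random.rand(128, 128)
index = ndvi(nir, red)
dense_vegetation = index > 0.6   # threshold is illustrative, not the paper's
```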
Procedia PDF Downloads 272
711 Design and Development of On-Line, On-Site, In-Situ Induction Motor Performance Analyser
Authors: G. S. Ayyappan, Srinivas Kota, Jaffer R. C. Sheriff, C. Prakash Chandra Joshua
Abstract:
In the present scenario of energy crises, energy conservation in electrical machines is very important in industry. In order to conserve energy, one needs to monitor the performance of an induction motor on-site and in-situ. The instruments available for this purpose are very few and very expensive. This paper deals with the design and development of an on-line, on-site, in-situ induction motor performance analyser. The system measures only a few electrical input parameters, such as input voltage, line current, power factor, frequency, powers, and motor shaft speed. These measured data are combined with nameplate details to compute the operating efficiency of the induction motor. The system employs the method of computing motor losses with the help of equivalent circuit parameters. The equivalent circuit parameters of the motor concerned are estimated using the developed algorithm at any load condition and stored in the system memory. The developed instrument is reliable, accurate, compact, rugged, and cost-effective. This portable instrument can be used as a handy tool to study the performance of both slip ring and cage induction motors. During the analysis, the data can be stored on an SD memory card, and one can perform various analyses, such as load vs. efficiency and torque vs. speed characteristics. With the help of the developed instrument, one can operate the motor around its Best Operating Point (BOP). Continuous monitoring of motor efficiency could lead to Life Cycle Assessment (LCA) of motors, and LCA helps in taking decisions on motor replacement, retention, or refurbishment.
Keywords: energy conservation, equivalent circuit parameters, induction motor efficiency, life cycle assessment, motor performance analysis
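As a hedged sketch of the loss-based method described (a standard per-phase equivalent circuit; the parameter values are hypothetical, and the instrument's actual algorithm may differ):

```python
def motor_efficiency(V_phase, R1, X1, R2, X2, Xm, slip):
    # Standard per-phase induction motor equivalent circuit:
    # stator (R1 + jX1), magnetizing branch (jXm), rotor (R2/s + jX2).
    Z_rotor = complex(R2 / slip, X2)
    Z_mag = complex(0.0, Xm)
    Z_total = complex(R1, X1) + (Z_rotor * Z_mag) / (Z_rotor + Z_mag)
    I1 = V_phase / Z_total                        # stator current per phase
    P_in = 3 * (V_phase * I1.conjugate()).real    # 3-phase input power
    I2 = I1 * Z_mag / (Z_mag + Z_rotor)           # rotor branch current
    P_airgap = 3 * abs(I2) ** 2 * R2 / slip
    P_mech = P_airgap * (1 - slip)                # friction/windage neglected here
    return P_mech / P_in

# Hypothetical parameters (ohms, volts) for illustration only.
print(motor_efficiency(V_phase=230, R1=0.5, X1=1.2, R2=0.4, X2=1.2, Xm=40, slip=0.03))
```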
Procedia PDF Downloads 384
710 Modeling and Temperature Control of Water-cooled PEMFC System Using Intelligent Algorithm
Authors: Chen Jun-Hong, He Pu, Tao Wen-Quan
Abstract:
Proton exchange membrane fuel cells (PEMFCs) are among the most promising future energy sources owing to their low operating temperature, high energy efficiency, high power density, and environmental friendliness. In this paper, a comprehensive PEMFC system control-oriented model is developed in the Matlab/Simulink environment, which includes the hydrogen supply subsystem, air supply subsystem, and thermal management subsystem. In addition, the Improved Artificial Bee Colony (IABC) algorithm is used for parameter identification of the PEMFC semi-empirical equations, making the maximum relative error between the simulation data and the experimental data less than 0.4%. Operating temperature is essential for a PEMFC; both high and low temperatures are disadvantageous. In the thermal management subsystem, the water pump and the fan are both controlled with PID controllers to maintain the appropriate operating temperature of the PEMFC, as required for safe and efficient operation. To further improve the control effect, fuzzy control is introduced to optimize the PID controller of the pump, and a Radial Basis Function (RBF) neural network is introduced to optimize the PID controller of the fan. The results demonstrate that Fuzzy-PID and RBF-PID achieve a better control effect, with a 22.66% decrease in the Integral Absolute Error criterion (IAE) of T_st (the temperature of the PEMFC) and a 77.56% decrease in the IAE of T_in (the temperature of the inlet cooling water) compared with traditional PID. Finally, a novel thermal management structure is proposed, which uses the cooling air passing through the main radiator to continue cooling the secondary radiator. In this thermal management structure, the parasitic power dissipation can be reduced by 69.94%, and the control effect can be improved, with a 52.88% decrease in the IAE of T_in under the same controller.
Keywords: PEMFC system, parameter identification, temperature control, Fuzzy-PID, RBF-PID, parasitic power
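A minimal sketch of a discrete PID loop on a toy first-order thermal plant, reporting the IAE criterion used above to compare controllers (the plant model and gains are hypothetical, not the paper's):

```python
def simulate_pid(kp, ki, kd, setpoint=333.0, dt=0.1, steps=2000):
    # Toy first-order thermal plant under discrete PID control; returns the
    # Integral Absolute Error (IAE) of the temperature tracking.
    T, integ, prev_err, iae = 300.0, 0.0, setpoint - 300.0, 0.0
    for _ in range(steps):
        err = setpoint - T
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv      # actuator command
        u = max(-50.0, min(50.0, u))                # actuator saturation
        T += dt * (-0.05 * (T - 300.0) + 0.1 * u)   # hypothetical plant dynamics
        iae += abs(err) * dt
        prev_err = err
    return iae

print(simulate_pid(kp=8.0, ki=0.5, kd=1.0))
```

A fuzzy- or RBF-tuned variant would adjust kp, ki, and kd online instead of keeping them fixed.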
Procedia PDF Downloads 85
709 Machine Learning Techniques in Bank Credit Analysis
Authors: Fernanda M. Assef, Maria Teresinha A. Steiner
Abstract:
The aim of this paper is to compare and discuss classifier algorithm options for credit risk assessment by applying different machine learning techniques. Using records from a Brazilian financial institution, this study draws on a database of 5,432 companies that are clients of the bank, of which 2,600 clients are classified as non-defaulters, 1,551 as defaulters, and 1,281 as temporarily defaulters, meaning that these clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes was considered for a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR), and Support Vector Machines (SVM). Initially, the data were coded in thermometer code (for numerical attributes) or dummy coding (for nominal attributes). The methods were then evaluated for each parameter setting, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives, and true negatives. This comparison showed that the best method, in terms of accuracy, was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters, and 75.37% for the temporarily defaulter classification). However, the best accuracy does not always represent the best technique. For instance, in the classification of temporarily defaulters, this technique was surpassed, in terms of false positives, by SVM, which had the lowest rate (0.07%) of false positive classifications. All these intrinsic details are discussed in light of the results found, and an overview of the findings is given in the conclusion of this study.
Keywords: artificial neural networks (ANNs), classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines
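A hedged sketch of the one-against-all setup (synthetic stand-in data, logistic regression as the base learner; the paper's neural networks and SVMs plug into the same scheme):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Hypothetical stand-in for the bank data: 15 attributes, 3 classes
# (0 = non-defaulter, 1 = defaulter, 2 = temporarily defaulter).
rng = np.random.default_rng(42)
X = rng.normal(size=(5432, 15))
y = rng.integers(0, 3, size=5432)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)
print(confusion_matrix(y_te, clf.predict(X_te)))  # true/false positives per class
```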
Procedia PDF Downloads 103
708 Rating Agreement: Machine Learning for Environmental, Social, and Governance Disclosure
Authors: Nico Rosamilia
Abstract:
The study evaluates the importance of non-financial disclosure practices for regulators, investors, businesses, and markets. It aims to create a sector-specific set of indicators for environmental, social, and governance (ESG) performance as an alternative to the ratings of the agencies. The existing literature extensively studies the implementation of ESG rating systems; this study, in contrast, has a twofold outcome. Firstly, it should generalize incentive systems and governance policies for ESG and sustainable principles, and thereby contribute to the EU Sustainable Finance Disclosure Regulation. Secondly, it concerns the market and investors by highlighting successful sustainable investing. Indeed, the study contemplates the effect of ESG adoption practices on corporate value. The research explores the asset pricing angle in order to shed light on the fragmented debate on the finance of ESG, as investors may be misguided about the positive or negative effects of ESG on performance. The paper proposes a different method to evaluate ESG performance. By comparing the results of a traditional econometric approach (Lasso) with a machine learning algorithm (Random Forest), the study establishes a set of indicators for ESG performance. The research thereby also contributes empirically to the theoretical strands of literature regarding model selection and variable importance in a finance framework. The algorithms yield sector-specific indicators. This set of indicators defines an alternative to the compounded scores of ESG rating agencies and avoids the possible offsetting effect of such scores. With this approach, the paper defines a sector-specific set of indicators to standardize ESG disclosure. Additionally, it tries to shed light on the absence of a clear understanding of the direction of the ESG effect on corporate value (the problem of endogeneity).
Keywords: ESG ratings, non-financial information, value of firms, sustainable finance
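As a hedged sketch of the Lasso-versus-Random-Forest comparison for selecting indicators (synthetic data; the ESG panel itself is not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestRegressor

# Hypothetical panel: ESG disclosure indicators (X) vs. a firm value proxy (y).
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))
y = 0.8 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500)

lasso = LassoCV(cv=5).fit(X, y)
rf = RandomForestRegressor(n_estimators=300, random_state=1).fit(X, y)

kept_by_lasso = np.flatnonzero(lasso.coef_)                 # indicators Lasso retains
top_by_rf = np.argsort(rf.feature_importances_)[::-1][:5]   # RF importance ranking
print(kept_by_lasso, top_by_rf)
```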
Procedia PDF Downloads 83
707 Caregiver Training Results in Accurate Reporting of Stool Frequency
Authors: Matthew Heidman, Susan Dallabrida, Analice Costa
Abstract:
Background: The accuracy of caregiver-reported outcomes is essential to the success of infant growth and tolerability studies. Crying/fussiness, stool consistency, and other gastrointestinal characteristics are important parameters regarding tolerability, and inter-caregiver reporting can involve a significant amount of subjectivity and vary greatly within a study, compromising the data. This study sought to elucidate how caregiver-reported questions related to stool frequency are answered before and after a short training, and how training impacts caregivers' understanding and how they would answer the question. Methods: A digital survey was issued for 90 days in the US (n=121) and 30 days in Mexico (n=88), targeting respondents with children ≤4 years of age. Respondents were asked a question in two formats, first without a line of training text and second with a line of training text. The question set was as follows: “If your baby had stool in his/her diaper and you changed the diaper and 10 min later there was more stool in the diaper, how many stools would you report this as?”, followed by the same question beginning with “If you were given the instruction that IF there are at least 5 minutes in between stools, then it counts as two (2) stools…”. Four response items were provided for both questions: 1) 2 stools, 2) 1 stool, 3) it depends on how much stool was in the first versus the second diaper, 4) there is not enough information to be able to answer the question. Response frequencies between questions were compared. Results: Responses to the question without training showed some variability: in the US, 69% selected “2 stools”, 11% selected “1 stool”, 14% selected “it depends on how much stool was in the first versus the second diaper”, and 7% selected “there is not enough information to be able to answer the question”; in Mexico, respondents selected these options at rates of 9%, 78%, 13%, and 0%, respectively. Responses to the question after training showed more consolidation: in the US, 85% of respondents selected “2 stools”, representing an increase in those selecting the correct answer, and in Mexico, 84% of respondents selected “1 episode”, representing an increase in those selecting the correct response. Conclusions: Caregiver-reported outcomes are critical for infant growth and tolerability studies; however, they can be highly subjective and show high variability of responses without guidance. Training is critical to standardize all caregivers' perspectives on how to answer questions accurately in order to provide an accurate dataset.
Keywords: infant nutrition, clinical trial optimization, stool reporting, decentralized clinical trials
Procedia PDF Downloads 96
706 Preparation of Indium Tin Oxide Nanoparticle-Modified 3-Aminopropyltrimethoxysilane-Functionalized Indium Tin Oxide Electrode for Electrochemical Sulfide Detection
Authors: Md. Abdul Aziz
Abstract:
The sulfide ion is water soluble, highly corrosive, toxic, and harmful to human beings. As a result, knowing the exact concentration of sulfide in water is very important. However, the existing detection and quantification methods have several shortcomings, such as high cost, low sensitivity, and massive instrumentation, so the development of a novel sulfide sensor is relevant. Electrochemical methods have gained enormous popularity due to vast improvements in technique and instrumentation, portability, low cost, rapid analysis, and simplicity of design. Successful field application of electrochemical devices still requires vast improvement, which depends on the physical, chemical, and electrochemical aspects of the working electrode. Working electrodes made of bulk gold (Au) and platinum (Pt) are quite common, being very robust and endowed with good electrocatalytic properties. High cost and electrode poisoning, however, have so far hindered their practical application in many industries. To overcome these obstacles, we developed a sulfide sensor based on an indium tin oxide nanoparticle (ITONP)-modified ITO electrode. To prepare the ITONP-modified ITO, various methods were tested. Drop-drying of ITONPs (aq.) on aminopropyltrimethoxysilane-functionalized ITO (APTMS/ITO) was found to be the best method on the basis of voltammetric analysis of the sulfide ion. ITONP-modified APTMS/ITO (ITONP/APTMS/ITO) yielded much better electrocatalytic properties toward sulfide electro-oxidation than did bare or APTMS/ITO electrodes. The ITONPs and ITONP-modified ITO were also characterized using transmission electron microscopy and field emission scanning electron microscopy, respectively. Optimization of the type of inert electrolyte and the pH yielded an ITONP/APTMS/ITO detector whose amperometrically and chronocoulometrically determined limits of detection for sulfide in aqueous solution were 3.0 µM and 0.90 µM, respectively. ITONP/APTMS/ITO electrodes, which displayed reproducible performance, were highly stable and not susceptible to interference by common contaminants. Thus, the developed electrode can be considered a promising tool for sensing sulfide.
Keywords: amperometry, chronocoulometry, electrocatalytic properties, ITO-nanoparticle-modified ITO, sulfide sensor
Procedia PDF Downloads 131
705 Using Time Series NDVI to Model Land Cover Change: A Case Study in the Berg River Catchment Area, Western Cape, South Africa
Authors: Adesuyi Ayodeji Steve, Zahn Munch
Abstract:
This study investigates the use of MODIS NDVI to identify agricultural land cover change areas on an annual time step (2007-2012) and to characterize the trend in the study area. An ISODATA classification was performed on the MODIS imagery to select only the agricultural class, producing three class groups, namely: agriculture, agriculture/semi-natural, and semi-natural. NDVI signatures were created for the time series to identify areas dominated by cereals and vineyards, with the aid of ancillary, pictometry, and field sample data. The NDVI signature curve and training samples aided in creating a decision tree model in WEKA 3.6.9. From the training samples, two classification models were built in WEKA using the decision tree classifier (J48) algorithm: Model 1 included the ISODATA classification and Model 2 did not, with accuracies of 90.7% and 88.3%, respectively. The two models were used to classify the whole study area, producing two land cover maps with classification accuracies of 77% and 80% for Models 1 and 2, respectively. Model 2 was used to create change detection maps for all the other years. Subtle changes and areas of consistency (unchanged areas) were observed in the agricultural classes and crop practices over the years, as predicted by the land cover classification. 41% of the catchment comprises cereals, with 35% possibly following a crop rotation system. Vineyards largely remained constant over the years, with some conversion to vineyard (1%) from other land cover classes. Some of the changes might be the result of misclassification and the crop rotation system.
Keywords: change detection, land cover, MODIS, NDVI
Procedia PDF Downloads 402
704 Ligandless Extraction and Determination of Trace Amounts of Lead in Pomegranate, Zucchini and Lettuce Samples after Dispersive Liquid-Liquid Microextraction with Ultrasonic Bath and Optimization of Extraction Condition with RSM Design
Authors: Fariba Tadayon, Elmira Hassanlou, Hasan Bagheri, Mostafa Jafarian
Abstract:
Heavy metals are released into water, plants, soil, and food by natural and human activities. Lead plays a toxic role in the human body and may cause serious problems even at low concentrations, since it can have several adverse effects on humans. Therefore, the determination of lead in different samples is an important procedure in studies of environmental pollution. In this work, an ultrasound-assisted, ionic liquid-based dispersive liquid-liquid microextraction (UA-IL-DLLME) procedure for the determination of lead in zucchini, pomegranate, and lettuce has been established and developed using a flame atomic absorption spectrometer (FAAS). For the UA-IL-DLLME procedure, 10 mL of the sample solution containing Pb2+ was adjusted to pH=5 in a glass test tube with a conical bottom; then, 120 μL of 1-hexyl-3-methylimidazolium hexafluorophosphate (CMIM)(PF6) was rapidly injected into the sample solution with a microsyringe. After that, the resulting cloudy mixture was treated ultrasonically for 5 min, the separation of the two phases was obtained by centrifugation for 5 min at 3000 rpm, the IL phase was diluted with 1 cc of ethanol, and the analytes were determined by FAAS. The effects of different experimental parameters in the extraction step, including the ionic liquid volume, sonication time, and pH, were studied and optimized simultaneously by Response Surface Methodology (RSM) employing a central composite design (CCD). The optimal conditions were determined to be an ionic liquid volume of 120 μL, a sonication time of 5 min, and pH=5. The linear range of the calibration curve for the determination of lead by FAAS was 0.1-4 ppm with R2=0.992. Under optimized conditions, the limit of detection (LOD) for lead was 0.062 μg.mL-1, the enrichment factor (EF) was 93, and the relative standard deviation (RSD) for lead was calculated as 2.29%. The levels of lead for pomegranate, zucchini, and lettuce were calculated as 2.88 μg.g-1, 1.54 μg.g-1, and 2.18 μg.g-1, respectively. Therefore, this method has been successfully applied to the analysis of the lead content in different food samples by FAAS.
Keywords: dispersive liquid-liquid microextraction, central composite design, food samples, flame atomic absorption spectrometry
Procedia PDF Downloads 283
703 Expert System: Debugging Using MD5 Process Firewall
Authors: C. U. Om Kumar, S. Kishore, A. Geetha
Abstract:
An operating system (OS) is software that manages computer hardware and software resources by providing services to computer programs. One of the important user expectations of an operating system is that it defend information from unauthorized access, disclosure, modification, inspection, recording, or destruction. An operating system is always vulnerable to attacks by malware such as computer viruses, worms, Trojan horses, backdoors, ransomware, spyware, adware, scareware, and more. Anti-virus software was created to ensure security against prominent computer viruses by applying a dictionary-based approach, but anti-virus programs are not guaranteed to provide security against the new viruses proliferating every day. To address this issue and to secure the computer system, our proposed expert system concentrates on authorizing processes for execution, as designated wanted or unwanted by the administrator. The expert system maintains a database consisting of the hash codes of the processes that are to be allowed. These hash codes are generated using the MD5 message-digest algorithm, a widely used cryptographic hash function. The administrator approves the wanted processes that are to be executed on clients in a Local Area Network by implementing a client-server architecture, and only the processes that match the processes in the database table will be executed, by which many malicious processes are restricted from infecting the operating system. An added advantage of this proposed expert system is that it limits CPU usage and minimizes resource utilization. Thus, data and information security is ensured by our system, along with increased performance of the operating system.
Keywords: virus, worm, Trojan horse, backdoors, ransomware, spyware, adware, scareware, sticky software, process table, MD5, CPU usage and resource utilization
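A minimal sketch of the hashing step (standard hashlib usage; the allowlist entry shown is a placeholder, not a real approved process):

```python
import hashlib

def md5_of_file(path, chunk_size=8192):
    # MD5 digest of an executable, streamed to avoid loading it whole.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical allowlist of administrator-approved process hashes.
APPROVED = {"5d41402abc4b2a76b9719d911017c592"}

def is_execution_allowed(path):
    return md5_of_file(path) in APPROVED
```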
Procedia PDF Downloads 427
702 Triangular Hesitant Fuzzy TOPSIS Approach in Investment Projects Management
Authors: Irina Khutsishvili
Abstract:
The presented study develops a decision support methodology for a multi-criteria group decision-making problem. The proposed methodology is based on the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) approach in a hesitant fuzzy environment. The main idea of the decision-making problem is the selection of one best alternative, or the ranking of several alternatives, among a set of feasible alternatives. Typically, the process of decision-making is based on an evaluation of certain criteria. In many MCDM problems (such as medical diagnosis, project management, business and financial management, etc.), the process of decision-making involves experts' assessments. These assessments are frequently expressed as fuzzy numbers, confidence intervals, intuitionistic fuzzy values, hesitant fuzzy elements, and so on. However, a more realistic approach is to use linguistic expert assessments (linguistic variables). In the proposed methodology, both the values and the weights of the criteria take the form of linguistic variables given by all decision makers. These assessments are then expressed as triangular fuzzy numbers. Consequently, the proposed approach is based on a triangular hesitant fuzzy TOPSIS decision-making model. Following the TOPSIS algorithm, first the fuzzy positive ideal solution (FPIS) and the fuzzy negative ideal solution (FNIS) are defined. Then the ranking of the alternatives is performed according to the proximity of their distances to both the FPIS and the FNIS. Based on the proposed approach, a software package has been developed, which was used to rank investment projects in a real investment decision-making problem. The application and testing of the software were carried out based on data provided by the 'Bank of Georgia'.
Keywords: fuzzy TOPSIS approach, investment project, linguistic variable, multi-criteria decision making, triangular hesitant fuzzy set
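A hedged sketch of the crisp TOPSIS skeleton only (the paper's variant replaces the crisp entries with triangular hesitant fuzzy values; the project matrix below is hypothetical):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    # matrix: alternatives x criteria; benefit[j] True means larger is better.
    norm = matrix / np.linalg.norm(matrix, axis=0)    # vector normalization
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))   # PIS (FPIS analogue)
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))    # NIS (FNIS analogue)
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)
    return np.argsort(closeness)[::-1]                # best alternative first

projects = np.array([[7.0, 0.30, 5.0],    # e.g., return, risk, strategic fit
                     [8.0, 0.45, 6.5],
                     [6.0, 0.25, 7.0]])
print(topsis(projects, weights=np.array([0.5, 0.2, 0.3]),
             benefit=np.array([True, False, True])))
```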
Procedia PDF Downloads 429
701 Electrical Machine Winding Temperature Estimation Using Stateful Long Short-Term Memory Networks (LSTM) and Truncated Backpropagation Through Time (TBPTT)
Authors: Yujiang Wu
Abstract:
As electrical machine (e-machine) power density requirements become more stringent in vehicle electrification, mounting a temperature sensor for e-machine stator windings becomes increasingly difficult. This can lead to higher manufacturing costs, complicated harnesses, and reduced reliability. In this paper, we propose a deep-learning method for predicting electric machine winding temperature, which can either replace the sensor entirely or serve as a backup to the existing sensor. We compare the performance of our method, stateful long short-term memory networks (LSTM) with truncated backpropagation through time (TBPTT), with that of linear regression, as well as stateless LSTM with and without residual connections. Our results demonstrate the strength of combining stateful LSTM and TBPTT in tackling nonlinear time series prediction problems with long sequence lengths. Additionally, in industrial applications, prediction accuracy in the high-temperature region is more important, because winding temperature sensing is typically used for derating machine power when the temperature is high. To evaluate the performance of our algorithm, we developed a temperature-stratified MSE. We also propose a simple but effective data preprocessing trick to improve the high-temperature region prediction accuracy. Our experimental results demonstrate the effectiveness of the proposed method in accurately predicting winding temperature, particularly in high-temperature regions, while also reducing manufacturing costs and improving reliability.
Keywords: deep learning, electrical machine, functional safety, long short-term memory networks (LSTM), thermal management, time series prediction
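The temperature-stratified MSE mentioned above can be sketched directly (bin edges and data are hypothetical):

```python
import numpy as np

def stratified_mse(y_true, y_pred, bins=(0.0, 60.0, 100.0, 140.0, np.inf)):
    # MSE per temperature band, so accuracy in the hot region
    # (the one used for power derating) is visible separately.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    out = {}
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (y_true >= lo) & (y_true < hi)
        if mask.any():
            out[(lo, hi)] = float(np.mean((y_true[mask] - y_pred[mask]) ** 2))
    return out

# Hypothetical winding temperatures in deg C and model predictions.
t = np.random.uniform(20, 160, size=1000)
p = t + np.random.normal(scale=3.0, size=1000)
print(stratified_mse(t, p))
```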
Procedia PDF Downloads 100
700 Regional Flood Frequency Analysis in Narmada Basin: A Case Study
Authors: Ankit Shah, R. K. Shrivastava
Abstract:
Floods and droughts are two main features of hydrology that affect human life. Floods are natural disasters that cause millions of rupees' worth of damage each year in India and around the world, destroying both life and property. An accurate estimate of the flood damage potential is a key element of an effective, nationwide flood damage abatement program. Also, the increase in the demand for water due to population growth and industrial and agricultural development has shown that, although water is a renewable resource, it cannot be taken for granted. We have to optimize the use of water according to circumstances and conditions and harness it, which can be done through the construction of hydraulic structures. For the safe and proper functioning of hydraulic structures, we need to predict the flood magnitude and its impact. Hydraulic structures are mainly constructed at ungauged sites. There are two methods by which we can estimate floods, viz. the generation of unit hydrographs and flood frequency analysis; in this study, regional flood frequency analysis has been employed. There are many methods for regional flood frequency analysis, viz. the Index Flood Method, National Environmental Research Council (NERC) methods, the Multiple Regression Method, etc. However, none of these methods can be considered universal for every situation and location. The Narmada basin is located in central India and is drained by many tributaries, most of which are ungauged, so it is very difficult to estimate floods on these tributaries and in the main river. Artificial Neural Networks (ANNs) and the Multiple Regression Method are used for the determination of regional flood frequency. The annual peak flood data of 20 gauging sites in the Narmada basin are used in the present study to determine the regional flood relationships. The homogeneity of the considered sites is determined by using the Index Flood Method. The flood relationships obtained by the two methods are compared with each other, and it is found that ANN is more reliable than the Multiple Regression Method for the present study area.
Keywords: artificial neural network, index flood method, multi layer perceptrons, multiple regression, Narmada basin, regional flood frequency
Procedia PDF Downloads 419
699 Digital Transformation: Actionable Insights to Optimize the Building Performance
Authors: Jovian Cheung, Thomas Kwok, Victor Wong
Abstract:
Buildings are entwined with smart city developments. Building performance relies heavily on electrical and mechanical (E&M) systems and services, which account for about 40 percent of global energy use. By incorporating technological advances and energy- and operation-efficiency initiatives into buildings, people can raise building performance and enhance the sustainability of the built environment in their daily lives. Digital transformation in buildings is the profound development of the city to leverage the changes and opportunities of digital technologies. To optimize building performance, an intelligent power quality and energy management system is developed for transforming data into actions. The system is formed by interfacing and integrating legacy metering and Internet of Things technologies in the building and applying big data techniques. It provides the operation and energy profile of a building and actionable insights, which enable building performance to be optimized by raising people's awareness of E&M services and energy consumption, predicting the operation of E&M systems, benchmarking the building performance, and prioritizing assets and energy management opportunities. The intelligent power quality and energy management system comprises four elements, namely the Integrated Building Performance Map, the Building Performance Dashboard, Power Quality Analysis, and Energy Performance Analysis. It provides a predictive operation sequence of E&M systems in response to the built environment and building activities. The system collects the live operating conditions of E&M systems over time to identify abnormal system performance, predict failure trends, and alert users before anticipated system failure. The actionable insights collected can also be used for system design enhancement in the future. This paper illustrates how the intelligent power quality and energy management system provides the operation and energy profile to optimize building performance, and actionable insights to revitalize an existing building into a smart building. The system is driving building performance optimization and supporting the development of Hong Kong into a smart city to be admired.
Keywords: intelligent buildings, internet of things technologies, big data analytics, predictive operation and maintenance, building performance
Procedia PDF Downloads 158698 The Inherent Flaw in the NBA Playoff Structure
Authors: Larry Turkish
Abstract:
Introduction: The NBA is an example of mediocrity, as the following paper will make evident. The study examines and evaluates the characteristics of NBA champions. As the numbers of divisions and playoff teams increase, so does the probability that the champion comes from the mediocre category. Since its inception in 1947, the league has been mediocre, and it remains so to this day. Why does a professional league admit a team with a winning percentage below 50% into the playoffs? As long as the finances flow into the league, owners will not change the current algorithm. The objective of this paper is to determine whether the regular season has meaning in finding an NBA champion. Statistical Analysis: The data originate from the NBA website. The following variables are part of the statistical analysis: Rank, the rank of a team relative to other teams in the league based on the regular-season win-loss record; Winning Percentage of a team based on the regular season; Divisions, the number of divisions within the league; and Playoff Teams, the number of playoff teams in a particular season. The following statistical methods are applied to the data: Pearson product-moment correlation, analysis of variance, and factor and regression analysis. Conclusion: The results indicate that the divisional structure and the number of playoff teams have a negative effect on the winning percentage of playoff teams. The structure also prevents teams with higher winning percentages from reaching the playoffs. Recommendations: 1. Only teams whose regular-season winning percentage is more than 1 standard deviation above the mean gain access to the playoffs (eliminates mediocre teams). 2. Eliminate divisions (removes weaker teams' access to the playoffs). 3. Eliminate conferences (removes weaker teams' access to the playoffs). 4. Adopt a balanced regular-season schedule, which reduces the number of regular-season games, creates equilibrium, and reduces bias, thereby also reducing the need for load management.Keywords: alignment, mediocrity, regression, z-score
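Recommendation 1 amounts to a simple z-score filter on regular-season winning percentages. A minimal sketch is shown below; the team names and records are made up for illustration.

```python
# Admit only teams whose winning percentage is more than one standard
# deviation above the league mean (z-score > 1).
import statistics

records = {
    "Team A": 0.720, "Team B": 0.610, "Team C": 0.585, "Team D": 0.550,
    "Team E": 0.500, "Team F": 0.470, "Team G": 0.430, "Team H": 0.390,
}

mean = statistics.mean(records.values())
sd = statistics.stdev(records.values())

qualified = {team: pct for team, pct in records.items()
             if (pct - mean) / sd > 1.0}  # z-score cutoff at +1
print(qualified)  # only clearly above-average teams remain
```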
Procedia PDF Downloads 130697 Quality Analysis of Vegetables Through Image Processing
Authors: Abdul Khalique Baloch, Ali Okatan
Abstract:
The quality analysis of foods and vegetables from images is a hot topic nowadays, with researchers improving on previous findings through different techniques and methods. In this research, we review the literature, identify the gaps in it, propose an improved approach, design the algorithm, and develop software to measure quality from images, where the image-based assessment shows better accuracy; we compare our results with previous work. The application uses an open-source dataset and the Python language with the TensorFlow Lite framework. This research focuses on sorting foods and vegetables from images: after processing the images, the application sorts and grades the produce, producing fewer errors than manual, human-based sorting and grading. Digital picture datasets were created, and the collected images were arranged by class. The classification accuracy of the system was about 94%. As fruits and vegetables play a main role in day-to-day life, their quality is essential in evaluating agricultural produce, and customers always want to buy good-quality fruits and vegetables. This document concerns quality detection of fruits and vegetables using images. Many customers suffer from unhealthy fruits and vegetables supplied to them, because no proper quality measurement procedure is followed by hotel managements. We have therefore developed software that measures the quality of fruits and vegetables from images and indicates whether the produce is fresh or rotten. Some of the methods reviewed in this work include digital imaging, ResNet, VGG16, CNNs, and transfer learning for grading feature extraction. The application uses an open-source dataset of images, is written in Python, and is designed as a system framework.Keywords: deep learning, computer vision, image processing, rotten fruit detection, fruits quality criteria, vegetables quality criteria
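A hedged sketch of the transfer-learning approach the abstract names (a frozen VGG16 base with a small binary head for fresh versus rotten produce) could look like the following; the dataset path, image size, and head architecture are illustrative assumptions, not the study's actual configuration.

```python
# Transfer learning for fresh-vs-rotten classification with a frozen VGG16.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the pretrained convolutional features frozen

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # fresh (0) vs. rotten (1)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Hypothetical directory of labeled produce images:
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "dataset/", image_size=(224, 224), batch_size=32, label_mode="binary")
# model.fit(train_ds, epochs=10)
```

The trained model could then be exported with tf.lite.TFLiteConverter.from_keras_model for deployment under the TensorFlow Lite framework mentioned above.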
Procedia PDF Downloads 70696 Development of a Regression Based Model to Predict Subjective Perception of Squeak and Rattle Noise
Authors: Ramkumar R., Gaurav Shinde, Pratik Shroff, Sachin Kumar Jain, Nagesh Walke
Abstract:
Advancements in electric vehicles have significantly reduced powertrain noise and the number of moving components in vehicles. As a result, in-cab noises have become more noticeable to passengers inside the car. To ensure a comfortable ride for drivers and other passengers, it has become crucial to eliminate undesirable component noises during the development phase. Standard practices are followed to identify the severity of noises based on subjective ratings, but identifying the severity of each development sample and making changes to reduce it can be a tedious process. Additionally, the severity rating can vary from jury to jury, making it challenging to arrive at a definitive conclusion. To address this, an automotive component was identified for evaluating a squeak and rattle noise issue. Physical tests were carried out for random and sine excitation profiles. The aim was to assess the noise subjectively using the jury rating method and to evaluate it objectively by measuring the noise. A suitable jury evaluation method was selected for the activity, and the recorded sounds were replayed for jury rating. Objective sound quality metrics, viz. loudness, sharpness, roughness, fluctuation strength, and overall sound pressure level (SPL), were measured. From these, correlation coefficients were established to identify the sound quality metrics that contribute most to the identified noise issue. Regression analysis was then performed to establish the correlation between the subjective and objective data. A mathematical model was prepared using artificial intelligence and machine learning algorithms. The developed model was able to predict the subjective rating with good accuracy.Keywords: BSR, noise, correlation, regression
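The correlation-then-regression workflow described above can be sketched as follows; the metric values and jury ratings below are invented placeholders, not the study's data, and the model shown is plain linear regression rather than the authors' AI/ML model.

```python
# Correlate sound quality metrics with jury ratings, then regress the
# rating on the most correlated metrics.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({
    "loudness":    [4.1, 5.3, 6.8, 3.9, 7.2, 5.0],
    "sharpness":   [1.2, 1.5, 1.9, 1.1, 2.1, 1.4],
    "roughness":   [0.8, 1.1, 1.6, 0.7, 1.8, 1.0],
    "fluctuation": [0.05, 0.07, 0.10, 0.04, 0.12, 0.06],
    "spl_dba":     [52, 58, 66, 50, 69, 56],
    "jury_rating": [8.5, 7.0, 4.5, 9.0, 3.5, 7.5],  # 10 = noise inaudible
})

# Step 1: correlation of each metric with the subjective rating
print(df.corr()["jury_rating"].drop("jury_rating"))

# Step 2: regress the jury rating on the most correlated metrics
X = df[["loudness", "spl_dba"]]
model = LinearRegression().fit(X, df["jury_rating"])
print("coefficients:", model.coef_,
      "R^2:", model.score(X, df["jury_rating"]))
```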
Procedia PDF Downloads 79695 Direct Approach in Modeling Particle Breakage Using Discrete Element Method
Authors: Ebrahim Ghasemi Ardi, Ai Bing Yu, Run Yu Yang
Abstract:
The current study aims to develop an in-house discrete element method (DEM) code and link it with a direct breakage event, making it possible to determine particle breakage, and the resulting fragment size distribution, simultaneously with the DEM simulation. The particle breakage is applied directly inside the DEM computation algorithm: if any breakage happens, the original particle is replaced with its daughter fragments. In this way, the calculation proceeds on an updated particle list, which closely resembles the real grinding environment. To validate the developed model, a grinding ball impacting an unconfined particle bed was simulated. Since considering an entire ball mill would be too computationally demanding, this method provided a simplified environment in which to test the model. Accordingly, a representative volume of the ball mill was simulated inside a box, which could emulate media (ball)-powder bed impacts in a ball mill and during particle bed impact tests. Mono, binary, and ternary particle beds were simulated to determine the effects of granular composition on breakage kinetics. The results obtained from the DEM simulations showed a reduction in the specific breakage rate of coarse particles in binary mixtures. The origin of this phenomenon, commonly known as cushioning or decelerated breakage in dry milling processes, was explained by the DEM simulations: fine particles in a particle bed increase mechanical energy loss and reduce and distribute interparticle forces, thereby inhibiting the breakage of the coarse component. On the other hand, the specific breakage rate of fine particles increased due to contacts associated with coarse particles. This phenomenon, known as acceleration, was shown to be less significant, but it should be considered in future attempts to accurately quantify non-linear breakage kinetics in the modeling of dry milling processes.Keywords: particle bed, breakage models, breakage kinetic, discrete element method
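The replace-with-daughters bookkeeping described above can be illustrated with a conceptual sketch; the fragment model, the breakage criterion, and all numbers below are assumptions for demonstration, not the authors' in-house DEM code.

```python
# Conceptual sketch of in-loop breakage: when an impact exceeds a
# particle's strength, the particle is removed and replaced by daughter
# fragments, and the simulation continues on the updated particle list.
import random
from dataclasses import dataclass

@dataclass
class Particle:
    size: float      # particle diameter, arbitrary units
    strength: float  # impact energy needed to break it

def daughters(parent: Particle, n: int = 3) -> list[Particle]:
    """Hypothetical fragment model: split mass-equivalently into n pieces,
    with smaller fragments assumed harder to break."""
    child_size = parent.size / n ** (1 / 3)
    return [Particle(child_size, parent.strength * 1.5) for _ in range(n)]

def dem_step(particles: list[Particle],
             impact_energy: float) -> list[Particle]:
    updated = []
    for p in particles:
        if impact_energy > p.strength:    # breakage event
            updated.extend(daughters(p))  # replace parent with fragments
        else:
            updated.append(p)             # particle survives this step
    return updated

bed = [Particle(10.0, random.uniform(0.5, 2.0)) for _ in range(20)]
bed = dem_step(bed, impact_energy=1.0)
print(len(bed), "particles after one media impact")
```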
Procedia PDF Downloads 199