Search results for: space time cube
18219 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features
Authors: Bushra Zafar, Usman Qamar
Abstract:
Large sample sizes and high dimensionality reduce the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for extracting useful knowledge from a variety of databases; they provide supervised learning in the form of classification to build models that describe vital data classes, where the structure of the classifier is based on the class attribute. Classification efficiency and accuracy are often strongly influenced by noisy and undesirable features in real application data sets. The inherent nature of a data set greatly complicates its quality analysis and leaves us with few practical approaches to use. To our knowledge, we present for the first time an approach for investigating the structure and quality of datasets through a targeted analysis of the localization of noisy and irrelevant features. Machine learning relies heavily on feature selection as a pre-processing step, which selects a subset from the full set of features by reducing the feature space according to a certain evaluation criterion. The primary objective of this study is to trim down the scope of the given data sample by searching for a small set of important features that may yield good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is proposed, with an external classifier used for discriminative feature selection. A feature is selected based on its number of occurrences in the chosen chromosomes. Sample datasets are used to demonstrate the proposed idea. The proposed method improved the average accuracy across different datasets to about 95%. Experimental results illustrate that the proposed algorithm increases the accuracy of prediction of different diseases. Keywords: data mining, genetic algorithm, KNN algorithms, wrapper-based feature selection
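As a rough illustration of the occurrence-counting idea described above, the following sketch runs a toy genetic algorithm over binary feature-selection chromosomes and then selects features by how often they appear in the best chromosomes. The fitness function is a stand-in (it rewards a hypothetical set of informative feature indices and lightly penalizes subset size); in the actual method it would be the cross-validated accuracy of the wrapped KNN classifier.

```python
import random

random.seed(0)

def fitness(chromosome, informative=(0, 2, 5)):
    # Stand-in wrapper fitness: in the actual method this would be the
    # cross-validated accuracy of a KNN classifier trained on the
    # selected feature subset. Here we reward a hypothetical set of
    # informative features and lightly penalize large subsets.
    selected = {i for i, bit in enumerate(chromosome) if bit}
    return len(selected & set(informative)) - 0.1 * len(selected)

def evolve(n_features=8, pop_size=20, generations=30):
    pop = [[random.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                 # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_features)
            child = a[:cut] + b[cut:]                 # one-point crossover
            child[random.randrange(n_features)] ^= 1  # point mutation
            children.append(child)
        pop = parents + children
    return pop

pop = sorted(evolve(), key=fitness, reverse=True)
best = pop[:10]  # the "chosen chromosomes"
# Select features by their number of occurrences in the chosen
# chromosomes, as the abstract describes.
counts = [sum(c[i] for c in best) for i in range(len(best[0]))]
selected = [i for i, c in enumerate(counts) if c >= 5]
print("occurrence counts:", counts)
print("selected features:", selected)
```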
Procedia PDF Downloads 316
18218 Colour and Curcuminoids Removal from Turmeric Wastewater Using Activated Carbon Adsorption
Authors: Nattawat Thongpraphai, Anusorn Boonpoke
Abstract:
This study aimed to determine the removal of colour and curcuminoids from turmeric wastewater using granular activated carbon (GAC) adsorption. The adsorption isotherm and kinetic behavior of colour and curcuminoids were investigated using batch and fixed-bed column tests. The results indicated that the removal efficiencies of colour and curcuminoids were 80.13% and 78.64%, respectively, at 8 hr of equilibrium time. The adsorption isotherms of colour and curcuminoids were well fitted by the Freundlich adsorption model. The maximum adsorption capacities of colour and curcuminoids were 130 Pt-Co/g and 17 mg/g, respectively. The continuous-flow experiments showed that the exhaustion concentrations of colour and curcuminoids occurred at 39 hr of operation time. The adsorption of colour and curcuminoids from turmeric wastewater by GAC can be described by the Thomas model. The maximum adsorption capacities obtained from the kinetic approach were 39954 Pt-Co/g and 0.0516 mg/kg for colour and curcuminoids, respectively. Moreover, the decrease of colour and curcuminoids concentration during the service time showed a similar trend. Keywords: adsorption, turmeric, colour, curcuminoids, activated carbon
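The Freundlich fit mentioned above is usually performed in log-log coordinates, where qe = KF · Ce^(1/n) becomes a straight line. The sketch below recovers KF and 1/n by least squares from hypothetical equilibrium data; the abstract reports fitted capacities rather than raw points, so the values here are illustrative only.

```python
import math

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g), generated
# from an assumed Freundlich model purely for illustration.
KF_true, inv_n_true = 3.0, 0.5
Ce = [1.0, 2.0, 4.0, 8.0, 16.0]
qe = [KF_true * c ** inv_n_true for c in Ce]

# Freundlich model: qe = KF * Ce^(1/n)
#   =>  log qe = log KF + (1/n) log Ce,
# so KF and 1/n follow from a straight-line fit in log-log coordinates.
x = [math.log10(c) for c in Ce]
y = [math.log10(q) for q in qe]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
intercept = ybar - slope * xbar
KF, inv_n = 10 ** intercept, slope
print(f"KF = {KF:.3f}, 1/n = {inv_n:.3f}")
```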
Procedia PDF Downloads 424
18217 Fire Characteristics of Commercial Flame-Retardant Polycarbonate under Different Oxygen Concentrations: Ignition Time and Heat Blockage
Authors: Xuelin Zhang, Shouxiang Lu, Changhai Li
Abstract:
Commercial flame-retardant polycarbonate samples, the main interior carriage material of high-speed trains, were investigated in several thicknesses in a Fire Propagation Apparatus, under different external heat fluxes and under oxygen concentrations from 12% to 40%, to study their fire characteristics and to quantitatively analyze the ignition time, mass loss rate, and heat blockage. The additives in the commercial flame-retardant polycarbonate were intumescent, and the samples maintained a steady height before ignition when heated. The results showed that the transformed ignition time (1/t_ig)ⁿ increased linearly with external heat flux under each oxygen concentration after deducting the heat blockage due to pyrolysis products. The mass loss rate also varied linearly with external heat flux, and the slope of the fitted line for mass loss rate versus external heat flux decreased with increasing oxygen concentration. The heat blockage, which was independent of external heat flux, rose with increasing oxygen concentration. The acquired data, used as input to fire simulation models, are essential for evaluating the fire risk of commercial flame-retardant polycarbonate. Keywords: ignition time, mass loss rate, heat blockage, fire characteristic
Procedia PDF Downloads 282
18216 A Neurofeedback Learning Model Using Time-Frequency Analysis for Volleyball Performance Enhancement
Authors: Hamed Yousefi, Farnaz Mohammadi, Niloufar Mirian, Navid Amini
Abstract:
Investigating the capacities of visual functions whose adapted mechanisms can enhance the capabilities of sports trainees is a promising area of research, not only from the cognitive viewpoint but also in terms of its many applications in sports training. In this paper, the visual evoked potential (VEP) and event-related potential (ERP) signals of amateur and trained volleyball players were processed in a pilot study. Two groups of amateur and trained subjects were asked to imagine themselves in the state of receiving a ball while being shown a simulated volleyball field. The proposed method is based on a set of time-frequency features, extracted from the VEP signals using algorithms such as the Gabor filter, the continuous wavelet transform, and a multi-stage wavelet decomposition, that can be indicative of whether a subject is amateur or trained. A linear discriminant classifier achieves 100% accuracy, sensitivity, and specificity when the average over repetitions of the signal corresponding to the task is used. The main purpose of this study is to investigate the feasibility of a fast, robust, and reliable feature/model determination as a neurofeedback parameter for improving volleyball players’ performance. The proposed measure has potential applications in brain-computer interface technology, where a real-time biomarker is needed. Keywords: visual evoked potential, time-frequency feature extraction, short-time Fourier transform, event-related spectrum potential classification, linear discriminant analysis
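A minimal sketch of the linear discriminant step, reduced to a single hypothetical time-frequency feature (e.g., one wavelet sub-band energy) per subject; the real study extracts many such features from Gabor and wavelet decompositions. With one feature and equal class variances, Fisher's linear discriminant reduces to thresholding at the midpoint of the class means.

```python
import random

random.seed(0)
# Hypothetical scalar time-frequency feature per subject for the two
# groups; the class means and spreads below are illustrative.
amateur = [random.gauss(0.0, 1.0) for _ in range(40)]
trained = [random.gauss(4.0, 1.0) for _ in range(40)]

# One-dimensional Fisher discriminant with equal class variances:
# classify by thresholding at the midpoint of the class means.
m0 = sum(amateur) / len(amateur)
m1 = sum(trained) / len(trained)
threshold = (m0 + m1) / 2

def predict(x):
    return int(x > threshold)  # 0 = amateur, 1 = trained

correct = (sum(predict(x) == 0 for x in amateur)
           + sum(predict(x) == 1 for x in trained))
accuracy = correct / (len(amateur) + len(trained))
print(f"training accuracy: {accuracy:.2f}")
```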
Procedia PDF Downloads 138
18215 From Comfort to Safety: Assessing the Influence of Car Seat Design on Driver Reaction and Performance
Authors: Sabariah Mohd Yusoff, Qamaruddin Adzeem Muhamad Murad
Abstract:
This study investigates the impact of car seat design on driver response time, addressing a critical gap in understanding how ergonomic features influence both performance and safety. Controlled driving experiments were conducted with fourteen participants (11 male, 3 female) across three locations chosen for their varying traffic conditions to account for differences in driver alertness. Participants interacted with various seat designs while performing driving tasks, and objective metrics such as braking and steering response times were meticulously recorded. Advanced statistical methods, including regression analysis and t-tests, were employed to identify design factors that significantly affect driver response times. Subjective feedback was gathered through detailed questionnaires, focused on driving experience and knowledge of response time, and in-depth interviews. This qualitative data was analyzed thematically to provide insights into driver comfort and usability preferences. The study aims to identify key seat design features that impact driver response time and to gain a deeper understanding of driver preferences for comfort and usability. The findings are expected to inform evidence-based guidelines for optimizing car seat design, ultimately enhancing driver performance and safety. The research offers valuable implications for automotive manufacturers and designers, contributing to the development of seats that improve driver response time and overall driving safety. Keywords: car seat design, driver response time, cognitive driving, ergonomics optimization
Procedia PDF Downloads 24
18214 On the Topological Entropy of Nonlinear Dynamical Systems
Authors: Graziano Chesi
Abstract:
The topological entropy plays a key role in linear dynamical systems, allowing one to establish the existence of stabilizing feedback controllers for linear systems in the presence of communications constraints. This paper addresses the determination of a robust value of the topological entropy in nonlinear dynamical systems, specifically the largest value of the topological entropy over all linearized models in a region of interest of the state space. It is shown that a sufficient condition for establishing upper bounds of the sought robust value of the topological entropy can be given in terms of a semidefinite program (SDP), which belongs to the class of convex optimization problems. Keywords: non-linear system, communication constraint, topological entropy
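For a concrete sense of the quantity involved: for a discrete-time linear system x_{k+1} = A x_k, the topological entropy is the sum of log|λ| over the unstable eigenvalues of A. The sketch below evaluates this for a 2x2 example; the paper's actual contribution, bounding the largest such value over all linearizations in a region via an SDP, is not reproduced here.

```python
import math

def topological_entropy_2x2(a, b, c, d):
    # For the discrete-time linear system x_{k+1} = A x_k, the
    # topological entropy equals sum(log|lambda|) over the unstable
    # eigenvalues (|lambda| > 1). For a 2x2 matrix A = [[a, b], [c, d]]
    # the eigenvalues are explicit from the trace and determinant.
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4.0 * det
    if disc >= 0.0:
        r = math.sqrt(disc)
        moduli = [abs((tr + r) / 2.0), abs((tr - r) / 2.0)]
    else:
        # Complex-conjugate pair; both eigenvalues have modulus sqrt(det).
        moduli = [math.sqrt(det)] * 2
    return sum(math.log(m) for m in moduli if m > 1.0)

h = topological_entropy_2x2(2.0, 0.0, 0.0, 0.5)  # eigenvalues 2 and 0.5
print(h)  # only the unstable eigenvalue 2 contributes: log 2
```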
Procedia PDF Downloads 323
18213 Simulation and Experimental Research on Pocketing Operation for Toolpath Optimization in CNC Milling
Authors: Rakesh Prajapati, Purvik Patel, Avadhoot Rajurkar
Abstract:
Nowadays, manufacturing industries augment their production lines with modern machining centers backed by CAM software. Several attempts are being made to cut down the programming time for machining complex geometries. Special programs/software have been developed to generate the digital numerical data and to prepare NC programs using suitable post-processors for different machines. After selecting the tools and the manufacturing process, tool paths are applied and the NC program is generated. More and more complex mechanical parts that were earlier cast and assembled/manufactured by other processes are now being machined. The majority of these parts require many pocketing operations and find their applications in dies and molds, turbomachinery, aircraft, nuclear, defense, etc. Pocketing operations involve the removal of a large quantity of material from the metal surface. In this work, a food-processing part, including its warm cast and clamping, was modeled using Pro-E and MasterCAM® software. The pocketing operation was specifically chosen for toolpath optimization. After applying the pocketing toolpath, the Multi Tool Selection and Reduce Air Time options give the software simulation time and the experimental machining time as results. Keywords: toolpath, part program, optimization, pocket
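A back-of-the-envelope version of the machining-time comparison: for a rectangular pocket cleared with a zigzag toolpath, the cutting time can be estimated from the total path length and the feed rate. All dimensions and the feed value below are illustrative, not taken from the study.

```python
import math

def zigzag_time(width, height, stepover, feed):
    # Rough cutting-time estimate for clearing a rectangular pocket
    # with a zigzag toolpath: parallel passes across the width, linked
    # by stepover moves, all at the cutting feed (rapid moves ignored).
    passes = math.ceil(height / stepover) + 1
    path_len = passes * width + (passes - 1) * stepover
    return path_len / feed

# Illustrative pocket: 100 x 50 mm, 5 mm stepover, 200 mm/min feed.
t = zigzag_time(width=100.0, height=50.0, stepover=5.0, feed=200.0)
print(f"estimated machining time: {t:.2f} min")
```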
Procedia PDF Downloads 288
18212 Analysis of Brain Signals Using Neural Networks Optimized by Co-Evolution Algorithms
Authors: Zahra Abdolkarimi, Naser Zourikalatehsamad
Abstract:
Up to 40 years ago, after the recognition of epilepsy, it was generally believed that seizures occurred randomly and suddenly. However, thanks to advances in mathematics and engineering, such attacks can now be predicted within a few minutes or hours, and various algorithms for long-term prediction of the time and frequency of the first attack have been presented. In this paper, considering the nonlinear and dynamic nature of recorded brain signals, an ANFIS model is presented to predict the brain signals, since, given the physiologic structure of seizure onset, more complex neural structures can better model the signal during attacks. The contribution of this work is a co-evolution algorithm for the optimization of the ANFIS network parameters. Our objective is to predict brain signals based on time series obtained from the brain signals of people suffering from epilepsy using ANFIS. Results reveal that, compared to other methods, this method is less sensitive to uncertainties such as the presence of noise and interruptions in the recorded brain signals, as well as more accurate. The long-term prediction capacity of the model supports the use of implanted systems for medication warnings and seizure prevention. Keywords: co-evolution algorithms, brain signals, time series, neural networks, ANFIS model, time prediction, epilepsy
Procedia PDF Downloads 282
18211 How Vernacular Attributes of Traditional Buildings Can Be Integrated Into Modern Designs - A Case Study of Thirumayilai, Mylapore
Authors: Divya Ramaseshan
Abstract:
The indigenous beauty of a space supported by its local context is unmatchable. India, known as a hub of varied cultural significance, has some of the best examples of vernacularism. This paper focuses on the traditional houses of Thirumayilai, Mylapore, one of the oldest and most populous neighbourhoods of Chennai. The Mylapore houses are known for their Agraharam style, characterized by the thinnai, the courtyard, and the sloping roof. These homes show a combined influence of Indian, Islamic, and Neo-classical architecture in their design. The design of the houses reflects the lives of the Brahmin communities, which have almost vanished from sight now. In response to the growing demands of local residents as well as urbanization, many houses have been renovated. Some of those structures have been conserved in certain streets, showcasing their historical identity. Other structures have either been demolished or redesigned based on people’s needs. These structures have been identified and studied to understand the comparative features that have changed. Many of those features were in direct relevance to the city’s climate, family size, socializing habits, and local materials. Being a temple town, Mylapore has contour variations sloping towards various water bodies. These factors have been considered for building homes as well. The study aims to list possible design guidelines that could be effective in today’s construction field. The pros and cons are analyzed, and the respective methodologies are framed. Our modern construction technologies have brought in the best visual aesthetics in a short frame of time, but the serene touch of teak wood, walking through paved stones, daydreaming in the sunlit courtyards, and chitchatting in porticos are always cherished. Architects around the world are trying hard to achieve such appreciated design elements in upcoming projects with the best use of modern technology. This will also improve people’s mental health in the comfort of their homes. Keywords: Agraharam, Mylapore, traditional, vernacularism
Procedia PDF Downloads 100
18210 Dynamic Analysis of Submerged Floating Tunnel Subjected to Hydrodynamic and Seismic Loadings
Authors: Naik Muhammad, Zahid Ullah, Dong-Ho Choi
Abstract:
A submerged floating tunnel (SFT) is a new solution for transportation infrastructure through sea straits, fjords, and inland waters, and can be a good alternative to long-span suspension bridges. An SFT is a massive cylindrical structure that floats at a certain depth below the water surface and is subjected to extreme environmental conditions. Identifying the dominant structural response of the SFT under the intended environmental conditions is therefore important for its design. The time-domain dynamic problem of an SFT moored by vertical and inclined mooring cables/anchors is formulated, and dynamic time history analyses of the SFT subjected to hydrodynamic and seismic excitations are performed. The SFT is modeled by 3D finite beam elements, and the mooring cables are modeled by truss elements. Based on the dynamic time history analysis, the displacements and internal forces of the SFT were calculated, and the response is presented for both hydrodynamic and seismic excitations. The transverse internal forces of the SFT were the largest, compared to the vertical direction, for both the hydrodynamic and seismic cases; this indicates that the cable system provides much smaller stiffness in the transverse direction than in the vertical direction of the SFT. Keywords: submerged floating tunnel, hydrodynamic analysis, time history analysis, seismic response
Procedia PDF Downloads 329
18209 Rejuvenating a Space into World Class Environment through Conservation of Heritage Architecture
Authors: Abhimanyu Sharma
Abstract:
India is known for its cultural heritage. As the country is rich in diversity along its length and breadth, the state of Jammu & Kashmir is world famous for the beautiful tourist destinations in its Kashmir region. However, equally remarkable destinations are also located in the Jammu region of the state. For most of the last 50-60 years, the prime focus of development was centered on the Kashmir region, but now, due to ever-increasing globalization, the focus is decentralizing throughout the country. Pertinently, the potential of the Jammu region needs to be placed on the world tourist map. One such spot in the Jammu region is ‘Mubarak Mandi’ – the palace complex and royal residence of the Maharajas of Jammu & Kashmir of the Dogra Dynasty, located in the heart of Jammu city (the winter capital of the state). The place has heritage importance but still lacks the supporting infrastructure to attract national and worldwide tourists. For such places, conservation and restoration of the existing structures are potential tools to overcome their present limitations. The rejuvenation of this place through potential and dynamic conservation techniques is targeted in this paper. This paper deals with developing and restoring the areas within the whole campus with appropriate building materials, conservation techniques, etc., to attract a greater number of visitors by developing it into a prioritised tourist attraction point. The major thrust shall be on studying the criteria for developing the place, considering the psychological effect needed to create a socially interactive environment. Additionally, thrust shall be on the spatial elements that will aid in creating a common platform for all kinds of tourists.
Accordingly, different conservation guidelines (or a model) shall be proposed through this paper so that the Jammu region becomes an equal contributor to the tourist graph of the country, as the Kashmir region is. Keywords: conservation, heritage architecture, rejuvenating, restoration
Procedia PDF Downloads 297
18208 Kalman Filter Gain Elimination in Linear Estimation
Authors: Nicholas D. Assimakis
Abstract:
In linear estimation, the traditional Kalman filter uses the Kalman filter gain in order to produce estimation and prediction of the n-dimensional state vector using the m-dimensional measurement vector. The computation of the Kalman filter gain requires the inversion of an m x m matrix in every iteration. In this paper, a variation of the Kalman filter eliminating the Kalman filter gain is proposed. In the time varying case, the elimination of the Kalman filter gain requires the inversion of an n x n matrix and the inversion of an m x m matrix in every iteration. In the time invariant case, the elimination of the Kalman filter gain requires the inversion of an n x n matrix in every iteration. The proposed Kalman filter gain elimination algorithm may be faster than the conventional Kalman filter, depending on the model dimensions. Keywords: discrete time, estimation, Kalman filter, Kalman filter gain
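For reference, a scalar (n = m = 1) sketch of the conventional filter that the paper's variation modifies; in this degenerate case the m x m inversion becomes a plain division, but the gain computation is still visible. The gain-elimination recursions themselves are not given in the abstract and are not reproduced here.

```python
def kalman_1d(zs, q=1e-4, r=1e-2, x0=0.0, p0=1.0):
    # Conventional scalar Kalman filter for a random-walk state
    # (F = H = 1). In the general case the gain K = P_pred H^T S^-1
    # requires inverting the m x m innovation covariance S at every
    # iteration; here that inversion is the division below.
    x, p = x0, p0
    for z in zs:
        p_pred = p + q            # predict covariance
        s = p_pred + r            # innovation covariance
        k = p_pred / s            # Kalman gain (the m x m "inversion")
        x = x + k * (z - x)       # update state estimate
        p = (1.0 - k) * p_pred    # update covariance
    return x, p

x, p = kalman_1d([1.02, 0.98, 1.01, 1.00, 0.99])
print(x, p)
```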
Procedia PDF Downloads 196
18207 3D Receiver Operator Characteristic Histogram
Authors: Xiaoli Zhang, Xiongfei Li, Yuncong Feng
Abstract:
ROC curves, a widely used evaluation tool in the machine learning field, depict the tradeoff between the true positive rate and the false positive rate. However, they are criticized for ignoring some vital information in the evaluation process, such as the amount of information about the target that each instance carries and the predicted score given by each classification model to each instance. Hence, in this paper, a new classification performance method is proposed by extending the Receiver Operator Characteristic (ROC) curves to 3D space, which is denoted as the 3D ROC Histogram. In the histogram, the Keywords: classification, performance evaluation, receiver operating characteristic histogram, hardness prediction
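The underlying ROC construction that the histogram extends can be sketched in a few lines: sweep a decision threshold down the sorted scores and record the (FPR, TPR) pair after each instance. The scores and labels below are illustrative.

```python
def roc_points(scores, labels):
    # Sweep the decision threshold down the scores sorted in descending
    # order and record the (FPR, TPR) pair after each instance.
    pairs = sorted(zip(scores, labels), reverse=True)
    P = sum(labels)
    N = len(labels) - P
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / N, tp / P))
    return points

# Illustrative predicted scores and true labels (1 = positive class).
pts = roc_points([0.9, 0.8, 0.7, 0.4, 0.3], [1, 1, 0, 1, 0])
print(pts)
```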
Procedia PDF Downloads 313
18206 Experimental Study on the Heat Transfer Characteristics of the 200W Class Woofer Speaker
Authors: Hyung-Jin Kim, Dae-Wan Kim, Moo-Yeon Lee
Abstract:
The objective of this study is to experimentally investigate the heat transfer characteristics of 200 W class woofer speaker units with respect to the input voice signal. The temperature and heat transfer characteristics of a 200 W class woofer speaker unit were experimentally tested with input voice signals of 1500 Hz, 2500 Hz, and 5000 Hz, respectively. From the experiments, it can be observed that the temperature of the woofer speaker unit, including the voice-coil part, increases as the frequency of the input voice signal decreases. Also, the temperature difference between the measured points of the voice coil increases as the input signal frequency decreases. In addition, the heat transfer of the woofer speaker with the 1500 Hz input voice signal is 40% higher than that with the 5000 Hz input voice signal at a measuring time of 200 seconds. It can be concluded from the experiments that the temperature of the voice coil initially increases rapidly with time and, after a certain period, approaches its steady value exponentially. During this temperature change, it can also be observed that the high-frequency signal is more stable than the low-frequency signal. Keywords: heat transfer, temperature, voice coil, woofer speaker
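The described trend, a rapid initial rise that levels off, is what a first-order lumped-capacitance model predicts for the voice coil. The sketch below is such a model with illustrative parameters (ambient and steady-state temperatures, time constant), not values fitted to the study's measurements.

```python
import math

def coil_temperature(t, T_amb=25.0, T_steady=80.0, tau=60.0):
    # First-order lumped-capacitance heating: a rapid initial rise that
    # levels off exponentially toward a steady temperature. T_amb,
    # T_steady and tau are illustrative assumptions.
    return T_amb + (T_steady - T_amb) * (1.0 - math.exp(-t / tau))

temps = [coil_temperature(t) for t in (0.0, 60.0, 200.0)]
print(temps)
```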
Procedia PDF Downloads 360
18205 Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images
Authors: U. Datta
Abstract:
The main objective of this study is to find a suitable approach to monitor land infrastructure growth over a period of time using multispectral satellite images. Bi-temporal change detection methods are unable to indicate continuous change occurring over a long period of time. To achieve this objective, the approach used here estimates a statistical model from a series of multispectral image data acquired over a long period of time, assuming there is no considerable change during that period, and then compares it with the multispectral image data obtained at a later time. The change is estimated pixel-wise. A statistical composite hypothesis technique is used for estimating pixel-based change detection in a defined region. The generalized likelihood ratio test (GLRT) is used to detect changed pixels based on the probabilistic model estimated for the corresponding pixel. Changed pixels are detected assuming that the images have been co-registered prior to estimation. To minimize errors due to co-registration, the 8-neighborhood pixels around the pixel under test are also considered. Multispectral images from Sentinel-2 and Landsat-8 from 2015 to 2018 are used for this purpose. There are different challenges in this method. The first and foremost challenge is obtaining a sufficiently large number of datasets for multivariate distribution modelling, as a large number of images are always discarded due to cloud coverage. In addition, imperfect modelling leads to a high probability of false alarms. The overall conclusion that can be drawn from this work is that the probabilistic method described in this paper has given some promising results, which need to be pursued further. Keywords: co-registration, GLRT, infrastructure growth, multispectral, multitemporal, pixel-based change detection
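A toy per-pixel version of the test described above: estimate a Gaussian model from the historical series of a pixel, then flag a change when the GLRT statistic, which in this scalar case reduces to a squared standardized residual, exceeds a threshold. The values are synthetic; the real method models multivariate (multispectral) pixels and also considers their 8-neighborhoods.

```python
import random

random.seed(1)

def glrt_change(history, new, threshold=9.0):
    # Estimate a per-pixel Gaussian from the historical series and flag
    # a change when the GLRT statistic, which reduces to a squared
    # standardized residual in this scalar case, exceeds a threshold.
    n = len(history)
    mu = sum(history) / n
    var = sum((h - mu) ** 2 for h in history) / (n - 1)
    stat = (new - mu) ** 2 / var
    return stat > threshold, stat

# Synthetic stable pixel history (a reflectance-like value).
stable = [random.gauss(100.0, 2.0) for _ in range(50)]
changed, stat_c = glrt_change(stable, 130.0)    # newly built-up pixel
unchanged, stat_u = glrt_change(stable, 100.5)  # unchanged pixel
print(changed, unchanged)
```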
Procedia PDF Downloads 135
18204 A Heart Arrhythmia Prediction Using Machine Learning’s Classification Approach and the Concept of Data Mining
Authors: Roshani S. Golhar, Neerajkumar S. Sathawane, Snehal Dongre
Abstract:
Background and objectives: Cardiovascular illnesses are increasing and have become a leading cause of mortality worldwide, killing a large number of people each year. Arrhythmia is a type of cardiac illness characterized by a change in the regularity of the heartbeat. The goal of this study is to develop novel deep learning algorithms for successfully interpreting arrhythmia from a single one-second segment. Because the ECG signal reflects unique electrical heart activity over time, considerable changes between time intervals are detected. Such variations, as well as the limited amount of learning data available for each arrhythmia, make standard learning methods difficult and impede generalization. Conclusions: The proposed method was able to outperform several state-of-the-art methods. The proposed technique is also an effective and convenient deep learning approach to heartbeat interpretation that could be used in real-time healthcare monitoring systems. Keywords: electrocardiogram, ECG classification, neural networks, convolutional neural networks, portable document format
Procedia PDF Downloads 69
18203 An Approach to Determine Proper Daylighting Design Solution Considering Visual Comfort and Lighting Energy Efficiency in High-Rise Residential Building
Authors: Zehra Aybike Kılıç, Alpin Köknel Yener
Abstract:
Daylight is a powerful driver for improving human health, enhancing productivity, and creating sustainable solutions by minimizing energy demand. A proper daylighting system provides not only a pleasant and attractive visual and thermal environment but also reduces lighting energy consumption and heating/cooling energy loads through the optimization of aperture size, glazing type, and solar control strategy, which are the major design parameters of daylighting system design. Particularly in high-rise buildings, where large openings that allow maximum daylight and views out are preferred, evaluating daylight performance in terms of the major parameters of the building envelope design becomes crucial for ensuring occupants’ comfort and improving energy efficiency. Moreover, it is increasingly necessary to examine the daylighting design of high-rise residential buildings, considering the share of residential buildings in the construction sector, the duration of occupation, and changing space requirements. This study aims to identify a proper daylighting design solution, in terms of window area, glazing type, and solar control strategy, for a high-rise residential building with respect to visual comfort and lighting energy efficiency. The dynamic simulations are conducted with DIVA for Rhino version 4.1.0.12. The results are evaluated with Daylight Autonomy (DA) to demonstrate daylight availability in the space and Daylight Glare Probability (DGP) to describe the visual comfort conditions related to glare. Furthermore, the lighting energy consumption of each scenario is analyzed to determine the optimum solution that reduces lighting energy consumption while optimizing daylight performance.
The results revealed that reducing lighting energy consumption while providing visual comfort in buildings is only possible with proper daylighting design decisions regarding glazing type, transparency ratio, and solar control devices. Keywords: daylighting, glazing type, lighting energy efficiency, residential building, solar control strategy, visual comfort
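The Daylight Autonomy metric used for the evaluation is straightforward to compute from an hourly illuminance series: it is the fraction of occupied hours in which daylight alone meets a target illuminance. The 300 lx threshold and the hourly values below are illustrative assumptions, not simulation outputs from the study.

```python
def daylight_autonomy(illuminances, threshold=300.0):
    # DA = fraction of occupied hours in which daylight alone meets the
    # target illuminance; 300 lx is a commonly used target value.
    met = sum(1 for lx in illuminances if lx >= threshold)
    return met / len(illuminances)

# Hypothetical hourly daylight illuminances (lx) for one occupied day.
hours = [50, 120, 280, 450, 600, 720, 680, 510, 340, 210, 90, 40]
da = daylight_autonomy(hours)
print(f"DA = {da:.0%}")
```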
Procedia PDF Downloads 176
18202 In vitro Effects of Salvia officinalis on Bovine Spermatozoa
Authors: Eva Tvrdá, Boris Botman, Marek Halenár, Tomáš Slanina, Norbert Lukáč
Abstract:
In vitro storage and processing of animal semen represent a risk factor to spermatozoa vitality, potentially leading to reduced fertility. A variety of substances isolated from natural sources may exhibit protective or antioxidant properties on the spermatozoon, thus extending the lifespan of stored ejaculates. This study compared the effects of different concentrations of Salvia officinalis extract on the motility, mitochondrial activity, viability, and reactive oxygen species (ROS) production of bovine spermatozoa at different time periods (0, 2, 6, and 24 h) of in vitro culture. Spermatozoa motility was assessed using the Computer-assisted sperm analysis (CASA) system, cell metabolic activity was examined using the MTT assay, the eosin-nigrosin staining technique was used to evaluate sperm viability, and ROS generation was quantified using luminometry. The CASA analysis revealed that motility in the experimental groups supplemented with 0.5-2 µg/mL Salvia extract was significantly lower in comparison with the control (P<0.05; Time 24 h). At the same time, long-term exposure of spermatozoa to concentrations ranging between 0.05 µg/mL and 2 µg/mL had a negative impact on mitochondrial metabolism (P<0.05; Time 24 h). The viability staining revealed that 0.001-1 µg/mL Salvia extract had no effects on bovine male gametes; however, 2 µg/mL Salvia had a persistent negative effect on spermatozoa (P<0.05). Furthermore, 0.05-2 µg/mL Salvia exhibited an immediate ROS-promoting effect on the sperm culture (P>0.05; Time 0 h and 2 h), which became significant over the course of the in vitro culture (P<0.05; Time 24 h). Our results point to the necessity of examining the specific effects that the biomolecules present in Salvia officinalis may have, individually or collectively, on in vitro sperm vitality and oxidative profile. Keywords: bulls, CASA, MTT test, reactive oxygen species, sage, Salvia officinalis, spermatozoa
Procedia PDF Downloads 338
18201 Exploring Solutions in Extended Horava-Lifshitz Gravity
Authors: Aziza Altaibayeva, Ertan Güdekli, Ratbay Myrzakulov
Abstract:
In this letter, we explore exact solutions for Horava-Lifshitz gravity. We use an extension of this theory with a first-order dynamical lapse function. The equations of motion have been derived in a fully consistent scenario. We assume that there are spherically symmetric families of exact solutions of this extended theory of gravity. We obtain exact solutions and investigate the singularity structures of these solutions. In particular, an exact solution with a regular horizon is found. Keywords: quantum gravity, Horava-Lifshitz gravity, black hole, spherically symmetric space-times
Procedia PDF Downloads 581
18200 Efficacy of Deep Learning for Below-Canopy Reconstruction of Satellite and Aerial Sensing Point Clouds through Fractal Tree Symmetry
Authors: Dhanuj M. Gandikota
Abstract:
Sensor-derived three-dimensional (3D) point clouds of trees are invaluable in remote sensing analysis for the accurate measurement of key structural metrics, bio-inventory values, spatial planning/visualization, and ecological modeling. Machine learning (ML) holds potential for addressing the restrictive tradeoffs in cost, spatial coverage, resolution, and information gain that exist among current point cloud sensing methods. Terrestrial laser scanning (TLS) remains the highest-fidelity source of both canopy and below-canopy structural features, but its usage is limited in both coverage and cost, requiring manual deployment to map out large forested areas. While aerial laser scanning (ALS) remains a reliable avenue of active LIDAR remote sensing, ALS is also cost-restrictive in its deployment methods. Space-borne photogrammetry from high-resolution satellite constellations is an avenue of passive remote sensing with promising viability for the accurate construction of 3D vegetation point clouds, providing both the lowest comparative cost and the largest spatial coverage among remote sensing methods. However, both space-borne photogrammetry and ALS have technical limitations in capturing valuable below-canopy point cloud data. Looking to minimize these tradeoffs, we explored a class of powerful ML algorithms called deep learning (DL) that shows promise in recent research on 3D point cloud reconstruction and interpolation. Our research details the efficacy of applying these DL techniques to reconstruct accurate below-canopy point clouds from space-borne and aerial remote sensing through learned patterns of tree species’ fractal symmetry properties and the supplementation of locally sourced bio-inventory metrics. From our dataset, consisting of tree point clouds obtained from TLS, we deconstructed the point clouds of each tree into those that would be obtained through ALS and satellite photogrammetry of varying resolutions.
We fed this ALS/satellite point cloud dataset, along with the simulated local bio-inventory metrics, into the DL point cloud reconstruction architectures to generate the full 3D tree point clouds (ground truth is given by the full TLS tree point clouds containing the below-canopy information). Point cloud reconstruction accuracy was validated both through the measurement of error from the original TLS point clouds and through the error in extracting key structural metrics, such as crown base height, diameter above root crown, and leaf/wood volume. The results of this research additionally demonstrate the supplemental performance gain of using minimal locally sourced bio-inventory metric information as an input to ML systems to reach specified accuracy thresholds of tree point cloud reconstruction. This research provides insight into methods for the rapid, cost-effective, and accurate construction of below-canopy tree 3D point clouds, as well as the supported potential of ML and DL to learn complex, unmodeled patterns of fractal tree growth symmetry.
Keywords: deep learning, machine learning, satellite, photogrammetry, aerial laser scanning, terrestrial laser scanning, point cloud, fractal symmetry
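The abstract measures reconstruction error against the TLS ground truth but does not name the exact point-set metric; a common choice for this task is the Chamfer distance, sketched below in NumPy as an illustrative assumption rather than the authors' actual metric:

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between point clouds P (N, 3) and Q (M, 3):
    mean nearest-neighbour distance from P to Q plus from Q to P."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

For large clouds the O(NM) pairwise matrix would be replaced with a KD-tree nearest-neighbour query, but the definition is the same.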
Procedia PDF Downloads 102
18199 Application of Seasonal Autoregressive Integrated Moving Average Model for Forecasting Monthly Flows in Waterval River, South Africa
Authors: Kassahun Birhanu Tadesse, Megersa Olumana Dinka
Abstract:
Reliable future river flow information is essential for the planning and management of any river system. For a data-scarce river system with only river flow records, like the Waterval River, univariate time series models are appropriate for river flow forecasting. In this study, a univariate Seasonal Autoregressive Integrated Moving Average (SARIMA) model was applied to forecast Waterval River flow using the GRETL statistical software. Mean monthly river flows from 1960 to 2016 were used for modeling. Different unit root tests and Mann-Kendall trend analysis were performed to test the stationarity of the observed flow time series. The time series was differenced to remove the seasonality. Using the correlogram of the seasonally differenced time series, different SARIMA models were identified, their parameters were estimated, and diagnostic checking of the model forecasts was performed using white noise and heteroscedasticity tests. Finally, based on the minimum Akaike Information Criterion (AIC) and Hannan-Quinn Criterion (HQC), SARIMA (3, 0, 2) x (3, 1, 3)12 was selected as the best model for Waterval River flow forecasting. This model can therefore be used to generate future river flow information for water resources development and management in the Waterval River system. The SARIMA model can also be used to forecast other, similar univariate time series with seasonality characteristics.
Keywords: heteroscedasticity, stationarity test, trend analysis, validation, white noise
Procedia PDF Downloads 205
18198 Some Integral Inequalities of Hermite-Hadamard Type on Time Scale and Their Applications
Authors: Artion Kashuri, Rozana Liko
Abstract:
In this paper, the authors establish an integral identity using delta differentiable functions. By applying this identity, some new results via a general class of convex functions with respect to two nonnegative functions on a time scale are given. Also, for suitable choices of the nonnegative functions, some special cases are deduced. Finally, in order to illustrate the efficiency of our main results, some applications to special means are obtained as well. We hope that the current work, using our idea and technique, will attract the attention of researchers working in mathematical analysis, mathematical inequalities, numerical analysis, special functions, fractional calculus, quantum mechanics, quantum calculus, physics, probability and statistics, differential and difference equations, optimization theory, and other related fields in pure and applied sciences.
Keywords: convex functions, Hermite-Hadamard inequality, special means, time scale
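For reference, the classical Hermite-Hadamard inequality that such time-scale results generalize states that for a convex function f on [a, b]:

```latex
f\left(\frac{a+b}{2}\right) \;\le\; \frac{1}{b-a}\int_a^b f(x)\,dx \;\le\; \frac{f(a)+f(b)}{2}
```

The paper's results extend this setting to delta integrals on a time scale, weighted by the two nonnegative functions mentioned in the abstract.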
Procedia PDF Downloads 150
18197 Electrochemical Studies of Si, Si-Ge- and Ge-Air Batteries
Authors: R. C. Sharma, Rishabh Bansal, Prajwal Menon, Manoj K. Sharma
Abstract:
The silicon-air battery is highly promising for electric vehicles due to its high theoretical energy density (8470 Wh kg⁻¹), and its discharge products are non-toxic. For the first time, pure silicon and germanium powders are used as anode materials. Nickel wire mesh embedded with charcoal and manganese dioxide powder serves as the cathode, and concentrated potassium hydroxide is used as the electrolyte. Voltage-time curves are presented in this study for pure silicon and germanium powders and for 5% and 10% germanium with silicon powder. The silicon powder cell assembly gives a stable voltage of 0.88 V for ~20 minutes, while the Si-Ge cell provides a voltage of 0.80-0.76 V for ~10-12 minutes, and the pure germanium cell provides a voltage of 0.80-0.76 V for ~30 minutes. The cell voltage is higher for a concentrated (10%) sodium hydroxide solution (1.08 V), and it is stable for ~40 minutes. A sharp decrease in cell voltage beyond 40 min may be due to rapid corrosion.
Keywords: silicon-air battery, germanium-air battery, voltage-time curve, open circuit voltage, anodic corrosion
Procedia PDF Downloads 238
18196 Time Series Simulation by Conditional Generative Adversarial Net
Authors: Rao Fu, Jie Chen, Shutian Zeng, Yiping Zhuang, Agus Sudjianto
Abstract:
Generative Adversarial Net (GAN) has proved to be a powerful machine learning tool in image data analysis and generation. In this paper, we propose to use Conditional Generative Adversarial Net (CGAN) to learn and simulate time series data. The conditions include both categorical and continuous variables with different auxiliary information. Our simulation studies show that CGAN has the capability to learn different types of normal and heavy-tailed distributions, as well as dependence structures of different time series. It also has the capability to generate conditional predictive distributions consistent with the training data distributions. We also provide an in-depth discussion of the rationale behind GAN and of neural networks as hierarchical splines, to establish a clear connection with existing statistical methods of distribution generation. In practice, CGAN has a wide range of applications in market risk and counterparty risk analysis: it can be applied to learn historical data and generate scenarios for the calculation of Value-at-Risk (VaR) and Expected Shortfall (ES), and it can also predict the movement of market risk factors. We present a real data analysis, including backtesting, to demonstrate that CGAN can outperform Historical Simulation (HS), a popular method in market risk analysis for calculating VaR. CGAN can also be applied in economic time series modeling and forecasting. In this regard, we have included an example of hypothetical shock analysis for economic models and the generation of potential CCAR scenarios by CGAN at the end of the paper.
Keywords: conditional generative adversarial net, market and credit risk management, neural network, time series
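Historical Simulation, the baseline the CGAN is benchmarked against, computes VaR as an empirical quantile of past losses. A minimal sketch of HS VaR and ES (our own illustration, not the authors' code):

```python
import numpy as np

def historical_var(returns, alpha=0.99):
    """Historical Simulation VaR: the empirical alpha-quantile of losses,
    where loss = -return."""
    losses = -np.asarray(returns)
    return np.quantile(losses, alpha)

def expected_shortfall(returns, alpha=0.99):
    """Expected Shortfall: the mean loss in the tail at or beyond VaR."""
    losses = -np.asarray(returns)
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()
```

A trained CGAN would instead generate conditional scenario paths and apply the same quantile statistics to the simulated, rather than historical, loss distribution.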
Procedia PDF Downloads 143
18195 Optimal Sequential Scheduling of Imperfect Maintenance Last Policy for a System Subject to Shocks
Authors: Yen-Luan Chen
Abstract:
Maintenance has a great impact on the capacity of production and on the quality of the products, and therefore, it deserves continuous improvement. A maintenance procedure done before a failure is called preventive maintenance (PM). Sequential PM, which specifies that a system should be maintained at a sequence of intervals with unequal lengths, is one of the commonly used PM policies. This article proposes a generalized sequential PM policy for a system subject to shocks, with imperfect maintenance and random working times. The shocks arrive according to a non-homogeneous Poisson process (NHPP) with a varied intensity function in each maintenance interval. As a shock occurs, the system suffers one of two types of failure with number-dependent probabilities: a type-I (minor) failure, which is rectified by a minimal repair, and a type-II (catastrophic) failure, which is removed by corrective maintenance (CM). The imperfect maintenance is carried out to improve the system failure characteristic due to the altered shock process. The sequential preventive maintenance-last (PML) policy is defined such that the system is maintained before any CM occurs, at a planned time Ti or at the completion of a working time in the i-th maintenance interval, whichever occurs last. At the N-th maintenance, the system is replaced rather than maintained. This article is the first to take up the sequential PML policy with random working times and imperfect maintenance in reliability engineering. The optimal preventive maintenance schedule that minimizes the mean cost rate of a replacement cycle is derived analytically and characterized in terms of its existence and uniqueness. The proposed models provide a general framework for analyzing maintenance policies in reliability theory.
Keywords: optimization, preventive maintenance, random working time, minimal repair, replacement, reliability
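The shock arrivals follow an NHPP with an interval-dependent intensity; such a process is commonly simulated by Lewis-Shedler thinning. The sketch below uses an illustrative linearly increasing intensity and a constant type-II probability (both are our assumptions, not values from the paper):

```python
import numpy as np

def nhpp_thinning(rate, t_max, rate_max, rng):
    """Simulate NHPP arrival times on [0, t_max] by thinning a homogeneous
    Poisson process of rate rate_max (which must dominate rate(t))."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > t_max:
            break
        if rng.random() < rate(t) / rate_max:  # accept with prob rate(t)/rate_max
            times.append(t)
    return times

rng = np.random.default_rng(2)
shocks = nhpp_thinning(lambda t: 0.5 + 0.1 * t, t_max=10.0, rate_max=1.5, rng=rng)

# Classify each shock: type-II (catastrophic, triggers CM) with probability p,
# otherwise type-I (minor, fixed by minimal repair); p = 0.1 is a placeholder.
types = ["II" if rng.random() < 0.1 else "I" for _ in shocks]
```

In the paper's model the failure-type probabilities are number-dependent rather than constant, but the simulation structure is the same.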
Procedia PDF Downloads 275
18194 Cable De-Commissioning of Legacy Accelerators at CERN
Authors: Adya Uluwita, Fernando Pedrosa, Georgi Georgiev, Christian Bernard, Raoul Masterson
Abstract:
CERN is an international organisation funded by 23 countries that provides the particle physics community with excellence in particle accelerators and other related facilities. Founded in 1954, CERN has a wide range of accelerators that allow groundbreaking science to be conducted. Accelerators bring particles to high levels of energy and make them collide with each other or with fixed targets, creating specific conditions that are of high interest to physicists. A chain of accelerators is used to ramp up the energy of particles and eventually inject them into the largest and most recent one: the Large Hadron Collider (LHC). Among this chain of machines is, for instance, the Proton Synchrotron, which was started in 1959 and is still in operation. These machines, called "injectors", keep evolving over time, as does the related infrastructure. Massive decommissioning of obsolete cables started in 2015 at CERN in the frame of the so-called "injectors de-cabling project phase 1". Its goal was to replace aging cables and remove unused ones, freeing space for new cables necessary for upgrades and consolidation campaigns. To proceed with the de-cabling, a project coordination team was assembled. The start of this project led to the investigation of legacy cables throughout the organisation. The identification of cables stacked over half a century proved to be arduous. Phase 1 of the injectors de-cabling ran for 3 years and was completed successfully after overcoming some difficulties. Phase 2, started 3 years later, focused on improving safety and structure with the introduction of a quality assurance procedure. This paper discusses the implementation of this quality assurance procedure throughout phase 2 of the project and the transition between the two phases. Hundreds of kilometres of cable were removed in the injectors complex at CERN from 2015 to 2023.
Keywords: CERN, de-cabling, injectors, quality assurance procedure
Procedia PDF Downloads 93
18193 Distributed Cost-Based Scheduling in Cloud Computing Environment
Authors: Rupali, Anil Kumar Jaiswal
Abstract:
Cloud computing can be defined as one of the prominent technologies that let a user configure, change, and access services online. It is a paradigm of computing that helps save a user's cost and time, and its applications can be found in various fields such as education, health, and banking. Cloud computing is an internet-dependent technology, so it is the major responsibility of Cloud Service Providers (CSPs) to take care of the data stored by users at data centers. Scheduling in a cloud computing environment plays a vital role: to achieve maximum utilization and user satisfaction, cloud providers need to schedule resources effectively. Job scheduling for cloud computing is analyzed in the following work. CloudSim 3.0.3 is utilized to simulate task execution and distributed scheduling methods. This research work discusses job scheduling for a distributed processing environment; by exploring this issue, we find that it works with minimum time and less cost. In this work, two load balancing techniques have been employed, 'Throttled stack adjustment policy' and 'Active VM load balancing policy', with two brokerage services, 'Advanced Response Time' and 'Reconfigure Dynamically', to evaluate the VM_Cost, DC_Cost, Response Time, and Data Processing Time. The proposed techniques are compared with the Round Robin scheduling policy.
Keywords: physical machines, virtual machines, support for repetition, self-healing, highly scalable programming model
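The Round Robin baseline that the proposed policies are compared against simply rotates task assignments across the available VMs. A minimal sketch of that rotation (our own illustration, not CloudSim code):

```python
from itertools import cycle

def round_robin_schedule(tasks, vms):
    """Assign tasks to VMs in strict rotation, returning {vm: [tasks]}."""
    assignment = {vm: [] for vm in vms}
    for task, vm in zip(tasks, cycle(vms)):  # cycle() repeats the VM list
        assignment[vm].append(task)
    return assignment

plan = round_robin_schedule([f"job{i}" for i in range(7)], ["vm0", "vm1", "vm2"])
```

Cost-aware policies like those evaluated in the paper replace this fixed rotation with assignments driven by load and cost metrics (VM_Cost, DC_Cost, response time).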
Procedia PDF Downloads 168
18192 A Non-Linear Damage Model for the Annulus of the Intervertebral Disc under Cyclic Loading, Including Recovery
Authors: Shruti Motiwale, Xianlin Zhou, Reuben H. Kraft
Abstract:
Military and sports personnel are often required to wear heavy helmets for extended periods of time. This leads to excessive cyclic loads on the neck and an increased chance of injury. Computational models offer one approach to understanding and predicting the time progression of disc degeneration under severe cyclic loading. In this paper, we have applied an analytic non-linear damage evolution model to estimate damage evolution in an intervertebral disc due to cyclic loads over decade-long time periods. We have also proposed a novel strategy for the inclusion of recovery in the damage model. Our results show that damage grows only 20% in the initial 75% of the life, growing exponentially in the remaining 25% of the life. The analysis also shows that it is crucial to include recovery in a damage model.
Keywords: cervical spine, computational biomechanics, damage evolution, intervertebral disc, continuum damage mechanics
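The paper's analytic damage law is not reproduced in the abstract; the sketch below integrates a generic nonlinear damage rate of the form dD/dN = C/(1-D)^k, which yields the slow-then-accelerating growth described, with recovery modeled as a proportional reduction during rest periods. All functional forms and constants here are illustrative assumptions, not the authors' model:

```python
import numpy as np

def damage_history(n_cycles, C=1e-6, k=4.0, rest_every=0, rest_recovery=0.0):
    """Euler integration of dD/dN = C / (1 - D)**k: damage accumulates slowly
    at first and accelerates as D -> 1. Every `rest_every` cycles, a rest
    period removes a fraction `rest_recovery` of the accumulated damage."""
    D, history = 0.0, []
    for n in range(1, n_cycles + 1):
        D = min(D + C / (1.0 - D) ** k, 0.9999)  # cap to avoid blow-up
        if rest_every and n % rest_every == 0:
            D *= 1.0 - rest_recovery             # recovery during rest
        history.append(D)
    return np.array(history)

loaded = damage_history(150_000)  # continuous cyclic loading, no recovery
rested = damage_history(150_000, rest_every=1_000, rest_recovery=0.01)
```

With these placeholder constants, the closed-form life is 1/(C(k+1)) = 200,000 cycles, and damage at 75% of life is still only about a quarter, qualitatively matching the slow-then-exponential pattern the abstract reports.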
Procedia PDF Downloads 568
18191 The Use of Boosted Multivariate Trees in Medical Decision-Making for Repeated Measurements
Authors: Ebru Turgal, Beyza Doganay Erdogan
Abstract:
Machine learning aims to model the relationship between the response and the features. Medical decision-making researchers would like to make decisions about patients' course and treatment by examining repeated measurements over time. The boosting approach is now being used in the machine learning area as an influential tool for these aims. The aim of this study is to show the usage of multivariate tree boosting in this field. The main reason for utilizing this approach in the field of decision-making is the ease with which it models complex relationships. To show how the multivariate tree boosting method can be used to identify important features and feature-time interactions, we used data collected retrospectively from the Ankara University Chest Diseases Department records. The dataset includes repeated PF ratio measurements. The follow-up time is planned for 120 hours. A set of different models is tested. In conclusion, the main idea, classification with a weighted combination of classifiers, is a reliable method, as has been shown several times with simulations. Furthermore, time-varying variables are taken into consideration within this concept, and it becomes possible to make accurate decisions about regression and survival problems.
Keywords: boosted multivariate trees, longitudinal data, multivariate regression tree, panel data
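The boosting principle the paper builds on, fitting a sequence of weak learners to the residuals of the current ensemble, can be illustrated with least-squares gradient boosting on decision stumps. This is a univariate toy version of the idea; the paper's method extends it to multivariate responses and repeated measurements:

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-split stump (threshold, left mean, right mean) by SSE."""
    best = None
    for t in np.unique(x)[:-1]:  # drop last value so the right side is nonempty
        left, right = residual[x <= t], residual[x > t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    return best[1:]

def boost(x, y, n_rounds=100, lr=0.1):
    """Gradient boosting for squared loss: each stump fits current residuals."""
    pred = np.full(len(y), y.mean())
    for _ in range(n_rounds):
        t, lv, rv = fit_stump(x, y - pred)
        pred = pred + lr * np.where(x <= t, lv, rv)
    return pred

rng = np.random.default_rng(3)
x = rng.uniform(0, 6, 200)
y = np.sin(x) + rng.normal(0, 0.1, 200)
pred = boost(x, y)
```

The shrinkage factor `lr` is what makes the ensemble a weighted combination of weak learners, the reliability of which the abstract refers to.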
Procedia PDF Downloads 203
18190 Evaluating the Capability of the Flux-Limiter Schemes in Capturing the Turbulence Structures in a Fully Developed Channel Flow
Authors: Mohamed Elghorab, Vendra C. Madhav Rao, Jennifer X. Wen
Abstract:
Turbulence modelling is still evolving, and efforts are ongoing to improve and develop numerical methods that simulate real turbulence structures by using empirical and experimental information. Monotonically integrated large eddy simulation (MILES) is an attractive approach for modelling turbulence in high-Re flows; it is based on solving the unfiltered flow equations with no explicit sub-grid scale (SGS) model. In the current work, this approach has been used, and the action of the SGS model has been included implicitly through the intrinsic nonlinear high-frequency filters built into the convection discretization schemes. The MILES solver is developed using the open-source OpenFOAM CFD libraries. The role of the flux limiter schemes, namely Gamma, superbee, van Albada, and van Leer, is studied in predicting turbulent statistical quantities for a fully developed channel flow with a friction Reynolds number Reτ = 180, and the numerical predictions are compared with well-established Direct Numerical Simulation (DNS) results for studying wall-generated turbulence. It is inferred from the numerical predictions that the Gamma, van Leer, and van Albada limiters produce more diffusion and overpredict the velocity profiles, while the superbee scheme reproduces velocity profiles and turbulence statistical quantities in good agreement with the reference DNS data in the streamwise direction, although it deviates slightly in the spanwise and wall-normal directions. The simulation results are further discussed in terms of the turbulence intensities and Reynolds stresses averaged in time and space to draw conclusions on the performance of the flux limiter schemes in the OpenFOAM context.
Keywords: flux limiters, implicit SGS, MILES, OpenFOAM, turbulence statistics
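For reference, three of the four limiters studied have simple closed forms in terms of the ratio r of consecutive solution gradients (the Gamma limiter, an OpenFOAM-specific NVD scheme, is omitted here). The TVD-clipped forms are sketched below:

```python
import numpy as np

def superbee(r):
    """Superbee limiter: the least diffusive second-order TVD limiter."""
    return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0),
                                      np.minimum(r, 2.0)))

def van_leer(r):
    """van Leer limiter: smooth and symmetric; vanishes for r <= 0."""
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def van_albada(r):
    """van Albada limiter (clipped to 0 for r <= 0 for TVD behaviour)."""
    return np.where(r > 0.0, (r ** 2 + r) / (r ** 2 + 1.0), 0.0)
```

All three satisfy φ(1) = 1, as required for second-order accuracy on smooth solutions; superbee tracks the upper bound of the TVD region, consistent with the lower diffusion observed for it in this study.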
Procedia PDF Downloads 190