Search results for: kernel density estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5287

4927 A 3D Quantum Numerical Simulation Study of HEMT Performance

Authors: A. Boursali, A. Guen-Bouazza

Abstract:

We present a simulation of a HEMT (high electron mobility transistor) structure with and without a field plate. We extract the device characteristics through the analysis of the DC, AC, and high-frequency regimes, as shown in this paper. This work identifies an optimal device with a 15 nm gate length, an InAlN/GaN heterostructure, and a field plate structure, making it superior to otherwise equivalent modern HEMTs and improving its ability to sustain the current density carried in the channel. We demonstrate an excellent current density as high as 2.05 A/m, a peak extrinsic transconductance of 0.59 S/m at VDS = 2 V, cutoff frequencies of 638 GHz for the first HEMT and 463 GHz for the field-plate HEMT, a maximum frequency of 1.7 THz, a maximum efficiency of 73%, a maximum breakdown voltage of 400 V, a leakage current density Ileak = 1 × 10⁻²⁶ A, DIBL = 33.52 mV/V, and an ON/OFF current ratio higher than 1 × 10¹⁰. These values were obtained from the simulation by applying genetic and Monte Carlo algorithms that optimize the design and point to the future of this technology.

Keywords: HEMT, silvaco, field plate, genetic algorithm, quantum

Procedia PDF Downloads 323
4926 Movie Genre Preference Prediction Using Machine Learning for Customer-Based Information

Authors: Haifeng Wang, Haili Zhang

Abstract:

Most movie recommendation systems have been developed to help customers find items of interest. This work introduces a predictive model usable by small and medium-sized enterprises (SMEs) that need a data-based, analytical approach to stock suitable movies for local audiences and retain more customers. We used classification models to extract features from thousands of customers' demographic, behavioral, and social information to predict their movie genre preferences. In the implementation, a Gaussian kernel support vector machine (SVM) classifier and a logistic regression model were built to extract features from the sample data, and their in-sample errors were compared. Out-of-sample errors were also compared under different Vapnik-Chervonenkis (VC) dimensions of the learning algorithm to detect and prevent overfitting. The Gaussian kernel SVM model correctly predicts movie genre preference in 85% of positive cases, and its accuracy increases to 93% with a smaller VC dimension and less overfitting. These findings advance our understanding of how to use a machine learning approach to predict customers' preferences from a small data set and how to design prediction tools for these enterprises.
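
As a rough illustration of the modelling approach described above, the sketch below trains an RBF (Gaussian) kernel SVM and a logistic regression on synthetic stand-in features and compares their in-sample and out-of-sample accuracy; the data, feature count, and hyperparameters are placeholders, not the authors' customer dataset or settings.

```python
# Hypothetical sketch: comparing an RBF-kernel SVM with logistic regression
# for genre-preference classification. Features and labels are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))                      # demographic/behavioral/social features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)        # 1 = prefers the target genre

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
logit = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

for name, model in [("Gaussian-kernel SVM", svm), ("Logistic regression", logit)]:
    model.fit(X_tr, y_tr)
    print(name,
          "in-sample:", round(model.score(X_tr, y_tr), 3),
          "out-of-sample:", round(model.score(X_te, y_te), 3))
```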

Keywords: computational social science, movie preference, machine learning, SVM

Procedia PDF Downloads 235
4925 Food Processing Technology and Packaging: A Case Study of the Indian Cashew-Nut Industry

Authors: Parashram Jakappa Patil

Abstract:

India is the global leader in the world cashew business, and the cashew-nut industry is one of the important food processing industries in the world. India is the largest producer, processor, exporter, and importer of cashew, supplying cashew to the rest of the world and meeting much of the world's demand. India has tremendous potential for cashew production and export to other countries, and every year it earns more than 2,000 crore rupees through the cashew trade. The cashew industry is one of the important small-scale industries in the country and plays a significant role in rural development: it generates more than 400,000 jobs in remote areas, 95% of cashew workers are women, it provides income to poor cashew farmers, most processing units are small or cottage-scale, it helps to stop the migration of young farmers in search of employment, it motivates rural entrepreneurship, and it also contributes to environmental protection. Hence, the Indian cashew business is a very important agribusiness with the potential to drive inclusive development; the World Bank and IMF have recognized the cashew-nut industry as an important tool for poverty eradication at the global level, which shows the importance of the cashew business and its strong presence in India. In spite of this huge potential, the cashew processing industry faces various problems, such as inadequate infrastructure, short supply and difficult collection of raw cashew, limited access to finance, unavailability of warehouses, marketing of cashew kernels, lack of technical knowledge, and especially processing technology and packaging of finished products. The industry also has great prospects: scope for more cashew cultivation and production, employment generation, formation of processing units, alcohol production from cashew apple, shell oil production, rural development, poverty elimination, development of socially and economically backward classes, and environmental protection. It serves domestic as well as foreign markets, and India has tremendous potential in this regard. Cashew is a poor man's crop but a rich man's food; it is a source of income and livelihood for poor farmers, and the cashew-nut industry can play a very important role in the development of hilly regions. The objectives of this paper are to identify the problems of cashew processing and the use of processing technology, the problems of cashew kernel packaging, the evolution of cashew processing technology over the years and its impact on the final product, and the impact of good processing and appropriate packaging on the international trade of cashew-nut. The most important problems of the cashew processing industry are processing and packaging. Poor processing greatly reduces the quality of the cashew kernel, in particular by breaking kernels, which fetch a much lower market price than whole kernels and are not eligible for export; without good packaging, cashew kernels absorb moisture, which destroys their taste. The international trade of cashew-nut therefore depends on two things: processing and packaging. This study has strong relevance because the cashew-nut industry is labour-oriented; processing technology has so far played a limited role because 95% of the processing work is manual, so processing depends on the physical performance of workers and a large workforce is indispensable. Many cashew processing units have closed because they cannot find a sufficient workforce. However, with advances in technology this picture is slowly changing and processing work is improving. It is therefore interesting to explore all aspects of cashew processing and the packaging side of the cashew business.

Keywords: cashew, processing technology, packaging, international trade, change

Procedia PDF Downloads 394
4924 An Energy Detection-Based Algorithm for Cooperative Spectrum Sensing in a Rayleigh Fading Channel

Authors: H. Bakhshi, E. Khayyamian

Abstract:

Cognitive radios have been recognized as one of the most promising technologies for dealing with the scarcity of the radio spectrum. In cognitive radio systems, secondary users are allowed to utilize the frequency bands of primary users when the bands are idle; hence, how to accurately detect idle frequency bands has attracted many researchers' interest. Detection performance is sensitive to noise power and gain fluctuations. Since the signal-to-noise ratio (SNR) between the primary user and each secondary user differs and changes over time, SNR and noise power estimation is essential. In this paper, we present a cooperative spectrum sensing algorithm that uses SNR estimation to improve detection performance under realistic conditions.
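
The following minimal sketch (assumed SNR, sample size, and false-alarm target, not the authors' algorithm) shows the basic energy-detection step for a single secondary user over a simulated Rayleigh fading channel; a cooperative scheme would fuse such local statistics or decisions from several users.

```python
# Energy detector sketch for one secondary user over a Rayleigh fading channel.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
N = 1000                 # samples per sensing interval
snr_db = -5.0            # assumed average SNR at the secondary user
noise_power = 1.0
signal_power = noise_power * 10 ** (snr_db / 10)

h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)      # Rayleigh fading
s = np.sqrt(signal_power) * np.exp(1j * 2 * np.pi * rng.random(N))   # primary-user signal
w = np.sqrt(noise_power / 2) * (rng.normal(size=N) + 1j * rng.normal(size=N))

y = h * s + w                              # received samples when the PU is active
T = np.mean(np.abs(y) ** 2)                # energy test statistic

# Threshold for a target false-alarm probability under noise only (CLT approximation)
pfa = 0.1
threshold = noise_power * (1 + norm.ppf(1 - pfa) / np.sqrt(N))

print("decision:", "PU present" if T > threshold else "band idle")
```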

Keywords: cognitive radio, cooperative spectrum sensing, energy detection, SNR estimation, spectrum sensing, rayleigh fading channel

Procedia PDF Downloads 425
4923 Localising Gauss’s Law and the Electric Charge Induction on a Conducting Sphere

Authors: Sirapat Lookrak, Anol Paisal

Abstract:

Space debris comes in numerous forms, including ferrous and non-ferrous materials. An external electric field induces negative charges to separate from positive charges inside the space debris. In this research, we focus only on conducting materials. The assumption is that the electric charge density on a conducting surface is proportional to the electric field at that surface, in accordance with Gauss's law. We seek the induced charge density produced by an external electric field perpendicular to a conducting spherical surface. The object is a sphere over which the external electric field is not uniform, so the electric field is considered locally. The localised spherical surface is treated as a tangent plane, the Gaussian surface is a very small cylinder, and every point on the spherical surface has its own cylinder. The electric field from a circular electrode has been calculated in the near-field and far-field approximations, with a view to explaining touchless maneuvering of space debris and its orbit properties, and the electric charge density is calculated from both approximations.
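
For reference, the local "pillbox" form of Gauss's law that the abstract relies on can be written as below, where E⊥ is the field just outside the conducting surface, σ the induced surface charge density, and ΔA the cap area of the small Gaussian cylinder; this is the standard textbook result, stated here only to make the localisation argument explicit.

```latex
% Pillbox form of Gauss's law at a conducting surface (standard result).
\oint \vec{E}\cdot d\vec{A} \;=\; \frac{q_{\mathrm{enc}}}{\varepsilon_0}
\quad\Longrightarrow\quad
E_{\perp}\,\Delta A \;=\; \frac{\sigma\,\Delta A}{\varepsilon_0}
\quad\Longrightarrow\quad
\sigma \;=\; \varepsilon_0\,E_{\perp}.
```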

Keywords: near-field approximation, far-field approximation, localized Gauss’s law, electric charge density

Procedia PDF Downloads 96
4922 Design of a Real Time Closed Loop Simulation Test Bed on a General Purpose Operating System: Practical Approaches

Authors: Pratibha Srivastava, Chithra V. J., Sudhakar S., Nitin K. D.

Abstract:

A closed-loop system comprises a controller, a response system, and an actuating system. The controller, which is the system under test here, excites the actuators periodically based on feedback from the sensors. The sensors should provide feedback to the System Under Test (SUT) within a deterministic time after excitation of the actuators; any delay or miss in the generation of the response or the acquisition of excitation pulses may lead to controller computation errors, which can be catastrophic in certain cases. Such systems are categorised as hard real-time systems and need special strategies. The real-time operating systems available in the market may be the best solutions for such simulations, but they have limitations such as the lack of the X Window System, graphical interfaces, and other user tools. In this paper, we present strategies that can be used on a general purpose operating system (a bare Linux kernel) to achieve deterministic deadlines, thus retaining the advantages of a GPOS while gaining real-time features. We discuss techniques for running the time-critical application uninterrupted at the highest priority, reducing network latency in a distributed architecture, and handling real-time data acquisition, data storage and retrieval, user interaction, etc.
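
A hedged sketch of one of the techniques alluded to above: promoting the time-critical loop to the real-time FIFO scheduling class on a stock Linux kernel and pinning it to one core. The priority, core number, and loop period are assumptions for illustration, and the calls require root (or CAP_SYS_NICE).

```python
# Sketch: run a periodic control loop under SCHED_FIFO on a stock Linux kernel.
import os
import time

def enter_realtime(priority: int = 80) -> None:
    # SCHED_FIFO: the task runs until it blocks or yields, preempting
    # ordinary SCHED_OTHER tasks. Priority 80 is an assumed example value.
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
    os.sched_setaffinity(0, {1})          # pin to an (assumed) isolated core

def control_loop(period_s: float = 0.001) -> None:
    next_wakeup = time.monotonic()
    while True:
        # ... read sensors, compute excitation, drive actuators ...
        next_wakeup += period_s
        delay = next_wakeup - time.monotonic()
        if delay > 0:
            time.sleep(delay)             # a negative delay means a missed deadline

if __name__ == "__main__":
    enter_realtime()
    control_loop()
```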

Keywords: real time data acquisition, real time kernel preemption, scheduling, network latency

Procedia PDF Downloads 107
4921 Utilization of Sphagnum Moss as a Jeepney Emission Filter for Smoke Density Reduction

Authors: Monique Joyce L. Disamburum, Nicole C. Faustino, Ashley Angela A. Fazon, Jessie F. Rubonal

Abstract:

Traditional jeepneys contribute significantly to air pollution in the Philippines, negatively affecting both the environment and people. In response, the researchers investigated Sphagnum moss, which has high adsorbent properties and can be used as a filter. This research therefore aims to create a muffler filter additive to reduce the smoke density emitted by traditional jeepneys. Various materials, such as moss, cornstarch, a metal pipe, bolts, and a papermaking screen frame, were gathered. The moss was blended with a cornstarch mixture until it reached a pulp-like consistency, then molded using the papermaking screen frame and left to sun-dry. A metal prototype was then created by drilling holes around the tumbler and inserting bolts, and the mesh wire containing the filter was carefully placed into the hole and secured by two bolts. In the final phase, there were three setups, each undergoing one trial in LTO emission testing; each trial consisted of six rounds of purging, after which the average smoke density was measured. According to the findings of this study, the filter helped lower the average smoke density: the one-layer setup produced an average of 1.521, whereas the two-layer setup produced an average of 1.082. One-way ANOVA demonstrated a significant difference between the setups, and the Tukey HSD post hoc test revealed that Setups A and C differed significantly (p = 0.04604), with Setup C being the most successful in reducing smoke density (mean difference -1.4128). Overall, the researchers concluded that employing Sphagnum moss as a filter can lower the average smoke density released by traditional jeepneys.
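
The statistical comparison described above can be reproduced in outline as follows; the smoke-density readings are invented stand-ins (the study reports only the averages), so the sketch only illustrates the One-Way ANOVA plus Tukey HSD workflow.

```python
# Illustrative One-Way ANOVA and Tukey HSD comparison (fabricated readings).
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

setup_a = np.array([2.61, 2.43, 2.55, 2.48, 2.52, 2.58])   # no filter
setup_b = np.array([1.60, 1.48, 1.55, 1.47, 1.52, 1.51])   # one layer
setup_c = np.array([1.12, 1.05, 1.10, 1.04, 1.09, 1.09])   # two layers

f_stat, p_value = f_oneway(setup_a, setup_b, setup_c)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

values = np.concatenate([setup_a, setup_b, setup_c])
groups = ["A"] * 6 + ["B"] * 6 + ["C"] * 6
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```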

Keywords: sphagnum moss, Jeepney filter, smoke density, Jeepney emission

Procedia PDF Downloads 19
4920 Methods of Variance Estimation in Two-Phase Sampling

Authors: Raghunath Arnab

Abstract:

Two-phase sampling, also known as double sampling, was introduced in 1938. In two-phase sampling, samples are selected in phases. In the first phase, a relatively large sample is selected by some suitable sampling design and only information on the auxiliary variable is collected. In the second phase, a smaller sample is selected, either from the first-phase sample or from the entire population, using a suitable sampling design, and information on both the study and auxiliary variables is collected. Evidently, two-phase sampling is useful if the auxiliary information is relatively easy and cheap to collect compared with the study variable, and if the relationship between the study and auxiliary variables is strong. If the sample is selected in more than two phases, the resulting design is called multi-phase sampling. In this article, we consider how data collected in the first phase can be used for parameter estimation, stratification, sample selection, and their combinations in the second phase, in a unified setup applicable to any sampling design and to wide classes of estimators. The problem of variance estimation is also considered. The variance of an estimator is essential for assessing the precision of survey estimates, calculating confidence intervals, determining optimal sample sizes, and testing hypotheses, among other uses. Although the variance is a non-negative quantity, its estimators may not be non-negative; if an estimator of variance is negative, it cannot be used for constructing confidence intervals, testing hypotheses, or measuring sampling error. The non-negativity properties of the variance estimators are therefore also studied in detail.
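
As one standard illustration of how first-phase information enters second-phase estimation (Cochran-style notation assumed here, not necessarily the class of estimators treated in the article), the double-sampling ratio estimator of the population mean and its approximate variance are:

```latex
% Double-sampling (two-phase) ratio estimator and approximate variance:
% n' first-phase units, n second-phase units, N population size,
% R the population ratio of the study mean to the auxiliary mean.
\hat{\bar{Y}}_{rd} \;=\; \frac{\bar{y}}{\bar{x}}\,\bar{x}', \qquad
V\!\left(\hat{\bar{Y}}_{rd}\right) \;\approx\;
\left(\frac{1}{n}-\frac{1}{n'}\right)\!\left(S_y^{2} + R^{2}S_x^{2} - 2R\,S_{yx}\right)
\;+\;\left(\frac{1}{n'}-\frac{1}{N}\right)S_y^{2}.
```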

Keywords: auxiliary information, two-phase sampling, varying probability sampling, unbiased estimators

Procedia PDF Downloads 562
4919 Generation and Diagnostics of Atmospheric Pressure Dielectric Barrier Discharge in Argon/Air

Authors: R. Shrestha, D. P. Subedi, R. B. Tyata, C. S. Wong

Abstract:

In this paper, a technique for determining electron temperatures and electron densities in an atmospheric pressure argon/air discharge by analysis of optical emission spectra (OES) is reported. The discharge was produced using a high-voltage (0-20 kV) power supply operating at a frequency of 27 kHz in a parallel-electrode system with glass as the dielectric. The dielectric layers covering the electrodes act as current limiters and prevent the transition to an arc discharge. Optical emission spectra in the range 300-850 nm were recorded for the discharge at different inter-electrode gaps, keeping the electric field constant. The electron temperature (Te) and electron density (ne) are estimated from electrical and optical methods. The electron density was calculated using the power balance method. The optical method is based on the line intensity ratio obtained from the relative intensities of Ar I and Ar II lines in the argon plasma; the electron density calculated by the line intensity ratio method was compared with that calculated by the Stark broadening method. The effect of dielectric thickness on the plasma parameters (Te and ne) has also been studied, and it was found that Te and ne increase as the dielectric thickness decreases for the same inter-electrode distance and applied voltage.

Keywords: electron density, electron temperature, optical emission spectra

Procedia PDF Downloads 467
4918 Frequency Selective Filters for Estimating the Equivalent Circuit Parameters of Li-Ion Battery

Authors: Arpita Mondal, Aurobinda Routray, Sreeraj Puravankara, Rajashree Biswas

Abstract:

The most difficult part of designing a battery management system (BMS) is battery modeling. A good battery model captures the dynamics of the cell, which supports energy management through accurate model-based state estimation algorithms. So far the most suitable and fruitful model is the equivalent circuit model (ECM). However, in real-time applications the model parameters are time-varying, changing with current, temperature, state of charge (SOC), and aging of the battery, and this has a great impact on model performance. Therefore, to improve the performance of the equivalent circuit model, the parameter estimation has been carried out in the frequency domain. A battery is a very complex system involving various chemical reactions and heat generation, so it is difficult to select the optimal model structure. In general, increasing the model order improves accuracy, but a higher-order model tends towards over-parameterization and poor prediction capability, while the model complexity increases enormously. In the time domain, it also becomes difficult to solve higher-order differential equations as the model order increases. This problem can be alleviated by frequency-domain analysis, where the computational problems caused by ill-conditioning are reduced. In the frequency domain, several dominant frequencies can be found in the input as well as the output data. Selective frequency-domain estimation has been carried out: first, the frequencies of the input and output are estimated by subspace decomposition; then, specific bands are chosen from the most dominant to the least, while carrying out least-squares, recursive least-squares, and Kalman filter based parameter estimation. In this paper, a second-order battery model consisting of three resistors, two capacitors, and one SOC-controlled voltage source has been chosen. For model identification and validation, hybrid pulse power characterization (HPPC) tests have been carried out on a 2.6 Ah LiFePO₄ battery.
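
A generic recursive least squares (RLS) sketch of the kind applied here to equivalent-circuit parameters is given below; the first-order ARX regressor layout, forgetting factor, and toy excitation are assumptions for illustration rather than the authors' second-order model or HPPC data.

```python
# Generic recursive least squares (RLS) with a forgetting factor.
import numpy as np

def rls(phi, y, n_params, lam=0.999):
    """phi: (T, n_params) regressors, y: (T,) measured outputs."""
    theta = np.zeros(n_params)
    P = np.eye(n_params) * 1e3              # large initial covariance
    for k in range(len(y)):
        x = phi[k]
        K = P @ x / (lam + x @ P @ x)       # gain vector
        theta = theta + K * (y[k] - x @ theta)
        P = (P - np.outer(K, x @ P)) / lam
    return theta

# Toy identification: y_k = a*y_{k-1} + b0*i_k + b1*i_{k-1} + noise
rng = np.random.default_rng(0)
T = 2000
i = rng.normal(size=T)                      # current excitation (e.g. pulse-like)
a_true, b0_true, b1_true = 0.95, 0.02, -0.015
y = np.zeros(T)
for k in range(1, T):
    y[k] = a_true * y[k-1] + b0_true * i[k] + b1_true * i[k-1] + 1e-4 * rng.normal()

phi = np.column_stack([np.r_[0, y[:-1]], i, np.r_[0, i[:-1]]])
print("estimated [a, b0, b1]:", rls(phi, y, 3).round(4))
```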

Keywords: equivalent circuit model, frequency estimation, parameter estimation, subspace decomposition

Procedia PDF Downloads 108
4917 Effect of Irrigation Regime and Plant Density on Chickpea (Cicer arietinum L.) Yield in a Semi-Arid Environment

Authors: Atif Naim, Faisal E. Ahmed, Sershen

Abstract:

A field experiment was conducted for two consecutive winter seasons at the Demonstration Farm of the Faculty of Agriculture, University of Khartoum, Sudan, to study the effects of different levels of irrigation regime and plant density on the yield of an introduced small-seeded (desi type) chickpea cultivar (ILC 482). The experiment was laid out in a 3 x 3 factorial split-plot design with four replications. The treatments consisted of three irrigation regimes (I1 = optimum irrigation, I2 = moderate stress, and I3 = severe stress, corresponding to irrigation after drainage of 50%, 75%, and 100% of available water based on 70%, 60%, and 50% of field capacity, respectively) assigned to main plots, and three plant densities (D₁ = 20, D₂ = 40 and D₃ = 60 plants/m²) assigned to subplots. The results indicated that the yield components (number of pods per plant, number of seeds per pod, 100-seed weight), seed yield per plant, harvest index, and yield per unit area of chickpea were significantly (p < 0.05) affected by irrigation regime: reducing irrigation significantly (p < 0.05) decreased all measured parameters. Conversely, increasing plant density significantly (p < 0.05) decreased the number of pods and seed yield per plant and increased seed yield per unit area, while the number of seeds per pod and harvest index were not significantly (p > 0.05) affected by plant density. The interaction between irrigation regime and plant density also significantly (p < 0.05) affected all measured yield parameters except harvest index. It is concluded that the best irrigation regime was full irrigation (after drainage of 50% of available water at 70% field capacity) and the optimal plant density was 20 plants/m² under semi-arid conditions.

Keywords: irrigation regime, Cicer arietinum, chickpea, plant density

Procedia PDF Downloads 195
4916 Lead-Time Estimation Approach Using the Process Capability Index

Authors: Abdel-Aziz M. Mohamed

Abstract:

This research proposes a methodology for estimating the customer order lead time in a supply chain based on the process capability index. Both the case in which the process output is normally distributed and the case in which it is not are considered. The relationships between the system capability indices in service and manufacturing applications, delivery system reliability, and the percentage of orders delivered after their promised due dates are presented. The proposed method can be used to examine whether the current process is capable of delivering orders within the promised lead time. If the system is found to be incapable, the method can help revise the current lead time to a proper value according to the service reliability level selected by management. Numerical examples and a case study describing the lead-time estimation methodology and testing the system's capability to deliver orders before their promised due dates are presented.
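
A hedged numerical sketch of the idea: if the delivery time is approximately normal, the probability of meeting the current promise, a one-sided capability-style index, and the lead time consistent with a target service level can be computed directly; all figures below are invented for illustration and do not come from the paper's case study.

```python
# Lead-time check under an assumed normal delivery-time distribution.
from scipy.stats import norm

mu, sigma = 12.0, 2.5        # mean and std. dev. of delivery time (days), assumed
promised = 14.0              # current promised lead time (days)
target_service = 0.95        # fraction of orders to deliver on time

# Probability of meeting the current promise and a one-sided capability-style index
on_time = norm.cdf(promised, mu, sigma)
cap_index = (promised - mu) / (3 * sigma)
print(f"on-time probability: {on_time:.3f}, one-sided capability index: {cap_index:.3f}")

# Lead time that would achieve the target service level
revised = mu + norm.ppf(target_service) * sigma
print(f"lead time for {target_service:.0%} service: {revised:.1f} days")
```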

Keywords: lead-time estimation, process capability index, delivery system reliability, statistical analysis, service achievement index, service quality

Procedia PDF Downloads 533
4915 A Study on the Performance of 2-PC-D Classification Model

Authors: Nurul Aini Abdul Wahab, Nor Syamim Halidin, Sayidatina Aisah Masnan, Nur Izzati Romli

Abstract:

The principal component method has many applications for reducing large sets of variables in various fields, and Fisher's discriminant function is a popular tool for classification. In this research, the researcher studies the performance of a principal component-Fisher's discriminant function in classifying rice kernels into their defined classes. The data were collected on the smell or odour of the rice kernels using an odour-detection sensor, the Cyranose; 32 variables were captured by this electronic nose (e-nose). The objective of this research is to measure how well a combined principal component and linear discriminant model performs as a classification model. The principal component method was used to reduce the 32 variables to a smaller, more manageable set of components, and the reduced components were then used to develop the Fisher's discriminant function. In this research, there are four defined classes of rice kernel: Aromatic, Brown, Ordinary, and Others. Based on the output of the principal component method, the 32 variables were reduced to only two components. Based on the classification table from the discriminant analysis, 40.76% of the total observations were correctly classified into their classes by the PC-discriminant function; in other words, the classification model misclassifies more than half of the observations. In conclusion, the Fisher's discriminant function built on two components from PCA (2-PC-D) is not satisfactory for classifying the rice kernels into their defined classes.
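
The 2-PC-D pipeline can be sketched as below, with synthetic stand-ins for the 32 Cyranose channels and the four rice-kernel classes; the point is only the structure (PCA to two components followed by Fisher's linear discriminant), not the study's data or its reported 40.76% rate.

```python
# PCA to two components followed by Fisher's linear discriminant (synthetic data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_sensors = 50, 32
classes = ["aromatic", "brown", "ordinary", "others"]
X = np.vstack([rng.normal(loc=i, scale=3.0, size=(n_per_class, n_sensors))
               for i in range(len(classes))])
y = np.repeat(classes, n_per_class)

two_pc_d = make_pipeline(PCA(n_components=2), LinearDiscriminantAnalysis())
acc = cross_val_score(two_pc_d, X, y, cv=5).mean()
print(f"cross-validated classification rate with 2 components: {acc:.2%}")
```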

Keywords: classification model, discriminant function, principal component analysis, variable reduction

Procedia PDF Downloads 305
4914 The Comparison of Joint Simulation and Estimation Methods for Geometallurgical Modeling

Authors: Farzaneh Khorram

Abstract:

This paper endeavors to construct a block model to assess grinding energy consumption (CCE) and to pinpoint the blocks with the highest potential energy usage during grinding within a specified region. Leveraging geostatistical techniques, particularly joint estimation or simulation based on geometallurgical data from various mineral processing stages, our objective is to forecast CCE across the study area. The dataset comprises variables obtained from 2754 drill samples and a block model of 4680 blocks. The initial analysis encompassed exploratory data examination, variography, multivariate analysis, and the delineation of geological and structural units. Subsequent analysis involved assessing the contacts between these units and estimating CCE via cokriging, considering its correlation with SPI. Selecting the blocks exhibiting maximum CCE is of paramount importance for cost estimation, production planning, and risk mitigation. Exploratory data analysis of the lithology, rock type, and failure variables revealed seamless boundaries between geometallurgical units. Simulation methods such as plurigaussian and turning bands produced more realistic outcomes than cokriging, owing to the inherent characteristics of geometallurgical data and the limitations of kriging methods.

Keywords: geometallurgy, multivariate analysis, plurigaussian, turning band method, cokriging

Procedia PDF Downloads 19
4913 First-Principles Study of Electronic and Optical Properties of YNi₄Si-Type HoNi₄Si Compound

Authors: D. K. Maurya, S. M. Saini

Abstract:

We investigate theoretically the electronic and optical properties of the YNi₄Si-type HoNi₄Si compound using first-principles calculations. The calculations are performed with the full-potential linearized augmented plane wave (FP-LAPW) method in the framework of density functional theory (DFT). The Coulomb-corrected local spin density approximation (LSDA+U) with the self-interaction correction (SIC) has been used for the exchange-correlation potential. Analysis of the calculated band structure of the HoNi₄Si compound demonstrates its metallic character. We find that the Ni 3d states contribute mainly to the density of states from -5.0 eV up to the Fermi level, while above the Fermi level the Ho f peak stands tall in comparison with the small contributions of the Ni d and Ho d states, consistent with experiment. Our calculated optical conductivity compares well with the experimental data, and the results are analyzed in terms of band-to-band transitions.

Keywords: electronic properties, density of states, optical properties, LSDA+U approximation, YNi₄Si-type HoNi₄Si compound

Procedia PDF Downloads 211
4912 Habitat Preference of Lepidoptera (Butterflies) Using Geospatial Analysis in Diyasaru Wetland Park, Western Province, Sri Lanka

Authors: Hiripurage Mallika Sandamali Dissanayaka

Abstract:

Butterflies are found almost everywhere on Earth and help flowering plants reproduce through pollination, while wetlands perform many valuable functions, such as providing wildlife habitat. Diyasaru Wetland Park, located in a highly urbanized area of Sri Jayawardenepura Kotte, Sri Lanka, was chosen as the study site. A distribution map was prepared to support increasing butterfly habitat in this urbanized area, and research was conducted to determine the most suitable sections for doing so. As the wetland has walking footpaths, line transect surveys were used to record species within the sampling area, and directly observed species were noted. All data collection was done from 0900 to 1200 hours and from 1300 to 1600 hours, and fieldwork ran from 11 February 2020 to 20 January 2021. ED binoculars (10.5x45), DSLR cameras (Canon EOS/EFS5 mm 3.5-5.6), and a Garmin GPS (Etrex 10) were used to observe butterfly species, record locations, and take photographs as evidence. Habitats were analyzed using GIS (ArcGIS Pro) to identify the distribution within the park premises: the distribution density of the recorded population was calculated at each point by kernel density estimation, local similarity values were calculated for each pair of corresponding features through hotspot analysis, and cell values were interpolated by inverse distance weighting (IDW) using a linearly weighted combination of sample points. According to the maps prepared to predict the distribution of butterflies in the park, the areas of highest distribution, or most favorable areas, were near flower gardens and meadows, although some individual species prefer habitats more suited to their particular life activities and therefore occupy other areas. Sixty-six (66) species belonging to six (6) families were recorded on the premises: sixty (60) species of least concern (LC), two (2) near threatened (NT), and four (4) vulnerable (VU), and several new records, such as the Plum Judy (Abisara echerius), were reported. The outcome of the study will form the basis for decision-making by the Sri Lanka Land Development (SLLD) Corporation for the future development and maintenance of the park.
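
The kernel-density step can be illustrated in miniature as follows; the coordinates are fabricated, and in the study itself this role is played by ArcGIS Pro's Kernel Density tool rather than the SciPy estimator used here.

```python
# Kernel density estimation of a butterfly-sighting surface from point records.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Fake easting/northing of sightings (two clusters, e.g. flower garden and meadow)
pts = np.vstack([rng.normal([200, 300], 25, size=(80, 2)),
                 rng.normal([450, 150], 40, size=(60, 2))]).T     # shape (2, n)

kde = gaussian_kde(pts)                      # bandwidth via Scott's rule

# Evaluate the density on a grid covering the park extent
xg, yg = np.meshgrid(np.linspace(0, 600, 120), np.linspace(0, 500, 100))
density = kde(np.vstack([xg.ravel(), yg.ravel()])).reshape(xg.shape)

hot = np.unravel_index(density.argmax(), density.shape)
print("densest cell (x, y):", xg[hot], yg[hot])
```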

Keywords: wetland, Lepidoptera, habitat, urban, west

Procedia PDF Downloads 17
4911 Formulating a Flexible-Spread Fuzzy Regression Model Based on Dissemblance Index

Authors: Shih-Pin Chen, Shih-Syuan You

Abstract:

This study proposes a regression model with flexible spreads for fuzzy input-output data to cope with situations in which existing measures cannot reflect the actual estimation error. The main idea is that a dissemblance index (DI) is carefully identified and defined to precisely measure the actual estimation error. Moreover, the graded mean integration (GMI) representation is adopted to determine more representative numeric regression coefficients. Notably, to comprehensively compare the performance of the proposed model with that of other models, three different criteria are adopted. Results from commonly used numerical test examples and an application to Taiwan's business monitoring indicator illustrate that the proposed dissemblance index method not only produces valid fuzzy regression models for fuzzy input-output data but also delivers satisfactory and stable performance in terms of the total estimation error under these three criteria.

Keywords: dissemblance index, forecasting, fuzzy sets, linear regression

Procedia PDF Downloads 330
4910 TACTICAL: RAM Image Retrieval in Linux Using Protected Mode Architecture's Paging Technique

Authors: Sedat Aktas, Egemen Ulusoy, Remzi Yildirim

Abstract:

This article explains how to acquire a RAM image from a computer running a Linux operating system and which steps should be followed to obtain it. By taking a RAM image, we mean the process of dumping physical memory at a given instant and writing it to a file; the process can be likened to taking a picture of everything in the computer's memory at that moment. This step is essential for tools that analyze RAM images, Volatility being one example, because an image must be acquired before such tools can analyze memory. These tools are used extensively in the forensic world. Digital forensics, in turn, is a set of processes for examining the information on any computer or server on behalf of official authorities. In this article, the protected mode architecture of the Linux operating system is examined, and the procedure for saving an image of system memory to disk through a kernel driver is followed. The tables and access methods used by the operating system are examined on the basis of its underlying architecture, and the most appropriate methods and their application are presented. Since the literature contains no article directly addressing this subject on Linux, this study aims to contribute to the literature on obtaining RAM images. LiME can be mentioned as a similar tool, but there is no published explanation of its memory-dumping method. Considering how frequently these tools are used and how intensively RAM images are studied in the field of forensics, contributing to this field has been the main motivation of the study.
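
As a hedged illustration (not the authors' TACTICAL tool), the helper below enumerates the physical "System RAM" ranges listed in /proc/iomem, which is the kind of range information a Linux memory-acquisition kernel driver has to walk; reading the real addresses requires root.

```python
# List the physical "System RAM" ranges exposed by /proc/iomem on Linux.
# Non-root reads return zeroed addresses, so run as root for real values.
from typing import List, Tuple

def system_ram_ranges(path: str = "/proc/iomem") -> List[Tuple[int, int]]:
    ranges = []
    with open(path) as fh:
        for line in fh:
            if line.strip().endswith("System RAM"):
                span = line.split(":")[0].strip()          # e.g. "00100000-bfffffff"
                start, end = (int(x, 16) for x in span.split("-"))
                ranges.append((start, end))
    return ranges

if __name__ == "__main__":
    for start, end in system_ram_ranges():
        print(f"{start:#014x}-{end:#014x}  {(end - start + 1) // (1 << 20)} MiB")
```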

Keywords: linux, paging, addressing, ram-image, memory dumping, kernel modules, forensic

Procedia PDF Downloads 77
4909 A Multi-Scale Approach to Space Use: Habitat Disturbance Alters Behavior, Movement and Energy Budgets in Sloths (Bradypus variegatus)

Authors: Heather E. Ewart, Keith Jensen, Rebecca N. Cliffe

Abstract:

Fragmentation and changes in the structural composition of tropical forests – as a result of intensifying anthropogenic disturbance – are increasing pressures on local biodiversity. Species with low dispersal abilities have some of the highest extinction risks in response to environmental change, as even small-scale environmental variation can substantially impact their space use and energetic balance. Understanding the implications of forest disturbance is therefore essential, ultimately allowing for more effective and targeted conservation initiatives. Here, the impact of different levels of forest disturbance on the space use, energetics, movement and behavior of 18 brown-throated sloths (Bradypus variegatus) was assessed in the South Caribbean of Costa Rica. A multi-scale framework was used to measure forest disturbance, including large-scale (landscape-level classifications) and fine-scale (within and surrounding individual home ranges) forest composition. Three landscape-level classifications were identified: primary forests (undisturbed), secondary forests (some disturbance, regenerating) and urban forests (high levels of disturbance and fragmentation). Finer-scale forest composition was determined using measurements of habitat structure and quality within and surrounding individual home ranges for each sloth (home range estimates were calculated using autocorrelated kernel density estimation [AKDE]). Measurements of forest quality included tree connectivity, density, diameter and height, species richness, and percentage of canopy cover. To determine space use, energetics, movement and behavior, six sloths in urban forests, seven sloths in secondary forests and five sloths in primary forests were tracked using a combination of Very High Frequency (VHF) radio transmitters and Global Positioning System (GPS) technology over an average period of 120 days. All sloths were also fitted with micro data-loggers (containing tri-axial accelerometers and pressure loggers) for an average of 30 days to allow for behavior-specific movement analyses (data analysis ongoing for data-loggers and primary forest sloths). Data-logger analyses include the determination of activity budgets, circadian rhythms of activity, and energy expenditure (using the vectorial dynamic body acceleration [VeDBA] as a proxy). Analyses to date indicate that home range size significantly increased with the level of forest disturbance. Female sloths inhabiting secondary forests averaged 0.67-hectare home ranges, while female sloths inhabiting urban forests averaged 1.93-hectare home ranges (estimates are represented by median values to account for the individual variation in home range size in sloths). Likewise, home range estimates for male sloths were 2.35 hectares in secondary forests and 4.83 in urban forests. Sloths in urban forests also used nearly double the number of trees (median = 22.5) compared with sloths in secondary forests (median = 12). These preliminary data indicate that forest disturbance likely heightens the energetic requirements of sloths, a species already critically limited by low dispersal ability and rates of energy acquisition. Energetic and behavioral analyses from the data-loggers will be considered in the context of fine-scale forest composition measurements (i.e., habitat quality and structure) and are expected to reflect the observed home range and movement constraints.
The implications of these results are far-reaching, presenting an opportunity to define a critical index of habitat connectivity for low dispersal species such as sloths.
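
The VeDBA proxy mentioned above can be sketched as follows: the static (gravitational) component is removed from each accelerometer axis with a running mean, and VeDBA is the vector norm of the three dynamic components. The sampling rate, smoothing window, and toy signal are assumed values, not the data-logger settings used in the study.

```python
# VeDBA from tri-axial accelerometer data (running-mean removal of the static component).
import numpy as np

def vedba(ax, ay, az, fs=25, window_s=2.0):
    """ax, ay, az: raw accelerometer axes in g; fs: sampling rate in Hz."""
    w = max(1, int(fs * window_s))
    kernel = np.ones(w) / w
    dyn = []
    for axis in (ax, ay, az):
        static = np.convolve(axis, kernel, mode="same")   # running mean (static part)
        dyn.append(axis - static)                          # dynamic component
    return np.sqrt(dyn[0] ** 2 + dyn[1] ** 2 + dyn[2] ** 2)

# Toy usage: one minute of synthetic data sampled at 25 Hz
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / 25)
ax = 0.05 * rng.normal(size=t.size)
ay = 0.05 * rng.normal(size=t.size) + 0.2 * np.sin(2 * np.pi * 0.5 * t)  # slow movement
az = 1.0 + 0.05 * rng.normal(size=t.size)                                # gravity on z
print("mean VeDBA (g):", vedba(ax, ay, az).mean().round(3))
```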

Keywords: biodiversity conservation, forest disturbance, movement ecology, sloths

Procedia PDF Downloads 72
4908 Thermal Radiation Effect on Mixed Convection Boundary Layer Flow over a Vertical Plate with Varying Density and Volumetric Expansion Coefficient

Authors: Sadia Siddiqa, Z. Khan, M. A. Hossain

Abstract:

In this article, the effect of thermal radiation on mixed convection boundary layer flow of a viscous fluid along a highly heated vertical flat plate is considered with varying density and volumetric expansion coefficient. The density of the fluid is assumed to vary exponentially with temperature, whereas the volumetric expansion coefficient depends linearly on temperature. The boundary layer equations are transformed into a convenient form by introducing primitive variable formulations. Solutions of the transformed system of equations are obtained numerically through an implicit finite difference method along with the Gaussian elimination technique. Results are discussed for various parameters, such as the thermal radiation parameter, the volumetric expansion parameter, and the density variation parameter, in terms of the wall shear stress and heat transfer rate. It is concluded from the present investigation that an increase in the volumetric expansion parameter decreases the wall shear stress and enhances the heat transfer rate.

Keywords: thermal radiation, mixed convection, variable density, variable volumetric expansion coefficient

Procedia PDF Downloads 345
4907 Phillips Curve Estimation in an Emerging Economy: Evidence from Sub-National Data of Indonesia

Authors: Harry Aginta

Abstract:

Using the Phillips curve framework, this paper seeks new empirical evidence on the relationship between inflation and output in a major emerging economy. By exploiting sub-national data, the contribution of this paper is threefold. First, it resolves the issue of using on-target national inflation rates, which potentially weakens the inflation-output nexus; this is very relevant for Indonesia, as its central bank has adopted an inflation targeting framework based on national consumer price index (CPI) inflation. Second, the study tests the relevance of the mining sector in output gap estimation; this test is important for controlling the effects of mining regulation and the nominal effects of coal prices on real economic activity. Third, the paper applies a panel econometric method, incorporating regional variation that helps improve model estimation. The results confirm the strong presence of a Phillips curve in Indonesia: a positive output gap, reflecting excess demand, raises inflation, and the elasticity with respect to the output gap is higher when the mining sector is excluded from the output gap estimation. In addition to inflation adaptation, the dynamics of the exchange rate and international commodity prices are also found to affect inflation significantly. The results are robust to an alternative measurement of the output gap.
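
A generic regional (panel) Phillips-curve specification of the kind described above is shown below, where π_it is regional inflation, ỹ_it the regional output gap, Δe_t the change in the exchange rate, and Δp^com_t international commodity price inflation; the author's exact regressors and lag structure may differ.

```latex
% Illustrative regional Phillips-curve specification (assumed form, not the paper's exact equation).
\pi_{it} \;=\; \alpha_i \;+\; \beta\,\pi_{i,t-1} \;+\; \gamma\,\tilde{y}_{it}
\;+\; \delta\,\Delta e_{t} \;+\; \theta\,\Delta p^{\mathrm{com}}_{t} \;+\; \varepsilon_{it}.
```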

Keywords: Phillips curve, inflation, Indonesia, panel data

Procedia PDF Downloads 95
4906 Frequency Analysis of Minimum Ecological Flow and Gage Height in the Indus River Using Maximum Likelihood Estimation

Authors: Tasir Khan, Yejuan Wan, Kalim Ullah

Abstract:

Hydrological frequency analysis has been conducted to estimate the minimum flow elevation of the Indus River in Pakistan in order to protect the ecosystem. The maximum likelihood estimation (MLE) technique is used to identify the best-fitting distribution for minimum ecological flows at nine stations of the Indus River in Pakistan. Four distributions commonly used in hydrological frequency analysis, the generalized extreme value (GEV), generalized logistic (GLO), generalized Pareto (GPA), and Pearson type 3 (PE3) distributions, are fitted at all sites. The performance of these distributions is compared using goodness-of-fit tests such as the Kolmogorov-Smirnov, Anderson-Darling, and chi-square tests. The study concludes that, under MLE, GEV and GPA are the most suitable distributions and can be effectively applied to all the proposed sites. Quantiles are estimated for return periods from 5 to 1,000 years using the MLE fits; MLE is a robust method for larger sample sizes. The results of these analyses can be used for water resources research, including water quality management, designing irrigation systems, determining downstream flow requirements for hydropower, and assessing the impact of long-term drought on the country's aquatic system.
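
One step of the analysis can be sketched as follows: fitting a GEV distribution to an annual minimum-flow series by maximum likelihood, checking the fit with a Kolmogorov-Smirnov test, and reading off return-period quantiles. The flow series is synthetic, and taking the T-year low-flow quantile at non-exceedance probability 1/T is an assumption made here for illustration.

```python
# GEV fit by maximum likelihood and return-period quantiles for a low-flow series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
annual_min_flow = stats.genextreme.rvs(c=0.1, loc=300, scale=60,
                                       size=40, random_state=rng)   # synthetic, in m^3/s

shape, loc, scale = stats.genextreme.fit(annual_min_flow)            # MLE fit

# Goodness of fit (Kolmogorov-Smirnov) against the fitted distribution
ks = stats.kstest(annual_min_flow, "genextreme", args=(shape, loc, scale))
print(f"KS statistic = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")

# T-year event taken at non-exceedance probability 1/T (illustrative convention)
for T in (5, 50, 100, 1000):
    q = stats.genextreme.ppf(1 / T, shape, loc, scale)
    print(f"{T:>4}-year minimum flow quantile: {q:.1f} m^3/s")
```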

Keywords: minimum ecological flow, frequency distribution, indus river, maximum likelihood estimation

Procedia PDF Downloads 53
4905 Fatigue Life Prediction under Variable Loading Based on a Non-Linear Energy Model

Authors: Aid Abdelkrim

Abstract:

A method of fatigue damage accumulation based on the energy parameters of the fatigue process is proposed in this paper. The model is simple to use: it has no parameters to be determined and requires only knowledge of the W-N curve (W: strain energy density, N: number of cycles to failure) determined from the experimental Wöhler curve. To examine the performance of the proposed nonlinear model in estimating fatigue damage and fatigue life of components under random loading, a batch of specimens made of 6082-T6 aluminium alloy has been studied, and some of the results are reported in the present paper. The paper describes an algorithm and suggests a fatigue cumulative damage model, especially for random loading. This work contains the results of uniaxial random-load fatigue tests with different mean and amplitude values performed on 6082-T6 aluminium alloy specimens. The proposed model has been formulated to take into account the damage evolution at different load levels, and it allows the effect of the loading sequence to be included by means of a recurrence formula derived for multilevel loading, considering complex load sequences. It is concluded that the 'damaged stress interaction damage rule' proposed here allows better fatigue damage prediction than the widely used Palmgren-Miner rule, and that a formula derived for random fatigue can be used to predict fatigue damage and fatigue lifetime very easily. The results obtained with the model are compared with the experimental results and with those calculated by the most widely used fatigue damage model (Miner's rule). The comparison shows that the proposed model gives a good estimation of the experimental results, and the error is smaller than that of Miner's rule.
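
For context, the linear Palmgren-Miner accumulation that the proposed energy model is benchmarked against can be sketched as below; the Basquin-type S-N constants and the block-loading sequence are invented for illustration, and failure is predicted when the damage sum reaches one.

```python
# Linear Palmgren-Miner damage accumulation over a multilevel block loading.
def cycles_to_failure(stress_amplitude, C=2.0e12, m=3.0):
    """Basquin-type S-N curve: N = C * S^(-m) (illustrative constants)."""
    return C * stress_amplitude ** (-m)

# (stress amplitude in MPa, applied cycles) for each block of the sequence
blocks = [(180.0, 20000), (140.0, 80000), (220.0, 5000)]

damage = sum(n / cycles_to_failure(s) for s, n in blocks)
print(f"Miner damage sum D = {damage:.3f}")
print("predicted failure" if damage >= 1.0 else
      f"remaining life fraction: {1.0 - damage:.3f}")
```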

Keywords: damage accumulation, energy model, damage indicator, variable loading, random loading

Procedia PDF Downloads 369
4904 Bayesian Network and Feature Selection for Rank Deficient Inverse Problem

Authors: Kyugneun Lee, Ikjin Lee

Abstract:

Parameter estimation in inverse problems often suffers from unfavorable conditions in the real world: uninformative data and a large number of input parameters make the problem complicated or insoluble. Data refinement and reformulation of the problem can overcome such difficulties. In this research, a method to solve rank-deficient inverse problems is suggested. A multi-physics system whose rank deficiency is caused by response correlation is treated. Obstructive information is removed, and the problem is reformulated into sequential estimations using a Bayesian network (BN) and subset groups. First, subset grouping of the responses is performed, using feature selection with singular value decomposition (SVD). Next, BN inference is used for sequential conditional estimation according to the group hierarchy, with a directed acyclic graph (DAG) structure organized to maximize the estimation ability. The variance ratio of response to noise is used to pair the estimable parameters with each response.
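
A small sketch of the SVD-based grouping idea follows: the singular values of a (synthetic) response matrix reveal its effective rank, and each response is grouped by the dominant singular vector it loads on; the matrix and the grouping rule are assumptions for illustration, not the paper's multi-physics data.

```python
# Effective-rank inspection and response grouping via singular value decomposition.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200
latent = rng.normal(size=(n_samples, 2))          # two independent sources
mix = np.array([[1.0, 0.0],
                [0.9, 0.1],                        # nearly duplicates response 0
                [0.0, 1.0],
                [0.1, 0.9]])                       # nearly duplicates response 2
responses = latent @ mix.T + 0.01 * rng.normal(size=(n_samples, 4))

U, s, Vt = np.linalg.svd(responses - responses.mean(axis=0), full_matrices=False)
print("singular values:", s.round(2))              # two dominate -> effective rank ~ 2

# Group each response by the dominant right-singular vector it loads on
groups = np.abs(Vt[:2]).argmax(axis=0)
print("response subset labels:", groups)
```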

Keywords: Bayesian network, feature selection, rank deficiency, statistical inverse analysis

Procedia PDF Downloads 284
4903 A Hierarchical Method for Multi-Class Probabilistic Classification Vector Machines

Authors: P. Byrnes, F. A. DiazDelaO

Abstract:

The Support Vector Machine (SVM) has become widely recognised as one of the leading algorithms in machine learning for both regression and binary classification. It expresses predictions in terms of a linear combination of kernel functions centred on the support vectors. Despite its popularity amongst practitioners, SVM has some limitations, the most significant being that it generates point predictions rather than predictive distributions. Stemming from this issue, a probabilistic model, namely Probabilistic Classification Vector Machines (PCVM), has been proposed, which respects the original functional form of SVM whilst also providing a predictive distribution. As physical system designs become more complex, an increasing number of classification tasks in industrial applications involve more than two classes. Consequently, this research proposes a framework which allows the extension of PCVM to a multi-class setting. Additionally, the original PCVM framework relies on type II maximum likelihood to provide estimates of both the kernel hyperparameters and the model evidence; in a high-dimensional multi-class setting, however, this approach has been shown to be ineffective because it scales poorly as the number of classes increases. Accordingly, we propose the application of Markov Chain Monte Carlo (MCMC) based methods to provide a posterior distribution over both parameters and hyperparameters. The proposed framework will be validated against current multi-class classifiers through synthetic and real-life implementations.

Keywords: probabilistic classification vector machines, multi class classification, MCMC, support vector machines

Procedia PDF Downloads 201
4902 An Ab Initio Molecular Orbital Theory and Density Functional Theory Study of Fluorous 1,3-Dione Compounds

Authors: S. Ghammamy, M. Mirzaabdollahiha

Abstract:

Quantum mechanical calculations of the energies, geometries, and vibrational wavenumbers of fluorous 1,3-dione compounds are carried out using the density functional theory (DFT/B3LYP) method with LANL2DZ basis sets. The calculated HOMO and LUMO energies show that charge transfer occurs in the molecules. The thermodynamic functions of the fluorous 1,3-dione compounds have also been computed at the B3LYP/LANL2DZ level. Theoretical spectrograms for the F NMR spectra of the fluorous 1,3-dione compounds have been constructed, and the F NMR nuclear shieldings of the fluoride ligands have been studied quantum-chemically.

Keywords: density functional theory, natural bond orbital, HOMO, LUMO, fluorous

Procedia PDF Downloads 360
4901 Vehicular Emission Estimation of Islamabad Using the COPERT 5 Model

Authors: Muhammad Jahanzaib, Muhammad Z. A. Khan, Junaid Khayyam

Abstract:

Islamabad is the capital of Pakistan, with a population of 1.365 million people and a vehicular fleet of 0.75 million, growing annually at a rate of 11%. Vehicular emissions are a major source of black carbon (BC). In developing countries like Pakistan, most vehicles consume conventional fuels such as petrol, diesel, and CNG. These fuels are major emitters of pollutants such as CO, CO2, NOx, CH4, VOCs, and particulate matter (PM10). Carbon dioxide and methane are leading contributors to global warming, with global shares of 9-26% and 4-9%, respectively, while NOx is the precursor of nitrates, which ultimately form aerosols that are noxious to human health. In this study, COPERT (Computer Program to Calculate Emissions from Road Transport) was used for vehicular emission estimation in Islamabad. COPERT is a Windows-based program developed for calculating emissions from the road transport sector. Emissions were calculated for the year 2016 for pollutants including CO, NOx, VOC, and PM, along with energy consumption. Several variables were input to the model for the emission estimation, including meteorological parameters, average vehicular trip length and duration, fleet configuration, activity data, degradation factors, and fuel effects. The estimated emissions of CO, CH4, CO2, NOx, and PM10 were found to be 9814.2, 44.9, 279196.7, 3744.2, and 304.5 tons, respectively.
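
In outline, a COPERT-style inventory multiplies activity (vehicle-kilometres per category) by per-kilometre emission factors and sums over the fleet; the sketch below illustrates that structure with placeholder fleet numbers and factors, not COPERT 5's actual emission functions.

```python
# Simplified fleet emission inventory: emissions = vehicle-km x emission factor.
fleet = {
    # category: (number of vehicles, annual km per vehicle, CO g/km, NOx g/km)
    "petrol cars": (400_000, 12_000, 1.8, 0.25),
    "diesel cars": (150_000, 15_000, 0.6, 0.70),
    "cng vehicles": (120_000, 18_000, 1.0, 0.30),
    "motorcycles": (80_000, 8_000, 4.0, 0.10),
}

totals = {"CO": 0.0, "NOx": 0.0}
for category, (n, km, co_gpkm, nox_gpkm) in fleet.items():
    vkm = n * km                                   # vehicle-kilometres travelled per year
    totals["CO"] += vkm * co_gpkm / 1e6            # grams -> tonnes
    totals["NOx"] += vkm * nox_gpkm / 1e6

for pollutant, tons in totals.items():
    print(f"{pollutant}: {tons:,.0f} tonnes/year")
```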

Keywords: COPERT Model, emission estimation, PM10, vehicular emission

Procedia PDF Downloads 228
4900 Multi-Subpopulation Genetic Algorithm with Estimation of Distribution Algorithm for Textile Batch Dyeing Scheduling Problem

Authors: Nhat-To Huynh, Chen-Fu Chien

Abstract:

The textile batch dyeing scheduling problem is complicated: it includes batch formation, batch assignment to machines, and batch sequencing with sequence-dependent setup times. Most manufacturers schedule their orders manually, which is time-consuming and inefficient, so more powerful methods are needed to improve the solution. Motivated by these real needs, this study proposes approaches in which a genetic algorithm with multiple subpopulations is hybridised with an estimation of distribution algorithm to solve the constructed problem and minimise the makespan. A heuristic algorithm is designed and embedded into the proposed algorithms to improve their ability to escape local optima. In addition, an empirical study is conducted in a textile company in Taiwan to validate the proposed approaches. The results show that the proposed approaches are more efficient than a simulated annealing algorithm.

Keywords: estimation of distribution algorithm, genetic algorithm, multi-subpopulation, scheduling, textile dyeing

Procedia PDF Downloads 273
4899 Investigation and Analysis of Pore Pressure Variation by Sonic Impedance under the Influence of Compressional, Shear, and Stoneley Waves in High Pressure Zones

Authors: Nouri, K., Ghassem Alaskari, M., K., Amiri Hazaveh, A., Nabi Bidhendi, M.

Abstract:

Pore pressure is one of the key petrophysical parameters in exploration studies and surveys of hydrocarbon reservoirs. Determining the pore pressure at various stages of drilling, together with the integrity of the drilling mud and the identification of high-pressure zones, is significant for preventing blow-outs and the damage that follows them. Pore pressure is obtained from seismic and well-logging data. In this study, the pore pressure and overburden pressure are calculated through the matrix stress, the Terzaghi equation, and other related formulas. By comparing the variation of the density log in overpressured zones with the change of sonic impedance under the influence of compressional, shear, and Stoneley waves, the level of correlation between sonic impedance and the density log is studied. The correlation and variation trend recorded between the Stoneley-wave sonic impedance and the density log, which is the key factor in determining the overburden pressure and pore pressure in the Terzaghi equation, is high. The transit time is directly related to the porosity and fluid type of the formation and, consequently, to the pore pressure. The density log is a key factor in the determination of pore pressure; therefore, the Stoneley-wave sonic impedance, alongside the other factors used, is a good indicator for identifying high-pressure zones.
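
The Terzaghi relation referred to above can be stated as follows (standard form): the overburden stress S, obtained by integrating the bulk density ρ_b from the density log, is shared between the rock matrix (effective stress σₑ) and the pore fluid, so the pore pressure follows by difference.

```latex
% Terzaghi's effective-stress relation and the overburden integral (standard form).
S \;=\; \sigma_{e} \;+\; P_{p}
\quad\Longrightarrow\quad
P_{p} \;=\; S \;-\; \sigma_{e},
\qquad
S(z) \;=\; \int_{0}^{z} \rho_{b}(z')\,g\,\mathrm{d}z'.
```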

Keywords: pore pressure, Stoneley wave, density log, sonic impedance, high pressure zone

Procedia PDF Downloads 361
4898 A Multi-Stage Learning Framework for Reliable and Cost-Effective Estimation of Vehicle Yaw Angle

Authors: Zhiyong Zheng, Xu Li, Liang Huang, Zhengliang Sun, Jianhua Xu

Abstract:

The yaw angle plays a significant role in many vehicle safety applications, such as collision avoidance and lane-keeping systems. Although yaw angle estimation has been studied extensively in the existing literature, it remains a major challenge to achieve a solution that is simultaneously reliable and cost-effective in complex urban environments. This paper proposes a multi-stage learning framework for estimating the yaw angle with a monocular camera, which addresses this challenge in a more reliable manner. In the first stage, an efficient road detection network is designed to extract the road region, providing a highly reliable reference for the estimation. In the second stage, a variational auto-encoder (VAE) is proposed to learn the distribution patterns of road regions; this is particularly suitable for modeling the changing patterns of the yaw angle under different driving maneuvers, and it inherently enhances the generalization ability. In the last stage, a gated recurrent unit (GRU) network is used to capture the temporal correlations of the learned patterns, which further improves the estimation accuracy because changes in the deflection angle are relatively easy to recognize across consecutive frames. Afterward, the yaw angle is obtained by combining the estimated deflection angle with the road direction stored in a roadway map. Through effective multi-stage learning, the proposed framework achieves high reliability while maintaining good accuracy. Road-test experiments with different driving maneuvers were performed in complex urban environments, and the results validate the effectiveness of the proposed framework.

Keywords: gated recurrent unit, multi-stage learning, reliable estimation, variational auto-encoder, yaw angle

Procedia PDF Downloads 107