Search results for: acoustic emission technique
3007 An Approach to Practical Determination of Fair Premium Rates in Crop Hail Insurance Using Short-Term Insurance Data
Authors: Necati Içer
Abstract:
Crop-hail insurance plays a vital role in managing risks and reducing the financial consequences of hail damage on crop production. Predicting insurance premium rates with short-term data is a major difficulty in numerous nations because of the unique characteristics of hailstorms. This study aims to suggest a feasible approach for establishing equitable premium rates in crop-hail insurance for nations with short-term insurance data. The primary goal of the rate-making process is to determine premium rates for high and zero loss costs of villages and enhance their credibility. To do this, a technique was created using the author's practical knowledge of crop-hail insurance. With this approach, the rate-making method was developed using a range of temporal and spatial factor combinations with both hypothetical and real data, including extreme cases. This article aims to show how to incorporate the temporal and spatial elements into determining fair premium rates using short-term insurance data. The article ends with a suggestion on the ultimate premium rates for insurance contracts.
Keywords: crop-hail insurance, premium rate, short-term insurance data, spatial and temporal parameters
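The abstract does not spell out its blending formula, but a standard way to stabilise village-level rates estimated from short records is credibility weighting of the village's own loss cost against a regional mean. The sketch below is illustrative only, not the author's method; the credibility constant k is a hypothetical tuning parameter.

```python
def credibility_premium(village_loss_costs, regional_mean, k=5.0):
    """Blend a village's observed mean loss cost with the regional mean.

    The credibility factor z = n / (n + k) grows with the number of
    observed years n, so villages with short records (including those
    with zero recorded losses) are pulled toward the regional mean.
    k is a hypothetical tuning constant, not taken from the paper.
    """
    n = len(village_loss_costs)
    if n == 0:
        return regional_mean  # no history: fall back entirely on the region
    village_mean = sum(village_loss_costs) / n
    z = n / (n + k)
    return z * village_mean + (1 - z) * regional_mean
```

For example, a village with three loss-free years against a regional mean loss cost of 2.0 gets z = 3/8 = 0.375, so its rate is pulled to 0.625 × 2.0 = 1.25 rather than zero, which matches the abstract's concern with zero-loss-cost villages.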
Procedia PDF Downloads 55
3006 Effects of Internet Addiction on Students’ Academic Performance among Some Tertiary Institutions in Oyo State, Nigeria
Authors: Mujidat Lola Olugbode
Abstract:
This study investigates the effects of internet addiction on academic performance among students in some tertiary institutions in Oyo State, Nigeria. A descriptive survey research design was adopted for the study. Two research questions were answered and two hypotheses were tested. The population of the study comprised all students in five tertiary institutions in Oyo State, Nigeria. A simple random sampling technique was used to select 2,550 participants (respondents) from these institutions, and this constituted the sample for the study. The instrument used for data collection was a self-constructed questionnaire on Internet Addiction and Students' Academic Performance (IAASAP), with a reliability coefficient of 0.77. Data collected were analyzed using frequencies and percentages, the Pearson Product Moment Correlation coefficient (PPMCC), and t-test analysis. The results showed that students in tertiary institutions in Oyo State were occasionally addicted to internet use. The study also revealed a positive correlation between internet addiction and academic performance, as well as a significant difference in internet addiction between male and female students. Based on these findings, it is recommended, among others, that government, educators, parents, counselors, and teachers help redirect students' internet use toward academics to ensure greater academic performance.
Keywords: internet, addiction, internet addiction, academic performance, tertiary institution, students
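The Pearson Product Moment Correlation coefficient named above as the main analysis tool reduces to a normalised covariance; a minimal stdlib-only sketch (the sample values in the usage note are illustrative, not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two samples.

    r = cov(x, y) / (std(x) * std(y)); ranges from -1 (perfect negative
    linear relation) through 0 (no linear relation) to +1 (perfect positive).
    """
    if len(x) != len(y) or len(x) < 2:
        raise ValueError("need two equally sized samples with n >= 2")
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A positive r between addiction scores and performance scores, as the study reports, would be obtained whenever higher addiction scores tend to co-occur with higher performance scores.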
Procedia PDF Downloads 64
3005 PEINS: A Generic Compression Scheme Using Probabilistic Encoding and Irrational Number Storage
Authors: P. Jayashree, S. Rajkumar
Abstract:
With social networks and smart devices generating a multitude of data, effective data management is the need of the hour for networks and cloud applications. Some applications need effective storage while others need effective communication over networks, and data reduction comes as a handy solution to meet both requirements. Most data compression techniques are based on data statistics and may result in either lossy or lossless data reduction. Though lossy reduction produces better compression ratios than lossless methods, many applications require data accuracy and minute details to be preserved. A variety of data compression algorithms exists in the literature for different forms of data such as text, image, and multimedia data. In the proposed work, a generic progressive compression algorithm based on probabilistic encoding, called PEINS, is projected as an enhancement over the irrational-number-storage coding technique to address the storage issues of increasing data volumes as a cost-effective solution, one which also offers data security as a secondary outcome to some extent. The proposed work reveals cost effectiveness in terms of a better compression ratio with no deterioration in compression time.
Keywords: compression ratio, generic compression, irrational number storage, probabilistic encoding
Procedia PDF Downloads 295
3004 Decision Support System in Air Pollution Using Data Mining
Authors: E. Fathallahi Aghdam, V. Hosseini
Abstract:
Environmental pollution is not limited to a specific region or country; that is why sustainable development, as a necessary process for improvement, pays attention to issues such as the destruction of natural resources, degradation of biological systems, global pollution, and climate change, especially in developing countries. According to the World Health Organization, Tehran (the capital of Iran), a developing city, is one of the most polluted cities in the world in terms of air pollution. In this study, three pollutants, namely particulate matter less than 10 microns, nitrogen oxides, and sulfur dioxide, were evaluated in Tehran using data mining techniques through the CRISP approach. Data from 21 air pollution measuring stations in different areas of Tehran were collected from 1999 to 2013, and the commercial software Clementine was selected for this study. Using the software, Tehran was divided into distinct clusters in terms of the mentioned pollutants. As a data mining technique, clustering is usually used as a prelude to other analyses; therefore, the similarity of clusters was evaluated in this study by analyzing local conditions, traffic behavior, and industrial activities. The results of this research can support decision-making systems, help managers improve performance and decision making, and assist in urban studies.
Keywords: data mining, clustering, air pollution, CRISP approach
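The abstract does not say which clustering algorithm Clementine applied; a common choice for grouping stations by pollutant averages is k-means, sketched here in plain Python on hypothetical per-station vectors (e.g. [PM10, NOx] means), purely as an illustration of the clustering step:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Naive k-means: group pollutant-average vectors into k clusters.

    Repeatedly assigns each point to its nearest center (squared
    Euclidean distance), then moves each center to its cluster mean.
    """
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        centers = [
            [sum(dim) / len(cl) for dim in zip(*cl)] if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters
```

On two well-separated groups of stations, the centers converge to the two group means regardless of the random initialisation.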
Procedia PDF Downloads 428
3003 The Development of Competency with a Training Curriculum via Electronic Media for Condominium Managers
Authors: Chisakan Papapankiad
Abstract:
The purposes of this research were 1) to study the competency of condominium managers, 2) to create a training curriculum via electronic media for condominium managers, and 3) to evaluate that training curriculum. The research methods included document analysis, interviews, questionnaires, and a try-out. A total of 20 experts were selected to provide data using the Delphi technique, and the designed curriculum was tried out with 30 condominium managers. The important steps in conducting this research included analyzing and synthesizing, creating interview questions, conducting factor analysis and developing the training curriculum, editing by experts, and trying the curriculum out with sample groups. The findings revealed five core competencies: leadership, human resources management, management, communication, and self-development. The training curriculum was designed and all the learning materials were put onto a CD. The evaluation of the training curriculum was performed by five experts, who found it to be cohesive and suitable for use in the real world. Moreover, the findings also revealed three important results: 1) the competencies of the respondents after the experiment were higher than before the experiment, significant at the 0.01 level; 2) the competencies remained with the respondents for at least 12 weeks, also significant at the 0.01 level; and 3) the overall level of satisfaction of the respondents was 'the highest level'.
Keywords: competency training curriculum, condominium managers, electronic media
Procedia PDF Downloads 286
3002 Single-Molecule Analysis of Structure and Dynamics in Polymer Materials by Super-Resolution Technique
Authors: Hiroyuki Aoki
Abstract:
The physical properties of polymer materials depend on the conformation and molecular motion of the polymer chains. Therefore, the structure and dynamic behavior of the single polymer chain have been among the most important concerns in the field of polymer physics. However, it has been impossible to directly observe the conformation of a single polymer chain in a bulk medium. In the current work, novel techniques to study the conformation and dynamics of a single polymer chain are proposed. Since fluorescence methods are extremely sensitive, fluorescence microscopy enables the direct detection of a single molecule; however, the structure of a polymer chain as large as 100 nm cannot be resolved by conventional fluorescence methods because of the diffraction limit of light. In order to observe single chains, we developed a method of labeling polymer materials with a photo-switchable dye, together with super-resolution microscopy. Real-space conformational analysis of single polymer chains with a spatial resolution of 15-20 nm was achieved. The super-resolution microscopy enables us to obtain three-dimensional coordinates; therefore, we succeeded in conformational analysis in three dimensions. Such direct observation by nanometric optical microscopy should reveal detailed information on the molecular processes in various polymer systems.
Keywords: polymer materials, single molecule, super-resolution techniques, conformation
Procedia PDF Downloads 306
3001 Improvement of Analysis Vertical Oil Exploration Wells (Case Study)
Authors: Azza Hashim Abbas, Wan Rosli Wan Suliman
Abstract:
In the old school of well testing, reservoir engineers used transient pressure analyses to obtain parameters and variable factors describing the reservoir's physical properties, such as permeability-thickness. A difficulty recently facing newly discovered areas is that the exploration and production (E&P) team must have sufficiently accurate and appropriate data to work with, given the different sources of error. A well-test analyst who works without well-informed and reliable data from colleagues may consequently cause immense environmental damage and unnecessary financial losses, as well as opportunity losses to the project. In 2003, in the new potential oil field (Moga), well-22 faced a circulation problem but was safely completed; however, the high mud density caused extensive damage to the near-wellbore area, which also distorted the estimated oil flow rate so that it was not representative of the real reservoir characteristics. This paper presents methods to analyze and interpret the production rate and pressure data of an oil field, specifically for well-22, using the deconvolution technique to enhance the transient pressure analysis. Deconvolution was applied to obtain the best range of certainty of the results needed for the next operation. The range determined and the analysis of the skin factor range were reasonable.
Keywords: well testing, exploration, deconvolution, skin factor, uncertainty
Procedia PDF Downloads 445
3000 A Numerical Study for Mixing Depth and Applicability of Partial Cement Mixing Method Utilizing Geogrid and Fixing Unit
Authors: Woo-seok Choi, Eun-sup Kim, Nam-Seo Park
Abstract:
The demand for new soft ground improvement techniques continuously increases, as general soft ground methods like PBD and DCM have application problems in soft grounds with deep depth and wide distribution on the southern coast of Korea and in the Southeast. In this study, a partial cement mixing method utilizing a geogrid and fixing unit (CMG) is suggested, and finite element analysis is performed to analyze the depth of surface soil and deep soil stabilization and to compare the method with DCM. In the results, the displacement with the DCM method was lower than with CMG because, in the CMG case, the upper load is transferred to the deep soil not treated by cement. The differential settlement with the DCM method was higher than with CMG because of the load transfer effect of the cement-treated surface soil and geogrid. In conclusion, the CMG method has advantages in economy and constructability for embankments, roads, railways, and other works in which differential settlement is an important consideration.
Keywords: soft ground, geogrid, fixing unit, partial cement mixing, finite element analysis
Procedia PDF Downloads 378
2999 Cellulose Acetate/Polyacrylic Acid Filled with Nano-Hydroxapatite Composites: Spectroscopic Studies and Search for Biomedical Applications
Authors: E. M. AbdelRazek, G. S. ElBahy, M. A. Allam, A. M. Abdelghany, A. M. Hezma
Abstract:
A polymeric biocomposite of hydroxyapatite/polyacrylic acid was prepared, and its thermal and mechanical properties were improved by the addition of cellulose acetate. FTIR spectroscopy and X-ray diffraction analysis were employed to examine the physical and chemical characteristics of the biocomposites. Two organic/inorganic composite weight ratios (60/40 and 70/30), at which the material crystallinity reaches a considerable value appropriate for the needed applications, were studied; scanning electron microscopy revealed that the HAp nanoparticles are uniformly distributed throughout the polymeric matrix. Kinetic parameters were determined from the weight loss data using non-isothermal thermogravimetric analysis (TGA), and the main degradation steps were described and discussed. The mechanical properties of the composites were evaluated by measuring tensile strength and elastic modulus. The data indicate that the addition of cellulose acetate can make the homogeneous composite scaffolds significantly more resistant to higher stress. The elastic modulus of the composites was also improved by the addition of cellulose acetate, making them more appropriate for bioapplications.
Keywords: biocomposite, chemical synthesis, infrared spectroscopy, mechanical properties
Procedia PDF Downloads 457
2998 Improving Axial-Attention Network via Cross-Channel Weight Sharing
Authors: Nazmul Shahadat, Anthony S. Maida
Abstract:
In recent years, hypercomplex-inspired neural networks have improved deep CNN architectures due to their ability to share weights across input channels and thus improve the cohesiveness of representations within the layers. The work described herein studies the effect of replacing existing layers in an Axial Attention ResNet with their quaternion variants, which use cross-channel weight sharing, to assess the effect on image classification. We expect the quaternion enhancements to produce improved feature maps with more interlinked representations. We experiment with the stem of the network, the bottleneck layer, and the fully connected backend by replacing them with quaternion versions. These modifications lead to novel architectures which yield improved accuracy on the ImageNet300k classification dataset. Our baseline networks for comparison were the original real-valued ResNet, the original quaternion-valued ResNet, and the Axial Attention ResNet. Since improvement was observed regardless of which part of the network was modified, there is promise that this technique may be generally useful in improving classification accuracy for a large class of networks.
Keywords: axial attention, representational networks, weight sharing, cross-channel correlations, quaternion-enhanced axial attention, deep networks
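The cross-channel weight sharing in quaternion layers comes from the Hamilton product, where a single quaternion of weights mixes all four input channels at once; a minimal sketch of that product on scalar channels (a real quaternion layer applies this per neuron over four-channel blocks, which the abstract does not detail):

```python
def hamilton_product(w, x):
    """Hamilton product of quaternions w = (r, i, j, k) and x.

    One 4-tuple of weights contributes to all four output channels,
    which is the cross-channel weight sharing quaternion layers exploit:
    a real-valued layer would need a separate weight per channel pair.
    """
    r1, i1, j1, k1 = w
    r2, i2, j2, k2 = x
    return (
        r1 * r2 - i1 * i2 - j1 * j2 - k1 * k2,  # real part
        r1 * i2 + i1 * r2 + j1 * k2 - k1 * j2,  # i part
        r1 * j2 - i1 * k2 + j1 * r2 + k1 * i2,  # j part
        r1 * k2 + i1 * j2 - j1 * i2 + k1 * r2,  # k part
    )
```

Multiplying by the identity quaternion (1, 0, 0, 0) leaves the input unchanged, and i · j = k, as the quaternion algebra requires.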
Procedia PDF Downloads 83
2997 Prevalence of Suicidal Behavioral Experiences in the Tertiary Institution: Implication for Childhood Development
Authors: Moses Onyemaechi Ede, Chinedu Ifedi Okeke
Abstract:
This study examined the prevalence of suicidal behavioural experiences in a tertiary institution and its implication for childhood development. In pursuance of the objectives, two specific purposes, two research questions, and two null hypotheses guided this study. This is a descriptive design that drew on the student population (N = 36,000) of the University of Nigeria, Nsukka. The sample of the study was made up of 100 students, arrived at through an accidental sampling technique. A self-developed questionnaire titled the Suicidal Behaviour Questionnaire (SBQ) was used for this study. The data collected were analyzed using means and percentages. The results showed that the university students do not experience suicidal behaviours and that suicidal experiences are not prevalent. There is no significant influence of gender on the responses of male and female tertiary institution students based on their suicidal behavioural experiences, and no significant influence of gender on their mean responses on the prevalence of suicidal experiences. Based on the findings, it is recommended that suicide education and prevention be taught in schools and that bulletins on suicidology be mounted by Guidance Counsellors.
Keywords: suicide, behavioural experiences, tertiary institution, childhood development
Procedia PDF Downloads 137
2996 Damage Identification Using Experimental Modal Analysis
Authors: Niladri Sekhar Barma, Satish Dhandole
Abstract:
Damage identification in the context of safety has nowadays become a fundamental research interest in the fields of mechanical, civil, and aerospace engineering structures. The present research aims to identify damage in a mechanical beam structure, quantify the severity or extent of the damage in terms of loss of stiffness, and obtain an updated analytical finite element (FE) model. An FE model is used for analysis, and the location of damage for single and multiple damage cases is identified numerically using the modal strain energy method and the mode shape curvature method. Experimental data were acquired with the help of an accelerometer. The Fast Fourier Transform (FFT) algorithm is applied to the measured signal, and post-processing is subsequently done in MEscopeVes software. The two sets of data, from the numerical FE model and the experimental results, are compared to locate the damage accurately. The extent of the damage is identified via modal frequencies using a mixed numerical-experimental technique, and mode shape comparison is performed with the Modal Assurance Criterion (MAC). The analytical FE model is adjusted by the direct method of model updating. The same study has been extended to real-life structures such as plate and GARTEUR structures.
Keywords: damage identification, damage quantification, damage detection using modal analysis, structural damage identification
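The Modal Assurance Criterion used for the mode shape comparison above is a normalised inner product between two mode shape vectors; a minimal sketch for real-valued mode shapes (the example vectors are illustrative, not from the study):

```python
def mac(phi_a, phi_e):
    """Modal Assurance Criterion between an analytical mode shape phi_a
    and an experimental mode shape phi_e:

        MAC = (phi_a . phi_e)^2 / ((phi_a . phi_a) * (phi_e . phi_e))

    1.0 means the shapes are perfectly correlated (identical up to
    scaling); 0.0 means they are orthogonal.
    """
    dot = sum(a * e for a, e in zip(phi_a, phi_e))
    norm_a = sum(a * a for a in phi_a)
    norm_e = sum(e * e for e in phi_e)
    return dot ** 2 / (norm_a * norm_e)
```

Because the measure is scale-invariant, an experimental mode shape that differs from the FE prediction only by sensor calibration still scores a MAC of 1.0.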
Procedia PDF Downloads 116
2995 Francophone University Students' Attitudes Towards English Accents in Cameroon
Authors: Eric Agrie Ambele
Abstract:
The norms and models for learning English pronunciation are key issues nowadays in English Language Teaching in ESL contexts. This paper discusses these issues based on a study of the attitudes of some Francophone university students in Cameroon towards three English accents spoken in Cameroon: Cameroon Francophone English (CamFE), Cameroon English (CamE), and Hyperlectal Cameroon English (near standard British English). With the desire to know more about the treatment that these English accents receive among these students, an aspect that had hitherto received little attention in the literature, a language attitude questionnaire and the matched-guise technique were used to investigate this phenomenon. Two methods of data analysis were employed: (1) the percentage count procedure and (2) the semantic differential scale. The findings reveal that the participants' attitudes towards the selected accents vary in degree. Though Hyperlectal CamE emerged first, CamE second, and CamFE third, no accent, on average, received a negative evaluation. It can be deduced from these findings, first, that CamE is gaining more and more recognition and can stand as an autonomous accent; second, that the participants all rated Hyperlectal CamE higher than CamE implies that they would be less motivated in a context where CamE is the learning model. By implication, in teaching English pronunciation to Francophone learners of English in Cameroon, Hyperlectal Cameroon English should be the model.
Keywords: teaching pronunciation, English accents, Francophone learners, attitudes
Procedia PDF Downloads 197
2994 A Comprehensive Safety Analysis for a Pressurized Water Reactor Fueled with Mixed-Oxide Fuel as an Accident Tolerant Fuel
Authors: Mohamed Y. M. Mohsen
Abstract:
The viability of utilising mixed-oxide fuel (MOX) ((U₀.₉, rgPu₀.₁)O₂) as an accident-tolerant fuel (ATF) has been thoroughly investigated; MOX fuel provides the best example of a nuclear waste recycling process. The MCNPX 2.7 code was used to determine the main neutronic features, especially the radial power distribution, in order to identify the hot channel on which the thermal-hydraulic (TH) study was performed. Based on the computational fluid dynamics technique, a rod-centered thermal-hydraulic subchannel model was implemented in COMSOL Multiphysics. The TH analysis was used to determine the axial and radial temperature distributions of the fuel and cladding materials, as well as the departure from nucleate boiling ratio (DNBR) along the coolant channel. COMSOL Multiphysics can simulate reality by coupling multiple physics, such as heat transfer and solid mechanics; the main solid structure parameters, namely the von Mises stress, volumetric strain, and displacement, were simulated using this coupling. When the neutronic, TH, and solid structure performances of UO₂ and ((U₀.₉, rgPu₀.₁)O₂) were compared, the results showed considerable improvement and an increase in safety margins with the use of ((U₀.₉, rgPu₀.₁)O₂).
Keywords: mixed-oxide, MCNPX, neutronic analysis, COMSOL Multiphysics, thermal-hydraulic, solid structure
Procedia PDF Downloads 106
2993 Failure and Stress Analysis of Super Heater Tubes of a 67 TPH Coke Dry Quenching Boiler
Authors: Subodh N. Patel, Abhijit Pusty, Manashi Adhikary, Sandip Bhattacharyya
Abstract:
The steam superheater (SH) is a coil-type heat exchanger used to produce superheated steam, converting the wet steam generated by a boiler to dry steam (69.6 kg/cm² and 495°C). There were two superheaters in the system, SH I and SH II. SH II is the set of tubes that faces the initial interaction with the flue gas at high temperature, followed by the SH I tubes. After a service life of 2100 hours, a tube in SH II was found to be punctured, and a dye penetrant test revealed that, out of 50 such tubes, 14 more had severe cracks at a similar location. The failure was investigated in detail. The materials and scale were characterized by optical microscopy and advanced characterization techniques; the scale observed on the fracture surface was characterized under a scanning electron microscope and by Raman spectroscopy. The stresses acting on the tubes under working conditions were analyzed with the finite element method software ANSYS. Cyclic stresses were observed in the simulation at the same prone location due to restriction of the expansion of the tubes. Based on the scale characterization and stress analysis, it was concluded that the tubes failed by thermo-mechanical fatigue. Finally, prevention and control measures were taken to avoid such failures in the future.
Keywords: finite element analysis, oxide scale, superheater tube, thermomechanical fatigue
Procedia PDF Downloads 117
2992 LES Simulation of a Thermal Plasma Jet with Modeled Anode Arc Attachment Effects
Authors: N. Agon, T. Kavka, J. Vierendeels, M. Hrabovský, G. Van Oost
Abstract:
A plasma jet model was developed with a rigorous method for calculating the thermophysical properties of the gas mixture without mixing rules. A simplified model approach to account for the anode effects was incorporated in this model to allow the valorization of the simulations with experimental results. The radial heat transfer was under-predicted by the model because of the limitations of the radiation model, but the calculated evolution of centerline temperature, velocity and gas composition downstream of the torch exit corresponded well with the measured values. The CFD modeling of thermal plasmas is either focused on development of the plasma arc or the flow of the plasma jet outside of the plasma torch. In the former case, the Maxwell equations are coupled with the Navier-Stokes equations to account for electromagnetic effects which control the movements of the anode arc attachment. In plasma jet simulations, however, the computational domain starts from the exit nozzle of the plasma torch and the influence of the arc attachment fluctuations on the plasma jet flow field is not included in the calculations. In that case, the thermal plasma flow is described by temperature, velocity and concentration profiles at the torch exit nozzle and no electromagnetic effects are taken into account. This simplified approach is widely used in literature and generally acceptable for plasma torches with a circular anode inside the torch chamber. The unique DC hybrid water/gas-stabilized plasma torch developed at the Institute of Plasma Physics of the Czech Academy of Sciences on the other hand, consists of a rotating anode disk, located outside of the torch chamber. Neglecting the effects of the anode arc attachment downstream of the torch exit nozzle leads to erroneous predictions of the flow field. 
With the simplified approach introduced in this model, the Joule heating between the exit nozzle and the anode attachment position of the plasma arc is modeled by a volume heat source and the jet deflection caused by the anode processes by a momentum source at the anode surface. Furthermore, radiation effects are included by the net emission coefficient (NEC) method and diffusion is modeled with the combined diffusion coefficient method. The time-averaged simulation results are compared with numerous experimental measurements. The radial temperature profiles were obtained by spectroscopic measurements at different axial positions downstream of the exit nozzle. The velocity profiles were evaluated from the time-dependent evolution of flow structures, recorded by photodiode arrays. The shape of the plasma jet was compared with charge-coupled device (CCD) camera pictures. In the cooler regions, the temperature was measured by enthalpy probe downstream of the exit nozzle and by thermocouples in radial direction around the torch nozzle. The model results correspond well with the experimental measurements. The decrease in centerline temperature and velocity is predicted within an acceptable range and the shape of the jet closely resembles the jet structure in the recorded images. The temperatures at the edge of the jet are underestimated due to the absence of radial radiative heat transfer in the model.
Keywords: anode arc attachment, CFD modeling, experimental comparison, thermal plasma jet
Procedia PDF Downloads 367
2991 Mechanical Investigation Approach to Optimize the High-Velocity Oxygen Fuel Fe-Based Amorphous Coatings Reinforced by B4C Nanoparticles
Authors: Behrooz Movahedi
Abstract:
Fe-based amorphous feedstock powders were used as the matrix into which various ratios of hard B4C nanoparticles (0, 5, 10, 15, 20 vol.%) were introduced as reinforcing agents using planetary high-energy mechanical milling. The ball-milled nanocomposite feedstock powders were then sprayed by means of the high-velocity oxygen fuel (HVOF) technique. The characteristics of the powder particles and the prepared coatings, in terms of their microstructures and nanohardness, were examined in detail using a nanoindentation tester. The results showed that the Fe-based amorphous phase formed over the course of high-energy ball milling. It is interesting to note that the nanocomposite coating is divided into two regions, namely, a fully amorphous region and a homogeneous dispersion of B4C nanoparticles on a scale of 10–50 nm in a residual amorphous matrix. As the B4C content increases, the nanohardness of the composite coatings increases, but the fracture toughness begins to decrease at B4C contents higher than 20 vol.%. The optimal mechanical properties were obtained with 15 vol.% B4C due to the suitable content and uniform distribution of nanoparticles. Consequently, the changes in the mechanical properties of the coatings were attributed to changes in the brittle-to-ductile transition caused by adding B4C nanoparticles.
Keywords: Fe-based amorphous, B₄C nanoparticles, nanocomposite coating, HVOF
Procedia PDF Downloads 135
2990 Synthesis and Characterization of SnO2: Ti Thin Films Spray-Deposited on Optical Glass
Authors: Demet Tatar, Bahattin Düzgün
Abstract:
In this study, we have newly developed titanium-tin oxide (TiSnO) thin films as transparent conducting oxide materials using the spray pyrolysis technique. Tin oxide thin films doped with different Ti contents were successfully grown by spray pyrolysis and characterized as a function of Ti content. The effect of Ti content on the crystalline structure and optical properties of the as-deposited SnO2:Ti films was systematically investigated by X-ray diffraction (XRD), scanning electron microscopy (SEM), atomic force microscopy (AFM), UV-vis spectrometry, and photoluminescence spectrophotometry. The X-ray diffraction patterns taken at room temperature showed that the films are polycrystalline. The preferred crystal growth directions appearing in the diffractograms of the SnO2:Ti (TiTO) films correspond to reflections from the (110), (200), (211), and (301) planes, and the grain size varies from 21.8 to 27.8 nm for the preferred (110) plane. The SEM and AFM studies reveal the surface of TiTO to be made of nanocrystalline particles. The highest visible transmittance (at 570 nm) of the deposited films is 80% for the 20 wt% titanium-doped tin oxide films. The obtained results revealed that the structure and optical properties of the films were greatly affected by the doping level. These films are useful as conducting layers in electrochromic and photovoltaic devices.
Keywords: transparent conducting oxide, gas sensors, SnO2, Ti, optoelectronic, spray pyrolysis
Procedia PDF Downloads 385
2989 A Review on Existing Challenges of Data Mining and Future Research Perspectives
Authors: Hema Bhardwaj, D. Srinivasa Rao
Abstract:
Technology for analysing, processing, and extracting meaningful data from enormous and complicated datasets can be termed "big data". The techniques of big data mining and big data analysis are extremely helpful for business activities such as making decisions, building organisational plans, researching the market efficiently, and improving sales, because typical management tools cannot handle such complicated datasets. Big data brings special computational and statistical issues, such as measurement errors, noise accumulation, spurious correlation, and storage and scalability limitations, and these unique problems call for new computational and statistical paradigms. This paper offers an overview of the literature on big data mining, its process, and its problems and difficulties, with a focus on the unique characteristics of big data. Organizations face several difficulties when undertaking data mining, which has an impact on their decision-making; every day, terabytes of data are produced, yet only around 1% of that data is actually analyzed. This study presents the recently created ideas in data mining, data analysis, and knowledge discovery techniques, together with practical application systems. The article's conclusion also includes a list of issues and difficulties for further research in the area, and the report discusses management's main big data and data mining challenges.
Keywords: big data, data mining, data analysis, knowledge discovery techniques, data mining challenges
Procedia PDF Downloads 110
2988 Prioritization in Modern Portfolio Management - An Action Design Research Approach to Method Development for Scaled Agility
Authors: Jan-Philipp Schiele, Karsten Schlinkmeier
Abstract:
Allocation of scarce resources is a core process of traditional project portfolio management. However, with the popularity of agile methodology, established concepts and methods of portfolio management are reaching their limits and need to be adapted. Consequently, the question arises of how the process of resource allocation can be managed appropriately in scaled agile environments. The prevailing SAFe framework offers Weighted Shortest Job First (WSJF) as a prioritization technique, but established companies are still looking for methodical adaptations to apply WSJF for prioritization in portfolios in a more goal-oriented way, aligned with their needs in practice. In this paper, the relevant problem of prioritization in portfolios is conceptualized from the perspective of coordination and related mechanisms to support resource allocation. Further, an Action Design Research (ADR) project with case studies in a finance company is outlined to develop a practically applicable yet scientifically sound prioritization method based on coordination theory. The ADR project will be flanked by consortium research with various practitioners from the financial and insurance industry. Preliminary design requirements indicate that the use of a feedback loop leads to better team- and executive-level coordination in the prioritization process. Keywords: scaled agility, portfolio management, prioritization, business-IT alignment
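WSJF itself is a simple ratio: the cost of delay, which SAFe defines as the sum of three relative scores (user-business value, time criticality, risk reduction/opportunity enablement), divided by job size. A minimal sketch with hypothetical backlog items and scores:

```python
def wsjf(business_value, time_criticality, risk_reduction, job_size):
    """Weighted Shortest Job First score as defined in SAFe:
    cost of delay (sum of three relative scores) divided by job size."""
    cost_of_delay = business_value + time_criticality + risk_reduction
    return cost_of_delay / job_size

# Hypothetical backlog items scored on a relative scale (e.g. modified Fibonacci)
backlog = {
    "feature_a": wsjf(8, 5, 3, 5),   # cost of delay 16, size 5 -> 3.2
    "feature_b": wsjf(13, 2, 1, 8),  # cost of delay 16, size 8 -> 2.0
    "feature_c": wsjf(3, 8, 5, 2),   # cost of delay 16, size 2 -> 8.0
}
# Highest WSJF first: equal cost of delay, so the smallest job wins
priority_order = sorted(backlog, key=backlog.get, reverse=True)
```

The example illustrates the core idea the method adapts: among jobs with equal cost of delay, the shortest job is scheduled first.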
Procedia PDF Downloads 196
2987 Multivariate Control Chart to Determine Efficiency Measurements in Industrial Processes
Authors: J. J. Vargas, N. Prieto, L. A. Toro
Abstract:
Control charts are commonly used to monitor processes involving either variable or attribute quality characteristics, and determining the control limits is a critical task for quality engineers seeking to improve processes. Nonetheless, in some applications it is necessary to include an estimation of efficiency. In this paper, the ability to define the efficiency of an industrial process was added to a control chart by incorporating a data envelopment analysis (DEA) approach. In depth, Bayesian estimation was performed to calculate the posterior probability distribution of parameters such as the means and the variance-covariance matrix. This technique allows the data set to be analysed without relying on the hypothetical large sample implied in the problem, and serves as an approximation to the finite-sample distribution. A rejection simulation method was carried out to generate random variables from the parameter functions. Each resulting vector was used by the stochastic DEA model over several cycles to establish the distribution of the efficiency measures for each DMU (decision-making unit). A control limit was calculated with the obtained model; if a DMU exhibits low efficiency, the system efficiency is deemed out of control. The efficiency calculation reached a global optimum, which ensures model reliability. Keywords: data envelopment analysis, DEA, multivariate control chart, rejection simulation method
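The rejection simulation step described above can be sketched generically: draw candidates from a proposal distribution and accept each with probability proportional to the target density. The truncated-normal target below is a toy stand-in, not the paper's posterior, and the envelope constant m is chosen by hand for this example:

```python
import math
import random

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, m, n):
    """Draw n samples from target_pdf by rejection sampling.
    Requires target_pdf(x) <= m * proposal_pdf(x) for all x."""
    out = []
    while len(out) < n:
        x = proposal_sample()
        u = random.random()
        # accept x with probability target(x) / (m * proposal(x))
        if u * m * proposal_pdf(x) <= target_pdf(x):
            out.append(x)
    return out

# Toy target: unnormalised standard normal truncated to [0, 4];
# proposal: uniform on [0, 4] with density 0.25
def target(x):
    return math.exp(-x * x / 2.0)

random.seed(0)
samples = rejection_sample(
    target_pdf=target,
    proposal_sample=lambda: random.uniform(0.0, 4.0),
    proposal_pdf=lambda x: 0.25,
    m=4.0,   # target's maximum is 1 at x = 0, and 1 <= 4 * 0.25
    n=2000,
)
sample_mean = sum(samples) / len(samples)
```

In the paper's setting, each accepted parameter vector would then be fed to the stochastic DEA model to build the efficiency distribution per DMU.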
Procedia PDF Downloads 374
2986 Development and Evaluation of Economical Self-cleaning Cement
Authors: Anil Saini, Jatinder Kumar Ratan
Abstract:
Nowadays, the key issue for the scientific community is to devise innovative technologies for the sustainable control of urban pollution. In urban cities, a large surface area of masonry structures, buildings, and pavements is exposed to the open environment, which may be utilized for the control of air pollution if it is built from photocatalytically active cement-based construction materials such as concrete, mortars, paints, and blocks. Photocatalytically active cement is formulated by incorporating a photocatalyst in the cement matrix, and such cement is generally known as self-cleaning cement. In the literature, self-cleaning cement has been synthesized by incorporating nanosized TiO₂ (n-TiO₂) as a photocatalyst in the formulation of the cement. However, the utilization of n-TiO₂ for the formulation of self-cleaning cement has the drawbacks of nanotoxicity, higher cost, and agglomeration as far as commercial production and applications are concerned. The use of microsized TiO₂ (m-TiO₂) in place of n-TiO₂ for the commercial manufacture of self-cleaning cement could avoid the above-mentioned problems. However, m-TiO₂ is less photocatalytically active than n-TiO₂ due to its smaller surface area, higher band gap, and increased recombination rate. As such, the use of m-TiO₂ in the formulation of self-cleaning cement may lead to a reduction in photocatalytic activity, thus reducing the self-cleaning, depolluting, and antimicrobial abilities of the resultant cement material. Improving the photoactivity of m-TiO₂-based self-cleaning cement is therefore the key issue for its practical application in the present scenario. The current work proposes the use of surface-fluorinated m-TiO₂ in the formulation of self-cleaning cement to enhance its photocatalytic activity.
Calcined dolomite, a construction material, has also been utilized as a co-adsorbent along with the surface-fluorinated m-TiO₂ in the formulation of self-cleaning cement to enhance the photocatalytic performance. The surface-fluorinated m-TiO₂, calcined dolomite, and the formulated self-cleaning cement were characterized using diffuse reflectance spectroscopy (DRS), X-ray diffraction analysis (XRD), field emission scanning electron microscopy (FE-SEM), energy dispersive X-ray spectroscopy (EDS), X-ray photoelectron spectroscopy (XPS), scanning electron microscopy (SEM), BET (Brunauer-Emmett-Teller) surface area analysis, and energy dispersive X-ray fluorescence spectrometry (EDXRF). The self-cleaning property of the as-prepared self-cleaning cement was evaluated using the methylene blue (MB) test. The depolluting ability of the formulated self-cleaning cement was assessed through a continuous NOx removal test. The antimicrobial activity of the self-cleaning cement was appraised using the zone-of-inhibition method. The as-prepared self-cleaning cement, obtained by uniform mixing of 87% clinker, 10% calcined dolomite, and 3% surface-fluorinated m-TiO₂, showed a remarkable self-cleaning property by providing 53.9% degradation of the coated MB dye. The self-cleaning cement also displayed a noteworthy depolluting ability by removing 5.5% of NOx from the air. The inactivation of B. subtilis bacteria in the presence of light confirmed the significant antimicrobial property of the formulated self-cleaning cement. The self-cleaning, depolluting, and antimicrobial results are attributed to the synergetic effect of surface-fluorinated m-TiO₂ and calcined dolomite in the cement matrix. The present study opens an idea and route for further research into the facile and economical formulation of self-cleaning cement. Keywords: microsized-titanium dioxide (m-TiO₂), self-cleaning cement, photocatalysis, surface-fluorination
Procedia PDF Downloads 170
2985 Empirical Investigation of Bullwhip Effect with Sensitivity Analysis in Supply Chain
Authors: Shoaib Yousaf
Abstract:
The main purpose of this research is the empirical investigation of the bullwhip effect under sensitivity analysis in a two-tier supply chain. Simulation modeling was applied as the research methodology to perform a sensitivity analysis of the bullwhip effect in the rice industry of Pakistan. The research comprises two case studies chosen as a sample. The results of this research confirm that a reduction in production delay reduces the bullwhip effect, which conforms to the time-compression paradigm and underlines the significance of reducing production delay to lessen demand amplification. The results also indicate that increasing the time to adjust inventory decreases the bullwhip effect. Furthermore, since decreasing the value of alpha increases the damping effect of the exponential smoother, it is not surprising that it also reduces the bullwhip effect. Moreover, reducing the time to work in progress also reduces the bullwhip effect. This research will help practitioners and operations managers to reduce the major costs of their products in three ways: they can i) reduce inventory levels, ii) better utilize their capacity, and iii) improve their forecasting techniques. However, this study is based on a two-tier supply chain, while real supply chains have many tiers. Hence, future work will be extended across more than two-tier supply chains. Keywords: bullwhip effect, rice industry, supply chain dynamics, simulation, sensitivity analysis
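The bullwhip effect is usually quantified as the ratio of order variance to demand variance. The single-echelon sketch below is not the paper's simulation model; it assumes an exponential-smoothing forecast, an order-up-to policy, and an illustrative lead time and demand stream, but it reproduces the reported trend that a smaller alpha damps the amplification:

```python
import random
from statistics import pvariance

def bullwhip_ratio(demand, alpha, lead_time=2, warmup=50):
    """Ratio of order variance to demand variance for one echelon, using an
    exponential-smoothing forecast and an order-up-to replenishment policy."""
    forecast = demand[0]
    inventory = 0.0
    orders = []
    for d in demand:
        forecast = alpha * d + (1 - alpha) * forecast   # exponential smoothing
        target = forecast * (lead_time + 1)             # order-up-to level
        inventory -= d                                  # demand is served
        order = max(0.0, target - inventory)            # replenish up to target
        inventory += order
        orders.append(order)
    # discard the start-up transient before comparing variances
    return pvariance(orders[warmup:]) / pvariance(demand[warmup:])

random.seed(42)
demand = [100 + random.gauss(0, 10) for _ in range(500)]
```

A ratio above 1 means the echelon amplifies demand variability; raising alpha (less damping in the smoother) increases the ratio, in line with the finding above.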
Procedia PDF Downloads 144
2984 Simulation of Particle Damping in Boring Tool Using Combined Particles
Authors: S. Chockalingam, U. Natarajan, D. M. Santhoshsarang
Abstract:
Particle damping is a more promising vibration-attenuation technique for boring tools than other types of damping, having minimal effect on the strength, rigidity, and stiffness ratio of the machine tool structure. Due to the cantilevered nature of the boring tool holder in operation, it suffers chatter when the slenderness ratio of the tool increases. In this study, copper-stainless steel (SS) particles were packed inside the boring tool to act as a damper. The damper suppresses chatter generated during machining and also improves the machining efficiency of the tool at a higher slenderness ratio. In the first approach to particle damping, combined Cu-SS particles were packed inside the vibrating tool, whereas copper and stainless steel particles were also selected separately and packed inside another tool, and their effectiveness was analysed in this simulation. This study thus evaluates, by finite element simulation, the efficiency of boring tools equipped with copper particles, stainless steel particles, and a combination of both. The newly modified boring tool holder with particle damping was simulated using ANSYS 12.0, with and without particles. The aim of this study is to enhance the structural rigidity through particle damping, thus avoiding the occurrence of resonance in the boring tool during machining. Keywords: boring bar, copper-stainless steel, chatter, particle damping
Procedia PDF Downloads 461
2983 Intelligent Control of Doubly Fed Induction Generator Wind Turbine for Smart Grid
Authors: Amal A. Hassan, Faten H. Fahmy, Abd El-Shafy A. Nafeh, Hosam K. M. Youssef
Abstract:
Due to the growing penetration of wind energy into the power grid, it is very important to study its interactions with the power system and to provide a good control technique in order to deliver high-quality power. In this paper, an intelligent control methodology is proposed for optimizing the controller parameters of a doubly fed induction generator (DFIG) based wind turbine generation system (WTGS). The genetic algorithm (GA) and particle swarm optimization (PSO) are employed and compared for the adaptive tuning of the parameters of the proposed proportional-integral (PI) controllers of the back-to-back converters of the DFIG-based WTGS. For this purpose, the dynamic model of the WTGS with DFIG and its associated controllers is presented. Furthermore, the simulation of the system is performed using MATLAB/SIMULINK and the SIMPOWERSYSTEM toolbox to illustrate the performance of the optimized controllers. Finally, this work is validated on a 33-bus radial test system to show the interaction between wind distributed generation (DG) systems and the distribution network. Keywords: DFIG wind turbine, intelligent control, distributed generation, particle swarm optimization, genetic algorithm
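The PSO side of such a tuning loop can be sketched in a few lines: a swarm minimises a cost function over the controller gains. The first-order plant, integral-squared-error cost, and gain bounds below are toy assumptions standing in for the full DFIG converter model, not the paper's system:

```python
import random

def pso_minimize(cost, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimisation over a box-bounded search space."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp the new position to the search box
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

def pi_step_cost(gains, dt=0.01, steps=500):
    """Integral-squared error of a PI controller driving a toy first-order
    plant (time constant 0.5 s, unit gain) to a unit step reference."""
    kp, ki = gains
    y = integ = err_sq = 0.0
    for _ in range(steps):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        y += dt * (u - y) / 0.5          # Euler step of the plant dynamics
        err_sq += e * e * dt
    return err_sq

random.seed(1)
best_gains, best_cost = pso_minimize(pi_step_cost, [(0.0, 10.0), (0.0, 10.0)])
```

The same pattern applies with GA in place of PSO, which is how the two optimizers are compared in the paper.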
Procedia PDF Downloads 268
2982 Evaluation of Deteriorated Fired Clay Bricks Based on Schmidt Hammer Tests
Authors: Laurent Debailleux
Abstract:
Although past research has focused on parameters influencing the vulnerability of brick and its decay, in practice ancient fired clay bricks are usually replaced without any particular assessment of their characteristics. This paper presents results of non-destructive Schmidt hammer tests performed on ancient fired clay bricks sampled from historic masonry. The samples under study were manufactured between the 18th and 20th centuries and came from facades and interior walls. Tests were performed on three distinct brick surfaces, depending on their position within the masonry unit. Schmidt hammer tests were carried out in order to measure the mean rebound value (Rn), which refers to the resistance of the surface to successive impacts of the hammer plunger tip. Results indicate that rebound values increase with successive impacts at the same point. Therefore, a mean Schmidt hammer rebound value (Rn) limited to the first impact on a surface minimises the estimate of compressive strength. In addition, the results illustrate that this technique is sensitive enough to measure weathering differences, even between different surfaces of a particular sample. Finally, the paper also highlights the relevance of considering the position of the brick within the masonry when conducting assessments of the material's strength. Keywords: brick, non-destructive tests, rebound number, Schmidt hammer, weathering grade
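The first-impact convention for Rn can be illustrated with a short sketch. The readings below are hypothetical, not the paper's data; they simply show how repeated impacts at one point inflate the mean rebound value:

```python
from statistics import mean

def first_impact_rn(impact_series):
    """Mean rebound number Rn using only the FIRST impact at each test point,
    since repeated impacts at the same point compact the surface and
    inflate subsequent readings."""
    return mean(series[0] for series in impact_series)

def all_impacts_rn(impact_series):
    """Mean over every impact; biased upward by surface hardening."""
    return mean(r for series in impact_series for r in series)

# Hypothetical readings: three successive impacts at each of four points
readings = [
    [32, 35, 37],
    [30, 33, 36],
    [34, 36, 38],
    [31, 34, 35],
]
rn_first = first_impact_rn(readings)
rn_all = all_impacts_rn(readings)
```

The first-impact mean is the conservative figure to carry into a compressive-strength correlation.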
Procedia PDF Downloads 161
2981 Influence of Silicon Carbide Particle Size and Thermo-Mechanical Processing on Dimensional Stability of Al 2124SiC Nanocomposite
Authors: Mohamed M. Emara, Heba Ashraf
Abstract:
This study investigates the effect of silicon carbide (SiC) particle size and thermo-mechanical processing on the dimensional stability of aluminum alloy 2124. Three SiC weight fractions were investigated (2.5, 5, and 10 wt.%), with different SiC particle sizes (25 μm, 5 μm, and 100 nm), produced using a mechanical ball mill. The standard test samples were fabricated using a powder metallurgy technique. Samples both before and after extrusion were heated from room temperature up to 400ºC in a dilatometer at different heating rates, namely 10, 20, and 40ºC/min. The analysis showed that, for all materials, length change increased as temperature increased, and that the temperature sensitivity of the aluminum alloy decreased in the presence of both micro- and nano-sized silicon carbide. For all conditions, the nanocomposites showed better dimensional stability than the conventional Al 2124/SiC composites. The after-extrusion samples showed better thermal stability and less temperature sensitivity for both micro- and nano-sized silicon carbide. Keywords: aluminum 2124 metal matrix composite, SiC nano-sized reinforcements, powder metallurgy, extrusion, mechanical ball mill, dimensional stability
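Dilatometer runs of this kind reduce to a mean coefficient of linear thermal expansion, alpha = (ΔL/L0)/ΔT. A minimal sketch with hypothetical numbers (the sample length and elongation below are illustrative, not the study's measurements):

```python
def mean_cte(l0_mm, delta_l_um, t_start_c, t_end_c):
    """Mean coefficient of linear thermal expansion (1/K) from a dilatometer
    run: alpha = (dL / L0) / dT. Lengths in mm, elongation in micrometres."""
    delta_l_mm = delta_l_um * 1e-3
    return (delta_l_mm / l0_mm) / (t_end_c - t_start_c)

# Hypothetical run: a 25 mm sample elongating 190 um between 25 C and 400 C,
# giving roughly 2e-5 per K, the order of magnitude typical of Al alloys
alpha = mean_cte(25.0, 190.0, 25.0, 400.0)
```

A lower alpha for the SiC-reinforced samples is what "better dimensional stability" translates to numerically.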
Procedia PDF Downloads 526
2980 A Model for Diagnosis and Prediction of Coronavirus Using Neural Network
Authors: Sajjad Baghernezhad
Abstract:
Meta-heuristic and hybrid algorithms are highly adept at modeling medical problems. In this study, a neural network was used to predict COVID-19 among high-risk and low-risk patients. Data for the target population, consisting of 550 high-risk and low-risk patients, were collected from the Kerman University of Medical Sciences medical center to predict the coronavirus. The memetic algorithm, which is a combination of a genetic algorithm and a local search algorithm, was used to update the weights of the neural network and improve its accuracy. The initial study showed that the accuracy of the neural network was 88%; after the weights were updated, the memetic algorithm increased it to 93%. For the proposed model, the sensitivity, specificity, positive predictive value, and accuracy were 97.4, 92.3, 95.8, 96.2, and 0.918, respectively; for the genetic algorithm model, 87.05, 92.07, 89.45, 97.30, and 0.967; and for the logistic regression model, 87.40, 95.20, 93.79, 0.87, and 0.916. Based on the findings of this study, neural network models have a lower error rate in the diagnosis of patients based on individual variables and vital signs compared to the regression model. The findings of this study can help planners and health care providers in designing programs for the early diagnosis of COVID-19. Keywords: COVID-19, decision support technique, neural network, genetic algorithm, memetic algorithm
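The reported sensitivity, specificity, positive predictive value, and accuracy all derive from a 2x2 confusion matrix. A minimal sketch (the counts below are hypothetical for a 550-patient cohort, not the study's data):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard screening metrics from a 2x2 confusion matrix:
    tp/fn = true/false negatives on the positive class,
    tn/fp = true negatives / false positives on the negative class."""
    sensitivity = tp / (tp + fn)                 # true positive rate
    specificity = tn / (tn + fp)                 # true negative rate
    ppv = tp / (tp + fp)                         # positive predictive value
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, ppv, accuracy

# Hypothetical confusion matrix for a cohort of 550 patients
sens, spec, ppv, acc = diagnostic_metrics(tp=265, fn=10, tn=255, fp=20)
```

Comparing models on all four metrics at once, as the abstract does, guards against a classifier that trades specificity for sensitivity.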
Procedia PDF Downloads 67
2979 A Hybrid Feature Selection Algorithm with Neural Network for Software Fault Prediction
Authors: Khalaf Khatatneh, Nabeel Al-Milli, Amjad Hudaib, Monther Ali Tarawneh
Abstract:
Software fault prediction identifies potential faults in software modules during the development process. In this paper, we present a novel approach to software fault prediction that combines a feedforward neural network with particle swarm optimization (PSO). The PSO algorithm is employed as a feature selection technique to identify the most relevant metrics as inputs to the neural network, which enhances the quality of feature selection and subsequently improves the performance of the neural network model. Through comprehensive experiments on software fault prediction datasets, the proposed hybrid approach achieves better results, outperforming traditional classification methods. The integration of PSO-based feature selection with the neural network enables the identification of critical metrics that provide more accurate fault prediction. The results show the effectiveness of the proposed approach and its potential for reducing development costs and effort by detecting faults early in the software development lifecycle. Further research and validation on diverse datasets will help solidify the practical applicability of the new approach in real-world software engineering scenarios. Keywords: feature selection, neural network, particle swarm optimization, software fault prediction
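For feature selection, PSO is typically run in its binary form: velocities are squashed through a sigmoid into bit-flip probabilities, and each particle is a 0/1 mask over the candidate metrics. A minimal sketch follows, with a toy objective standing in for the neural network's validation score (the informative-feature indices and the penalty weight are assumptions of this example, not the paper's setup):

```python
import math
import random

INFORMATIVE = {0, 3, 7}   # pretend these metric indices actually carry signal

def fitness(mask):
    """Stand-in objective: reward informative features, penalise noise ones.
    In the paper's setting this would be the neural network's validation score
    on the selected subset of software metrics."""
    hits = sum(1 for i, bit in enumerate(mask) if bit and i in INFORMATIVE)
    noise = sum(1 for i, bit in enumerate(mask) if bit and i not in INFORMATIVE)
    return hits - 0.2 * noise

def binary_pso(n_features, n_particles=15, iters=40, w=0.7, c1=1.5, c2=1.5):
    """Binary PSO: velocities pass through a sigmoid to give bit probabilities."""
    pos = [[random.randint(0, 1) for _ in range(n_features)] for _ in range(n_particles)]
    vel = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_features):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                prob = 1.0 / (1.0 + math.exp(-vel[i][d]))   # sigmoid transfer
                pos[i][d] = 1 if random.random() < prob else 0
            f = fitness(pos[i])
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f > gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return gbest, gbest_fit

random.seed(7)
best_mask, best_fit = binary_pso(n_features=10)
```

The selected mask then determines which metrics are fed to the neural network, which is the wrapper arrangement the hybrid approach describes.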
Procedia PDF Downloads 95
2978 Adjunct Placement in Educated Nigerian English
Authors: Juliet Charles Udoudom
Abstract:
In non-native language-use environments, language users have been known to demonstrate marked variations in both the spoken and written production of the target language. For instance, analyses of the written productions of Nigerian users of English have shown inappropriate sequencing of sentence elements, resulting in distortions of meaning and/or other problems of syntax. This study analyses the structure of sentences in the written production of 450 educated Nigerian users of English to establish their sensitivity to adjunct placement and the extent to which it bears on meaning interpretation. The respondents were selected by a stratified random sampling technique from six universities in south-south Nigeria, using education as the main yardstick for stratification. The systemic functional grammar analytic format was used in analyzing the sentences selected from the corpus. Findings from the analyses indicate that, of the 8,576 tokens of adjuncts in the entire corpus, 4,550 (53.05%) of the circumstantial adjuncts were appropriately placed, while 2,839 (33.11%) of the modal adjuncts occurred at appropriate locations in the clauses analyzed. Conjunctive adjunct placement accounted for 1,187 occurrences, representing 13.84% of the entire corpus. Further findings revealed that prepositional phrases (PPs) were not well construed by respondents as capable of realizing adjunct functions and were inappropriately placed. Keywords: adjunct, adjunct placement, conjunctive adjunct, circumstantial adjunct, systemic grammar
Procedia PDF Downloads 17