Search results for: predictive performance
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13287

7347 Validation of Two Field-Based Dynamic Balance Tests in the Activation of Selected Hip and Knee Stabilizer Muscles

Authors: Mariam A. Abu-Alim

Abstract:

The purpose of this study was to validate the muscle activation amplitudes of two field-based dynamic balance tests, which are also used as strengthening and motor-control exercises, in the activation of selected hip and knee stabilizer muscles. Methods: Eighteen college-age female students (21±2 years; 65.6±8.7 kg; 169.7±8.1 cm) who participated in physical activity for at least 30 minutes on most days of the week volunteered. The wireless BIOPAC (MP150, BIOPAC Systems, Inc., California, USA) surface electromyography system was used to measure the activation of the Gluteus Medius and the Adductor Magnus among the hip stabilizer muscles, and of the Hamstrings, Quadriceps, and Gastrocnemius among the knee stabilizer muscles. Surface electrodes (EL 503, BIOPAC Systems, Inc.) connected to dual wireless EMG BioNomadix transmitters were placed on the selected muscles of each participant's dominant side. Manual muscle testing was performed to obtain the maximal voluntary isometric contraction (MVIC), against which all muscle activity data collected during the three reaching directions (anterior, posteromedial, posterolateral) of the Star Excursion Balance Test (SEBT) and the Y-Balance Test (YBT) could be normalized. All participants performed three trials for each reaching direction of the SEBT and the YBT. For each participant, the dominant-leg trial, which was also the standing leg, was selected for analysis. Results: The selected hip stabilizer muscles (Gluteus Medius, Adductor Magnus) both exceeded 100% MVIC during performance of the SEBT in all three directions, whereas the selected knee stabilizer muscles exceeded 100% MVIC and were significantly more activated during performance of the YBT in all three reaching directions. The results showed that the posterolateral and posteromedial reaching directions of both dynamic balance tests produced the greatest activation levels, above 200% MVIC for all tested muscles except the hamstrings.
Conclusion: The results of this study showed that the SEBT and the YBT elicited validated, high levels of muscular activity in the hip and knee stabilizer muscles, and can therefore be used to build strength and motor control and to reduce injury risk; injuries involving these hip and knee stabilizer muscles account for about 35% of all athletic injuries, depending on the type of sport.
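As an illustration of the normalization step described above, task EMG amplitude can be expressed relative to the MVIC reference. This is a minimal sketch; the function name and sample amplitudes are hypothetical, not the study's data.

```python
import numpy as np

def percent_mvic(emg_rms, mvic_rms):
    """Express task EMG amplitude as a percentage of the maximal
    voluntary isometric contraction (MVIC) reference."""
    if mvic_rms <= 0:
        raise ValueError("MVIC reference must be positive")
    return 100.0 * np.asarray(emg_rms, float) / mvic_rms

# Illustrative values only: task RMS amplitudes (mV) and an MVIC RMS of 0.8 mV
task = np.array([0.9, 1.2, 1.6])
activation = percent_mvic(task, 0.8)
print(activation)  # values above 100 indicate supra-MVIC activation
```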

Keywords: dynamic balance tests, electromyography, hip stabilizer muscles, knee stabilizer muscles

Procedia PDF Downloads 144
7346 An Approach on the Design of a Solar Cell Characterization Device

Authors: Christoph Mayer, Dominik Holzmann

Abstract:

This paper presents the development of a compact, portable, and easy-to-handle solar cell characterization device. The presented device reduces the effort and cost of single solar cell characterization to a minimum. It enables realistic characterization of cells under sunlight within minutes. In the field of photovoltaic research, the common way to characterize a single solar cell or a module is to measure its current-voltage (I-V) curve. From this characteristic, the performance and the degradation rate can be determined, both of which are important to the consumer and the developer. The paper consists of a description of the system design, a summary of the measurement results, and an outline of further developments.
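The I-V characterization such a device performs can be illustrated with a minimal sketch. The curve below is synthetic and the sampling conventions (increasing voltage, decreasing current) are assumptions for illustration, not the device's actual measurement protocol.

```python
import numpy as np

def iv_metrics(v, i):
    """Basic figures of merit from a measured I-V curve. Assumes v is
    increasing, i strictly decreasing, with i[0] ~ Isc and i reaching 0
    at Voc (illustrative convention only)."""
    v = np.asarray(v, float)
    i = np.asarray(i, float)
    p = v * i                            # instantaneous power at each sample
    p_mpp = float(p.max())               # maximum power point
    i_sc = float(i[0])                   # short-circuit current (at v = 0)
    v_oc = float(np.interp(0.0, -i, v))  # open-circuit voltage (where i = 0)
    ff = p_mpp / (v_oc * i_sc)           # fill factor
    return {"Isc": i_sc, "Voc": v_oc, "Pmpp": p_mpp, "FF": ff}

# Synthetic curve for a single cell
v = np.linspace(0.0, 0.6, 7)
i = np.array([5.0, 4.98, 4.9, 4.7, 4.0, 2.5, 0.0])
m = iv_metrics(v, i)
print(round(m["FF"], 3))  # 0.533 for this synthetic curve
```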

Keywords: solar cell, photovoltaics, PV, characterization

Procedia PDF Downloads 413
7345 Duration of Isolated Vowels in Infants with Cochlear Implants

Authors: Paris Binos

Abstract:

The present work investigates developmental aspects of the duration of isolated vowels in infants with normal hearing compared to those who received cochlear implants (CIs) before two years of age. Infants with normal hearing produced shorter vowel durations, a finding related to more mature production abilities. First isolated vowels are transparent during the protophonic stage, as evidence of increased motor and linguistic control. Vowel duration is a crucial factor in the transition from prelexical speech to normal adult speech. Despite current knowledge of data for infants with normal hearing, more research is needed to unravel production skills in early-implanted children. Thus, isolated vowel productions by two congenitally hearing-impaired Greek infants (implantation ages 1:4-1:11; post-implant ages 0:6-1:3) were recorded and sampled for six months after implantation with a Nucleus-24. The results were compared with the productions of three normal-hearing infants (chronological ages 0:8-1:1). Vegetative data and vocalizations masked by external noise or sounds were excluded. Participants had no other disabilities and had deafness of unknown etiology. Prior to implantation, the infants had an average unaided hearing loss of 95-110 dB HL, while the post-implantation PTA decreased to 10-38 dB HL. The current research offers a methodology for the processing of prelinguistic productions based on a combination of acoustical and auditory analyses. Within this methodological framework, duration was measured on spectrograms, based on wideband analysis, from the onset of voicing to the end of the vowel. The end was marked by two co-occurring events: 1) the onset of aperiodicity, with a rapid change in amplitude in the waveform, and 2) a loss of formant energy. The cut-off level of significance was set at 0.05 for all tests.
Bonferroni post hoc tests indicated that the difference between the mean vowel durations of infants wearing CIs and their normal-hearing peers was significant: the mean vowel duration of the CI group was longer than that of the normal-hearing peers (p < 0.001). The current longitudinal findings contribute to the existing data on the performance of children wearing CIs at a very young age and also enrich the data on the Greek language. The weakness in CI users' performance described above is a challenge for future work on speech processing and CI processing strategies.
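The duration measurement from voicing onset to vowel offset can be approximated, for illustration only, with a simple amplitude-envelope criterion. The study itself used spectrogram-based onset/offset criteria; the signal, window length, and threshold below are synthetic assumptions.

```python
import numpy as np

def vowel_duration(signal, fs, rel_threshold=0.1):
    """Estimate vowel duration (s) as the span where the smoothed,
    rectified signal exceeds a fraction of its peak -- a rough stand-in
    for the spectrogram-based criteria used in the study."""
    env = np.abs(signal)
    win = max(1, int(0.005 * fs))                 # 5 ms smoothing window
    env = np.convolve(env, np.ones(win) / win, mode="same")
    above = np.flatnonzero(env >= rel_threshold * env.max())
    if above.size == 0:
        return 0.0
    return (above[-1] - above[0]) / fs

# Synthetic 200 ms vowel-like tone inside 500 ms of silence
fs = 16000
t = np.arange(int(0.5 * fs)) / fs
sig = np.where((t >= 0.15) & (t < 0.35), np.sin(2 * np.pi * 220 * t), 0.0)
d = vowel_duration(sig, fs)
print(round(d, 3))  # close to the true 0.2 s
```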

Keywords: cochlear implant, duration, spectrogram, vowel

Procedia PDF Downloads 257
7344 Importance of Human Resources Training in an Information Age

Authors: A. Serap Fırat

Abstract:

The aim of this study is to display, conceptually, the relationship and interaction between human resources training and the information age. A fast transition from an industrial society to an information society has occurred, and organizations have been seeking ways to cope with this change. Human resources policy and human capital with enhanced competence have a direct impact on work performance; therefore, this paper deals with the increased importance of human resource management, given that it nurtures human capital. Literature review and scanning are used as the method of this study. Both local and foreign literature and expert views are employed, as far as possible, in constructing the theoretical framework of this study.

Keywords: human resources, information age, education, organization, occupation

Procedia PDF Downloads 363
7343 Parallel Multisplitting Methods for DAEs

Authors: Ahmed Machmoum, Malika El Kyal

Abstract:

We consider an iterative parallel multisplitting method for differential algebraic equations. The main feature of the proposed idea is its use of an asynchronous form. We prove that the multisplitting technique can effectively accelerate the convergence of the iterative process. The main characteristic of an asynchronous mode is that a local algorithm does not have to wait at predetermined points for messages to become available. We allow some processors to communicate more frequently than others, and we allow the communication delays to be substantial and unpredictable. Note that synchronous algorithms, in the computer science sense, are particular cases of our asynchronous formulation.
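A minimal synchronous sketch of a two-splitting iteration on a linear system may help fix ideas. The paper treats DAEs in asynchronous mode; this toy version merely averages forward and backward Gauss-Seidel sweeps, and the asynchronous variant would let each splitting update from whatever iterate is locally available.

```python
import numpy as np

def multisplitting_jacobi(A, b, n_iter=200):
    """Weighted two-splitting iteration for Ax = b: a lower-triangular
    and an upper-triangular splitting are iterated in parallel and their
    local solutions averaged (weights 1/2 each). Synchronous sketch only."""
    n = len(b)
    x = np.zeros(n)
    M1 = np.tril(A)            # splitting 1: lower-triangular part of A
    M2 = np.triu(A)            # splitting 2: upper-triangular part of A
    for _ in range(n_iter):
        x1 = np.linalg.solve(M1, b - (A - M1) @ x)
        x2 = np.linalg.solve(M2, b - (A - M2) @ x)
        x = 0.5 * (x1 + x2)    # weighted combination of local solutions
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = multisplitting_jacobi(A, b)
print(np.allclose(A @ x, b))  # True for this diagonally dominant system
```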

Keywords: computer, multi-splitting methods, asynchronous mode, differential algebraic systems

Procedia PDF Downloads 541
7342 From Intuitive to Constructive Audit Risk Assessment: A Complementary Approach to CAATTs Adoption

Authors: Alon Cohen, Jeffrey Kantor, Shalom Levy

Abstract:

The use of the audit risk model in auditing has faced limitations and difficulties, leading auditors to rely on a conceptual level of its application. The qualitative approach to assessing risks has resulted in divergent risk assessments, affecting the quality of audits and decision-making on the adoption of CAATTs. This study aims to investigate the risk factors impacting the implementation of the audit risk model and to propose a complementary risk-based instrument (KRIs) to form substantive risk judgments and mitigate a heightened risk of material misstatement (RMM). The study addresses the question of how risk factors impact the implementation of the audit risk model, improve risk judgments, and aid in the adoption of CAATTs. The study uses a three-stage scale development procedure involving a pretest and a subsequent study with two independent samples. The pretest involves an exploratory factor analysis, while the subsequent study employs confirmatory factor analysis for construct validation. Additionally, the authors test the ability of the KRIs to predict the audit effort needed to mitigate a heightened RMM. Data were collected through two independent samples involving 767 participants. The collected data were analyzed using exploratory factor analysis and confirmatory factor analysis to assess scale validity and construct validation. The suggested KRIs, comprising two risk components and seventeen risk items, are found to have high predictive power in determining the audit effort needed to reduce the RMM. The study validates the suggested KRIs as an effective instrument for risk assessment and for decision-making on the adoption of CAATTs. This study contributes to the existing literature by implementing a holistic approach to risk assessment and providing a quantitative expression of assessed risks. It bridges the gap between intuitive risk evaluation and the theoretical domain, clarifying the mechanism of risk assessments.
It also helps improve the uniformity and quality of risk assessments, aiding audit standard-setters in issuing updated guidelines on CAATT adoption. A few limitations and recommendations for future research should be mentioned. First, the process of developing the scale was conducted in the Israeli auditing market, which follows the International Standards on Auditing (ISAs). Although ISAs are adopted in European countries, for greater generalization, future studies could focus on other countries that adopt additional or local auditing standards. Second, this study revealed risk factors that have a material impact on the assessed risk. However, there could be additional risk factors that influence the assessment of the RMM. Therefore, future research could investigate other risk segments, such as operational and financial risks, to bring a broader generalizability to our results. Third, although the sample size in this study fits acceptable scale development procedures and enables drawing conclusions from the body of research, future research may develop standardized measures based on larger samples to reduce the generation of equivocal results and suggest an extended risk model.
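The exploratory-factor step of the scale development can be sketched on synthetic data. This is a bare-bones eigendecomposition with the Kaiser retention criterion, not the authors' actual procedure, items, or software; the latent structure below is invented.

```python
import numpy as np

def exploratory_factors(X, kaiser=1.0):
    """Minimal exploratory factor extraction: eigendecompose the item
    correlation matrix and keep components with eigenvalue > 1 (Kaiser
    criterion). The loadings show which items group together."""
    R = np.corrcoef(X, rowvar=False)
    vals, vecs = np.linalg.eigh(R)
    order = np.argsort(vals)[::-1]          # sort eigenvalues descending
    vals, vecs = vals[order], vecs[:, order]
    keep = vals > kaiser
    return vecs[:, keep] * np.sqrt(vals[keep])

rng = np.random.default_rng(0)
# Synthetic responses: items 0-2 share one latent risk factor, items 3-5 another
f = rng.normal(size=(300, 2))
X = np.column_stack([f[:, 0]] * 3 + [f[:, 1]] * 3) + 0.5 * rng.normal(size=(300, 6))
L = exploratory_factors(X)
print(L.shape)  # two factors retained for this synthetic two-component structure
```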

Keywords: audit risk model, audit efforts, CAATTs adoption, key risk indicators, sustainability

Procedia PDF Downloads 68
7341 Implications of Circular Economy on Users Data Privacy: A Case Study on Android Smartphones Second-Hand Market

Authors: Mariia Khramova, Sergio Martinez, Duc Nguyen

Abstract:

Modern electronic devices, particularly smartphones, are characterised by an extremely high environmental footprint and a short product lifecycle. Every year, manufacturers release new models with ever more superior performance, which pushes customers towards new purchases. As a result, millions of devices are accumulating in the urban mine. To tackle these challenges, the concept of the circular economy has been introduced to promote the repair, reuse, and recycling of electronics. In this case, electronic devices that previously ended up in landfills or households get a second life, thereby reducing the demand for new raw materials. Smartphone reuse is gradually gaining wider adoption, partly due to the price increase of flagship models, consequently boosting circular economy implementation. However, along with the reuse of a communication device, the circular economy approach needs to ensure that the data of the previous user have not been 'reused' together with the device. This is especially important since modern smartphones are comparable with computers in terms of performance and the amount of data stored. These data vary from pictures, videos, and call logs to social security numbers, passport and credit card details, and from personal information to corporate confidential data. To assess how well data privacy requirements are followed on the second-hand smartphone market, a sample of 100 Android smartphones was purchased from IT Asset Disposition (ITAD) facilities responsible for data erasure and resale. Although the devices should not have stored any user data by the time they left the ITAD, it was possible to retrieve data from 19% of the sample. The applied techniques varied from manual device inspection to sophisticated equipment and tools. These findings indicate a significant barrier to the implementation of the circular economy and a limitation of smartphone reuse.
Therefore, in order to motivate users to donate or sell their old devices and make electronics use more sustainable, data privacy on the second-hand smartphone market should be significantly improved. The presented research has been carried out in the framework of the sustainablySMART project, which is part of the Horizon 2020 EU Framework Programme for Research and Innovation.

Keywords: android, circular economy, data privacy, second-hand phones

Procedia PDF Downloads 124
7340 Criticality Assessment Model for Water Pipelines Using Fuzzy Analytical Network Process

Authors: A. Assad, T. Zayed

Abstract:

Water networks (WNs) are responsible for providing adequate amounts of safe, high-quality water to the public. Like other critical infrastructure systems, WNs are subject to deterioration, which increases the number of breaks and leaks and lowers water quality. In Canada, 35% of water assets require critical attention, and there is a significant gap between the needed and the implemented investments. Thus, the need for efficient rehabilitation programs is becoming more urgent given the paradigm of aging infrastructure and tight budgets. The first step towards developing such programs is to formulate a performance index that reflects the current condition of water assets along with their criticality. While numerous studies in the literature have focused on various aspects of condition assessment and reliability, limited efforts have investigated the criticality of such components. Critical water mains are those whose failure causes significant economic, environmental, or social impacts on a community. Including criticality in computing the performance index will provide a prioritizing tool for the optimal allocation of the available resources and budget. In this study, several social, economic, and environmental factors that dictate the criticality of water pipelines were elicited from the literature. Expert opinions were sought to provide pairwise comparisons of the importance of these factors. Subsequently, fuzzy logic, along with the Analytical Network Process (ANP), was utilized to calculate the weights of the criteria factors. Multi-Attribute Utility Theory (MAUT) was then employed to integrate these weights with the attribute values of several pipelines in the Montreal WN. The result is a criticality index, from 0 to 1, that quantifies the severity of the consequence of failure of each pipeline.
A novel contribution of this approach is that it accounts both for the interdependency between criteria factors and for the inherent uncertainties in calculating criticality. The practical value of the current study is represented by an automated Excel-MATLAB tool, which can be used by utility managers and decision makers in planning future maintenance and rehabilitation activities where highly efficient use of material and time resources is required.
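The MAUT aggregation step can be sketched as a weighted additive utility. In the study the weights would come from the fuzzy ANP; the attributes, utility shapes, and numbers below are entirely hypothetical.

```python
import numpy as np

def criticality_index(attributes, weights, utilities):
    """Additive MAUT aggregation: map each raw attribute to a 0-1 utility,
    then combine with the criteria weights to get a 0-1 criticality score."""
    u = np.array([util(a) for util, a in zip(utilities, attributes)])
    w = np.asarray(weights, float)
    w = w / w.sum()                       # normalize weights to sum to 1
    return float(np.clip(w @ u, 0.0, 1.0))

# Hypothetical attributes: diameter (mm), population served, distance to hospital (m)
utilities = [
    lambda d: min(d / 1000.0, 1.0),       # larger mains -> higher consequence
    lambda p: min(p / 50000.0, 1.0),      # more people served -> higher consequence
    lambda m: max(1.0 - m / 500.0, 0.0),  # closer to a hospital -> more critical
]
score = criticality_index([600, 20000, 100], [0.5, 0.3, 0.2], utilities)
print(round(score, 3))  # 0.58 for these hypothetical inputs
```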

Keywords: water networks, criticality assessment, asset management, fuzzy analytical network process

Procedia PDF Downloads 141
7339 An Infrared Inorganic Scintillating Detector Applied in Radiation Therapy

Authors: Sree Bash Chandra Debnath, Didier Tonneau, Carole Fauquet, Agnes Tallet, Julien Darreon

Abstract:

Purpose: Inorganic scintillating dosimetry is a recent, promising technique for solving several dosimetric issues and providing quality assurance in radiation therapy. Despite several advantages, the major issue with scintillating detectors is the Cerenkov effect, typically induced in the visible emission range. In this context, the purpose of this work is to evaluate the performance of a novel infrared inorganic scintillator detector (IR-ISD) in radiation therapy treatment, to ensure a Cerenkov-free signal and the best match between the delivered and prescribed doses during treatment. Methods: A simple, small-scale infrared inorganic scintillating detector of 100 µm diameter, with a sensitive scintillating volume of 2×10⁻⁶ mm³, was developed. A prototype of the dose verification system has been introduced, based on the PTIR1470/F material (provided by Phosphor Technology®) used in the proposed IR-ISD. The detector was tested on an Elekta LINAC system tuned at 6 MV/15 MV and on a brachytherapy source (Ir-192) used in the patient treatment protocol. The associated dose rate was measured as a count rate (photons/s) using a highly sensitive photon counter (sensitivity ~20 ph/s). All measurements were performed in IBA™ water tank phantoms, following the international Technical Reports Series recommendations (TRS 381) for radiotherapy and the TG-43U1 recommendations for brachytherapy. The performance of the detector was tested through several dosimetric parameters such as PDD, beam profiling, Cerenkov measurement, dose linearity, dose rate linearity, repeatability, and scintillator stability. Finally, a comparative study is also presented using a reference microdiamond dosimeter, Monte Carlo (MC) simulation, and data from recent literature. Results: This study highlights the complete removal of the Cerenkov effect, especially for small-field radiation beam characterization.
The detector provides a fully linear response with dose over the 4 cGy to 800 cGy range, independently of the field size, from 5 × 5 cm² down to 0.5 × 0.5 cm². Excellent repeatability (0.2% variation from average) with day-to-day reproducibility (0.3% variation) was observed. Measurements demonstrated that the ISD responds linearly to dose rate (R² = 1) from 50 cGy/s to 1000 cGy/s. PDD profiles obtained in water present identical behavior, with a build-up maximum depth dose at 15 mm, for different small-field irradiations. Field profiles as small as 0.5 × 0.5 cm² have been characterized, and the field cross-profile presents a Gaussian-like shape. The standard deviation (1σ) of the scintillating signal remains within 0.02%, with a very low convolution effect thanks to the small sensitive volume. Finally, during brachytherapy, a comparison with MC simulations shows that, accounting for energy dependency, measurements agree within 0.8% down to a 0.2 cm source-to-detector distance. Conclusion: The scintillating detector proposed in this study shows no Cerenkov radiation and efficient performance for several radiation therapy measurement parameters. Therefore, it is anticipated that the IR-ISD system can proceed to validation in direct clinical investigations, such as dose verification and quality control in the Treatment Planning System (TPS).
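The dose-linearity check reported above can be sketched as a least-squares fit with a coefficient of determination. The calibration numbers below are illustrative, not the measured data.

```python
import numpy as np

def dose_linearity(count_rate, dose):
    """Least-squares fit of detector count rate vs delivered dose, plus
    the R^2 value used to judge linearity of the response."""
    slope, intercept = np.polyfit(dose, count_rate, 1)
    pred = slope * np.asarray(dose) + intercept
    ss_res = np.sum((count_rate - pred) ** 2)
    ss_tot = np.sum((count_rate - np.mean(count_rate)) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Illustrative calibration points (dose in cGy, rate in photons/s)
dose = np.array([4, 50, 100, 200, 400, 800], float)
rate = 150.0 * dose + np.array([3, -5, 8, -2, 6, -4])  # near-linear synthetic data
slope, intercept, r2 = dose_linearity(rate, dose)
print(slope, r2)  # slope close to 150, R^2 close to 1
```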

Keywords: IR-Scintillating detector, dose measurement, micro-scintillators, Cerenkov effect

Procedia PDF Downloads 178
7338 Finite Sample Inferences for Weak Instrument Models

Authors: Gubhinder Kundhi, Paul Rilstone

Abstract:

It is well established that Instrumental Variable (IV) estimators can be poorly behaved in the presence of weak instruments and, in particular, can be quite biased in finite samples. Finite sample approximations to the distributions of these estimators are obtained using Edgeworth and saddlepoint expansions. Departures from normality of the distributions of these estimators are analyzed using higher-order analytical corrections in these expansions. In a Monte Carlo experiment, the performance of these expansions is compared to the first-order approximation and to other methods commonly used in finite samples, such as the bootstrap.
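The finite-sample bias of IV under weak instruments is easy to reproduce in a small Monte Carlo sketch. This is a just-identified design with an assumed weak first stage; it illustrates the problem the paper addresses, not the paper's expansions themselves.

```python
import numpy as np

def iv_estimate(y, x, z):
    """Just-identified IV estimator: beta_hat = (z'y) / (z'x)."""
    return float(z @ y / (z @ x))

rng = np.random.default_rng(42)
beta, pi, n, reps = 1.0, 0.1, 50, 2000    # small pi -> weak instrument
estimates = []
for _ in range(reps):
    z = rng.normal(size=n)
    u = rng.normal(size=n)                # structural error
    v = 0.8 * u + 0.6 * rng.normal(size=n)  # first-stage error, corr(u, v) = 0.8
    x = pi * z + v                        # endogenous regressor
    y = beta * x + u
    estimates.append(iv_estimate(y, x, z))
med = float(np.median(estimates))
print(round(med, 2))  # median pulled above the true beta = 1, toward the OLS value
```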

Keywords: bootstrap, Instrumental Variable, Edgeworth expansions, Saddlepoint expansions

Procedia PDF Downloads 303
7337 Best-Performing Color Space for Land-Sea Segmentation Using Wavelet Transform Color-Texture Features and Fusion of over Segmentation

Authors: Seynabou Toure, Oumar Diop, Kidiyo Kpalma, Amadou S. Maiga

Abstract:

Color and texture are the two most determinant elements in the perception and recognition of objects in an image. For this reason, color and texture analysis find a large field of application, for example in image classification and segmentation. However, the pioneering work in texture analysis was conducted on grayscale images, thus discarding color information. Many grey-level texture descriptors have been proposed and successfully used in numerous domains for image classification: face recognition, industrial inspection, food science, and medical imaging, among others. Taking color into account in the definition of these descriptors makes it possible to characterize images better. Color texture is thus the subject of recent work, and the analysis of color texture images is increasingly attracting interest in the scientific community. In optical remote sensing systems, sensors measure different parts of the electromagnetic spectrum separately: the visible parts and even those invisible to the human eye. The amounts of light reflected by the earth in these spectral bands are then transformed into grayscale images. The primary natural colors Red (R), Green (G), and Blue (B) are then used in mixtures of different spectral bands in order to produce RGB images. Thus, good color texture discrimination can be achieved using RGB under controlled illumination conditions. Some previous works have investigated the effect of using different color spaces for color texture classification. However, the selection of the best-performing color space for land-sea segmentation is an open question. Its resolution may bring considerable improvements in certain applications, like coastline detection, where the detection result depends strongly on the performance of the land-sea segmentation. The aim of this paper is to present the results of a study conducted on different color spaces in order to identify the best-performing color space for land-sea segmentation.
To this end, an experimental analysis is carried out using five different color spaces (RGB, XYZ, Lab, HSV, YCbCr). For each color space, the Haar wavelet decomposition is used to extract different color texture features. These color texture features are then used for Fusion of Over Segmentation (FOOS) based classification, which allows segmentation of the land part from the sea part. The analysis of the different results of this study shows that the HSV color space gives the best classification performance when using color and texture features, which is perfectly coherent with the results presented in the literature.
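The feature-extraction step can be sketched as a one-level Haar decomposition per color channel. This is a minimal version on a random image; the study's full pipeline also converts between the five color spaces and applies FOOS classification.

```python
import numpy as np

def haar_energies(channel):
    """One-level 2-D Haar decomposition of a single channel; returns the
    energies of the LL, LH, HL, HH sub-bands used as texture features.
    Assumes even image dimensions."""
    a = channel[0::2, 0::2]
    b = channel[0::2, 1::2]
    c = channel[1::2, 0::2]
    d = channel[1::2, 1::2]
    ll = (a + b + c + d) / 4   # approximation
    lh = (a + b - c - d) / 4   # horizontal detail
    hl = (a - b + c - d) / 4   # vertical detail
    hh = (a - b - c + d) / 4   # diagonal detail
    return [float(np.sum(s ** 2)) for s in (ll, lh, hl, hh)]

def color_texture_features(img):
    """Concatenate Haar sub-band energies over the three channels of one
    color-space representation of the image."""
    return [e for ch in range(3) for e in haar_energies(img[:, :, ch])]

img = np.random.default_rng(1).random((64, 64, 3))
feats = color_texture_features(img)
print(len(feats))  # 12 features: 4 sub-bands x 3 channels
```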

Keywords: classification, coastline, color, sea-land segmentation

Procedia PDF Downloads 237
7336 Design of Experiment for Optimizing Immunoassay Microarray Printing

Authors: Alex J. Summers, Jasmine P. Devadhasan, Douglas Montgomery, Brittany Fischer, Jian Gu, Frederic Zenhausern

Abstract:

Immunoassays have been utilized for several applications, including the detection of pathogens. Our laboratory is developing a tier 1 biothreat panel utilizing Vertical Flow Assay (VFA) technology for the simultaneous detection of pathogens and toxins. One method of manufacturing VFA membranes is non-contact piezoelectric dispensing, which provides advantages such as low-volume and rapid dispensing without compromising the structural integrity of the antibody or substrate. Challenges of this process include premature discontinuation of dispensing and misaligned spotting. Preliminary data revealed that the Yp 11C7 mAb (11C7) reagent exhibits a large angle of failure during printing, which may have contributed to variable printing outputs. A Design of Experiment (DOE) was executed using this reagent to investigate the effects of hydrostatic pressure and reagent concentration on microarray printing outputs. A Nano-Plotter 2.1 (GeSiM, Germany) was used to print antibody reagents onto nitrocellulose membrane sheets in a clean-room environment. A spotting plan was executed using Spot-Front-End software to dispense volumes of 11C7 reagent (20-50 droplets; 1.5-5 mg/mL) in a 6-test-spot array at 50 target membrane locations. Hydrostatic pressure was controlled by raising the Pressure Compensation Vessel (PCV) above or lowering it below our current working level. It was hypothesized that raising or lowering the PCV by 6 inches would be sufficient to cause either liquid accumulation at the tip or discontinuation of droplet formation. After aspirating the 11C7 reagent, we tested this hypothesis under a stroboscope. 75% of the effective raised PCV height and of the hypothesized lowered PCV height were used. Humidity (55%) was maintained using an Airwin BO-CT1 humidifier. The number and quality of membranes were assessed after staining the printed membranes with dye. The droplet angle of failure was recorded before and after printing to determine a 'stroboscope score' for each run.
The DOE data set was analyzed using JMP software. Hydrostatic pressure and reagent concentration both had a significant effect on membrane output. As hydrostatic pressure was increased by raising the PCV 3.75 inches, or decreased by lowering it 4.5 inches, membrane output decreased; with hydrostatic pressure closest to equilibrium, at our current working level, membrane output reached the 50-membrane target. As reagent concentration increased from 1.5 to 5 mg/mL, membrane output also increased. Reagent concentration likely affected membrane output through the dispensing volume needed to saturate the membranes. However, only hydrostatic pressure had a significant effect on the stroboscope score, which could be due to discontinuation of dispensing, in which case the stroboscope check could not find a droplet to record. Our JMP predictive model had a high degree of agreement with the observed results: it predicted that dispensing the highest concentration of 11C7 at our current PCV working level would yield the highest number of quality membranes, which correlated with our results. Acknowledgements: This work was supported by the Chemical Biological Technologies Directorate (Contract # HDTRA1-16-C-0026) and the Advanced Technology International (Contract # MCDC-18-04-09-002) from the Department of Defense Chemical and Biological Defense program through the Defense Threat Reduction Agency (DTRA).
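The DOE conclusion that output peaks at the equilibrium PCV level can be illustrated with a small least-squares fit on coded factors. The study used JMP; the coded data below are invented to reproduce the qualitative shape (a quadratic dip in output away from pressure equilibrium).

```python
import numpy as np

# Hypothetical coded DOE data: factor A = PCV height offset (-1/0/+1),
# factor B = reagent concentration (coded -1..+1), response = membranes output.
A = np.array([-1, -1, 0, 0, 1, 1, 0, 0], float)
B = np.array([-1, 1, -1, 1, -1, 1, -1, 1], float)
y = np.array([20, 31, 38, 50, 22, 33, 40, 49], float)

# Model with a quadratic pressure term: y = b0 + b1*A + b2*B + b3*A^2
X = np.column_stack([np.ones_like(A), A, B, A ** 2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 2))  # negative A^2 coefficient -> output peaks near A = 0
```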

Keywords: immunoassay, microarray, design of experiment, piezoelectric dispensing

Procedia PDF Downloads 175
7335 The Impact of Leadership Style and Sense of Competence on the Performance of Post-Primary School Teachers in Oyo State, Nigeria

Authors: Babajide S. Adeokin, Oguntoyinbo O. Kazeem

Abstract:

The unsatisfactory state of the nation's quality of education has been a major area of research. Many researchers have looked into various aspects of the educational system and organizational structure in relation to the quality of service delivery of staff members. However, there is a paucity of research in areas relating sense of competence and commitment to leadership styles. Against this backdrop, this study investigated the impact of leadership style and sense of competence on the performance of post-primary school teachers in Oyo State, Nigeria. Data were generated across public secondary schools in the city using a survey design method. The Ibadan metropolis comprises eleven local government areas. Five of the eleven local government areas were selected through systematic random sampling: Akinyele, Ibadan North, Ibadan North-East, Ibadan South and Ibadan South-West. Data were obtained from two to three public secondary schools selected in each of the local government areas mentioned above. These secondary schools also represent the variations in the constructs under consideration across the Ibadan metropolis. Categorically, all secondary school teachers in Ibadan were clustered into the selected schools found across the five local government areas. In all, a total of 272 questionnaires were administered to public secondary school teachers, of which 241 were returned. Findings revealed that the transformational leadership style makes more room for job commitment than the transactional and laissez-faire leadership styles. Teachers with a high sense of competence are more likely to demonstrate commitment to their job than those with a low sense of competence. We recommend that an assessment be made of the leadership styles employed by principals and school administrators.
This would give administrators and principals a clear, comprehensive knowledge of the style they currently adopt in managing the staff and the school as a whole, and show them where to begin the adjustment process. Also, to make an impact on student achievement, being attentive to teachers' levels of commitment may be an important aspect of leadership for school principals.

Keywords: Ibadan, leadership style, sense of competence, teachers, public secondary schools

Procedia PDF Downloads 283
7334 Durability Analysis of a Knuckle Arm Using VPG System

Authors: Geun-Yeon Kim, S. P. Praveen Kumar, Kwon-Hee Lee

Abstract:

A steering knuckle arm is the component that connects the steering system and suspension system. The structural performances such as stiffness, strength, and durability are considered in its design process. The former study suggested the lightweight design of a knuckle arm considering the structural performances and using the metamodel-based optimization. The six shape design variables were defined, and the optimum design was calculated by applying the kriging interpolation method. The finite element method was utilized to predict the structural responses. The suggested knuckle was made of the aluminum Al6082, and its weight was reduced about 60% in comparison with the base steel knuckle, satisfying the design requirements. Then, we investigated its manufacturability by performing foraging analysis. The forging was done as hot process, and the product was made through two-step forging. As a final step of its developing process, the durability is investigated by using the flexible dynamic analysis software, LS-DYNA and the pre and post processor, eta/VPG. Generally, a car make does not provide all the information with the part manufacturer. Thus, the part manufacturer has a limit in predicting the durability performance with the unit of full car. The eta/VPG has the libraries of suspension, tire, and road, which are commonly used parts. That makes a full car modeling. First, the full car is modeled by referencing the following information; Overall Length: 3,595mm, Overall Width: 1,595mm, CVW (Curve Vehicle Weight): 910kg, Front Suspension: MacPherson Strut, Rear Suspension: Torsion Beam Axle, Tire: 235/65R17. Second, the road is selected as the cobblestone. The road condition of the cobblestone is almost 10 times more severe than that of usual paved road. Third, the dynamic finite element analysis using the LS-DYNA is performed to predict the durability performance of the suggested knuckle arm. 
The life of the suggested knuckle arm is calculated as 350,000 km, which satisfies the design requirement set by the part manufacturer. In this study, the overall design process of a knuckle arm is suggested, and it is shown that the developed knuckle arm satisfies the full-car durability requirement. The VPG analysis is performed successfully even though it does not give an exact prediction, since the full-car model is a rough one. This approach can therefore be used effectively when full-car details are not available.
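The kriging metamodel step mentioned above can be illustrated with a minimal sketch. All numbers below are hypothetical (a single normalized shape variable and made-up finite-element stress responses; the actual study used six design variables), but the predictor has the standard ordinary-kriging form with a Gaussian correlation function:

```python
import numpy as np

# Hypothetical training data: one normalized shape design variable and
# made-up finite-element stress responses (MPa).
x_train = np.array([0.0, 0.3, 0.5, 0.8, 1.0])
y_train = np.array([210.0, 185.0, 176.0, 190.0, 230.0])

theta = 10.0  # correlation parameter (would normally be fitted, e.g. by MLE)

def corr(a, b):
    """Gaussian correlation between two sets of sample points."""
    return np.exp(-theta * (a[:, None] - b[None, :]) ** 2)

R = corr(x_train, x_train) + 1e-10 * np.eye(x_train.size)  # small nugget
ones = np.ones(x_train.size)
R_inv = np.linalg.inv(R)
beta = ones @ R_inv @ y_train / (ones @ R_inv @ ones)  # generalized LS mean

def predict(x_new):
    """Ordinary kriging predictor: global mean plus correlated correction."""
    r = corr(np.atleast_1d(x_new), x_train)
    return beta + r @ R_inv @ (y_train - beta * ones)

print(predict(0.5))  # interpolates the training response at x = 0.5
```

Because kriging interpolates the sampled responses exactly (up to the tiny nugget), the metamodel can stand in for expensive finite element runs during shape optimization.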

Keywords: knuckle arm, structural optimization, Metamodel, forging, durability, VPG (Virtual Proving Ground)

Procedia PDF Downloads 413
7333 Testing a Dose-Response Model of Intergenerational Transmission of Family Violence

Authors: Katherine Maurer

Abstract:

Background and purpose: Violence that occurs within families is a global social problem. Children who are victims of or witnesses to family violence are at risk for many negative effects, both proximally and distally. One of the most disconcerting long-term effects occurs when child victims become adult perpetrators: the intergenerational transmission of family violence (ITFV). Early identification of those children most at risk for ITFV is needed to inform interventions to prevent future family violence perpetration and victimization. Only about 25-30% of child family violence victims become perpetrators of adult family violence (child abuse, partner abuse, or both). Prior research has primarily been conducted using dichotomous measures of exposure (yes; no) to predict ITFV, given the low incidence rate in community samples. It is often assumed that exposure to greater amounts of violence predicts greater risk of ITFV. However, no previous longitudinal study with a community sample has tested a dose-response model of exposure to physical child abuse and parental physical intimate partner violence (IPV) using count data on the frequency and severity of violence to predict adult ITFV. The current study used advanced statistical methods to test whether increased childhood exposure predicts greater risk of ITFV. Methods: The study utilized three panels of prospective data from a cohort of 15-year-olds (N=338) from the Project on Human Development in Chicago Neighborhoods longitudinal study. The data comprised a stratified probability sample of seven ethnic/racial categories and three socio-economic status levels. Structural equation modeling was employed to test a hurdle regression model of dose-response to predict ITFV. A version of the Conflict Tactics Scale was used to measure physical violence victimization, witnessing of parental IPV, and young adult IPV perpetration and victimization.
Results: Consistent with previous findings, past-12-month incidence rates for the severity and frequency of interpersonal violence were highly skewed. While rates of parental and young adult IPV were about 40%, an unusually high rate of physical child abuse (57%) was reported. The vast majority of reported counts of violent acts, whether minor or severe, fell in the 1-3 range for the past 12 months. Frequencies of more than five times in the past year were rare, with fewer than 10% of respondents reporting more than six acts of minor or severe physical violence. As expected, minor acts of violence were much more common than severe acts. Overall, the regression analyses were not significant for the dose-response model of ITFV. Conclusions and implications: The results of the dose-response model were not significant, owing to a lack of power in the final sample (N=338). Nonetheless, the value of the approach was confirmed for future research, given the bi-modal nature of the distributions, which suggests that for both child physical abuse and physical IPV there are at least two classes when the frequency of acts is considered. Taking frequency into account in predictive models may help to better understand the relationship of exposure to ITFV outcomes. Further testing using hurdle regression models is suggested.
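The two-part structure of a hurdle regression can be sketched as follows. The data here are simulated, not the PHDCN data, and the variable names (`dose`, `y`) are illustrative: a logistic part models whether any violence occurs, and a zero-truncated Poisson part models the count given that it is positive.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated stand-ins for the study's measures: a continuous childhood
# exposure score and a past-12-month count of violent acts.
n = 500
dose = rng.uniform(0, 10, n)
p_any = 1 / (1 + np.exp(-(-2.0 + 0.3 * dose)))   # P(any act) rises with dose
lam = np.exp(0.2 + 0.1 * dose)                   # intensity among "hurdlers"
y = np.where(rng.uniform(size=n) < p_any, rng.poisson(lam) + 1, 0)

# Part 1: logistic hurdle -- does any violence occur at all?
def nll_logit(beta):
    eta = beta[0] + beta[1] * dose
    return -np.sum((y > 0) * eta - np.logaddexp(0.0, eta))

# Part 2: zero-truncated Poisson for the positive counts only.
pos = y > 0
def nll_ztp(beta):
    mu = np.exp(beta[0] + beta[1] * dose[pos])
    # log(e^mu - 1), computed stably for large mu
    log_expm1 = np.where(mu > 30, mu, np.log(np.expm1(np.minimum(mu, 30))))
    return -np.sum(y[pos] * np.log(mu) - log_expm1)

b_logit = minimize(nll_logit, [0.0, 0.0], method="Nelder-Mead").x
b_count = minimize(nll_ztp, [0.0, 0.0], method="Nelder-Mead").x
print(b_logit, b_count)  # dose slopes for the hurdle and count parts
```

Separating the two parts is what lets a dose-response hypothesis be tested on both the occurrence and the frequency of violence, matching the two-class pattern the abstract describes.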

Keywords: intergenerational transmission of family violence, physical child abuse, intimate partner violence, structural equation modeling

Procedia PDF Downloads 235
7332 On the Blocked-off Finite-Volume Radiation Solutions in a Two-Dimensional Enclosure

Authors: Gyo Woo Lee, Man Young Kim

Abstract:

Blocked-off formulations for the analysis of radiative heat transfer are developed and examined in order to find solutions in a two-dimensional complex enclosure. The final discretization equations, using the step scheme for the spatial differencing practice, are proposed with an additional source term to incorporate the blocked-off procedure. After introducing the treatment of inactive regions into the general discretization equation, three different problems are examined to assess the performance of the solution method.
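The blocked-off idea can be shown on a much simpler problem than the radiative FVM itself. In the common formulation, inactive cells are pinned to a prescribed value by adding a dominant source term (S_C = -BIG on the diagonal, S_U = BIG·φ_set). The sketch below applies that trick to a plain 2-D diffusion (Laplace) analogue with made-up grid sizes and boundary values:

```python
import numpy as np

nx = ny = 20
phi = np.zeros((ny, nx))
phi[:, 0] = 1.0                      # fixed "hot" west boundary
blocked = np.zeros((ny, nx), bool)
blocked[8:12, 8:12] = True           # inactive (blocked-off) region
BIG = 1e30                           # dominant source coefficient
phi_set = 0.0                        # value prescribed in inactive cells

for _ in range(2000):                # Jacobi sweeps of a 5-point Laplacian
    nb = np.zeros_like(phi)
    nb[1:-1, 1:-1] = (phi[2:, 1:-1] + phi[:-2, 1:-1]
                      + phi[1:-1, 2:] + phi[1:-1, :-2])
    aP = np.full_like(phi, 4.0)
    sU = np.zeros_like(phi)
    aP[blocked] += BIG               # S_C = -BIG added to the diagonal
    sU[blocked] += BIG * phi_set     # S_U = BIG * phi_set
    phi_new = (nb + sU) / aP
    phi_new[:, 0] = 1.0              # re-impose Dirichlet boundaries
    phi_new[:, -1] = 0.0
    phi_new[0, :] = 0.0
    phi_new[-1, :] = 0.0
    phi = phi_new

print(phi[10, 10], phi[10, 1])       # blocked cell pinned; field decays from wall
```

Because the added source dominates every other coefficient, the inactive cells are forced to φ_set without altering the general discretization equation, which is exactly what makes the method attractive for irregular enclosures on a regular grid.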

Keywords: radiative heat transfer, Finite Volume Method (FVM), blocked-off solution procedure, body-fitted coordinate

Procedia PDF Downloads 290
7331 Tip-Enhanced Raman Spectroscopy with Plasmonic Lens Focused Longitudinal Electric Field Excitation

Authors: Mingqian Zhang

Abstract:

Tip-enhanced Raman spectroscopy (TERS) is a scanning probe technique for investigating individual objects and structured surfaces that provides a wealth of enhanced spectral information with nanoscale spatial resolution and high detection sensitivity. It has become a powerful and promising method for detecting chemical and physical information at the nanometer scale. The TERS technique uses a sharp metallic tip regulated in the near field of a sample surface, which is illuminated with an incident beam meeting the excitation conditions of wave-vector matching. The local electric field, and consequently the Raman scattering from the sample in the vicinity of the tip apex, are both greatly enhanced owing to the excitation of localized surface plasmons and the lightning-rod effect. Typically, a TERS setup is composed of a scanning probe microscope, excitation and collection optics, and a Raman spectrometer. In the illumination configuration, an objective lens or a parabolic mirror is usually the most important component, focusing the incident beam on the tip apex for excitation. In this research, a novel TERS setup was built by introducing a plasmonic lens into the excitation optics as the focusing device. A plasmonic lens with symmetry-breaking semi-annular slits corrugated on a gold film was designed to generate concentrated sub-wavelength light spots with a strong longitudinal electric field. Compared to conventional far-field optical components, the designed plasmonic lens not only focuses an incident beam to a sub-wavelength light spot but also realizes illumination in which a strong z-component dominates the electric field, which is ideal for tip-enhanced excitation. Therefore, using a plasmonic lens in the illumination configuration of TERS helps to improve the detection sensitivity by both reducing the far-field background and effectively exciting the localized electric field enhancement.
The FDTD method was employed to investigate the optical near-field distribution resulting from the light-nanostructure interaction, and the optical field distribution was characterized using a scattering-type scanning near-field optical microscope to demonstrate the focusing performance of the lens. The experimental result agrees with the theoretical calculation, verifying the focusing performance of the plasmonic lens. The optical field distribution shows a bright elliptical spot in the lens center and several arc-like side lobes on both sides. After the focusing performance was experimentally verified, the designed plasmonic lens was used as the focusing component in the excitation configuration of the TERS setup to concentrate the incident energy and generate a longitudinal optical field. A collimated laser beam, linearly polarized along the x-axis, was incident from the bottom glass side on the plasmonic lens. The incident light focused by the plasmonic lens interacted with the silver-coated tip apex and locally enhanced the Raman signal of the sample. The scattered Raman signal was gathered by a parabolic mirror and detected with a Raman spectrometer. The plasmonic-lens-based setup was then employed to investigate carbon nanotubes in a TERS experiment. Experimental results indicate that the Raman signal is considerably enhanced, which shows that the novel TERS configuration is feasible and promising.
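The full 3-D plasmonic FDTD simulation is beyond a short example, but the core Yee leapfrog update that all FDTD solvers iterate can be shown in one dimension. Grid size, time steps, and the soft Gaussian source below are all hypothetical, in normalized units:

```python
import numpy as np

nz, nt = 400, 200
courant = 0.5            # Courant number (stability requires <= 1 in 1-D)
ez = np.zeros(nz)        # electric field samples
hy = np.zeros(nz)        # magnetic field samples, staggered half a cell

for t in range(nt):
    # Yee leapfrog: update H from the curl of E, then E from the curl of H
    hy[:-1] += courant * (ez[1:] - ez[:-1])
    ez[1:] += courant * (hy[1:] - hy[:-1])
    # soft (additive) Gaussian source at cell 50
    ez[50] += np.exp(-((t - 60) / 15.0) ** 2)

print(np.abs(ez).max())  # pulses have propagated away from the source
```

The same staggered-grid update, extended to three dimensions with material coefficients for gold and glass, is what yields the near-field maps the study compares against the s-SNOM measurements.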

Keywords: longitudinal electric field, plasmonics, raman spectroscopy, tip-enhancement

Procedia PDF Downloads 363
7330 Growth Performance, Haematological and Serum Biochemistry of Broilers Fed Graded Levels of Cocoyam (Xanthosoma sagittifolium)

Authors: Urom Scholastica Mgbo, Ifeanyichukwu, Vivian, Anaba, Uchemadu Martins, Arusiaba, Nelson Chijioke

Abstract:

The study was conducted to determine the growth performance, haematology, and serum biochemistry of broilers fed graded levels of cocoyam (Xanthosoma sagittifolium). One hundred and twenty (120) day-old broiler chicks of the Anak strain were used for the study. The birds were randomly divided into 4 treatment groups of 30 birds per group, and each group was further divided into 3 replicates of 10 birds per replicate. Cooked cocoyam was used to formulate diets at an inclusion level of 0.00% for T1 (control), while T2, T3, and T4 contained 10.00%, 20.00%, and 30.00% cocoyam in partial replacement of maize, in a Completely Randomized Design (CRD). At the end of the study, the haematological indices showed that the packed cell volume (PCV) of birds fed diets 1 (42.26%) and 3 (42.42%) was significantly (p<0.05) higher than that of birds fed diets 2 (39.72%) and 4 (38.78%). The haemoglobin (Hb) of birds fed diets 3 (12.58 g/dl) and 4 (12.26 g/dl) was significantly (p<0.05) higher than that of birds fed diets 1 (11.60 g/dl) and 2 (11.42 g/dl). The white blood cell (WBC) values of the broilers placed on cocoyam diets increased significantly (P<0.05) compared with the values obtained in the control (T1). The serum protein value of birds fed diet 1 (5.45 g/dl) was statistically (P>0.05) similar to those fed diets 2 (5.10 g/dl) and 3 (5.38 g/dl) but differed significantly (P<0.05) from diet 4 (4.97 g/dl), which had the lowest protein value. The final weight of the birds showed that diet 4 (2370.85 g) had the highest (P<0.05) value, followed closely by diet 3 (2225.55 g), while birds fed diets 1 (2165.70 g) and 2 (2145.00 g) recorded the lowest values. A similar pattern was observed in weight gain: birds fed diet 4 (2270.30 g) had the higher (P<0.05) value, followed by birds on diet 3 (2125.45 g), while birds fed diets 1 (2065.15 g) and 2 (2044.90 g) had the lowest values.
The study also showed that birds fed diet 3 (50.60 g) and diet 4 (54.05 g) gained significantly (P<0.05) more weight than those on the control diet (49.17 g). There was a significant (P<0.05) difference among the treatments in feed conversion ratio (FCR), where birds fed diet 4 (1.74) performed better, having the lowest FCR. The economics of broiler production showed that the cost per kg of feed favored diet 4 (₦158.65), followed by diets 3 (₦165.95), 2 (₦178.52), and the control diet 1 (₦197.14). From these results, the higher weight recorded in T4 shows that cocoyam meal can successfully replace maize up to 30% in the diet of broiler chickens, and the lower costs recorded for the cocoyam-based diets show that they were more economical than the control diet 1. Therefore, feeding diet 4 (30% cocoyam meal) as a replacement for maize in broiler chickens is recommended.

Keywords: cocoyam, growth, haematology, serum biochemistry

Procedia PDF Downloads 105
7329 CO2 Methanation over Ru-Ni/CeO2 Catalysts

Authors: Nathalie Elia, Samer Aouad, Jane Estephane, Christophe Poupin, Bilal Nsouli, Edmond Abi Aad

Abstract:

Carbon dioxide is one of the main contributors to the greenhouse effect and hence to climate change. As a result, the methanation reaction CO2(g) + 4H2(g) → CH4(g) + 2H2O (ΔH°298 = -165 kJ/mol), also known as the Sabatier reaction, has received great interest as a process for the valorization of the greenhouse gas CO2 into methane, a hydrogen-carrier gas. The methanation of CO2 is an exothermic reaction favored at low temperature and high pressure. However, the reaction requires a high energy input to activate the very stable CO2 molecule and exhibits serious kinetic limitations. Consequently, the development of active and stable catalysts is essential to overcome these difficulties. Catalytic methanation of CO2 has been studied using catalysts containing Rh, Pd, Ru, Co, and Ni on various supports. Among them, Ni-based catalysts have been extensively investigated under various conditions for their comparable methanation activity at much better cost efficiency, and the addition of promoters is a common strategy for increasing their performance and stability. In this work, a small amount of Ru was used as a promoter for Ni catalysts supported on ceria, which were tested in the CO2 methanation reaction. The nickel loading was 5 wt.% and the ruthenium loading was 0.5 wt.%. The catalysts were prepared by a successive impregnation method using Ni(NO3)2·6H2O and Ru(NO)(NO3)3 as precursors. The calcined support was impregnated with Ni(NO3)2·6H2O, dried, and calcined at 600 °C for 4 h, and was afterward impregnated with Ru(NO)(NO3)3; the resulting solid was dried and calcined at 600 °C for 4 h. Supported monometallic catalysts were prepared likewise. The prepared solids Ru(0.5%)/CeO2, Ni(5%)/CeO2, and Ru(0.5%)-Ni(5%)/CeO2 were then reduced prior to the catalytic test under a flow of 50% H2/Ar (50 mL/min) for 4 h at 500 °C.
Finally, their catalytic performances were evaluated in the CO2 methanation reaction over the temperature range 100-350 °C, using a gaseous mixture of CO2 (10%) and H2 (40%) balanced in Ar at a total flow rate of 100 mL/min. The effect of pressure on CO2 methanation was studied by varying the pressure between 1 and 10 bar. The catalysts showed negligible CO2 conversion at temperatures lower than 250 °C, and conversion increased with increasing reaction temperature. The addition of Ru as a promoter to Ni/CeO2 improved the CO2 methanation: the CO2 conversion increased from 15% to 70% at 350 °C and 1 bar. The effect of pressure on CO2 conversion was also studied. Increasing the pressure from 1 to 5 bar raised the CO2 conversion from 70% to 87%, while increasing it from 5 to 10 bar raised the conversion from 87% to 91%. The Ru-Ni catalyst thus showed excellent catalytic performance in the methanation of carbon dioxide with respect to the Ni catalyst, and the addition of Ru remarkably improved the catalytic activity of the Ni catalyst. It was also found that pressure plays an important role in improving CO2 methanation.
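As a quick check on the feed composition, the 10% CO2 / 40% H2 mixture is exactly stoichiometric for the Sabatier reaction, and a conversion figure then fixes the product flows. The short calculation below uses the reported 70% conversion at 350 °C and 1 bar; it is illustrative arithmetic, not part of the study's analysis:

```python
# Sabatier reaction: CO2 + 4 H2 -> CH4 + 2 H2O
total_flow = 100.0                  # mL/min, as in the experiment
f_co2_in = 0.10 * total_flow        # 10% CO2
f_h2_in = 0.40 * total_flow         # 40% H2

h2_to_co2 = f_h2_in / f_co2_in      # stoichiometric ratio is 4:1

x_co2 = 0.70                        # reported conversion at 350 C, 1 bar
f_ch4_out = f_co2_in * x_co2        # 1 mol CH4 per mol CO2 converted
f_h2o_out = 2 * f_co2_in * x_co2    # 2 mol H2O per mol CO2 converted
f_h2_left = f_h2_in - 4 * f_co2_in * x_co2

print(h2_to_co2, f_ch4_out, f_h2_left)
```

At 70% conversion, 7 of the 10 mL/min of CO2 become CH4 and 28 of the 40 mL/min of H2 are consumed, leaving 12 mL/min of H2 unreacted.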

Keywords: CO2, methanation, nickel, ruthenium

Procedia PDF Downloads 213
7328 Literature Review and Evaluation of the Internal Marketing Theory

Authors: Hsiao Hsun Yuan

Abstract:

Internal marketing was proposed in the 1970s, and the theory has continually evolved over the past forty years. This study discusses the following themes: the definition and implications of internal marketing, the progress of its development, and the evolution of its theoretical model. Moreover, the study systematically organizes the strategies of internal marketing theory adopted by enterprises and how they have been put into practice. It also compares empirical studies focusing on how the existing theories influence the important variables of internal marketing. The results of this study are expected to serve as references for future exploration of the boundaries of internal marketing and for studies of how it is applied to different types of enterprises.

Keywords: corporate responsibility, employee organizational performance, internal marketing, internal customer

Procedia PDF Downloads 346
7327 A Review on Application of Waste Tire in Concrete

Authors: M. A. Yazdi, J. Yang, L. Yihui, H. Su

Abstract:

The application of recycled waste tires to civil engineering practice, namely asphalt paving mixtures and cement-based materials, has been gaining ground across the world. This review summarizes and compares recent achievements in the area of plain rubberized concrete (PRC) in detail. Different treatment methods that improve the performance of rubberized Portland cement concrete are discussed. The review also covers the effects of the size and amount of tire rubber on the mechanical and durability properties of PRC, and the microstructural behaviour of rubberized concrete is detailed.

Keywords: waste rubber aggregates, microstructure, treatment methods, size and content effects

Procedia PDF Downloads 319
7326 Morphology and Electrical Conductivity of a Non-Symmetrical NiO-SDC/SDC Anode through a Microwave-Assisted Route

Authors: Mohadeseh Seyednezhad, Armin Rajabi, Andanastui Muchtar, Mahendra Rao Somalu

Abstract:

This work investigates the electrical properties of a NiO-SDC/SDC anode sintered at about 1200 °C for 1 h through a relatively new approach, namely the microwave method. Sm0.2Ce0.8O1.9 (SDC) and NiO nanopowders were mixed using a high-energy ball mill and subsequently co-pressed at three different compaction pressures: 200, 300, and 400 MPa. The novelty of this study lies in examining the effect of compaction pressure on the electrochemical performance of the NiO-SDC/SDC anode, with no binder used between layers. The electrical behavior of the prepared anode was studied by electrochemical impedance spectroscopy (EIS) in controlled atmospheres at high operating temperatures (600-800 °C).

Keywords: sintering, fuel cell, electrical conductivity, nanostructures, impedance spectroscopy, ceramics

Procedia PDF Downloads 462
7325 Implementing Effective Strategies to Improve Teaching and Learning in Higher Education: Balancing the Engagement Acts between Lecturers and Students

Authors: Jeffrey Siphiwe Mkhize

Abstract:

For most South African children, particularly those from disadvantaged backgrounds, twelve years of schooling are marked by numerous and diverse challenges. These challenges range from infrastructural limitations and the language of teaching to poor resources and varying family backgrounds. Likewise, schools are categorized by geographic location, poverty lines, societal class, and the type of students they are likely to enroll. Such categorization perpetuates the very lines of identity that the system seeks to redress. South African universities use point systems to determine students' suitability for admission to their programmes. Once students are admitted on the basis of qualifying points, there is an assumed equity in the manner in which they receive tuition: they are treated as equals, with widened access to South African universities serving as a means to redress past inequalities. Given these challenges and inequalities, it is necessary to view higher education as a site of knowledge construction that is accessible to all students. Epistemological access is key for all students irrespective of their socio-economic status. This paper seeks to contribute to the discourse of student engagement, using the lecturer-student relationship as a lens for understanding this phenomenon. Data were generated using the South African Survey of Student Engagement, focus group interviews, semi-structured one-on-one interviews, and document analysis. The focus was on students registered for the first year of a Bachelor of Education degree, as well as lecturers teaching high-risk modules in this qualification at the same level. The findings suggest that lecturers are challenged by overcrowded classrooms and over-enrolled modules, a challenge that hampers their good intentions to become more efficient and innovative in their teaching. Students lack confidence in approaching lecturers for assistance. Collaborative learning yields stronger results, and students believe in supporting one another to deal with their challenges on the basis of their individual strengths. Collaborative learning is key to student academic performance.

Keywords: collaborative learning, consultations, student engagement, student performance

Procedia PDF Downloads 102
7324 Predicting Career Adaptability and Optimism among University Students in Turkey: The Role of Personal Growth Initiative and Socio-Demographic Variables

Authors: Yagmur Soylu, Emir Ozeren, Erol Esen, Digdem M. Siyez, Ozlem Belkis, Ezgi Burc, Gülce Demirgurz

Abstract:

The aim of the study is to determine the predictive power of personal growth initiative and socio-demographic variables (such as sex, grade, and working condition) on the career adaptability and optimism of bachelor students at Dokuz Eylul University in Turkey. According to career construction theory, career adaptability is viewed as a psychosocial construct that refers to an individual's resources for dealing with current and expected tasks, transitions, and traumas in their occupational roles. Career optimism is defined as the expectation of positive outcomes in one's future career development, or as the tendency to emphasize the positive aspects of events and to feel comfortable about the career planning process. Personal growth initiative (PGI) is defined as being proactive about one's personal development, where personal growth is the active and intentional engagement in the process of developing oneself. A study conducted on college students revealed that individuals with a high self-development orientation make more effort to discover the requirements of professions and workplaces than individuals with a low personal development orientation. University life is a period in which social relations and the importance of academic activities increase and students strive to progress along their career paths; it is also an environment that offers students opportunities for self-realization. For these reasons, personal growth initiative is potentially an important variable with a key role during the transition from university to working life. Based on the review of the literature, it is expected that an individual's personal growth initiative, sex, grade, and working condition will significantly predict career adaptability.
The relevant literature contains relatively few studies on the career adaptability and optimism of university students, and most existing studies have been carried out with limited numbers of respondents. In this study, the authors aim to conduct comprehensive research with a large representative sample of bachelor students at Dokuz Eylul University, Izmir, Turkey. To date, the personal growth initiative and career development constructs have been discussed predominantly in Western contexts, where individualistic tendencies are likely to prevail. Thus, examining the same relationships within the context of Turkey, where collectivistic cultural characteristics are more readily observed, is expected to offer valuable insights and make an important contribution to the literature. The participants comprised 1,500 undergraduate students drawn from thirteen faculties of Dokuz Eylul University, selected through stratified and random sampling. The Personal Growth Initiative Scale-II and the Career Futures Inventory were used as the major measurement tools. In the data analysis stage, regression analyses, one-way ANOVA, and t-tests will be conducted to reveal the relationships among the constructs under investigation. At the end of this project, we will be able to determine the levels of career adaptability and optimism of university students at varying degrees, creating fertile ground for intervention techniques that contribute to the emergence of a psycho-socially healthier and more productive generation of young people.

Keywords: career optimism, career adaptability, personal growth initiative, university students

Procedia PDF Downloads 410
7323 A Peg Board with Photo-Reflectors to Detect Peg Insertion and Pull-Out Moments

Authors: Hiroshi Kinoshita, Yasuto Nakanishi, Ryuhei Okuno, Toshio Higashi

Abstract:

Various kinds of pegboards have been developed and are used widely in rehabilitation research and clinics for the evaluation and training of patients' hand function. A common measure with these pegboards is the total execution time, assessed with a tester's stopwatch; the introduction of electrical and automatic measurement technology to the apparatus, on the other hand, has lagged behind. The present work introduces the development of a pegboard with electric sensors that detect the moments of each peg's insertion and removal, together with fundamental data obtained from a group of healthy young individuals who performed peg transfer tasks using the new pegboard. Through trial and error in pilot tests, the present authors designed and built two 10-hole pegboard boxes, each hole fitted at its bottom with a small photo-reflector and a DC amplifier. The amplified analogue signals from the 20 reflectors were automatically digitized at 500 Hz per channel and stored on a PC. The boxes were set on a test table in parallel at different distances (25, 50, 75, and 125 mm) to examine the effect of hole-to-hole distance. Fifty healthy young volunteers (25 of each gender) performed 80 successive fast peg transfers at each distance using their dominant and non-dominant hands. The data showed a clear-cut moment of light interruption/continuation by the pegs, allowing the pull-out and insertion times of each peg to be determined accurately (without tester error) and precisely (on the order of milliseconds). This further permitted computation of each peg-movement duration (PMD: from peg lift-off to insertion) separately from the hand-reaching duration (HRD: from peg insertion to lift-off). An accidental drop of a peg led to an exceptionally long (> mean + 3 SD) PMD, which was readily detected from an examination of the data distribution.
The PMD data were commonly right-skewed, suggesting that the median is a better estimate of individual PMD than the mean. Repeated-measures ANOVA using the median values revealed significant hole-to-hole distance and hand dominance effects, suggesting that these need to be fixed for accurate evaluation of PMD; the gender effect was non-significant. Performance consistency was also evaluated using quartile variation coefficient values, which revealed no gender, hole-to-hole distance, or hand dominance effects. Measurement reliability was further examined using intraclass correlations obtained from 14 subjects who performed the 25 and 125 mm hole-distance tasks in two test sessions 7-10 days apart. Intraclass correlation values between the two tests showed fair reliability for PMD (0.65-0.75) and for HRD (0.77-0.94). We conclude that the sensor pegboard developed in the present study provides accurate (tester-error-free) and precise (millisecond-order) time information on peg movement, separated from that of hand movement. It can also easily detect, and automatically exclude, erroneous executions from a participant's data. These features should lead to a better evaluation of hand dexterity than the widely used conventional pegboards.
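The edge-detection logic described (500 Hz sampling, light interruption by the peg) can be sketched on a simulated trace. The threshold, timings, and noise level below are hypothetical, not taken from the apparatus:

```python
import numpy as np

fs = 500.0                                   # sampling rate (Hz), as in the study
t = np.arange(0, 6.0, 1 / fs)

# Simulated photo-reflector trace: high while a peg blocks the light path
# (peg in hole from 1.0-2.5 s and again from 4.0-5.2 s), plus sensor noise.
sig = np.where(((t > 1.0) & (t < 2.5)) | ((t > 4.0) & (t < 5.2)), 1.0, 0.0)
sig += 0.02 * np.random.default_rng(1).standard_normal(t.size)

in_hole = sig > 0.5                          # hypothetical threshold
edges = np.diff(in_hole.astype(int))
insertions = t[1:][edges == 1]               # peg insertion moments
pull_outs = t[1:][edges == -1]               # peg pull-out moments

# Peg-movement duration: from pull-out of one hole to insertion in the next.
pmd = insertions[1:] - pull_outs[:-1]

# For right-skewed PMD data, summarize with the median and assess
# consistency with the quartile variation coefficient (Q3 - Q1) / (Q3 + Q1).
q1, med, q3 = np.percentile(pmd, [25, 50, 75])
qvc = (q3 - q1) / (q3 + q1)
print(insertions, pull_outs, pmd, med, qvc)
```

With one channel per hole, the same threshold-crossing pass over all 20 channels yields every insertion and pull-out moment of a trial, from which PMD and HRD follow by pairing consecutive events.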

Keywords: hand, dexterity test, peg movement time, performance consistency

Procedia PDF Downloads 129
7322 Object-Scene: Deep Convolutional Representation for Scene Classification

Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang

Abstract:

Traditional image classification is based on encoding schemes (e.g., the Fisher Vector or the Vector of Locally Aggregated Descriptors) over low-level image features (e.g., SIFT, HoG). Compared to these low-level local features, the deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. In scene classification, objects are scattered across the scene with varying size, category, layout, and number, so it is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while incorporating object-centric and scene-centric information. First, to exploit object-centric and scene-centric information, two CNNs trained separately on the ImageNet and Places datasets are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. By analyzing the performance of the CNNs at multiple scales, we find that each CNN works better in a different scale range; a scale-wise CNN adaptation is reasonable, since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale; these are then merged into a single vector by a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in accumulating first- and second-order differences, so scale-wise normalization followed by average pooling balances the influence of each scale, since different amounts of features are extracted at different scales. Third, the Fisher Vector representation built on the deep convolutional features is fed to a linear Support Vector Machine, a simple yet efficient way to classify the scene categories.
Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the accuracy on MIT Indoor67 from 74.03% to 79.43% when only two scales are used (compared to the single-scale results). The result is comparable to state-of-the-art performance, which suggests that the representation can be applied to other visual recognition tasks.
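The scale-wise normalization step can be sketched as below. The number of scales, the vector dimension, and the magnitudes are made up; the point is that normalizing each scale's Fisher vector before pooling stops any one scale from dominating the merged representation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-scale Fisher vectors for one image: three scales, each
# aggregated to a 128-dim global vector with very different magnitudes
# (e.g. because different amounts of local features are extracted per scale).
per_scale_fv = [rng.standard_normal(128) * s for s in (1.0, 4.0, 16.0)]

def scale_wise_pool(vectors):
    """L2-normalize each scale's Fisher vector, then average-pool,
    so every scale contributes a unit-norm vector to the merged result."""
    normed = [v / np.linalg.norm(v) for v in vectors]
    return np.mean(normed, axis=0)

rep = scale_wise_pool(per_scale_fv)
print(rep.shape)
```

Without the per-scale normalization, the scale with the largest accumulated gradients (here the 16x one) would swamp the average; with it, the pooled vector weighs all scales equally before the linear SVM.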

Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization

Procedia PDF Downloads 323
7321 Stochastic Repair and Replacement with a Single Repair Channel

Authors: Mohammed A. Hajeeh

Abstract:

This paper examines the behavior of a system which, upon failure, is either replaced with probability p or imperfectly repaired with probability q. The system is analyzed using the method of Kolmogorov's forward equations, and an analytical expression for the steady-state availability is derived as an indicator of the system's performance. It is found that the analysis becomes more complex as the number of imperfect repairs increases. It is also observed that the availability increases as the number of states and the replacement probability increase. Because such an approach becomes cumbersome for more complex configurations and for dynamic systems, it is advisable to resort to simulation or heuristics there. An example is provided for demonstration.
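A minimal numerical version of this kind of analysis can be sketched for an illustrative three-state Markov model (operating / under imperfect repair / under replacement) with made-up rates; the paper's actual model, with multiple imperfect repairs, is larger but is solved by the same balance-equation logic:

```python
import numpy as np

lam, mu_rep, mu_repl = 0.1, 1.0, 0.5   # failure, repair, replacement rates
p = 0.3                                 # probability of replacement on failure
q = 1 - p                               # probability of (imperfect) repair

# Generator matrix; states: 0 = operating, 1 = in repair, 2 = in replacement.
Q = np.array([
    [-lam,       q * lam,  p * lam],
    [ mu_rep,   -mu_rep,   0.0    ],
    [ mu_repl,   0.0,     -mu_repl],
])

# Steady state: pi @ Q = 0 with sum(pi) = 1; replace one balance
# equation by the normalization condition and solve the linear system.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
availability = pi[0]                    # long-run fraction of time operating
print(pi, availability)
```

With these rates the chain spends fractions 0.07·π0 and 0.06·π0 of its time in repair and replacement respectively, giving an availability of 1/1.13 ≈ 0.885; raising μ_rep or μ_repl (or lowering λ) raises it, matching the qualitative behavior the abstract reports.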

Keywords: repairable models, imperfect, availability, exponential distribution

Procedia PDF Downloads 283
7320 Assessment of the Thermal Performance of a Solar Heating System on an Agricultural Greenhouse Microclimate

Authors: Nora Arbaoui, Rachid Tadili

Abstract:

The substantial increase in areas cultivated under glasshouses calls for natural heating and cooling procedures that maintain profitability while avoiding both exorbitant fuel consumption and CO₂ emissions. This experimental study examines the functioning of a solar heating system designed to improve crop output in both quantity and quality by enhancing the greenhouse microclimate during wintertime. The configurations were tested in a miniaturized greenhouse after the operating parameters had been optimized, and the results were noteworthy when compared with an unheated control greenhouse.

Keywords: solar system, agricultural greenhouse, heating, cooling, storage, drying

Procedia PDF Downloads 11
7319 Cluster Analysis and Benchmarking for Performance Optimization of a Pyrochlore Processing Unit

Authors: Ana C. R. P. Ferreira, Adriano H. P. Pereira

Abstract:

Given the frequent variation of mineral properties throughout the Araxá pyrochlore deposit, high variability in quality and performance is expected even when thorough homogenization is carried out before feeding the processing plants. These results could be improved and standardized by determining the blend composition parameters that most influence the processing route, grouping the types of raw material by those parameters, and providing a reference set of operational settings for each group. Associating the physical and chemical parameters of a unit operation with a benchmark, or with an optimal reference for metallurgical recovery and product quality, translates into reduced production costs, better use of the mineral resource, and greater stability in the downstream processes that use the mineral of interest. A comprehensive exploratory data analysis to identify which characteristics of the ore are most relevant to the process route, combined with machine learning algorithms that group the raw material (ore) and associate those groups with benchmark process variables, is a reasonable approach to standardizing and improving mineral processing units. Clustering through decision trees and K-Means was employed, together with benchmarking-based algorithms and criteria defined by the process team, to reference the best settings for processing the ore piles of each cluster. A clean user interface was created to present the outputs of the algorithm. Results were measured through the average time to adjust and stabilize the process after a new pile of homogenized ore enters the plant, as well as the average time needed to achieve the best processing result. Direct gains in the metallurgical recovery of the process were also measured.
The results were promising, with a reduction in the adjustment and stabilization time when processing of a new ore pile begins, and in the time needed to reach the benchmark. Also noteworthy are the gains in metallurgical recovery, which reflect significant savings in ore consumption and a consequent reduction in production costs, hence a more rational use of the tailings dams and a longer life for the mineral deposit.
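The grouping step described above can be sketched with a minimal K-Means; this is an illustration rather than the authors' pipeline, and the blend features (hypothetical grades of Nb₂O₅, P₂O₅, and BaO) are invented for the example. A deterministic farthest-point initialization is used so the sketch is reproducible.

```python
import numpy as np

def init_centers(X, k):
    """Greedy farthest-point initialization (deterministic)."""
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min(np.linalg.norm(X[:, None, :] - np.array(centers)[None, :, :],
                                  axis=2), axis=1)
        centers.append(X[d.argmax()])
    return np.array(centers)

def kmeans(X, k, n_iter=100):
    """Minimal K-Means: group ore-blend feature rows into k clusters."""
    centers = init_centers(X, k)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Assign each blend to its nearest cluster center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers as the mean of each cluster's members
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Hypothetical blend features per ore pile: [Nb2O5 %, P2O5 %, BaO %]
X = np.array([[2.5, 4.0, 15.0], [2.6, 4.2, 14.5],
              [1.1, 8.0, 22.0], [1.0, 7.8, 21.5]])
labels, centers = kmeans(X, k=2)
```

Once piles are labeled, each cluster can be mapped to the benchmark operational settings of its best-performing historical runs, which is the referencing step the abstract describes.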

Keywords: mineral clustering, machine learning, process optimization, pyrochlore processing

Procedia PDF Downloads 139
7318 Integration of Building Information Modeling Framework for 4D Constructability Review and Clash Detection Management of a Sewage Treatment Plant

Authors: Malla Vijayeta, Y. Vijaya Kumar, N. Ramakrishna Raju, K. Satyanarayana

Abstract:

The global AEC (architecture, engineering, and construction) industry is often described as one of the domains most resistant to embracing technology. Although the digital era has been inundated with software tools like CAD, STAAD, CANDY, Microsoft Project, Primavera, etc., key stakeholders have been working in silos and processes remain fragmented. Unlike the simpler project delivery methods of past years, current projects are fast-track, complex, risky, multidisciplinary, influenced by many stakeholders, and heavily regulated, creating extensive bottlenecks that prevent timely completion. At this juncture, a paradigm shift has surfaced in the construction industry: Building Information Modeling (BIM) has proved a panacea for bolstering cooperative and collaborative work across multidisciplinary teams, leading to more productive, sustainable, and leaner project outcomes. BIM is an integrative, stakeholder-engaging, centralized approach that provides a common platform of communication. A common misconception in the Indian construction industry is that BIM applies only to building/high-rise projects; this paper instead discusses the implementation of BIM processes and methodologies in the water and wastewater industry. It describes BIM 4D planning and constructability reviews of a Sewage Treatment Plant in India. Conventional construction planning and logistics management blend experience with imagination, and even though the judgments and lessons learnt of veterans may be predictive and helpful, uncertainty persists. This paper presents a case study of the real-time implementation of BIM 4D planning protocols for a Sewage Treatment Plant of the Dravyavati River Rejuvenation Project in India and develops a timeline to support logistics planning and clash detection.
With these BIM processes, we find a significant reduction in the duplication of tasks and in rework. Another benefit is better visualization and exploration of workarounds during the conception stage, enabling early involvement of the stakeholders in the project life cycle of the Sewage Treatment Plant. Moreover, an opinion poll was conducted on the benefits accrued from BIM processes versus traditional paper-based communication with 2D and 3D CAD tools. The paper concludes with a BIM framework for Sewage Treatment Plant construction that achieves optimal construction-coordination advantages such as 4D construction sequencing, interference checking, and clash detection and resolution through the early engagement of all key stakeholders, thereby identifying potential risks and enabling the creation of risk-response strategies. However, hurdles such as the hesitancy of new users to adopt BIM technology and the limited availability of proficient BIM trainers in India pose a phenomenal impediment. Hence, nurturing BIM processes from conception through construction, commissioning, operation and maintenance, and on to deconstruction across a project's life cycle is essential for the Indian construction industry in this digital era.
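As an illustration of the geometric core of a clash-detection pass (the BIM tools used in the study are not reproduced here), the following sketch tests axis-aligned bounding boxes (AABBs) of model elements for overlap; the element names and coordinates are hypothetical:

```python
def boxes_clash(a, b):
    """a, b: ((xmin, ymin, zmin), (xmax, ymax, zmax)).
    True if the two boxes overlap on all three axes (a hard clash)."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] < bmax[i] and bmin[i] < amax[i] for i in range(3))

def find_clashes(elements):
    """elements: dict mapping element name -> AABB.
    Returns every clashing pair, checked pairwise in sorted-name order."""
    names = sorted(elements)
    return [(n1, n2) for i, n1 in enumerate(names) for n2 in names[i + 1:]
            if boxes_clash(elements[n1], elements[n2])]

# Hypothetical elements from a treatment-plant model (coordinates in metres)
model = {
    "sludge_pipe_12": ((0, 0, 2), (10, 1, 3)),
    "hvac_duct_03":   ((4, 0.5, 2.5), (6, 2, 4)),   # crosses the pipe run
    "column_B2":      ((20, 0, 0), (21, 1, 6)),
}
clashes = find_clashes(model)
```

Real clash-detection tools refine such broad-phase box checks with exact geometry tests and clearance tolerances, but the pairwise overlap test above is the underlying idea.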

Keywords: integrated BIM workflow, 4D planning with BIM, building information modeling, clash detection and visualization, constructability reviews, project life cycle

Procedia PDF Downloads 113