Search results for: suel utilization factor
6646 Correction Factor to Enhance the Non-Standard Hammer Effect Used in Standard Penetration Test
Authors: Khaled R. Khater
Abstract:
The weight of the standard SPT hammer is 0.623 kN. Locally manufactured drilling rigs sometimes use hammers that deviate from this standard weight, which affects the field-measured blow counts (Nf) and, consequently, most previously obtained correlations, since those were derived using the standard hammer weight. The literature presents an energy correction factor (η2) to be applied to the total SPT input energy. This research investigates the effect of hammer weight variation, as a single parameter, on the field-measured blow counts (Nf). The outcome is a correction factor (ηk), an equation, and a correction chart. They are recommended for adjusting the measured, misleading (Nf) back to the standard value, as if the standard hammer had been used. This correction is very important wherever a non-standard hammer is used, because the bore logs in any geotechnical report should contain true and representative (Nf) values, let alone preserve the long record of correlations already in hand. The study herein uses a laboratory physical model to simulate the SPT drop-hammer mechanism. It is designed to allow different hammer weights to be used, and it is manufactured to avoid and eliminate sources of energy loss, producing a transmitted efficiency of up to 100%.
Keywords: correction factors, hammer weight, physical model, standard penetration test
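The correction idea can be sketched numerically. The snippet below is only the first-order, energy-proportional approximation (blows × hammer energy assumed constant, same drop height), not the paper's experimentally derived ηk chart; the example weights and counts are invented.

```python
# Hedged sketch: first-order, energy-proportional correction of SPT blow
# counts for a non-standard hammer weight. The paper derives its own
# empirical factor (eta_k); the simple energy ratio below is only the
# textbook starting point (constant drop height assumed).

W_STD = 0.623  # standard SPT hammer weight, kN (from the abstract)

def corrected_blow_count(n_field, hammer_weight_kn, drop_height_ratio=1.0):
    """Scale a field blow count back to the standard-hammer equivalent,
    assuming penetration energy (blows x hammer energy) is conserved."""
    eta_k = (hammer_weight_kn / W_STD) * drop_height_ratio  # energy ratio
    return n_field * eta_k

# A lighter hammer needs more blows, so the measured count is scaled down:
print(round(corrected_blow_count(20, 0.50), 1))   # lighter-than-standard hammer
print(round(corrected_blow_count(20, 0.623), 1))  # standard hammer: unchanged
```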
Procedia PDF Downloads 387
6645 The Mechanisms of Peer-Effects in Education: A Frame-Factor Analysis of Instruction
Authors: Pontus Backstrom
Abstract:
In the educational literature on peer effects, attention has been drawn to the fact that the mechanisms creating peer effects remain, to a large extent, hidden in obscurity. The hypothesis in this study is that frame factor theory can be used to explain these mechanisms. At the heart of the theory is the concept of the “time needed” for students to learn a certain curricular unit. The relation between class-aggregated time needed and the actual time available steers and constrains the actions possible for the teacher. Further, the theory predicts that the timing and pacing of the teacher’s instruction are governed by a “criterion steering group” (CSG), namely the pupils in the 10th-25th percentile of the aptitude distribution in the class. The class composition thereby sets the possibilities and limitations for instruction, creating peer effects on individual outcomes. To test whether the theory can be applied to the issue of peer effects, the study employs multilevel structural equation modelling (M-SEM) on Swedish TIMSS 2015 data (Trends in International Mathematics and Science Study; students N=4090, teachers N=200). Using confirmatory factor analysis (CFA) in the SEM framework in Mplus, latent variables such as “limitations on instruction” are specified according to the theory from TIMSS survey items. The results indicate a good fit of the measurement model to the data. Research is still in progress, but preliminary results from initial M-SEM models verify a strong relation between the mean level of the CSG and the latent variable of limitations on instruction, a variable which in turn has a great impact on individual students’ test results.
Further analysis is required, but so far the analysis indicates a confirmation of the predictions derived from frame factor theory and reveals that one of the important mechanisms creating peer effects in student outcomes is the effect that class composition has upon the teacher’s instruction in class.
Keywords: compositional effects, frame factor theory, peer effects, structural equation modelling
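The “criterion steering group” the theory relies on is simple to compute. A minimal sketch of picking out the pupils between the 10th and 25th percentile of a class aptitude distribution and taking their mean (the scores are made up; the paper itself works with TIMSS measures):

```python
# Illustrative sketch (not the paper's code): selecting the "criterion
# steering group" (CSG), i.e. pupils between the 10th and 25th percentile
# of the class aptitude distribution, and computing its mean.

def csg_mean(scores, lo=0.10, hi=0.25):
    """Mean aptitude of pupils whose percentile rank falls in [lo, hi)."""
    ranked = sorted(scores)
    n = len(ranked)
    start, stop = int(lo * n), max(int(hi * n), int(lo * n) + 1)
    group = ranked[start:stop]
    return sum(group) / len(group)

aptitudes = [35, 40, 42, 48, 50, 53, 55, 58, 60, 62,
             65, 68, 70, 74, 78, 80, 84, 88, 92, 95]
print(csg_mean(aptitudes))  # mean of the 3 pupils in the 10th-25th band
```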
Procedia PDF Downloads 135
6644 Chipless RFID Capacity Enhancement Using the E-pulse Technique
Authors: Haythem H. Abdullah, Hesham Elkady
Abstract:
With the fast increase in radio frequency identification (RFID) applications such as medical recording, library management, etc., the limitation of active tags stems from their need for external batteries as well as passive or active chips. The chipless RFID tag reduces cost to a large extent, thanks to the absence of the chip itself, but at the expense of spectrum utilization. Identification is done by utilizing the spectrum in such a way that the frequency response of the tag consists of resonance frequencies that represent the bits. The system capacity is decided by the number of resonators within the pre-specified band, so it is important to find a solution that enhances spectrum utilization in chipless RFID. Target identification is a process that results in a decision on whether a specific target is present. Several target identification schemes exist, but one of the most successful techniques for radar target identification in the oscillatory region is the extinction-pulse (E-pulse) technique, which identifies targets via their characteristic (natural) modes. By introducing an innovative solution for chipless RFID reader and tag designs, spectrum utilization approaches the optimum. In this paper, a novel capacity enhancement scheme based on the E-pulse technique is introduced to improve the performance of the chipless RFID system.
Keywords: chipless RFID, E-pulse, natural modes, resonators
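How a resonance-coded spectrum maps to bits can be sketched as follows. The frequency plan, threshold, and measured sweep below are all invented for illustration, and the E-pulse decoding itself is not reproduced; this only shows the baseline "one resonator per bit" scheme whose capacity the paper seeks to improve.

```python
# Toy sketch of how a chipless tag's spectral signature maps to bits:
# each pre-assigned resonance slot either shows a notch (bit 1) or does
# not (bit 0). Real readers (and the E-pulse scheme in the paper) work on
# measured responses; the values below are hypothetical.

SLOT_FREQS_GHZ = [2.0, 2.5, 3.0, 3.5]      # assumed resonator frequency plan

def decode_tag(response_db, notch_threshold_db=-10.0):
    """Return the bit string encoded by notches deeper than the threshold."""
    bits = ""
    for f in SLOT_FREQS_GHZ:
        depth = response_db.get(f, 0.0)    # measured dip at each slot, dB
        bits += "1" if depth <= notch_threshold_db else "0"
    return bits

measured = {2.0: -18.2, 2.5: -2.1, 3.0: -14.7, 3.5: -1.3}  # hypothetical sweep
print(decode_tag(measured))
```

Capacity here is simply the number of resonator slots that fit in the licensed band, which is the bottleneck the E-pulse scheme targets.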
Procedia PDF Downloads 83
6643 Performance Comparison of Thread-Based and Event-Based Web Servers
Authors: Aikaterini Kentroti, Theodore H. Kaskalis
Abstract:
Today, web servers are expected to serve thousands of client requests concurrently within stringent response-time limits. In this paper, we experimentally evaluate and compare the performance as well as the resource utilization of popular web servers that differ in their approach to handling concurrency. More specifically, Central Processing Unit (CPU)-intensive and I/O-intensive tests were conducted against the thread-based Apache and Go as well as the event-based Nginx and Node.js under increasing concurrent load. The tests involved concurrent users requesting a term of the Fibonacci sequence (the 10th, 20th, 30th) and the contents of a table from the database. The results show that Go achieved the best performance in all benchmark tests. For example, Go reached twice the throughput of Node.js and five times that of Apache and Nginx in the 20th-Fibonacci-term test. In addition, Go had the smallest memory footprint and demonstrated the most efficient resource utilization in terms of CPU usage. In contrast, Node.js had by far the largest memory footprint, consuming up to 90% more memory than Nginx and Apache. Regarding the performance of Apache and Nginx, our findings indicate that Hypertext Preprocessor (PHP) becomes a bottleneck when the servers are asked to respond by performing CPU-intensive tasks under increasing concurrent load.
Keywords: apache, Go, Nginx, node.js, web server benchmarking
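The CPU-bound workload in such tests is typically the classic naive Fibonacci recursion, computed fresh per request. A standalone sketch of that handler's core (timing only, no HTTP server; the servers in the paper each implement this in their own language):

```python
# Minimal sketch of the kind of CPU-bound work used in the benchmark:
# a deliberately naive recursive Fibonacci, timed under a single call.
import time

def fib(n):
    """Intentionally naive recursion: cost grows exponentially with n,
    which is what makes the 10th/20th/30th terms useful load levels."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

t0 = time.perf_counter()
result = fib(20)          # the benchmark's middle load level
elapsed = time.perf_counter() - t0
print(result, f"{elapsed:.4f}s")
```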
Procedia PDF Downloads 99
6642 Generalized Approach to Linear Data Transformation
Authors: Abhijith Asok
Abstract:
This paper presents a generalized approach to the simple linear data transformation, Y=bX, through an integration of multidimensional coordinate geometry, vector space theory and polygonal geometry. The scaling is performed by adding an additional ‘Dummy Dimension’ to the n-dimensional data, which makes it possible to plot two-dimensional, component-wise straight lines on pairs of dimensions. The end result is a set of scaled extensions of the observations in any of the 2^n spatial divisions, where n is the total number of applicable dimensions/dataset variables, created by shifting the n-dimensional plane along the ‘Dummy Axis’. The derived scaling factor was found to depend on the coordinates of the common point of origin of the diverging straight lines and on the plane of extension, chosen on, and perpendicular to, the ‘Dummy Axis’, respectively. This result indicates a geometrical interpretation of a linear data transformation and hence opportunities for a more informed choice of the factor b, based on a better choice of these coordinate values. The paper goes on to identify the effect of this transformation on certain popular distance metrics, wherein for many, the distance metric retains the same scaling factor as that of the features.
Keywords: data transformation, dummy dimension, linear transformation, scaling
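The closing claim, that many distance metrics pick up the same scaling factor b under Y=bX, can be checked directly for the Euclidean metric; the example points and factor are arbitrary.

```python
# Sketch of the abstract's closing claim: under Y = bX, the Euclidean
# distance between any two observations is scaled by exactly |b|.
import math

def scale(x, b):
    return [b * xi for xi in x]

def euclidean(u, v):
    return math.sqrt(sum((ui - vi) ** 2 for ui, vi in zip(u, v)))

x1, x2, b = [1.0, 2.0, 3.0], [4.0, 6.0, 3.0], 2.5
d_before = euclidean(x1, x2)
d_after = euclidean(scale(x1, b), scale(x2, b))
print(d_before, d_after)  # the ratio d_after / d_before equals |b|
```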
Procedia PDF Downloads 299
6641 Interleukin-6 and Tumor Necrosis Factor-α Levels in Tear Film of Keratoconus Patients
Authors: Mazdak Ganjalikhani, Mohamad Namgar, Alireza Peyman
Abstract:
Introduction: The present study was carried out to measure the levels of the inflammatory markers interleukin-6 (IL-6) and tumor necrosis factor-alpha (TNF-α) in the tears of keratoconus patients and to investigate their relationship with the severity of keratoconus. Materials and Methods: This study was performed on 81 patients with keratoconus (cases) and 85 healthy individuals (controls), selected through convenience sampling from patients visiting the Feiz Ophthalmology Hospital affiliated with the Isfahan University of Medical Sciences. Tear levels of IL-6 and TNF-α were measured after collecting the patients' tears from the lower eyelid by the Schirmer I method, using filter paper (a Schirmer tear test strip) without anesthesia. Findings: The mean levels of IL-6 and TNF-α were 26.77±8.16 and 34.58±9.82 in the control group and 103.22±51.94 and 183.76±54.61 in the case group, respectively, indicating a significant difference between the two groups (p<0.05). In addition, there was a significant relationship between the severity of the keratoconus and the mean levels of TNF-α and IL-6 in the case group (p<0.05). Conclusion: According to the results, the mean levels of IL-6 and TNF-α were higher in keratoconus cases than in controls, and disease severity was significantly associated with the levels of the inflammatory markers IL-6 and TNF-α.
Keywords: keratoconus, cataract, tumor necrosis factor, interleukin 6
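As a back-of-envelope check, Welch's t statistic can be computed from the summary statistics reported above; this reproduces only the direction and rough magnitude of the group comparison, not the authors' actual analysis.

```python
# Welch's t statistic from the abstract's reported means, SDs and group
# sizes for IL-6 (cases n=81 vs. controls n=85). Only a plausibility
# check of the reported difference, not the paper's own test.
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    return (m1 - m2) / math.sqrt(s1**2 / n1 + s2**2 / n2)

t_il6 = welch_t(103.22, 51.94, 81, 26.77, 8.16, 85)
print(round(t_il6, 1))  # a very large t, consistent with p < 0.05
```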
Procedia PDF Downloads 10
6640 An Investigation of Vegetable Oils as Potential Insulating Liquid
Authors: Celal Kocatepe, Eyup Taslak, Celal Fadil Kumru, Oktay Arikan
Abstract:
While choosing an insulating oil, characteristic features such as thermal cooling, endurance, efficiency and environmental friendliness should be considered. Mineral oils are referred to as petroleum-based oils. In this study, vegetable oils were investigated as an alternative insulating liquid to mineral oil. The dissipation factor, breakdown voltage, relative dielectric constant and resistivity of mineral, rapeseed and nut oils were measured as functions of frequency and voltage. Experimental studies were performed according to the ASTM D924 and IEC 60156 standards.
Keywords: breakdown voltage, dielectric dissipation factor, mineral oil, vegetable oils
Procedia PDF Downloads 696
6639 Effect of Particle Aspect Ratio and Shape Factor on Air Flow inside Pulmonary Region
Authors: Pratibha, Jyoti Kori
Abstract:
Particles in industry, harvesting, coal mines, etc., are not necessarily spherical in shape; in general, it is difficult to find a perfectly spherical particle. Predicting the movement and deposition of non-spherical particles in distinct airway generations is much more difficult than for spherical particles. Moreover, there is extensive variability in deposition between ducts of a particular generation and inside every alveolar duct, since particle concentrations can be much larger than the mean acinar concentration. Consequently, a large number of particles fail to be exhaled during expiration. This study presents a mathematical model for the movement and deposition of such non-spherical particles using the particle aspect ratio and shape factor. We analyse the pulsatile behavior under sinusoidal wall oscillation, due to periodic breathing, through a non-Darcian porous medium, i.e., inside the pulmonary region. Since the fluid is viscous and Newtonian, the generalized Navier-Stokes equations in a two-dimensional coordinate system (r, z) are used with boundary-layer theory. Results are obtained for various values of the Reynolds number, Womersley number, Forchheimer number, particle aspect ratio and shape factor. Numerical computation is done using a finite difference scheme on a very fine mesh in MATLAB. It is found that the overall air velocity is significantly increased by changes in aerodynamic diameter, aspect ratio, alveoli size, Reynolds number and pulse rate, while velocity decreases with increasing Forchheimer number.
Keywords: deposition, interstitial lung diseases, non-Darcian medium, numerical simulation, shape factor
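The role of the shape factor can be illustrated with the standard aerosol relation between the aerodynamic and volume-equivalent diameters, a textbook formula rather than the paper's full Navier-Stokes model; the example particle values are invented.

```python
# Standard aerosol relation (a sketch, not the paper's model): the
# aerodynamic diameter of a non-spherical particle in terms of its
# volume-equivalent diameter d_ve, particle density rho_p, and dynamic
# shape factor chi (chi = 1 for a sphere, > 1 for irregular shapes).
import math

RHO_0 = 1000.0  # unit density, kg/m^3

def aerodynamic_diameter(d_ve, rho_p, chi):
    return d_ve * math.sqrt(rho_p / (chi * RHO_0))

# An elongated particle (chi = 1.5) behaves like a smaller sphere:
print(aerodynamic_diameter(2e-6, 2000.0, 1.0))  # spherical reference, m
print(aerodynamic_diameter(2e-6, 2000.0, 1.5))  # non-spherical, smaller d_a
```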
Procedia PDF Downloads 186
6638 Review of K0-Factors and Related Nuclear Data of the Selected Radionuclides for Use in K0-NAA
Authors: Manh-Dung Ho, Van-Giap Pham, Van-Doanh Ho, Quang-Thien Tran, Tuan-Anh Tran
Abstract:
The k0-factors and related nuclear data, i.e., the Q0-factors and effective resonance energies (Ēr), of the selected radionuclides used in k0-based neutron activation analysis (k0-NAA) were critically reviewed for integration into the “k0-DALAT” software. The k0- and Q0-factors of some short-lived radionuclides (46mSc, 110Ag, 116m2In, 165mDy, and 183mW) were experimentally determined at the Dalat research reactor. The other radionuclides selected are: 20F, 36S, 49Ca, 60mCo, 60Co, 75Se, 77mSe, 86mRb, 115Cd, 115mIn, 131Ba, 134mCs, 134Cs, 153Gd, 153Sm, 159Gd, 170Tm, 177mYb, 192Ir, 197mHg, 239U and 239Np. The reviewed data agreed with the literature data to within 5.6-7.3%, and the experimentally re-determined factors to within 6.1-7.3%. The NIST standard reference materials Oyster Tissue (1566b), Montana II Soil (2711a) and Coal Fly Ash (1633b) were used to validate the new reviewed data, showing that the new data improved the k0-NAA results obtained with the “k0-DALAT” software to within 4.5-6.8% for the investigated radionuclides.
Keywords: neutron activation analysis, k0-based method, k0 factor, Q0 factor, effective resonance energy
Procedia PDF Downloads 126
6637 Impact of Dynamic Capabilities on Knowledge Management Processes
Authors: Farzad Yavari, Fereydoun Ohadi
Abstract:
Today, with the development and growth of technology and extreme environmental changes, organizations need to identify opportunities and foster creativity and innovation in order to maintain or improve their competitive position. In this regard, the resources and assets of the organization must be coordinated and reviewed in accordance with the orientation of the strategy. One of the competitive advantages of the present age is knowledge management: equipping the organization with up-to-date knowledge, disseminating it among employees, and using it in the development of products and services. Accordingly, this research investigates the impact of the components of dynamic capabilities (sense, seize, and reconfiguration) on knowledge management processes (knowledge acquisition, integration and utilization) at the MAPNA Engineering and Construction Company, using a field survey and an applied research method. For this purpose, a questionnaire with 15 questions on the dynamic capability components and 15 questions on the knowledge management components was distributed among 46 employees of the organization's knowledge management unit. The validity of the questionnaire was evaluated through content validity and its reliability with Cronbach's coefficient. A Pearson correlation test and the structural equation technique were used to analyze the data. The results indicate a significant positive correlation between the components of dynamic capabilities and knowledge management.
Keywords: dynamic capabilities, knowledge management, sense capability, seize capability, reconfigurable capability, knowledge acquisition, knowledge integration, knowledge utilization
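The core statistic used here, a Pearson correlation between scale scores, can be sketched in a few lines; the score vectors below are hypothetical, not the study's data.

```python
# Plain Pearson correlation between two sets of questionnaire scale
# scores (the vectors below are made up for illustration).
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

dynamic_cap = [3.2, 4.1, 2.8, 4.5, 3.9, 3.5]   # hypothetical scale means
km_process  = [3.0, 4.3, 2.9, 4.4, 3.7, 3.6]
print(round(pearson_r(dynamic_cap, km_process), 3))
```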
Procedia PDF Downloads 121
6636 Biotransformation of Glycerine Pitch as Renewable Carbon Resource into P(3HB-co-4HB) Biopolymer
Authors: Amirul Al-Ashraf Abdullah, Hema Ramachandran, Iszatty Ismail
Abstract:
The oleochemical industry in Malaysia has diversified significantly owing to the abundant supply of both palm and kernel oils as raw materials and the high demand for downstream products such as fatty acids, fatty alcohols and glycerine. However, environmental awareness is growing rapidly in Malaysia, because the oleochemical industry is one of the palm-oil-based industries that pose a risk to the environment. Glycerine pitch is one of the scheduled wastes generated by the fatty acid plants in Malaysia, and its discharge may cause serious environmental problems. It is therefore imperative to find alternative applications for this waste glycerine. Consequently, the aim of this research is to explore the application of glycerine pitch as a direct fermentation substrate in the biosynthesis of the poly(3-hydroxybutyrate-co-4-hydroxybutyrate) [P(3HB-co-4HB)] copolymer, aiming to contribute toward the sustainable production of biopolymers. Utilization of glycerine pitch (10 g/l) together with 1,4-butanediol (5 g/l) achieved 40 mol% 4HB monomer with the highest PHA concentration of 2.91 g/l. Synthesis of a yellow pigment exhibiting antimicrobial properties occurred simultaneously with the production of P(3HB-co-4HB) when glycerine pitch was used as the renewable carbon resource. Using glycerine pitch in the biosynthesis of P(3HB-co-4HB) will not only reduce society's dependence on non-renewable resources but also promote the development of cost-efficient microbial fermentation towards biosustainability and green technology.
Keywords: biopolymer, glycerine pitch, natural pigment, P(3HB-co-4HB)
Procedia PDF Downloads 472
6635 Quality Standards for Emergency Response: A Methodological Framework
Authors: Jennifer E. Lynette
Abstract:
This study describes the development of a methodological framework for quality standards used to measure the efficiency and quality of the response efforts of trained personnel at emergency events. This paper describes the techniques used to develop the initial framework and its potential application to professions within the broader field of emergency management. The example described in detail in this paper applies the framework specifically to fire response activities by firefighters. Within the quality standards framework, the fire response process is mapped chronologically and the individual variables within the sequence of events are identified. Through in-person data collection, questionnaires, interviews, and the expansion of the incident reporting system, this study identifies and categorizes previously unrecorded variables involved in the response phase of a fire. Following a quantitative or qualitative analysis of each variable, the variables are ranked according to the magnitude of their impact on the event outcome. Among others, key indicators of quality performance in the analysis include decision communication, resource utilization, response techniques, and response time. Through the application of this framework and the subsequent use of quality standards indicators, there is potential to increase efficiency in the response phase of an emergency event, thereby saving additional lives, property, and resources.
Keywords: emergency management, fire, quality standards, response
Procedia PDF Downloads 320
6634 Comparison of the Factor of Safety and Strength Reduction Factor Values from Slope Stability Analysis of a Large Open Pit
Authors: James Killian, Sarah Cox
Abstract:
The use of stability criteria within geotechnical engineering is how the results of analyses are conveyed and how sensitivities and risk assessments are performed. Historically, the primary stability criterion for slope design has been the Factor of Safety (FOS) from a limit-equilibrium calculation. Increasingly, the value derived from a Strength Reduction Factor (SRF) analysis is being used as the stability criterion instead. The purpose of this work was to study in detail the relationship between SRF values produced by a numerical modeling technique and the traditional FOS values produced by limit equilibrium (LEM) analyses. This study utilized a model of a 3000-foot-high slope with a 45-degree slope angle, assuming a perfectly plastic Mohr-Coulomb constitutive model with the high cohesion and friction angle values typical of a large hard-rock mine slope. A number of variables affecting the SRF value in a numerical analysis were tested, including zone size, in-situ stress, tensile strength, and dilation angle. This paper demonstrates that in most cases SRF values are lower than the corresponding LEM FOS values. Modeled zone size has the greatest effect on the estimated SRF value, which can vary as much as 15% to the downside compared to the FOS. For consistency when using SRF as a stability criterion, the authors suggest that numerical model zone sizes should be no smaller than about 1% of the overall slope height and no greater than 2%. Future work could include investigations of the effect of anisotropic strength assumptions or advanced constitutive models.
Keywords: FOS, SRF, LEM, comparison
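The two criteria can be contrasted on the simplest possible case, a planar sliding block (not the paper's 3000-foot numerical model). For this limit-equilibrium geometry the strength-reduction search recovers the classic FOS exactly, which is precisely why any divergence the paper observes must come from the numerical-model ingredients (zone size, in-situ stress, etc.); all input values below are invented.

```python
# Sketch: FOS vs. SRF on a planar sliding block with Mohr-Coulomb strength.
import math

def fos_planar(c, phi_deg, W, alpha_deg, area):
    """Limit-equilibrium FOS: resisting / driving force on the slide plane."""
    a = math.radians(alpha_deg)
    resisting = c * area + W * math.cos(a) * math.tan(math.radians(phi_deg))
    return resisting / (W * math.sin(a))

def srf_planar(c, phi_deg, W, alpha_deg, area, tol=1e-6):
    """Bisect for the factor that brings reduced strengths (c/f, tan(phi)/f)
    to limit equilibrium -- the strength-reduction definition of stability."""
    lo, hi = 0.01, 20.0
    while hi - lo > tol:
        f = 0.5 * (lo + hi)
        phi_red = math.degrees(math.atan(math.tan(math.radians(phi_deg)) / f))
        if fos_planar(c / f, phi_red, W, alpha_deg, area) > 1.0:
            lo = f
        else:
            hi = f
    return 0.5 * (lo + hi)

args = (50.0, 35.0, 5000.0, 45.0, 30.0)  # c(kPa), phi(deg), W(kN), slope(deg), area(m^2)
print(round(fos_planar(*args), 3), round(srf_planar(*args), 3))  # identical here
```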
Procedia PDF Downloads 312
6633 A Study on the Coefficient of Transforming Relative Lateral Displacement under Linear Analysis of Structure to Its Real Relative Lateral Displacement
Authors: Abtin Farokhipanah
Abstract:
In recent years, in assessing earthquake effects on structures, analysis has been based on ductility design, in contrast to strength design. The ASCE 7-10 code amplifies the relative drifts calculated from a linear analysis by Cd, the Deflection Amplification Factor, to obtain the real relative drifts that would otherwise require a nonlinear analysis; this lateral drift must then be limited to the code boundaries. The purposes of this research are to calculate this amplification factor for different structures, compare it with the ASCE 7-10 values, and propose the best coefficient. To this end, short and tall steel building structures with various earthquake-resistant systems are surveyed in linear and nonlinear analyses, so that these questions can be answered: 1. Does the Response Modification Coefficient (R) have a meaningful relation to the Deflection Amplification Factor? 2. Do structure height, seismic zone, response spectrum and similar parameters affect the coefficient converting linear-analysis drift to the real drift of the structure? The procedure used to conduct this research includes: (a) studying earthquake-resistant systems, (b) selecting systems and modeling, (c) analyzing the modeled systems using linear and nonlinear methods, (d) calculating the conversion coefficient for each system, and (e) comparing the conversion coefficients with those offered by the code and drawing conclusions.
Keywords: ASCE 7-10 code, deflection amplification factor, earthquake engineering, lateral displacement of structures, response modification coefficient
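The code procedure being examined reduces to a one-line calculation: ASCE 7-10 (Section 12.8.6) amplifies the elastic drift by Cd/Ie. The numbers below are illustrative (Cd = 5.5 is the Table 12.2-1 value for special steel moment frames), not results from this paper.

```python
# ASCE 7-10 Sec. 12.8.6 drift amplification: delta_x = Cd * delta_xe / Ie.

def design_drift(delta_elastic, Cd, Ie=1.0):
    """Estimated inelastic story drift from the elastic-analysis drift."""
    return Cd * delta_elastic / Ie

# Special steel moment frame: Cd = 5.5 per ASCE 7-10 Table 12.2-1.
print(design_drift(10.0, 5.5))       # 10 mm elastic drift -> amplified drift
print(design_drift(10.0, 5.5, 1.5))  # higher-importance structure (Ie = 1.5)
```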
Procedia PDF Downloads 354
6632 Examining the Influence of Firm Internal Level Factors on Performance Variations among Micro and Small Enterprises: Evidence from Tanzanian Agri-Food Processing Firms
Authors: Pulkeria Pascoe, Hawa P. Tundui, Marcia Dutra de Barcellos, Hans de Steur, Xavier Gellynck
Abstract:
A majority of Micro and Small Enterprises (MSEs) experience low or no growth. Understanding of their performance remains incomplete and disjointed, as there is no consensus on the factors influencing it, especially in developing countries. Using the Resource-Based View (RBV) as the theoretical background, this cross-sectional study employed four regression models to examine the influence of firm-level factors (firm-specific characteristics, firm resources, manager socio-demographic characteristics, and selected management practices) on the overall performance variations among 442 Tanzanian micro and small agri-food processing firms. The study results confirmed the RBV argument that intangible resources contribute more to overall performance variations among firms than tangible resources. Firms' tangible and intangible resources together explained 34.5% of overall performance variations (intangible resources accounted for 19.4% of the variability, compared with 15.1% for tangible resources), ranking first in explaining the overall performance variance. Firm-specific characteristics ranked second, accounting for 29.0% of the variation in overall performance. Selected management practices ranked third (6.3%), while the manager's socio-demographic factors were last, accounting for only 5.1% of the overall performance variability among firms. The study also found that firms that focus on the proper utilization of tangible resources (financial and physical), set targets, and undertake better working-capital management practices performed better than their counterparts (low and average performers).
Furthermore, the accumulation and proper utilization of intangible resources (relational, organizational, and reputational), performance monitoring practices, the age of the manager, and the choice of firm location and activity were the dominant significant factors differentiating average and high performers from low performers. Entrepreneurial background was a significant factor influencing variations between average and low-performing firms, indicating that entrepreneurial skills are crucial to achieving average levels of performance. Firm age, size, legal status, source of start-up capital, and the gender, education level, and total business experience of the manager were not statistically significant variables influencing the overall performance variations among the agri-food processors under study. The study has identified both significant and non-significant factors influencing performance variations among low-, average-, and high-performing micro and small agri-food processing firms in Tanzania. Results from this study will therefore help managers, policymakers and researchers identify areas where more attention should be placed in order to improve the overall performance of MSEs in the agri-food industry.
Keywords: firm-level factors, micro and small enterprises, performance, regression analysis, resource-based-view
Procedia PDF Downloads 87
6631 A Post-Occupancy Evaluation of LEED-Certified Residential Communities Using Structural Equation Modeling
Authors: Mohsen Goodarzi, George Berghorn
Abstract:
Despite the rapid growth in the number of green building and community development projects, the long-term performance of these projects has not yet been sufficiently evaluated from the users' points of view. This is partially due to the lack of post-occupancy evaluation tools available for this type of project. In this study, a post-construction evaluation model is developed to evaluate the relationship between the perceived performance and the satisfaction of residents in LEED-certified residential buildings and communities. To develop this evaluation model, a primary five-factor model was first constructed based on existing models and residential satisfaction theories. Each factor of the model included several measures adopted from LEED certification systems such as LEED BD+C New Construction, LEED BD+C Multifamily Midrise, and LEED-ND, as well as from the UC Berkeley Center for the Built Environment survey tool. The model included four predictor variables (factors): perceived building performance (8 measures), perceived infrastructure performance (9 measures), perceived neighborhood design (6 measures), and perceived economic performance (4 measures), and one dependent variable (factor), residential satisfaction (6 measures). An online survey was then conducted to collect data from residents of LEED-certified residential communities (n=192), and the validity of the model was tested through Confirmatory Factor Analysis (CFA). After modifying the CFA model, 26 of the initial 33 measures were retained and entered into a Structural Equation Model (SEM) to find the relationships between perceived building performance, infrastructure performance, neighborhood design, economic performance, and residential satisfaction.
The results of the SEM showed that perceived building performance was the most influential factor in determining residential satisfaction in LEED-certified communities, followed by perceived neighborhood design. On the other hand, perceived infrastructure performance and perceived economic performance did not show any significant relationship with residential satisfaction in these communities. This study can benefit green building researchers by providing a model for evaluating the long-term performance of these projects. It can also help green building practitioners determine priorities for future residential development projects.
Keywords: green building, residential satisfaction, perceived performance, confirmatory factor analysis, structural equation modeling
Procedia PDF Downloads 240
6630 Awareness and Perception of Food Safety, Nutrition and Food Security among Omani Women
Authors: Abeer Al Kalbani
Abstract:
Oman is a sub-tropical country with limited water resources, harsh weather and limited soil fertility, constraining food production; it therefore depends largely on international markets to assure its food supply. In light of these circumstances, food security in Oman is defined as the ability of the country to meet the staple food needs of its people (e.g., rice, wheat, lentils, sugar, dates, dairy products, fish and plant or vegetable oils). It also involves exporting local goods with high production rates in exchange for required food products. This concept of food security includes the availability of food through production and/or importing, the stability of market prices under all circumstances, and the ability of people to meet their needs within their income capabilities. As a result, most food security work focuses on the availability and access dimensions of the issue; little research addresses the utilization aspect of food security in Oman. Although women play a vital role in food security, there is limited research on women's role in food security either in Oman or in neighboring Gulf countries. Women play an important role not only by carrying the responsibility of feeding their families but also by setting the consumption model for the household. Therefore, this research aims to contribute to the work done on food security in Oman and similar regions of the world by studying the role women play at the utilization level. Methods used in this research include qualitative unstructured interviews, focus groups, a survey questionnaire and an experimental study. Based on the FAO definition, food security consists of availability, access, utilization and sustainability. Results from a pilot study conducted for this research on two groups of women in Oman, urban and rural, showed that women in Oman are responsible for achieving these four pillars at the household level.
Moreover, women's awareness increased with their educational level. Urban women showed more awareness of, and openness to adopting, healthier and more appropriate food-related choices than rural women, and also seemed more open to new ideas, concepts and ways toward healthier food. However, both urban and rural women report that no training or educational programs are available to them, and awareness of food security in general remains relatively low in both groups. In light of these findings, this research further investigates the social beliefs, practices and attitudes women adopt in relation to food purchase, storage, preparation and consumption, all considered important parts of the food system. It also examines the effect of educational training programs and the media on the level of women's awareness of the issue.
Keywords: food security, household food security, utilization, role of women
Procedia PDF Downloads 407
6629 Standardization of the Behavior Assessment System for Children-2, Parent Rating Scales - Adolescent Form (K BASC-2, PRS-A) among a Korean Sample
Authors: Christine Myunghee Ahn, Sung Eun Baek, Sun Young Park
Abstract:
The purpose of this study was to evaluate the cross-cultural validity of the Korean version of the Behavioral Assessment System for Children, 2nd Edition, Parent Rating Scales - Adolescent Form (K BASC-2, PRS-A). The 150-item K BASC-2, PRS-A questionnaire was administered to a total of 690 Korean parents or caregivers (N=690) of adolescent children in middle school and high school. Results from the confirmatory and exploratory factor analyses indicate that the K BASC-2, PRS-A yielded a 3-factor solution similar to the factor structure found in the original version of the BASC-2. The internal consistencies of the composite scale scores, measured by Cronbach's alpha, were in the .92-.98 range. The overall reliability and validity of the K BASC-2, PRS-A therefore seem adequate. Structural equation modeling was used to verify the theoretical relationships among the scales of Adaptability, Withdrawal, Somatization, Depression, and Anxiety, providing additional support for internal validity. Other relevant findings, practical implications regarding the use of the K BASC-2, PRS-A, and suggestions for future research are discussed.
Keywords: behavioral assessment system, cross-cultural validity, parent report, screening
Procedia PDF Downloads 489
6628 A Cosmic Time Dilation Model for the Week of Creation
Authors: Kwok W. Cheung
Abstract:
A scientific interpretation of creation is proposed that reconciles the belief in six literal days of creation with the 13.7-billion-year age of the universe currently accepted by most modern cosmologists. We hypothesize that the reference timeframe of God's creation is associated with a cosmic time different from earth time. We show that the scale factor relating earth time to cosmic time can be determined from the solution of the Friedmann equations. Based on this scale factor and some basic assumptions, we derive a Cosmic Time Dilation model that harmonizes the literal meaning of the creation days with scientific discoveries to remarkable accuracy.
Keywords: cosmological expansion, time dilation, creation, genesis, relativity, Big Bang, biblical hermeneutics
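The abstract's scale factor comes from the Friedmann equations. As a purely standard-cosmology aside (not the paper's model), a minimal sketch of solving the flat, matter-dominated Friedmann equation numerically and checking it against the well-known analytic solution a(t) ∝ t^(2/3); the units (H0 = Ωm = 1) are assumptions for illustration:

```python
import math

# Integrate da/dt = H0 * sqrt(Omega_m / a) (flat, matter-only universe)
# with simple forward-Euler steps, starting from a tiny initial scale factor.
def scale_factor(t_end, h0=1.0, omega_m=1.0, steps=100_000):
    """Euler-integrate the Friedmann equation up to time t_end."""
    a = 1e-6
    dt = t_end / steps
    for _ in range(steps):
        a += h0 * math.sqrt(omega_m / a) * dt
    return a

# Analytic matter-dominated solution: a(t) = (1.5 * H0 * sqrt(Om) * t)**(2/3),
# so with H0 = Om = 1 we expect a = 1 exactly at t = 2/3.
t = 2.0 / 3.0
a_num = scale_factor(t)
a_ana = (1.5 * t) ** (2.0 / 3.0)
print(abs(a_num - a_ana) < 1e-2)  # True: numerical and analytic solutions agree
```

The t^(2/3) growth is what makes the ratio of "cosmic" to local elapsed time non-trivial in any such mapping.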
Procedia PDF Downloads 94
6627 Application of Random Forest Model in the Prediction of River Water Quality
Authors: Turuganti Venkateswarlu, Jagadeesh Anmala
Abstract:
Excessive runoff from various non-point source land uses and other point sources is rapidly contaminating the water quality of streams in the Upper Green River watershed, Kentucky, USA. It is essential to maintain stream water quality, as the river basin is one of the major freshwater sources in this region. It is also important to understand the water quality parameters (WQPs) quantitatively and qualitatively, along with their important features, as stream water is sensitive to climatic events and land-use practices. In this paper, a model was developed for predicting one of the significant WQPs, Fecal Coliform (FC), from precipitation, temperature, urban land use factor (ULUF), agricultural land use factor (ALUF), and forest land use factor (FLUF) using the Random Forest (RF) algorithm. The RF model, an ensemble learning algorithm, can also extract feature importance characteristics from the given model inputs for different combinations. The model's outcomes showed a good correlation between FC and the climate and land use factors (R2 = 0.94), with precipitation and temperature as the primary influencing factors for FC.
Keywords: water quality, land use factors, random forest, fecal coliform
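A hedged sketch of the kind of RF workflow the abstract describes. The Upper Green River data are not reproduced here, so the five inputs (precipitation, temperature, ULUF, ALUF, FLUF) and the FC response are simulated; variable names and data are assumptions, not the authors' setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
# Columns stand in for: precipitation, temperature, ULUF, ALUF, FLUF
X = rng.uniform(0.0, 1.0, size=(n, 5))
# Synthetic fecal-coliform response, driven mostly by precipitation and temperature
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0.0, 0.1, n)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
importances = rf.feature_importances_   # sums to 1 across the five inputs
print(importances.argmax())             # 0: precipitation dominates, as constructed
print(rf.score(X, y) > 0.9)             # True: high training R^2
```

The `feature_importances_` attribute is what supports the abstract's claim that RF can rank the influence of climate and land-use inputs on FC.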
Procedia PDF Downloads 198
6626 Study of Serum Tumor Necrosis Factor Alpha in Pediatric Patients with Hemophilia A
Authors: Sara Mohammad Atef Sabaika
Abstract:
Background: The development of factor VIII (FVIII) inhibitors and hemophilic arthropathy in patients with hemophilia A (PWHA) are great challenges for hemophilia care. Both genetic and environmental factors lead to complications in PWHA. The development of inhibitory antibodies is usually induced by the immune response, and polymorphism in the gene for tumor necrosis factor α (TNF-α), one of the cytokines involved, might contribute to it. Aim: To study the association between serum TNF-α level and genotype in pediatric patients with hemophilia A, and its relation to inhibitor development and joint status. Methods: A cross-sectional study was conducted among PWHA attending the Pediatric Hematology and Oncology Unit, Pediatric Department, Menoufia University Hospital. The clinical parameters, FVIII, FVIII inhibitor, and serum TNF-α level were assessed. Genotyping of the −380G>A TNF-α gene polymorphism was performed using real-time polymerase chain reaction. Results: Among the 50 PWHA, 28 (56%) were identified as severe PWHA. An FVIII inhibitor was identified in 6/28 (21.5%) of severe PWHA. There was a significant correlation between serum TNF-α level and inhibitor development (p = 0.043). There was no significant correlation between the −380G>A TNF-α gene polymorphism and the development of hemophilic arthropathy (p = 0.645). Conclusion: The prevalence of FVIII inhibitors in severe PWHA in Menoufia was 21.5%. The frequency of replacement therapy is a risk factor for inhibitor development. Serum TNF-α level and its gene polymorphism might be used to predict inhibitor development and joint status in pediatric patients with hemophilia A.
Keywords: hemophilic arthropathy, TNF alpha, patients with hemophilia A (PWHA), inhibitor
Procedia PDF Downloads 95
6625 Costa and McCrae's NEO-PI Factor and Early Adolescents' School Social Adjustment in Cross River State, Nigeria
Authors: Peter Unoh Bassey
Abstract:
The study examined the influence of Costa and McCrae's NEO-PI factors on early adolescents' school social adjustment in Cross River State, Nigeria. The research adopted the causal-comparative (ex post facto) design, with one thousand and eighteen (1,018) students randomly selected from one stream of JSS 1 classes in 19 schools out of the seventy-three (73) in the study area. Data were collected using two instruments: the NEO-PI scale and a students' school social adjustment questionnaire. Three research questions and three research hypotheses were postulated and tested at the 0.05 level of significance. The data were analyzed using both the independent t-test and one-way analysis of variance (ANOVA). The results indicated that the five dimensions had a significant influence on students' school social adjustment. A post hoc test was also carried out to show the relative significant differences among the study variables. In view of the above, it was recommended that teachers, parents and educational psychologists be involved in helping students build the confidence to overcome their social adjustment problems.
Keywords: Costa and McCrae's NEO-PI Factor, early adolescents, school, social adjustment
Procedia PDF Downloads 148
6624 Study of the Responding Time for Low Permeability Reservoirs
Authors: G. Lei, P. C. Dong, X. Q. Cen, S. Y. Mo
Abstract:
One of the most significant parameters describing the effect of water flooding in porous media is the flood-response time, an important index in oilfield development. The responding time in low permeability reservoirs is usually calculated by the steady-state successive substitution method, neglecting the effect of medium deformation. Numerous studies show that medium deformation has an important impact on the development of low permeability reservoirs and cannot be neglected. On the basis of a streamline tube model, we developed a method to interpret the responding time with a medium deformation factor. The results show that the medium deformation factor, threshold pressure gradient and well spacing have a significant effect on the flood-response time: the greater the medium deformation factor, threshold pressure gradient or well spacing, the lower the flood-response time. The responding time varies between streamlines; as the angle with the main streamline increases, the water flooding response time is delayed, following a parabolic trend.
Keywords: low permeability, flood-response time, threshold pressure gradient, medium deformation
Procedia PDF Downloads 500
6623 Deep Routing Strategy: Deep Learning Based Intelligent Routing in Software Defined Internet of Things
Authors: Zabeehullah, Fahim Arif, Yawar Abbas
Abstract:
Software Defined Networking (SDN) is a next-generation networking model which simplifies traditional network complexities and improves the utilization of constrained resources. Currently, most SDN-based Internet of Things (IoT) environments use traditional routing strategies, which work on the basis of a maximum or minimum metric value. However, IoT network heterogeneity, dynamic traffic flows and complexity demand intelligent and self-adaptive routing algorithms, because traditional routing algorithms lack self-adaptation, intelligence and efficient utilization of resources. To some extent SDN, due to its flexibility and centralized control, has managed the complexity and heterogeneity of IoT, but Software Defined IoT (SDIoT) still lacks intelligence. To address this challenge, we propose a model called Deep Routing Strategy (DRS), which uses a deep learning algorithm to perform routing in SDIoT intelligently and efficiently. Our model uses real-time traffic for training and learning. Results demonstrate that the proposed model achieves high accuracy and a low packet loss rate during path selection, outperforms a benchmark routing algorithm (OSPF), and provides encouraging results under highly dynamic traffic flows.
Keywords: SDN, IoT, DL, ML, DRS
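The "traditional" metric-based routing the abstract contrasts with DRS is single-metric shortest path, OSPF-style. A minimal Dijkstra sketch on a toy switch topology (the topology and names are assumptions for illustration, not the paper's testbed):

```python
import heapq

def dijkstra(graph, src, dst):
    """Return (cost, path) minimizing the sum of additive link metrics."""
    pq = [(0, src, [src])]   # (cost so far, current node, path taken)
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical four-switch topology with link weights (e.g. OSPF costs)
topology = {
    "s1": {"s2": 1, "s3": 4},
    "s2": {"s3": 1, "s4": 5},
    "s3": {"s4": 1},
}
print(dijkstra(topology, "s1", "s4"))  # (3, ['s1', 's2', 's3', 's4'])
```

This static min-metric choice is exactly what cannot adapt to dynamic traffic, which is the gap a learned routing policy is meant to fill.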
Procedia PDF Downloads 113
6622 Effect of Modulation Factors on Tomotherapy Plans and Their Quality Analysis
Authors: Asawari Alok Pawaskar
Abstract:
This study investigated the quality assurance (QA) discrepancies observed with an IBA detector array for helical tomotherapy plans. A selection of tomotherapy plans that initially failed the array-based QA process was chosen for this investigation. These plans failed the fluence analysis as assessed using gamma criteria (3%, 3 mm). Each of these plans was modified (keeping the planning constraints the same), and the beamlets were rebatched and reoptimized. By increasing and decreasing the modulation factor, the fluence in a circumferential plane, as measured with a diode array, was assessed. A subset of these plans was investigated using varied pitch values. The factors examined for each plan were point doses, fluences, leaf opening times, planned leaf sinograms, and uniformity indices. To ensure that the treatment constraints remained the same, the dose-volume histograms (DVHs) of all the modulated plans were compared to the original plan. It was observed that a large increase in the modulation factor did not significantly improve DVH uniformity but reduced the gamma analysis pass rate. It also increased the treatment delivery time by slowing down the gantry rotation speed, which in turn increases the ratio of maximum to mean non-zero leaf open time. Increasing and decreasing the pitch value did not substantially change treatment time, but delivery accuracy was adversely affected. This may be due to many other factors, such as the complexity of the treatment plan and the site. Patient sites included in this study were head and neck, breast, and abdomen. The impact of leaf timing inaccuracies on plans was greater with higher modulation factors. Point-dose measurements were less susceptible to changes in pitch and modulation factors. Choosing the initial modulation factor for the optimizer such that the TPS-generated 'actual' modulation factor fell within the range of 1.4 to 2.5 resulted in an improved deliverable plan.
Keywords: dose volume histogram, modulation factor, IBA matrix, tomotherapy
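The gamma criteria (3%, 3 mm) used for the fluence analysis can be illustrated with a simplified 1-D gamma calculation. Real QA software works on 2-D detector arrays with interpolation; the dose profiles below are made-up illustration values, not measured data:

```python
import math

def gamma_1d(ref, meas, positions, dose_tol=0.03, dist_tol=3.0):
    """Per-point gamma index for 1-D profiles; a point passes if gamma <= 1.
    dose_tol is the fractional dose criterion (3%), dist_tol the DTA in mm."""
    gammas = []
    for x_m, d_m in zip(positions, meas):
        best = math.inf
        for x_r, d_r in zip(positions, ref):
            dd = (d_m - d_r) / (dose_tol * max(ref))  # dose-difference term
            dr = (x_m - x_r) / dist_tol               # distance-to-agreement term
            best = min(best, math.hypot(dd, dr))
        gammas.append(best)
    return gammas

pos = [0.0, 1.0, 2.0, 3.0, 4.0]            # detector positions, mm
planned  = [1.00, 0.95, 0.80, 0.60, 0.40]  # relative dose
measured = [1.01, 0.94, 0.82, 0.61, 0.39]
g = gamma_1d(planned, measured, pos)
pass_rate = sum(v <= 1.0 for v in g) / len(g)
print(pass_rate)  # 1.0: all points pass 3%/3 mm here
```

A plan "fails the fluence analysis" when this pass rate drops below the clinic's threshold; the study's observation is that overly high modulation factors push it down.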
Procedia PDF Downloads 178
6621 Volatility Index, Fear Sentiment and Cross-Section of Stock Returns: Indian Evidence
Authors: Pratap Chandra Pati, Prabina Rajib, Parama Barai
Abstract:
Traditional finance theory neglects the role of sentiment in asset pricing. However, the behavioral approach to asset pricing, based on the noise trader model and limits to arbitrage, includes investor sentiment as a priced risk factor in the asset pricing model. Investor sentiment more strongly affects stocks that are vulnerable to speculation, hard to value, and risky to arbitrage: small stocks, high-volatility stocks, growth stocks, distressed stocks, young stocks and non-dividend-paying stocks. Since its introduction in 1993, the Chicago Board Options Exchange (CBOE) volatility index (VIX) has been used as a measure of expected future volatility in the stock market and also as a measure of investor sentiment. The CBOE VIX index, in particular, is often referred to as the 'investors' fear gauge' by the public media and prior literature. Upward spikes in the volatility index are associated with bouts of market turmoil and uncertainty. High levels of the volatility index indicate fear, anxiety and pessimistic expectations of investors about the stock market; low levels, on the contrary, reflect a confident and optimistic attitude. Based on the above discussion, we investigate whether the market-wide fear level, measured by the volatility index, is a priced factor in the standard asset pricing model for the Indian stock market. First, we investigate the performance and validity of the Fama and French three-factor model and the Carhart four-factor model in the Indian stock market. Second, we explore whether the India volatility index, as a proxy for fear-based market sentiment, affects the cross-section of stock returns after controlling for well-established risk factors such as market excess return, size, book-to-market, and momentum. Asset pricing tests are performed using monthly data on CNX 500 index constituent stocks listed on the National Stock Exchange of India Limited (NSE) over a sample period extending from January 2008 to March 2017.
To examine whether the India volatility index, as an indicator of fear sentiment, is a priced risk factor, the change in India VIX is included as an explanatory variable in the Fama-French three-factor model as well as the Carhart four-factor model. For the empirical testing, we use three different sets of test portfolios as the dependent variables in the asset pricing regressions. The first portfolio set is a 4x4 sort on size and B/M ratio. The second is a 4x4 sort on size and the sensitivity beta of the change in IVIX. The third is a 2x3x2 independent triple sort on size, B/M and the sensitivity beta of the change in IVIX. We find evidence that size, value and momentum factors continue to exist in the Indian stock market. However, the VIX index does not constitute a priced risk factor in the cross-section of returns. The inseparability of volatility and jump risk in the VIX is a possible explanation of this finding.
Keywords: India VIX, Fama-French model, Carhart four-factor model, asset pricing
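The testing approach above, augmenting a Fama-French three-factor regression with the change in the volatility index and examining its loading, can be sketched as follows. The data here are simulated; the factor names mirror the abstract, not actual NSE series:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 240                                    # months of simulated data
mkt, smb, hml = rng.normal(0.0, 0.05, (3, T))   # FF factors
d_vix = rng.normal(0.0, 0.08, T)                # change in IVIX
# Simulated portfolio return: loads on the three FF factors,
# zero true loading on the sentiment factor (the "not priced" case)
ret = 0.002 + 1.1 * mkt + 0.4 * smb + 0.3 * hml + rng.normal(0.0, 0.01, T)

# OLS via least squares: [alpha, b_mkt, b_smb, b_hml, b_dvix]
X = np.column_stack([np.ones(T), mkt, smb, hml, d_vix])
betas, *_ = np.linalg.lstsq(X, ret, rcond=None)
print(abs(betas[4]) < 0.1)  # True: the Delta-IVIX loading is near zero
```

In the actual study this regression is run portfolio by portfolio, and the significance of the ΔIVIX loadings across portfolios decides whether the fear factor is priced.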
Procedia PDF Downloads 254
6620 Numerical and Comparative Analysis between Two Composite Plates Notched in Different Shapes and Repaired by Composite
Authors: Amari Khaoula, Berrahou Mohamed
Abstract:
Our article presents a numerical comparative analysis of two notched boron/epoxy plates, one with a U-shaped notch and the other with a V-shaped notch, each cracked and repaired by a rectangular patch of the same composite material. The finite element method was used for the study and for comparing the results obtained, in order to determine the optimal notch shape, i.e. the one that gives the repair a longer life. In this context, we studied the variation of the stress intensity factor, the evolution of the damaged area, the ratio of the damaged area as a function of crack length, and the concentration of the von Mises stresses as a function of path length. According to the results obtained, we conclude that the U-notched plate is better than the V-notched plate, because it shows lower values of the stress intensity factor (SIF), the damaged area ratio (Dr), and the von Mises stresses.
Keywords: U-notch, V-notch, finite element method (FEM), comparison, rectangular patch, composite, stress intensity factor, damaged area ratio, von Mises stresses
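The crack-severity quantity compared in the study is the stress intensity factor. The paper obtains it numerically by FEM; as a hedged illustration only, the closed-form mode-I SIF for a center crack in an infinite plate is K_I = Y·σ·√(πa), where the geometry factor Y and all numbers below are assumptions:

```python
import math

def sif_mode_i(sigma_mpa, a_m, geometry_factor=1.0):
    """Mode-I stress intensity factor, MPa*sqrt(m):
    K_I = Y * sigma * sqrt(pi * a) for remote stress sigma, half crack length a."""
    return geometry_factor * sigma_mpa * math.sqrt(math.pi * a_m)

# A lower SIF at the same load and crack length means a less severe stress
# concentration, the basis on which the U-notch is judged better than the V-notch.
k_u = sif_mode_i(100.0, 0.005)        # hypothetical U-notched case, Y = 1.0
k_v = sif_mode_i(100.0, 0.005, 1.15)  # hypothetical sharper V-notch, higher Y
print(k_u < k_v)  # True
```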
Procedia PDF Downloads 100
6619 The Prototype of Solar Energy Utilization for Finding Sustainable Conditions in the Future: The Solar Community with 4000 Dwellers, 960 Families, Equal to 480 Solar Dwelling Houses and 32 Mansion Buildings (480 Dwellers)
Authors: Kunihisa Kakumoto
Abstract:
This technical paper presents a prototype of solar energy utilization for identifying sustainable conditions. The model has been simulated under the climate conditions of Japan. At the beginning of the study, a solar model house was built on site, and the relevant data were collected in this model house for several years. On the basis of these data, the concept of the solar community was developed. To identify sustainable conditions, the following quantities were calculated: the amount of solar energy generated and the resulting reduction of carbon dioxide; the reduction of carbon dioxide by green planting; the amount of carbon dioxide emitted by normal daily life in the solar community; the amount of water necessary for daily life in the solar community; and the amount of water supplied by rainfall on site. All these values were taken into consideration, and the relations between the calculated results are expressed as inequalities. This solar community and its analysis of sustainable conditions can serve as a prototype for feasibility studies of our life in the future.
Keywords: carbon dioxide, green planting, smart city, solar community, sustainable condition, water activity
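The inequalities alluded to above can be sketched schematically: CO2 avoided by solar generation plus CO2 absorbed by green planting must cover the CO2 emitted by daily life, and on-site rainfall must cover the water demand. All annual figures below are placeholders, not the paper's measured values:

```python
def is_sustainable(co2_avoided_solar, co2_absorbed_green, co2_emitted_daily,
                   water_from_rainfall, water_needed):
    """Schematic sustainability check for the community (annual totals)."""
    co2_ok = co2_avoided_solar + co2_absorbed_green >= co2_emitted_daily
    water_ok = water_from_rainfall >= water_needed
    return co2_ok and water_ok

# Hypothetical community totals (tonnes CO2/year, m^3 water/year)
print(is_sustainable(co2_avoided_solar=3000, co2_absorbed_green=500,
                     co2_emitted_daily=3200,
                     water_from_rainfall=220_000, water_needed=200_000))  # True
```

Feasibility then amounts to checking whether the simulated community's actual totals satisfy both inequalities.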
Procedia PDF Downloads 288
6618 Core Stability Index for Healthy Young Sri Lankan Population
Authors: V. M. B. K. T. Malwanage, S. Samita
Abstract:
Core stability is one of the major determinants contributing to preventing injuries, enhancing performance, and improving quality of life. Endurance of the four major muscle groups of the central 'core' of the human body is identified as the most reliable determinant of core stability among the numerous factors that contribute to it. This study aimed to develop a 'Core Stability Index' that confers a single value for an individual's core stability based on the four endurance test scores. Since it is possible that at least some of the test scores are not independent, the possibility of constructing a single index using the multivariate method of exploratory factor analysis was investigated. The study sample consisted of 400 healthy young individuals with a mean age of 23.74 ± 1.51 years and a mean BMI (Body Mass Index) of 21.1 ± 4.18. The correlation analysis revealed highly significant (P < 0.0001) correlations between test scores, justifying the construction of an index from these highly inter-related scores using factor analysis. The mean values of all test scores were significantly different between males and females (P < 0.0001), and therefore two separate core stability indices were constructed for the two gender groups. Moreover, eigenvalues of 3.103 for males and 2.305 for females indicated that one factor underlies all four test scores, and thus a single-factor index was constructed. The 95% reference intervals constructed from the index scores were -1.64 to 2.00 for males and -1.56 to 2.29 for females. These intervals can effectively be used to diagnose those who need improvement in core stability. With a single-value measure, practitioners should be more consistent among themselves.
Keywords: construction of indices, endurance test scores, muscle endurance, quality of life
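A sketch of building a single-factor index from four correlated endurance scores. The paper uses exploratory factor analysis; here, as an assumption, the one-factor solution is approximated by the first principal component of the standardized scores, on simulated data rather than the Sri Lankan sample:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400
common = rng.normal(0.0, 1.0, n)                 # shared "core endurance" factor
# Four test scores = common factor plus test-specific noise
scores = np.column_stack([common + rng.normal(0.0, 0.5, n) for _ in range(4)])

z = (scores - scores.mean(axis=0)) / scores.std(axis=0)   # standardize
cov = np.cov(z, rowvar=False)                    # ~correlation matrix
eigvals, eigvecs = np.linalg.eigh(cov)           # ascending eigenvalues
index = z @ eigvecs[:, -1]                       # first principal component

print(eigvals[-1] > 2.0)                 # True: one dominant factor, as in the paper
print(abs(float(index.mean())) < 1e-9)   # True: the index is centered at zero
```

Reference intervals like those reported (e.g. -1.64 to 2.00) would then be the empirical 2.5th and 97.5th percentiles of such an index.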
Procedia PDF Downloads 164
6617 Locally Produced Solid Biofuels - Carbon Dioxide Emissions and Competitiveness with Conventional Ways of Individual Space Heating
Authors: Jiri Beranovsky, Jaroslav Knapek, Tomas Kralik, Kamila Vavrova
Abstract:
The paper presents the results of research focused on the complex aspects of using intentionally grown biomass on agricultural land for the production of solid biofuels as an alternative for individual household heating. The study primarily analyzes the CO2 emissions of the biomass logistics cycle for the production of energy pellets; growing, harvesting, transport and storage are evaluated in the pellet production cycle. The aim is also to take into account the consumption profile over the year for the heating of common family houses, which are a typical end-market segment for these fuels. It is assumed that in family houses biopellets are able to substitute typical fossil fuels, such as brown coal burned in old heating devices, and also electric boilers; heat pumps are one of the competing technologies. The results show the CO2 emissions related to the considered fuels and the technologies for their utilization. The comparative analysis covers biopellets from intentionally grown biomass, brown coal, natural gas, and electricity used in electric boilers and heat pumps, and combines the CO2 emissions of each fuel with the costs of its utilization. The cost of biopellets from intentionally grown biomass is derived from economic models of individual energy crop plantations. At the same time, the restrictions imposed by EU legislation through the Ecodesign requirements on fuels, combustion equipment and NOx emissions are discussed. Preliminary results of the analyses show that to achieve competitiveness of pellets produced from intentionally grown biomass, it would be necessary either to raise the ecological tax on coal significantly (from about 0.3 to 3-3.5 EUR/GJ) or to multiply the agricultural subsidy per unit area. In addition to the Czech Republic, the results are also relevant for other countries, such as Bulgaria and Poland, which likewise have a high proportion of solid fuels in household heating.
Keywords: CO2 emissions, heating costs, energy crop, pellets, brown coal, heat pumps, economic evaluation
Procedia PDF Downloads 114