Search results for: Effective dose
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2706


336 The Direct and Indirect Effects of the Achievement Motivation on Nurturing Intellectual Giftedness

Authors: Al-Shabatat, M. Ahmad, Abbas, M., Ismail, H. Nizam

Abstract:

Achievement motivation is believed to promote giftedness, attracting investment in programs that support gifted students by providing them with challenging activities. Intellectual giftedness is founded on fluid intelligence and extends to more specific abilities through growth and inputs from achievement motivation. Acknowledging the roles played by motivation in the development of giftedness leads to more effective nurturing of gifted individuals. However, no study has investigated the direct and indirect effects of achievement motivation and fluid intelligence on intellectual giftedness. Thus, this study investigated the contribution of motivational factors to giftedness development by administering the Cattell Culture Fair Test (CCFT) for fluid intelligence, culture-reduced test items covering problem solving, pattern recognition, audio-logic, audio-matrices, and artificial language for analytical abilities, and a self-report questionnaire for the motivational factors. A total of 180 high-scoring students were selected using the CCFT from a leading university in Malaysia. Structural equation modeling was employed using Amos V.16 to determine the direct and indirect effects of achievement motivation factors (self-confidence, success, perseverance, competition, autonomy, responsibility, ambition, and locus of control) on intellectual giftedness. The findings showed that the hypothesized model fitted the data, supporting the model's postulates, and revealed significant, strong direct and indirect effects of motivation and fluid intelligence on intellectual giftedness.

Keywords: Achievement motivation, Intellectual Giftedness, Fluid Intelligence, Analytical Giftedness, CCFT, Structural Equation Modeling.

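In a structural model like the one above, an indirect effect is the product of the path coefficients along the mediated route, and the total effect adds the direct path. A minimal sketch with hypothetical coefficients (not the study's actual Amos estimates):

```python
# Direct, indirect, and total effects in a simple mediation path model:
# motivation -> fluid intelligence -> giftedness, plus a direct path.
# All coefficients below are hypothetical, for illustration only.

def effects(a, b, c):
    """a: motivation -> fluid intelligence path coefficient
       b: fluid intelligence -> giftedness path coefficient
       c: motivation -> giftedness direct path coefficient"""
    indirect = a * b          # mediated (indirect) effect
    total = c + indirect      # total effect = direct + indirect
    return indirect, total

indirect, total = effects(a=0.5, b=0.6, c=0.4)
print(indirect, total)  # 0.3 and 0.7
```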
335 Multi-Agent Systems Applied in the Modeling and Simulation of Biological Problems: A Case Study in Protein Folding

Authors: Pedro Pablo González Pérez, Hiram I. Beltrán, Arturo Rojo-Domínguez, Máximo Eduardo Sánchez Gutiérrez

Abstract:

The multi-agent system approach has proven to be an effective and appropriate abstraction level for constructing whole models of a variety of biological problems, integrating aspects found in both "micro" and "macro" approaches to modeling such phenomena. With these considerations in mind, this paper presents the key computational characteristics to be gathered into a novel bioinformatics framework built upon a multi-agent architecture. The version of the tool presented herein allows studying and exploring complex problems belonging principally to structural biology, such as protein folding. The bioinformatics framework is used as a virtual laboratory to explore a minimalist model of protein folding as a test case. To demonstrate the laboratory concept of the platform, as well as its flexibility and adaptability, we studied the folding of two particular sequences, one 45-mer and one 64-mer, both described by an HP model (only hydrophobic and polar residues) on a coarse-grained 2D square lattice. As discussed later in this work, these two sequences were chosen as stress tests for the platform, to determine which tools had to be created or improved to meet the computational and analytical demands of a given difficult sequence. The underlying philosophy is that the continuous study of sequences itself yields important features to be added to the platform, continually improving its efficiency, as demonstrated herein.

Keywords: multi-agent systems, blackboard-based agent architecture, bioinformatics framework, virtual laboratory, protein folding.

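The HP model the abstract refers to scores a fold by its hydrophobic contacts: each pair of H residues adjacent on the lattice but not along the chain contributes -1 to the energy. A minimal sketch on a toy 4-mer (not the 45-mer or 64-mer studied):

```python
# Minimal HP-model energy on a 2D square lattice: each non-bonded
# H-H contact (adjacent on the lattice but not consecutive in the chain)
# contributes -1. The conformation is given as one lattice coordinate
# per residue. Toy example only, not the sequences studied in the paper.

def hp_energy(sequence, coords):
    pos = {c: i for i, c in enumerate(coords)}
    energy = 0
    for i, (x, y) in enumerate(coords):
        if sequence[i] != 'H':
            continue
        for nb in ((x + 1, y), (x, y + 1)):   # right/up only: count each edge once
            j = pos.get(nb)
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                energy -= 1
    return energy

# A 4-residue 'HHHH' chain folded into a unit square: three chain bonds
# plus one topological H-H contact between residues 0 and 3.
seq = "HHHH"
fold = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(hp_energy(seq, fold))  # -1
```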
334 Emotional Intelligence as Predictor of Academic Success among Third Year College Students of PIT

Authors: Sonia Arradaza-Pajaron

Abstract:

College students are expected to engage in on-the-job training (OJT) or an internship to complete a course requirement prior to graduation. In this scenario, they are exposed to the real world of work outside their training institution. This study was conducted to find out how ready they are, both emotionally and academically. A descriptive-correlational research design was employed, and a random sampling technique was used to select 265 third-year college students of PIT, SY 2014-15. A questionnaire on Emotional Intelligence (covering four components, namely emotional literacy, emotional quotient competence, values and beliefs, and emotional quotient outcomes) was fielded to the respondents, and their general weighted average (GWA) was extracted from the school's automated records. Data collected were statistically treated using percentages, weighted means, and the Pearson r for correlation.

Results revealed that the respondents' emotional intelligence level is moderately high, while their academic performance is good. A highly significant relationship was found between the EI component emotional literacy and academic performance, while only a significant relationship was found between emotional quotient outcomes and academic performance. Since EI correlates significantly with academic performance, their OJT performance may also be affected, either positively or negatively. Thus, EI can be considered a predictor of academic and academic-related performance. Based on the results, it is recommended that the institution consider embedding emotional intelligence (especially emotional literacy and emotional quotient outcomes) into the college curriculum. This can be done if the school establishes an effective emotional intelligence framework or program, implemented across its colleges by qualified and competent teachers and guidance counselors.

Keywords: Academic performance, emotional intelligence, emotional literacy, emotional quotient competence, emotional quotient outcomes, values and beliefs.

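The Pearson r used for correlation above can be sketched directly from its definition; the data here are hypothetical, not the respondents' actual EI scores or GWA:

```python
# Pearson product-moment correlation, computed from its definition:
# r = cov(x, y) / (std(x) * std(y)). Toy data for illustration only.
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

ei = [70, 75, 80, 85, 90]        # hypothetical EI scores
gwa = [2.8, 2.5, 2.3, 2.0, 1.8]  # hypothetical GWA (lower = better grade)
print(round(pearson_r(ei, gwa), 3))  # strong negative correlation
```

Note the sign convention: in grading systems where a lower GWA means better performance, a strong negative r between EI and GWA indicates that higher EI accompanies better grades.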
333 A New Approach In Protein Folding Studies Revealed The Potential Site For Nucleation Center

Authors: Nurul Bahiyah Ahmad Khairudin, Habibah A Wahab

Abstract:

A new approach to predicting the 3D structures of proteins, combining a knowledge-based method and molecular dynamics simulation, is presented for the chicken villin headpiece subdomain (HP-36). Comparative modeling is employed as the knowledge-based method to predict the core region (Ala9-Asn28) of the protein, while the remaining residues (Met1-Lys8; Leu29-Phe36) are built as extended regions and then further refined using molecular dynamics simulation for 120 ns. Since the core region is built on high sequence identity to the template (65%), resulting in an RMSD of 1.39 Å from the native structure, it is believed that this well-developed core region can act as a 'nucleation center' for subsequent rapid downhill folding. Results also demonstrate that the formation of non-native contacts, which tend to hamper the folding rate, can be avoided. The best 3D model, exhibiting most of the native characteristics, is identified using a clustering method and then ranked based on conformational free energies. The backbone RMSD of the best model relative to the NMR-MDavg is 1.01 Å for the core region and 3.53 Å for the complete protein. In addition, the conformational free energy of the best model is lower by 5.85 kcal/mol compared to the NMR-MDavg. This structure prediction protocol is shown to be effective in predicting the 3D structure of small globular proteins with considerable accuracy in much less time than conventional molecular dynamics simulation alone.

Keywords: 3D model, Chicken villin headpiece subdomain, Molecular dynamics simulation, NMR-MDavg, RMSD.

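The backbone RMSD figures quoted above come from the usual root-mean-square deviation over matched atoms. A minimal sketch for two already-superimposed coordinate sets (toy coordinates; a real comparison would first apply an optimal superposition such as the Kabsch algorithm):

```python
# Root-mean-square deviation between two matched, already-superimposed
# 3D coordinate sets. Toy coordinates, not HP-36 data; real structure
# comparisons first optimally superimpose the two structures.
from math import sqrt

def rmsd(coords_a, coords_b):
    n = len(coords_a)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return sqrt(sq / n)

model  = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.0, 0.0)]
native = [(0.0, 0.0, 0.0), (1.5, 0.5, 0.0), (3.0, 1.0, 0.0)]
print(round(rmsd(model, native), 3))  # 0.645
```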
332 Dynamic Analysis of Porous Media Using Finite Element Method

Authors: M. Pasbani Khiavi, A. R. M. Gharabaghi, K. Abedi

Abstract:

The mechanical behavior of porous media is governed by the interaction between its solid skeleton and the fluid existing inside its pores. The interaction occurs through the interface of grains and fluid. Traditional analysis methods for porous media, based on effective stress and Darcy's law, are unable to account for these interactions. For an accurate analysis, the porous medium is represented as a fluid-filled porous solid on the basis of the Biot theory of wave propagation in poroelastic media. In the Biot formulation, the equations of motion of the soil mixture are coupled with the global mass balance equations to describe the realistic behavior of porous media. Because of irregular geometry, the domain is generally treated as an assemblage of finite elements. In this investigation, the numerical formulation for the field equations governing the dynamic response of fluid-saturated porous media is analyzed and employed for the study of transient wave motion. A finite element model is developed and implemented in a computer code called DYNAPM for dynamic analysis of porous media. The weighted residual method with 8-node elements is used to develop the finite element model, and the analysis is carried out in the time domain considering dynamic excitation and gravity loading. The Newmark time integration scheme, an unconditionally stable implicit method, is used to solve the time-discretized equations. Finally, some numerical examples are presented to show the accuracy and capability of the developed model for a wide variety of porous media behaviors.

Keywords: Dynamic analysis, Interaction, Porous media, time domain

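The unconditionally stable Newmark scheme mentioned above (average acceleration, beta = 1/4, gamma = 1/2) can be sketched on a single-degree-of-freedom system; the paper applies the same recurrence to the assembled finite element matrices:

```python
# Newmark-beta time integration (average acceleration: beta=1/4, gamma=1/2,
# unconditionally stable) for a single-degree-of-freedom system
# m*u'' + c*u' + k*u = f(t). Standard displacement-form recurrence.

def newmark_sdof(m, c, k, f, u0, v0, dt, steps, beta=0.25, gamma=0.5):
    u, v = u0, v0
    a = (f(0.0) - c * v - k * u) / m          # initial acceleration
    hist = [u]
    keff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
    for n in range(1, steps + 1):
        t = n * dt
        rhs = (f(t)
               + m * (u / (beta * dt ** 2) + v / (beta * dt)
                      + (1 / (2 * beta) - 1) * a)
               + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        u_new = rhs / keff
        a_new = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
        v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        hist.append(u)
    return hist

# Undamped free vibration with m = k = 1 (natural frequency 1 rad/s):
# the exact solution is u(t) = cos(t), so after ~one period u is near 1.
hist = newmark_sdof(m=1.0, c=0.0, k=1.0, f=lambda t: 0.0,
                    u0=1.0, v0=0.0, dt=0.01, steps=628)
print(round(hist[-1], 2))  # close to 1.0
```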
331 Investigation of Effective Parameters on Pullout Capacity in Soil Nailing with Special Attention to International Design Codes

Authors: R. Ziaie Moayed, M. Mortezaee

Abstract:

An important and influential factor in design and in determining the safety factor in soil nailing is the ultimate pullout capacity, or, in other words, bond strength. This important parameter depends on several factors such as material and soil texture, method of implementation, excavation diameter, friction angle between the nail and the soil, grouting pressure, nail depth (overburden pressure), drilling angle, and degree of saturation in the soil. The Federal Highway Administration (FHWA), a customary reference in the design of nailing, considers only the effect of soil (or rock) type and method of implementation in determining bond strength, which results in uneconomical designs. Other regulations each neglect some of the parameters affecting bond resistance. Therefore, in the present paper, the relationships and tables presented by several valid regulations for estimating the ultimate pullout capacity are presented first, and then the effects of several important factors on the ultimate pullout capacity are studied. It was determined that the effects of overburden pressure (in pressure-grouting methods), soil dilatation, and drilling-surface roughness on pullout strength are incremental, while the effect of the degree of soil saturation on pullout strength is increasing up to a certain degree of saturation and decreasing thereafter. It is therefore better to draw on nail pullout-strength test results and numerical modeling to evaluate the effects of parameters such as overburden pressure, dilatation, and degree of soil saturation in order to reach an optimal and economical design.

Keywords: Soil nailing, pullout capacity, FHWA, grout.

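A commonly used first estimate of the ultimate pullout capacity treats the bond strength as uniform over the grouted surface, P = pi * D * L * q_u. This simplified relation deliberately omits the overburden, dilatation, and saturation effects the abstract argues for, and the values below are illustrative only:

```python
# Simplified nail pullout capacity: P = pi * D * L * q_u, with
# drill-hole diameter D (m), bond length L (m), and ultimate bond
# strength q_u (kPa), giving P in kN. Illustrative values only;
# this neglects overburden, dilatation, and saturation effects.
from math import pi

def pullout_capacity(d_hole_m, bond_length_m, bond_strength_kpa):
    return pi * d_hole_m * bond_length_m * bond_strength_kpa  # kN

P = pullout_capacity(d_hole_m=0.1, bond_length_m=6.0, bond_strength_kpa=100.0)
print(round(P, 1))  # 188.5 kN
```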
330 Rigorous Modeling of Fixed-Bed Reactors Containing Finite Hollow Cylindrical Catalyst with Michaelis-Menten Type of Kinetics

Authors: Mohammad Asif

Abstract:

A large number of chemical, biochemical, and pollution-control processes use heterogeneous fixed-bed reactors. The use of finite hollow cylindrical catalyst pellets can enhance conversion levels in such reactors. The absence of the pellet core can significantly lower the diffusional resistance associated with the solid phase. This leads to better utilization of the catalytic material, which is reflected in higher values of the effectiveness factor, ultimately leading to an enhanced conversion level in the reactor. It is, however, important to develop a rigorous heterogeneous model for the reactor that incorporates the two-dimensional nature of the solid phase arising from the finite hollow cylindrical catalyst pellet. Presently, heterogeneous models reported in the literature invariably employ one-dimensional solid-phase models meant for spherical catalyst pellets. The objective of this paper is to present a rigorous model of fixed-bed reactors containing finite hollow cylindrical catalyst pellets. The reaction kinetics considered here is the widely used Michaelis-Menten kinetics for liquid-phase biochemical reactions, with reaction parameters taken from the enzymatic degradation of urea. Results indicate that increasing the height-to-diameter ratio helps to improve the conversion level. On the other hand, decreasing the thickness is apparently not as effective. This can be explained in terms of the higher void fraction of the bed, which causes a smaller amount of solid phase to be packed in the fixed-bed biochemical reactor.

Keywords: Fixed-bed reactor, Finite hollow cylinder, Catalyst pellet, Conversion, Michaelis-Menten kinetics.

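The Michaelis-Menten rate law assumed for the liquid-phase reaction is v = Vmax*S/(Km + S); a minimal sketch with illustrative parameters (not the urea-degradation values used in the paper):

```python
# Michaelis-Menten kinetics: v = Vmax * S / (Km + S). The rate is
# first-order in S at low concentration and saturates at Vmax;
# at S = Km the rate is exactly Vmax / 2. Parameters are illustrative.

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

vmax, km = 2.0, 0.5
for s in (0.1, 0.5, 5.0):
    print(round(michaelis_menten(s, vmax, km), 3))
# at S = Km = 0.5 the printed rate is 1.0, i.e. Vmax / 2
```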
329 Detecting Fake News: A Natural Language Processing, Reinforcement Learning, and Blockchain Approach

Authors: Ashly Joseph, Jithu Paulose

Abstract:

In an era where misleading information may quickly circulate on digital news channels, it is crucial to have efficient and trustworthy methods to detect and reduce the impact of misinformation. This research proposes an innovative framework that combines Natural Language Processing (NLP), Reinforcement Learning (RL), and blockchain technologies to precisely detect and minimize the spread of false information in news articles on social media. The framework starts by gathering a variety of news items from different social media sites and preprocessing the data to ensure its quality and uniformity. NLP methods are utilized to extract comprehensive linguistic and semantic characteristics, effectively capturing the subtleties and contextual aspects of the language used. These features serve as input for an RL model, which learns the most effective tactics for detecting and mitigating the impact of false material by modeling the intricate dynamics of user engagements and incentives on social media platforms. The integration of blockchain technology establishes a decentralized and transparent method for storing and verifying the accuracy of information. The blockchain component guarantees the immutability and safety of verified news records, while encouraging user engagement in detecting and fighting false information through a token-based incentive system. The proposed framework seeks to provide a thorough and resilient solution to the problems presented by misinformation in social media articles.

Keywords: Natural Language Processing, Reinforcement Learning, Blockchain, fake news mitigation, misinformation detection.

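The immutability property claimed for the blockchain component rests on hash chaining: each record stores the hash of its predecessor, so any tampering breaks verification. A minimal sketch (the paper's token incentive design is not modeled):

```python
# Minimal hash-chained ledger of verified news records: each block stores
# the SHA-256 hash of its predecessor, so modifying any earlier record
# invalidates the chain. Illustrative sketch only; no consensus, tokens,
# or networking, which a real blockchain deployment would require.
import hashlib
import json

def add_block(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps({"record": record, "prev": prev_hash},
                   sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    prev = "0" * 64
    for block in chain:
        h = hashlib.sha256(
            json.dumps({"record": block["record"], "prev": block["prev"]},
                       sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != h:
            return False
        prev = block["hash"]
    return True

chain = []
add_block(chain, {"article": "example-1", "verdict": "verified"})
add_block(chain, {"article": "example-2", "verdict": "false"})
print(verify(chain))                          # True
chain[0]["record"]["verdict"] = "tampered"    # rewrite history
print(verify(chain))                          # False
```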
328 Issues in Spectral Source Separation Techniques for Plant-wide Oscillation Detection and Diagnosis

Authors: A.K. Tangirala, S. Babji

Abstract:

In the last few years, three multivariate spectral analysis techniques namely, Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-negative Matrix Factorization (NMF) have emerged as effective tools for oscillation detection and isolation. While the first method is used in determining the number of oscillatory sources, the latter two methods are used to identify source signatures by formulating the detection problem as a source identification problem in the spectral domain. In this paper, we present a critical drawback of the underlying linear (mixing) model which strongly limits the ability of the associated source separation methods to determine the number of sources and/or identify the physical source signatures. It is shown that the assumed mixing model is only valid if each unit of the process gives equal weighting (all-pass filter) to all oscillatory components in its inputs. This is in contrast to the fact that each unit, in general, acts as a filter with non-uniform frequency response. Thus, the model can only facilitate correct identification of a source with a single frequency component, which is again unrealistic. To overcome this deficiency, an iterative post-processing algorithm that correctly identifies the physical source(s) is developed. An additional issue with the existing methods is that they lack a procedure to pre-screen non-oscillatory/noisy measurements which obscure the identification of oscillatory sources. In this regard, a pre-screening procedure is prescribed based on the notion of sparseness index to eliminate the noisy and non-oscillatory measurements from the data set used for analysis.

Keywords: non-negative matrix factorization, PCA, source separation, plant-wide diagnosis

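The pre-screening idea can be illustrated with one common sparseness measure (Hoyer's index: 0 for a flat spectrum, 1 for a single spike); the paper's exact index may be defined differently, so treat this as an illustrative criterion:

```python
# Hoyer's sparseness measure on a (magnitude) spectrum:
# s(x) = (sqrt(n) - ||x||_1 / ||x||_2) / (sqrt(n) - 1),
# which is 0 for a flat vector and 1 for a single spike. An oscillatory
# measurement concentrates spectral power at few frequencies (high index),
# while noisy/non-oscillatory ones spread it out (low index).
from math import sqrt

def sparseness(x):
    n = len(x)
    l1 = sum(abs(v) for v in x)
    l2 = sqrt(sum(v * v for v in x))
    return (sqrt(n) - l1 / l2) / (sqrt(n) - 1)

oscillatory = [0.0, 0.0, 9.0, 0.0, 0.0]   # single dominant spectral peak
noisy = [1.0, 1.0, 1.0, 1.0, 1.0]          # flat spectrum
print(round(sparseness(oscillatory), 2))   # 1.0
print(round(sparseness(noisy), 2))         # 0.0
```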
327 Windphil Poetic in Architecture: Energy Efficient Strategies in Modern Buildings of Iran

Authors: Sepideh Samadzadehyazdi, Mohammad Javad Khalili, Sarvenaz Samadzadehyazdi, Mohammad Javad Mahdavinejad

Abstract:

The term ‘Windphil Architecture’ refers to buildings that facilitate natural ventilation through architectural elements. Natural ventilation uses the natural forces of wind pressure and the stack effect to direct the movement of air through buildings. It is increasingly being used in contemporary buildings to minimize the consumption of non-renewable energy, and it is an effective way to improve indoor air quality. The main objective of this paper is to identify strategies for using natural ventilation in Iranian modern buildings. In this regard, the research method is descriptive-analytical and based on comparative techniques. FLUENT software has been used to simulate wind flow in the interior spaces of the case studies. The research findings show that it is possible to use natural ventilation to create a thermally comfortable indoor environment. The natural ventilation strategies can be classified into two groups: environmental characteristics, such as public space structure, and architectural characteristics, including building form and orientation, openings, central courtyards, wind catchers, roofs, wall wings, semi-open spaces, and the heat capacity of materials. An investigation of modern buildings in Iran shows that innovative elements like wind catchers and wall wings are used less than in traditional architecture. Instead, passive ventilation strategies have been given more consideration in building design, such as the roof structure and openings.

Keywords: Natural ventilation strategies, wind catchers, wind flow, Iranian modern buildings.

326 Using Dynamic Glazing to Eliminate Mechanical Cooling in Multi-family Highrise Buildings

Authors: Ranojoy Dutta, Adam Barker

Abstract:

Multifamily residential buildings are increasingly being built with large glazed areas to provide tenants with greater daylight and outdoor views. However, traditional double-glazed window assemblies can lead to significant thermal discomfort from high radiant temperatures as well as increased cooling energy use to address solar gains. Dynamic glazing provides an effective solution by actively controlling solar transmission to maintain indoor thermal comfort, without compromising the visual connection to the outdoors. This study uses thermal simulations across three Canadian cities (Toronto, Vancouver, and Montreal) to verify whether dynamic glazing along with operable windows and ceiling fans can maintain the indoor operative temperature of a prototype southwest-facing high-rise apartment unit within the ASHRAE 55 adaptive comfort range for a majority of the year, without any mechanical cooling. Since this study proposes the use of natural ventilation for cooling and the typical building life cycle is 30-40 years, the typical weather files have been modified based on accepted global warming projections for increased air temperatures by 2050. Results for the prototype apartment confirm that thermal discomfort with dynamic glazing occurs for less than 0.7% of the year. In the baseline scenario with low-E glass, however, there are up to 7% annual hours of discomfort despite natural ventilation with operable windows and improved air movement with ceiling fans.

Keywords: Electrochromic, operable windows, thermal comfort, natural ventilation, adaptive comfort.

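The ASHRAE 55 adaptive comfort band used as the pass/fail criterion is T_comf = 0.31*T_out + 17.8 °C, with ±3.5 K for 80% acceptability, where T_out is the prevailing mean outdoor temperature:

```python
# ASHRAE 55 adaptive comfort model for naturally conditioned spaces:
# neutral operative temperature T_comf = 0.31 * T_out + 17.8 (deg C),
# with +/- 3.5 K limits for 80% occupant acceptability, where T_out is
# the prevailing mean outdoor air temperature.

def adaptive_comfort_range(t_out_mean):
    t_comf = 0.31 * t_out_mean + 17.8
    return t_comf - 3.5, t_comf + 3.5   # 80% acceptability limits

low, high = adaptive_comfort_range(25.0)   # a warm summer month
print(round(low, 2), round(high, 2))       # 22.05 29.05
```

An hour counts as uncomfortable when the simulated indoor operative temperature falls outside this band, which is how annual discomfort percentages like those above are tallied.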
325 Bio-Surfactant Production and Its Application in Microbial EOR

Authors: A. Rajesh Kanna, G. Suresh Kumar, Sathyanaryana N. Gummadi

Abstract:

Various sources of energy are available worldwide and, among them, crude oil plays a vital role. Oil recovery is achieved using conventional primary and secondary recovery methods. In order to recover the remaining residual oil, technologies like Enhanced Oil Recovery (EOR), also known as tertiary recovery, are utilized. Among EOR methods, microbial enhanced oil recovery (MEOR) is a technique that improves oil recovery by injection of bio-surfactant produced by microorganisms. Bio-surfactant can retrieve unrecoverable oil from the cap rock that is held by high capillary force. A bio-surfactant is a surface-active agent that can reduce interfacial tension and the viscosity of oil, so that oil can be recovered to the surface as its mobility is increased. Research in this area has shown promising results, and the method is eco-friendly and cost-effective compared with other EOR techniques. In our research, we produced bio-surfactant on a laboratory scale using the strain Pseudomonas putida (MTCC 2467) and injected it into a simple sand-packed column designed to resemble an actual petroleum reservoir. The experiment was conducted to determine the efficiency of the produced bio-surfactant in oil recovery. The column was made of plastic, 10 cm in length and 2.5 cm in diameter, and packed with fine sand. The sand was saturated first with brine and then with oil. Water flooding followed by bio-surfactant injection was carried out to determine the amount of oil recovered. Further, the injected bio-surfactant volume was varied to check how effectively oil recovery could be achieved. A comparative study was also done by injecting Triton X-100, a chemical surfactant. Since the bio-surfactant reduced surface and interfacial tension, oil could be easily recovered from the porous sand-packed column.

Keywords: Bio-surfactant, Bacteria, Interfacial tension, Sand column.

324 A Hybrid Fuzzy AGC in a Competitive Electricity Environment

Authors: H. Shayeghi, A. Jalili

Abstract:

This paper presents a new Hybrid Fuzzy (HF) PID-type controller based on Genetic Algorithms (GAs) for solving the Automatic Generation Control (AGC) problem in a deregulated electricity environment. For a fuzzy rule-based control system to perform well, the fuzzy sets must be carefully designed. A major problem plaguing the effective use of this method is the difficulty of accurately constructing the membership functions, because it is a computationally expensive combinatorial optimization problem. GAs, on the other hand, are a technique that emulates biological evolutionary theory to solve complex optimization problems using directed random searches to derive a set of optimal solutions. For this reason, the membership functions are tuned automatically using a modified GA based on the hill climbing method. The motivation for using the modified GA is to reduce the fuzzy-system design effort and take large parametric uncertainties into account. The proposed method guarantees the global optimum value and greatly improves the speed of the algorithm's convergence. This newly developed control strategy combines the advantages of GAs and fuzzy system control techniques and leads to a flexible controller with a simple structure that is easy to implement. The proposed GA-based HF (GAHF) controller is tested on a three-area deregulated power system under different operating conditions and contract variations. The results of the proposed GAHF controller are compared with those of a Multi Stage Fuzzy (MSF) controller, a robust mixed H2/H∞ controller, and classical PID controllers through several performance indices to illustrate its robust performance over a wide range of system parameters and load changes.

Keywords: AGC, Hybrid Fuzzy Controller, Deregulated Power System, Power System Control, GAs.

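The idea of tuning a membership-function parameter by a hill-climbing-style search can be sketched in a few lines. The paper's modified GA tunes full fuzzy rule bases against a control performance index; the cost function below is a hypothetical stand-in for that index:

```python
# Toy sketch of tuning one membership-function parameter (the center of
# a fuzzy set on [0, 1]) by random perturbation with hill-climbing
# acceptance. The quadratic cost is a hypothetical stand-in for the
# control performance index; the paper's modified GA tunes many
# parameters of full fuzzy rule bases.
import random

def cost(center):
    # stand-in for the AGC performance index; its minimum sits at 0.6
    return (center - 0.6) ** 2

random.seed(0)                     # reproducible run
center = 0.1                       # initial membership-function center
for _ in range(500):
    candidate = min(1.0, max(0.0, center + random.gauss(0.0, 0.05)))
    if cost(candidate) < cost(center):   # hill climbing: keep improvements only
        center = candidate
print(round(center, 2))  # converges near 0.6
```

A GA would maintain a population of such candidates with crossover and mutation instead of a single point, which is what lets it escape local optima in the full combinatorial problem.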
323 Overview Studies of High Strength Self-Consolidating Concrete

Authors: Raya Harkouss, Bilal Hamad

Abstract:

Self-Consolidating Concrete (SCC) is a relatively new technology developed as an effective solution to problems associated with low-quality consolidation. An SCC mix is considered successful if it flows freely and cohesively without mechanical compaction. The construction industry shows a strong tendency to use SCC in many contemporary projects to benefit from the various advantages offered by this technology.

At this point, a main question is raised regarding the effect of the enhanced fluidity of SCC on the structural behavior of high-strength self-consolidating reinforced concrete.

A three-phase research program was conducted at the American University of Beirut (AUB) to address this concern. The first two phases consisted of comparative studies conducted on concrete and mortar mixes prepared with a second-generation sulphonated naphthalene-based superplasticizer (SNF) or a third-generation polycarboxylate ether-based superplasticizer (PCE). The third phase of the research program investigates and compares the structural performance of high-strength reinforced concrete beam specimens prepared with the two different generations of superplasticizers, which formed the unique variable between the concrete mixes. The beams were designed to exhibit flexure, shear, or bond splitting failure.

The outcomes of the experimental work revealed comparable resistance of beam specimens cast using self-consolidating concrete and conventional vibrated concrete. The dissimilarities in the experimental values between the SCC and the control VC beams were minimal, leading to the conclusion that the high consistency of SCC has little effect on the flexural, shear, and bond strengths of concrete members.

Keywords: Self-consolidating concrete (SCC), high-strength concrete, concrete admixtures, mechanical properties of hardened SCC, structural behavior of reinforced concrete beams.

322 Electrophoretic Deposition of p-Type Bi2Te3 for Thermoelectric Applications

Authors: Tahereh Talebi, Reza Ghomashchi, Pejman Talemi, Sima Aminorroaya

Abstract:

Electrophoretic deposition (EPD) of p-type Bi2Te3 material has been accomplished, and a high-quality, crack-free thick film has been achieved for thermoelectric (TE) applications. TE generators (TEGs) can convert waste heat into electricity, which can potentially help address global warming. However, TEGs are expensive due to the high cost of materials as well as the complex and expensive manufacturing process. EPD is a simple and cost-effective method that has recently been used for advanced applications. In EPD, when a DC electric field is applied to charged powder particles suspended in a suspension, they are attracted to and deposited on the substrate with the opposite charge. In this study, it has been shown that it is possible to prepare a TE film using the EPD method and potentially achieve high TE properties at low cost. The relationship between the deposition weight and EPD-related process parameters, such as applied voltage and time, has been investigated, and a linear dependence has been observed, in good agreement with the theoretical principles of EPD. A stable EPD suspension of p-type Bi2Te3 was prepared in an acetone-ethanol mixture with triethanolamine as a stabilizer. To achieve a high-quality homogeneous film on a copper substrate, the optimum voltage and time of the EPD process were investigated. The morphology and microstructure of the green deposited films have been investigated using a scanning electron microscope (SEM). The green Bi2Te3 films have shown good adhesion to the substrate. In summary, this study has shown not only that EPD of p-type Bi2Te3 material is possible, but also that its thick film is of sufficient quality for TE applications.

Keywords: Electrical conductivity, electrophoretic deposition, p-type Bi2Te3, thermoelectric materials, thick films.

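The linear dependence of deposit weight on voltage and time is what Hamaker's relation for EPD predicts, w = f * mu * E * c * A * t. The parameter values below are illustrative, not the measured properties of the Bi2Te3 suspension:

```python
# Hamaker's linear EPD relation: deposited mass w = f * mu * E * c * A * t,
# with sticking efficiency f, electrophoretic mobility mu (m^2/V/s),
# field E (V/m), particle concentration c (kg/m^3), electrode area A (m^2),
# and deposition time t (s). Values below are illustrative only.

def epd_yield(f, mu, E, c, A, t):
    return f * mu * E * c * A * t   # deposited mass in kg

w1 = epd_yield(f=1.0, mu=2e-8, E=1000.0, c=10.0, A=1e-4, t=600.0)
w2 = epd_yield(f=1.0, mu=2e-8, E=2000.0, c=10.0, A=1e-4, t=600.0)
print(w2 / w1)  # doubling the field doubles the deposit weight: 2.0
```

This linearity is why deposit weight can be dialed in via voltage and time, which is the process-optimization knob the abstract describes.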
321 Decision-Making Strategies on Smart Dairy Farms: A Review

Authors: L. Krpalkova, N. O' Mahony, A. Carvalho, S. Campbell, G. Corkery, E. Broderick, J. Walsh

Abstract:

Farm management and operations will change drastically due to access to real-time data, real-time forecasting, and tracking of physical items, in combination with Internet of Things (IoT) developments that further automate farm operations. Dairy farms have embraced technological innovations and procured vast amounts of permanent data streams during the past decade; however, the integration of this information to improve the whole-farm decision-making process does not yet exist. It is now imperative to develop a system that can collect, integrate, manage, and analyze on-farm and off-farm data in real time for practical and relevant environmental and economic actions. The developed systems, based on machine learning and artificial intelligence, need to be connected to produce useful output, a better understanding of the whole farming issue, and its environmental impact. Evolutionary Computing (EC) can be very effective in finding the optimal combination of sets of objects and, ultimately, in strategy determination. The system of the future should be able to manage a dairy farm as well as an experienced dairy farm manager with a team of the best agricultural advisors. All these changes should bring resilience and sustainability to dairy farming, while improving and maintaining good animal welfare and the quality of dairy products. This review aims to provide insight into the state of the art of big data applications and EC in relation to smart dairy farming and to identify the most important research and development challenges to be addressed in the future. Smart dairy farming influences every area of management, and its uptake has become a continuing trend.

Keywords: Big data, evolutionary computing, cloud, precision technologies

320 Rotor Bearing System Analysis Using the Transfer Matrix Method with Thickness Assumption of Disk and Bearing

Authors: Omid Ghasemalizadeh, Mohammad Reza Mirzaee, Hossein Sadeghi, Mohammad Taghi Ahmadian

Abstract:

There are many ways to find the natural frequencies of a rotating system. One of the most effective, owing to its precision, is the transfer matrix method. In this method the entire continuous system is subdivided and the corresponding differential equations are stated in matrix form. To analyze the shaft considered in this paper, the rotor is divided into several elements along the shaft, each with its own mass and moment of inertia, which makes it possible to define the transfer matrix. Choosing a larger number of elements enlarges the matrix and yields more accurate answers. In this paper the dynamics of a rotor-bearing system are analyzed, considering the gyroscopic effect. To increase the accuracy of the model, the thicknesses of the disk and bearings are also taken into account, which leads to a more complicated matrix to be solved. Including these parameters changes the results completely, and these differences are shown in the results. As noted above, defining the transfer matrix that yields the natural frequencies of the system requires introducing elements. For the boundary conditions, the bearings at the ends of the shaft are modeled as equivalent springs and dampers for the discretized system, while a continuous model is used for the shaft. With these considerations and the transfer matrix, exact results are obtained from the calculations. Results show that increasing the bearing thickness decreases the vibration amplitude, while the stiffness of the shaft and the natural frequencies of the system grow. Consequently, ignoring the influence of the bearing and disk thicknesses would yield unrealistic answers.
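The core of the transfer matrix method, propagating a state vector through alternating field (elastic) and point (inertia) matrices and searching for the frequencies at which the boundary residual vanishes, can be illustrated with a minimal sketch. The example below uses a fixed-free chain of springs and point masses with a 2x2 state vector, not the 4x4 gyroscopic rotor-bearing case of the paper; the function names, scan range and bisection depth are illustrative choices, not from the paper.

```python
import numpy as np

def end_force(omega, masses, stiffs):
    """Propagate the state [u, N] from the fixed end through alternating
    spring (field) and point-mass (inertia) transfer matrices; return the
    internal force N at the free end, which must vanish at a natural frequency."""
    z = np.array([0.0, 1.0])  # fixed end: zero displacement, unit internal force
    for k, m in zip(stiffs, masses):
        z = np.array([[1.0, 1.0 / k], [0.0, 1.0]]) @ z        # spring field matrix
        z = np.array([[1.0, 0.0], [-m * omega**2, 1.0]]) @ z  # point-mass matrix
    return z[1]

def natural_freqs(masses, stiffs, w_max=5.0, n_scan=2000):
    """Scan omega for sign changes of the boundary residual, then bisect."""
    ws = np.linspace(1e-6, w_max, n_scan)
    res = [end_force(w, masses, stiffs) for w in ws]
    roots = []
    for a, b, ra, rb in zip(ws, ws[1:], res, res[1:]):
        if ra * rb < 0:  # bracketed root -> refine by bisection
            for _ in range(60):
                mid = 0.5 * (a + b)
                fm = end_force(mid, masses, stiffs)
                if fm * ra < 0:
                    b = mid
                else:
                    a, ra = mid, fm
            roots.append(0.5 * (a + b))
    return roots
```

For a two-mass chain with unit masses and stiffnesses, this recovers the known eigenfrequencies sqrt((3 -/+ sqrt(5))/2), about 0.618 and 1.618 rad/s, which checks the matrix product against the direct eigenvalue solution.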

Keywords: Rotor System, Disk and Bearing Thickness, Transfer Matrix, Amplitude.

319 A Study on the Effectiveness of Alternative Commercial Ventilation Inlets That Improve Energy Efficiency of Building Ventilation Systems

Authors: Brian Considine, Aonghus McNabola, John Gallagher, Prashant Kumar

Abstract:

Passive air pollution control devices known as aspiration efficiency reducers (AERs) have been developed using aspiration efficiency (AE) concepts. Their purpose is to reduce the concentration of particulate matter (PM) drawn into a building air handling unit (AHU) through alterations to the inlet design, improving energy consumption. This paper examines the effect of installing a deflector system around an AER-AHU inlet for both forward- and rear-facing orientations relative to the wind. The study found that these deflectors are an effective passive control method for reducing AE at various ambient wind speeds over a range of microparticles of varying diameter. At low ambient wind speeds, the deflector system induced a large wake zone for a rear-facing AER-AHU, resulting in significantly lower AE than without it. As the wind speed increased, both configurations contained a wake zone, but concentration gradients were much lower with the deflectors. For the forward-facing models, the deflector system at low ambient wind speed was preferable at higher Stokes numbers, but there was negligible difference as the Stokes number decreased. Similarly, there was no significant difference at higher wind speeds across the Stokes number range tested. The results demonstrate that a deflector system is a viable passive control method for reducing ventilation energy consumption.
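The Stokes number referred to above compares the particle's inertial relaxation time to a characteristic flow time scale. A minimal sketch of this standard definition follows; the particle and flow values in the usage note are illustrative, not taken from the study.

```python
def stokes_number(rho_p, d_p, velocity, mu_air, length):
    """Stokes number Stk = tau_p * U / L, where tau_p is the Stokes
    relaxation time of a small spherical particle in air."""
    tau_p = rho_p * d_p**2 / (18.0 * mu_air)  # particle relaxation time [s]
    return tau_p * velocity / length
```

For example, a 10 micron unit-density particle (rho_p = 1000 kg/m^3) in a 5 m/s wind past an inlet of 0.5 m characteristic length, with air viscosity 1.81e-5 Pa s, gives Stk of order 3e-3, well into the flow-tracing regime.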

Keywords: Aspiration efficiency, energy, particulate matter, ventilation.

318 Hybrid Methods for Optimisation of Weights in Spatial Multi-Criteria Evaluation Decision for Fire Risk and Hazard

Authors: I. Yakubu, D. Mireku-Gyimah, D. Asafo-Adjei

Abstract:

The challenge for everyone involved in preserving the ecosystem is to find creative ways to protect and restore the remaining ecosystems while accommodating and enhancing the country's social and economic well-being. Frequent fires of anthropogenic origin have been adversely affecting ecosystems in many countries. Hence, adopting decision-making approaches such as Multi-Criteria Decision Making (MCDM) is appropriate, since it enhances the evaluation and analysis of the fire risk and hazard of the ecosystem. In this paper, fire risk and hazard data from the West Gonja area of Ghana were used with several methods (Analytic Hierarchy Process (AHP), Compromise Programming (CP), and Grey Relational Analysis (GRA)) for MCDM evaluation and analysis to determine the optimal weighting method for fire risk and hazard. Ranking of the land cover types was carried out using fire hazard, fire fighting capacity and response risk criteria. Pairwise comparison under AHP was used to determine the weights of the various criteria; weights for the sub-criteria were also obtained by pairwise comparison. The results were optimised using GRA and CP. The results from each hybrid method, GRA and CP, were compared, and it was established that all methods were satisfactory in terms of weight optimisation. The most optimal method for spatial multi-criteria evaluation was the hybrid GRA method. Thus, a hybrid AHP and GRA method is a more effective method for ranking alternatives in MCDM than the hybrid AHP and CP method.
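The AHP weighting step described above can be sketched as follows: the criterion weights are the principal eigenvector of the pairwise comparison matrix, approximated here by power iteration, with Saaty's consistency ratio as a sanity check. This is a generic AHP sketch, not the authors' implementation; the random-index table is truncated to small matrix sizes.

```python
import numpy as np

def ahp_weights(A, iters=100):
    """Weights and consistency ratio of an AHP pairwise comparison matrix.
    Weights are the (normalized) principal eigenvector, found by power iteration."""
    A = np.asarray(A, dtype=float)
    n = len(A)
    w = np.ones(n) / n
    for _ in range(iters):
        w = A @ w
        w /= w.sum()                       # keep the weights normalized
    lam = (A @ w / w).mean()               # principal eigenvalue estimate
    ci = (lam - n) / (n - 1)               # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)  # Saaty random index (truncated)
    return w, ci / ri                      # weights, consistency ratio
```

A judgment matrix such as [[1, 3, 5], [1/3, 1, 3], [1/5, 1/3, 1]] yields weights of roughly 0.64, 0.26 and 0.10 with a consistency ratio well below the usual 0.1 acceptance threshold.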

Keywords: Compromise programming, grey relational analysis, spatial multi-criteria, weight optimisation.

317 Evaluation of Easy-to-Use Energy Building Design Tools for Solar Access Analysis in Urban Contexts: Comparison of Friendly Simulation Design Tools for Architectural Practice in the Early Design Stage

Authors: M. Iommi, G. Losco

Abstract:

The current building sector is focused on reducing energy requirements, on renewable energy generation and on the regeneration of existing urban areas. These targets need to be addressed with a systemic approach that considers several aspects simultaneously, such as climate conditions, lighting conditions, solar radiation, PV potential, etc. Solar access analysis is a well-known method for analyzing solar potential, but in recent years simulation tools have provided more effective opportunities to perform this type of analysis, particularly in the early design stage. Nowadays, the study of solar access depends on how rapidly and easily simulation tools can be used during the design process. This study presents a comparison of three simulation tools from the user's point of view, with the aim of highlighting differences in their ease of use. Using a real urban context as a case study, three tools, Ecotect, Townscope and Heliodon, are tested by building models, running simulations and examining the capabilities and output results of the solar access analysis. The evaluation of the ease of use of these tools is based on detected parameters and features, such as the types of simulation, input data requirements, types of results, etc. As a result, a framework is provided in which the features and capabilities of each tool are shown, making clear the differences among the tools in functions, features and capabilities. The aim of this study is to support users and to improve the integration of solar access simulation tools into the design process.

Keywords: Solar access analysis, energy building design tools, urban planning, solar potential.

316 Evaluation of Electro-Flocculation for Biomass Production of Marine Microalgae Phaeodactylum tricornutum

Authors: Luciana C. Ramos, Leandro J. Sousa, Antônio Ferreira da Silva, Valéria Gomes Oliveira Falcão, Suzana T. Cunha Lima

Abstract:

The commercial production of biodiesel using microalgae demands a high energy input for harvesting biomass, making production economically unfeasible. Methods currently used involve mechanical, chemical, and biological procedures. In this work, a flocculation system is presented as a cost- and energy-effective process to increase the biomass production of Phaeodactylum tricornutum. This diatom, the only species of the genus, presents the fast growth and lipid accumulation ability that are of great interest for biofuel production. The algae, selected from the Bank of Microalgae, Institute of Biology, Federal University of Bahia (Brazil), were cultivated in a tubular reactor with a 12 h light/dark photoperiod, a photon flux of about 35 μmol photons m-2s-1, and a temperature of 22 °C. The medium used for growing the cells was Conway medium with added silica. The algal growth curve was followed by cell counts in a Neubauer chamber and by optical density in a spectrophotometer at 680 nm. Harvesting occurred at the end of the stationary phase of growth, 21 days after inoculation, using two methods: centrifugation at 5000 rpm for 5 min, and electro-flocculation at 19 EPD and 95 W. After precipitation, cells were frozen at -20 °C and subsequently lyophilized. The biomass obtained by electro-flocculation was approximately four times greater than that achieved by centrifugation. The benefits of this method are that no chemical flocculants need to be added and that similar cultivation conditions can be used for biodiesel production and pharmacological purposes. The results may contribute to reducing the cost of biodiesel production using marine microalgae.
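Growth-curve monitoring of the kind described above is commonly summarized by the specific growth rate between two cell counts and the corresponding doubling time. The sketch below shows that standard calculation, not the authors' analysis; the counts in the usage note are illustrative.

```python
import math

def specific_growth_rate(n0, n1, dt_days):
    """Specific growth rate mu (per day) from two cell counts dt_days apart,
    assuming exponential growth between the two observations."""
    return math.log(n1 / n0) / dt_days

def doubling_time(mu):
    """Population doubling time (days) for a given specific growth rate."""
    return math.log(2.0) / mu
```

For instance, a culture growing from 1e5 to 8e5 cells/mL in 6 days has mu = ln(8)/6 per day, i.e. a doubling time of exactly 2 days.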

Keywords: Biomass, diatom, flocculation, microalgae.

315 Preventive Interventions for Central Venous Catheter Infections in Intensive Care Units: A Systematic Literature Review

Authors: Jakob Renko, Deja Praprotnik, Kristina Martinovič, Igor Karnjuš

Abstract:

Catheter-related bloodstream infections are a major burden for healthcare and patients. Although infections of this type cannot be completely avoided, they can be reduced by taking preventive measures. The aim of this study is to review and analyze the existing literature on interventions to prevent central venous catheter (CVC) infections. A systematic literature review was carried out. The international databases CINAHL, Medline, PubMed, and Web of Science were searched using the search strategy: "catheter-related infections" AND "intensive care units" AND "prevention" AND "central venous catheter." Articles that met the inclusion and exclusion criteria were included in the study. The literature search flow is illustrated by the PRISMA diagram. The descriptive research method was used to analyze the data. Out of 554 search results, 22 studies were included in the final analysis. We identified seven relevant measures to prevent CVC infections: washing the whole body with chlorhexidine gluconate (CHG) solution, disinfecting the CVC entry site with CHG solution, use of CHG or silver dressings, alcohol protective caps, CVC care education, selecting an appropriate catheter, and multicomponent care bundles. Both single interventions and multicomponent care bundles have been shown to be effective measures to prevent CVC infections in adult patients in the ICU. None of the identified measures stood out in terms of effectiveness. Prevention work to reduce CVC infections in the ICU is a complex process that requires the simultaneous consideration of several factors.

Keywords: Central venous access, critically ill patients, hospital-acquired complications, prevention.

314 Retail Strategy to Reduce Waste Keeping High Profit Utilizing Taylor's Law in Point-of-Sales Data

Authors: Gen Sakoda, Hideki Takayasu, Misako Takayasu

Abstract:

Waste reduction is a fundamental problem for sustainability. Methods for waste reduction with point-of-sales (POS) data are proposed, utilizing the knowledge of a recent econophysics study on a statistical property of POS data. Concretely, a non-stationary time series analysis method based on the particle filter is developed, which accounts for the anomalous fluctuation scaling known as Taylor's law. The method is extended to handle sales data left incomplete by stock-outs, by introducing maximum likelihood estimation for censored data. A method for determining the optimal stock level, which prices the cost of waste reduction, is also proposed. This study focuses on large sales numbers, where Taylor's law is evident. Numerical analysis using aggregated POS data shows the effectiveness of the methods in reducing food waste while maintaining a high profit for large sales numbers. Moreover, pricing the cost of waste reduction reveals that a small profit loss realizes substantial waste reduction, especially when the proportionality constant of Taylor's law is small: around 1% profit loss realizes half the disposal at a proportionality constant of 0.12, the actual value for the processed food items used in this research. The methods provide practical and effective solutions for reducing waste while keeping a high profit, especially with large sales numbers.
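Taylor's law states that the variance of sales fluctuations scales as a power of the mean. The proportionality constant and exponent can be estimated with a log-log least-squares fit, as in this generic sketch; it is not the authors' particle-filter implementation, and the sample values in the test are synthetic.

```python
import numpy as np

def taylor_fit(means, variances):
    """Fit Taylor's law, variance = a * mean**b, by least squares on
    log-log axes; returns (a, b)."""
    logm = np.log(np.asarray(means, dtype=float))
    logv = np.log(np.asarray(variances, dtype=float))
    b, loga = np.polyfit(logm, logv, 1)  # slope = exponent, intercept = log a
    return np.exp(loga), b
```

Fitting per-item mean daily sales against their variances this way gives the proportionality constant a (0.12 for the processed-food items in the paper) and the scaling exponent b that together govern how much safety stock a given service level requires.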

Keywords: Food waste reduction, particle filter, point of sales, sustainable development goals, Taylor's Law, time series analysis.

313 Crash Severity Modeling in Urban Highways Using Backward Regression Method

Authors: F. Rezaie Moghaddam, T. Rezaie Moghaddam, M. Pasbani Khiavi, M. Ali Ghorbani

Abstract:

Identifying and classifying intersections according to severity is very important for implementing safety-related countermeasures, and effective models are needed to compare and assess severity. Highway safety organizations consider intersection safety among their priorities. In spite of significant advances in highway safety, large numbers of high-severity crashes still occur on highways. Investigating the factors that influence crashes enables engineers to carry out calculations to reduce crash severity. Previous studies lacked a model capable of simultaneously illustrating the influence of human factors, road, vehicle, weather conditions and traffic features, including traffic volume and flow speed, on crash severity. Thus, this paper develops models to illustrate the simultaneous influence of these variables on crash severity in urban highways. The models presented in this study were developed using binary logit models and calibrated in SPSS; the backward regression method in SPSS was used to identify the significant variables. From the results obtained, it can be concluded that the main factors increasing crash severity on urban highways are driver age, movement in reverse gear, technical defects of the vehicle, collisions with motorcycles and bicycles, collisions with bridges, frontal impact collisions, frontal-lateral collisions and multi-vehicle crashes.
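A binary logit fit with backward elimination of the kind the authors ran in SPSS can be sketched generically: fit the model by Newton-Raphson, then repeatedly drop the predictor with the weakest Wald z-score until all remaining predictors are significant. This is an illustrative re-implementation, not the authors' SPSS workflow; the critical value 1.96 corresponds to a 5% significance level, and the data in the test are synthetic.

```python
import numpy as np

def fit_logit(X, y, iters=30):
    """Binary logit fit by Newton-Raphson; returns (coefficients, standard errors).
    An intercept column is prepended internally."""
    Xd = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(Xd.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xd @ beta))
        W = p * (1.0 - p)
        H = Xd.T @ (Xd * W[:, None])            # observed information matrix
        beta = beta + np.linalg.solve(H, Xd.T @ (y - p))
    se = np.sqrt(np.diag(np.linalg.inv(H)))
    return beta, se

def backward_eliminate(X, y, z_crit=1.96):
    """Backward elimination: drop the predictor with the smallest |z|
    until every remaining predictor has |z| >= z_crit."""
    kept = list(range(X.shape[1]))
    beta, se = fit_logit(X[:, kept], y)
    while kept:
        z = np.abs(beta[1:] / se[1:])           # skip the intercept
        if z.min() >= z_crit:
            break
        kept.pop(int(z.argmin()))
        if kept:
            beta, se = fit_logit(X[:, kept], y)
    return kept, beta
```

On data where severity depends strongly on a subset of predictors, the loop retains those predictors and discards pure-noise columns, mirroring the variable selection reported in the abstract.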

Keywords: Backward regression, crash severity, speed, urban highways.

312 Improvement of Overall Equipment Effectiveness through Total Productive Maintenance

Authors: S. Fore, L. Zuze

Abstract:

Frequent machine breakdowns, low plant availability and increased overtime are a great threat to a manufacturing plant, as they increase its operating costs. The main aim of this study was to improve Overall Equipment Effectiveness (OEE) at a manufacturing company through the implementation of innovative maintenance strategies. A case study approach was used. The paper focuses on improving maintenance in a manufacturing setup using an innovative mix of maintenance regimes to improve OEE. Interviews, reviews of documentation and historical records, and direct and participatory observation were used as data collection methods during the research. Production is usually measured as the total kilowatts of motors produced per day; the target at 91% availability is 75 kilowatts a day. Reduced demand and a lack of raw materials, particularly imported items, are adversely affecting the manufacturing operations. The company had to reset its target from the usual figure of 250 kilowatts per day to a mere 75 per day due to lower machine availability resulting from breakdowns, as well as the lack of raw materials. Price reductions and uncertainties, as well as general machine breakdowns, further lowered production. Some recommendations were given. For instance, employee empowerment in the company will enhance the responsibility and authority needed to improve and totally eliminate the six big losses. If the maintenance department is to realise its proper function in a progressive, innovative industrial society, its personnel must be continuously trained to meet current needs as well as future requirements. To make the maintenance planning system effective, it is essential to keep track of all corrective maintenance jobs and preventive maintenance inspections; for large processing plants these cannot be handled manually. It was therefore recommended that the company implement a Computerised Maintenance Management System (CMMS).
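OEE is conventionally the product of availability, performance and quality. A minimal sketch of that standard calculation follows; the shift figures in the usage note are illustrative, not the plant's data.

```python
def oee(planned_time, downtime, ideal_cycle_time, total_count, good_count):
    """Overall Equipment Effectiveness = availability x performance x quality.
    Times share one unit (e.g. minutes); counts are produced units."""
    run_time = planned_time - downtime
    availability = run_time / planned_time                    # share of planned time running
    performance = ideal_cycle_time * total_count / run_time   # actual vs. ideal speed
    quality = good_count / total_count                        # right-first-time share
    return availability * performance * quality, availability, performance, quality
```

For example, a 480-minute shift with 48 minutes of downtime, a 1-minute ideal cycle, 389 units produced and 370 good units gives 90% availability and an OEE of roughly 77%, well below the 85% often cited as world class.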

Keywords: Maintenance, Manufacturing, Overall Equipment Effectiveness

311 Measuring Principal and Teacher Cultural Competency: A Needs Assessment of Three Proximate PreK-5 Schools

Authors: Teresa Caswell

Abstract:

Throughout the United States and within a myriad of demographic contexts, students of color experience the results of systemic inequities as an academic outcome. These disparities continue despite the increased resources provided to students and ongoing instruction-focused professional learning received by teachers. We postulated that lower levels of educator cultural competency are an underlying factor of why resource and instructional interventions are less effective than desired. Before implementing any type of intervention, however, cultural competency needed to be confirmed as a factor in schools demonstrating academic disparities between racial subgroups. A needs assessment was designed to measure levels of individual beliefs, including cultural competency, in both principals and teachers at three neighboring schools verified to have academic disparities. The resulting mixed method study utilized the Optimal Theory Applied to Identity Development (OTAID) model to measure cultural competency quantitatively, through self-identity inventory survey items, with teachers and qualitatively, through one-on-one interviews, with each school’s principal. A joint display was utilized to see combined data within and across school contexts. Each school was confirmed to have misalignments between principal and teacher levels of cultural competency beliefs while also indicating that a number of participants in the self-identity inventory survey may have intentionally skipped items referencing the term oppression. Additional use of the OTAID model and self-identity inventory in future research and across contexts is needed to determine transferability and dependability as cultural competency measures.

Keywords: Cultural competency, identity development, mixed method analysis, needs assessment.

310 A CFD Study of Turbulent Convective Heat Transfer Enhancement in Circular Pipe Flow

Authors: Perumal Kumar, Rajamohan Ganesan

Abstract:

The addition of milli- or micro-sized particles to the heat transfer fluid is one of many techniques employed for improving the heat transfer rate. Though this looks simple, the method has practical problems such as high pressure loss, clogging and erosion of the material of construction. These problems can be overcome by using nanofluids, which are dispersions of nanosized particles in a base fluid. Nanoparticles increase the thermal conductivity of the base fluid manifold, which in turn increases the heat transfer rate. Nanoparticles also increase the viscosity of the base fluid, resulting in a higher pressure drop for the nanofluid compared to the base fluid. It is therefore imperative that the Reynolds number (Re) and the volume fraction be optimal for better thermo-hydraulic effectiveness. In this work, heat transfer enhancement using aluminium oxide nanofluid at low and high volume fractions in turbulent pipe flow with constant wall temperature has been studied by computational fluid dynamic modeling of the nanofluid flow, adopting the single-phase approach. The nanofluid, up to a volume fraction of 1%, is found to be an effective heat transfer enhancement technique. The Nusselt number (Nu) and friction factor predictions for the low volume fractions (i.e., 0.02%, 0.1% and 0.5%) agree very well with the experimental values of Sundar and Sharma (2010). Predictions for the high volume fraction nanofluids (i.e., 1%, 4% and 6%) show reasonable agreement with both experimental and numerical results available in the literature. The computationally inexpensive single-phase approach can therefore be used for heat transfer and pressure drop prediction of new nanofluids.
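For single-phase turbulent pipe flow evaluated with the effective thermophysical properties of the fluid, a classical baseline for the Nusselt number is the Dittus-Boelter correlation. The sketch below shows that generic correlation as a sanity check; it is not the paper's CFD model nor the Sundar and Sharma data, and the water-like property values in the test are illustrative.

```python
def dittus_boelter_nu(re, pr, heating=True):
    """Dittus-Boelter correlation: Nu = 0.023 * Re^0.8 * Pr^n,
    with n = 0.4 for heating of the fluid and 0.3 for cooling.
    Valid roughly for Re > 1e4 and 0.6 < Pr < 160 in smooth pipes."""
    n = 0.4 if heating else 0.3
    return 0.023 * re**0.8 * pr**n

def heat_transfer_coeff(nu, k_fluid, diameter):
    """Convective coefficient h = Nu * k / D (W/m^2 K)."""
    return nu * k_fluid / diameter
```

At Re = 1e4 and Pr = 7 (water-like), the correlation gives Nu of about 79; raising the effective thermal conductivity k with nanoparticle loading raises h directly, which is the enhancement mechanism the abstract describes.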

Keywords: Heat transfer intensification, nanofluid, CFD, friction factor

309 Evaluation of Market Limitations in the Case of Ecosystem Services

Authors: Giani Gradinaru

Abstract:

The biodiversity crisis is one of the many crises that started at the turn of the millennium. Its concrete form of expression is still disputed, but there is relatively high consensus on the high rate of degradation and the urgent need for action. The strategy of action has a strong economic component, together with the recognition of market mechanisms as the most effective policies to protect biodiversity. In this context, biodiversity and ecosystem services are natural assets that play a key role in economic strategies and technological development to promote development and prosperity. Developing and strengthening policies for the transition to an economy based on the efficient use of resources is the way forward. To emphasize the co-viability specific to the connection between the economy and ecosystem services, the scientific approach addressed, on the one hand, how to implement policies for nature conservation and, on the other, the concepts underlying the economic expression of the value of ecosystem services in the context of current technology. Following the analysis of business opportunities associated with changes in ecosystem services, it was concluded that the development of market mechanisms for nature conservation is a trend that has become increasingly distinct in recent years. Although many controversial issues remain, which have already given rise to an obvious bias, international organizations and national governments have initiated and implemented such mechanisms, in cooperation or independently. Consequently, they created the conditions for convergence between private interests and the social interest in nature conservation, so there are opportunities for ongoing business development that has, among other things, positive effects on biodiversity. Finally, it is pointed out that markets fail to quantify the value of most ecosystem services. Existing price signals reflect, at best, only a proportion of the total value corresponding to the provision of food, water or fuel.

Keywords: ecosystem services, economic evaluation, nature conservation

308 Computational Investigation of Secondary Flow Losses in Linear Turbine Cascade by Modified Leading Edge Fence

Authors: K. N. Kiran, S. Anish

Abstract:

It is well known that secondary flow losses account for about one-third of the total loss in any axial turbine. Modern gas turbine blades have smaller heights and longer chord lengths, which can increase secondary flow. In order to improve the efficiency of the turbine, it is important to understand the behavior of secondary flow and devise mechanisms to curtail these losses. The objective of the present work is to understand the effect of a streamwise end-wall fence on the aerodynamics of a linear turbine cascade. The study is carried out computationally using the commercial software ANSYS CFX. The effects of the end-wall on the flow field are calculated from RANS simulations using the SST transition turbulence model. The Durham cascade, which is similar to a high-pressure axial-flow turbine, is used for the simulations. The aim of fencing the blade passage is to gain the maximum benefit from flow deviation and to destroy the passage vortex, thereby reducing losses. It is observed that, for the present analysis, a fence in the blade passage reduces the strength of the horseshoe vortex and is capable of restraining the flow along the blade passage. The fence in the blade passage reduces the underturning by 7° in comparison with the base case. A fence on the end-wall is effective in preventing the movement of the pressure-side leg of the horseshoe vortex and helps break up the passage vortex. Computations are carried out for different fence heights whose curvature differs from the blade camber. The optimum fence geometry and location reduce the loss coefficient by 15.6% in comparison with the base case.
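The loss coefficient reported above is conventionally the total-pressure drop across the cascade normalized by the exit dynamic head. A minimal sketch of that standard definition follows; the pressure values in the test are illustrative, not the paper's CFD results.

```python
def total_pressure_loss_coeff(p0_inlet, p0_exit, p_exit):
    """Cascade total-pressure loss coefficient:
    zeta = (p0_in - p0_out) / (p0_in - p_out),
    i.e. stagnation-pressure loss over the exit dynamic head."""
    return (p0_inlet - p0_exit) / (p0_inlet - p_exit)
```

With this definition, the abstract's 15.6% improvement means the fenced cascade's coefficient is 0.844 times the base-case value.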

Keywords: Boundary layer fence, horseshoe vortex, linear cascade, passage vortex, secondary flow.

307 Patterns of Malignant and Benign Breast Lesions in Hail Region: A Retrospective Study at King Khalid Hospital

Authors: Laila Seada, Ashraf Ibrahim, Amjad Al Shammari

Abstract:

Background and Objectives: Breast carcinoma is the most common cancer of females in the Hail region, accounting for 31% of all diagnosed cancer cases, followed by thyroid carcinoma (25%) and colorectal carcinoma (13%). Methods: In the present retrospective study, all cases of breast lesions received at the histopathology department of King Khalid Hospital, Hail, during the period from May 2011 to April 2016 were retrieved from department files. For all cases, a trucut biopsy, lumpectomy, or modified radical mastectomy was available for histopathologic diagnosis, while 105/140 (75%) also had preoperative fine needle aspirates (FNA). Results: 49 of 140 (35%) breast lesions were carcinomas: 44/49 (89.75%) were invasive ductal carcinoma, 2/49 (4.1%) invasive lobular carcinoma, 1/49 (2.05%) intracystic low-grade papillary carcinoma and 2/49 (4.1%) ductal carcinoma in situ (DCIS). The mean age for malignant cases was 45.06 (±10.58) years: 32.6% were below the age of 40, 30.6% below 50, 18.3% below 60 and 16.3% below 70 years. For the benign group, the mean age was 32.52 (±10.5) years. Benign lesions were, in order of frequency: 34 fibroadenomas, 14 fibrocystic disease, 12 chronic mastitis, five granulomatous mastitis, three intraductal papillomas, and three benign phyllodes tumors. Tubular adenoma, lipoma, skin nevus, pilomatrixoma, and breast reduction specimens constituted the remaining specimens. Conclusion: Breast lesions are common in our series, and invasive carcinoma accounts for more than one-third of the lumps, with a 63.2% incidence in pre-menopausal women below the age of 50 years. FNA, as a non-invasive procedure, proved to be an effective tool in diagnosing both benign and malignant/suspicious breast lumps and should continue to be used as a first line of assessment for palpable breast masses.

Keywords: Age incidence, breast carcinoma, fine needle aspiration, Hail Region.
