Search results for: Time Dependent
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7060

340 Reduction of Plutonium Production in Heavy Water Research Reactor: A Feasibility Study through Neutronic Analysis Using MCNPX2.6 and CINDER90 Codes

Authors: H. Shamoradifar, B. Teimuri, P. Parvaresh, S. Mohammadi

Abstract:

One of the main characteristics of heavy water moderated reactors is their high production of plutonium. This article demonstrates the possibility of reducing plutonium and other actinides in a heavy water research reactor. Among the many ways of reducing plutonium production in a heavy water reactor, this research focuses on changing the fuel from natural uranium to mixed thorium-uranium fuel. The main fissile nucleus in thorium-uranium fuels is U-233, which is produced after neutron absorption by Th-232, so thorium-uranium fuels have some known advantages over uranium fuels. Accordingly, four thorium-uranium fuels with different composition ratios were chosen for the simulations: a) 10% UO2-90% ThO2 (enrichment = 20%); b) 15% UO2-85% ThO2 (enrichment = 10%); c) 30% UO2-70% ThO2 (enrichment = 5%); d) 35% UO2-65% ThO2 (enrichment = 3.7%). Natural uranium oxide (UO2) is considered the reference fuel; in other words, all of the calculated data are compared with the corresponding data for uranium fuel. Neutronic parameters were calculated and used as the comparison parameters. All calculations were performed by a Monte Carlo (MCNPX2.6) steady-state reaction rate calculation linked to a deterministic depletion calculation (CINDER90). The computational results showed that thorium-uranium fuels with the four different fissile composition ratios can satisfy the safety and operating requirements of a heavy water research reactor. Furthermore, thorium-uranium fuels have very good proliferation resistance and consume less fissile material than uranium fuels over the same reactor operation time. Using mixed thorium-uranium fuels reduced the long-lived α-emitting, highly radiotoxic wastes and the radiotoxicity level of the spent fuel.
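
As a rough illustration of how the four mixed-oxide compositions compare in fissile content, the short sketch below computes the approximate U-235 fraction of each mixture as UO2 fraction × enrichment. It is a back-of-the-envelope estimate only, ignoring the small molar-mass difference between UO2 and ThO2, and is not part of the MCNPX/CINDER90 calculation described above.

```python
# Approximate U-235 content of each mixed ThO2-UO2 fuel (weight fraction of oxide).
# Back-of-the-envelope only: ignores the UO2 vs. ThO2 molar-mass difference.
fuels = {
    "a) 10% UO2 / 90% ThO2": (0.10, 0.20),   # (UO2 fraction, U-235 enrichment)
    "b) 15% UO2 / 85% ThO2": (0.15, 0.10),
    "c) 30% UO2 / 70% ThO2": (0.30, 0.05),
    "d) 35% UO2 / 65% ThO2": (0.35, 0.037),
}

for name, (uo2_frac, enrich) in fuels.items():
    u235 = uo2_frac * enrich          # fissile U-235 per unit mass of oxide
    print(f"{name}: ~{100 * u235:.2f} wt% U-235")
```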

Keywords: Burn-up, heavy water reactor, minor actinides, Monte Carlo, proliferation resistance.

339 Substitution of Phosphate with Liquid Smoke as a Binder on the Quality of Chicken Nugget

Authors: E. Abustam, M. Yusuf, M. I. Said

Abstract:

One of the functional properties of meat is the decrease of water holding capacity (WHC) during rigor mortis. At the pre-rigor stage, WHC is higher than post-rigor. The decline of WHC affects other functional properties, leading to increased cooking loss and lower yields and resulting in lower elasticity and compactness of processed meat products. In many cases, the addition of phosphate to the meat will increase its functional properties such as WHC. Furthermore, liquid smoke has also been known to increase the WHC of fresh meat. For food safety reasons, liquid smoke in the present study was used as a substitute for phosphate in the production of chicken nuggets. This study aimed to determine the effect of substituting phosphate with liquid smoke on the quality of nuggets made from post-rigor chicken thigh and breast. The study was arranged as a completely randomized design with a 2x3 factorial pattern and three replications. Factor 1 was the chicken part (thigh and breast), and factor 2 was the level of liquid smoke substituted for phosphate (0%, 50%, and 100%). Post-rigor thigh and breast from broilers aged 40 days were used as the main raw materials in making the nuggets. Auxiliary (non-meat) materials were phosphate, liquid smoke at a concentration of 10%, tapioca flour, salt, eggs and ice. The variables measured were flexibility, shear force value, cooking loss, elasticity level, and preference. The results of this study showed that the substitution of phosphate with 100% liquid smoke produced high-quality nuggets. Likewise, the breast part of the meat yielded higher-quality nuggets than the thigh part. This is indicated by high elasticity, low shear force value, low cooking loss, and a high level of preference for the nuggets. It can be concluded that liquid smoke can be used as a binder in making nuggets from post-rigor chicken.
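
For readers unfamiliar with the 2x3 factorial layout (meat part × liquid-smoke substitution level, three replications), a minimal analysis sketch using a two-way ANOVA is shown below. The data frame and its values are hypothetical placeholders; the study's actual measurements (shear force, cooking loss, etc.) would take the place of the illustrative response column.

```python
# Minimal two-way ANOVA sketch for a 2x3 factorial completely randomized design.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
parts = ["thigh", "breast"]
smokes = ["0%", "50%", "100%"]
# Illustrative treatment means (not the study's data), plus random replicate noise.
base = {("thigh", "0%"): 24.0, ("thigh", "50%"): 23.0, ("thigh", "100%"): 22.0,
        ("breast", "0%"): 21.5, ("breast", "50%"): 20.5, ("breast", "100%"): 19.5}

rows = []
for p in parts:
    for s in smokes:
        for _ in range(3):                       # three replications per cell
            rows.append((p, s, base[(p, s)] + rng.normal(0, 0.4)))
df = pd.DataFrame(rows, columns=["part", "smoke", "cooking_loss"])

model = ols("cooking_loss ~ C(part) * C(smoke)", data=df).fit()
print(anova_lm(model, typ=2))                    # main effects and interaction
```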

Keywords: Liquid smoke, nugget quality, phosphate, post-rigor.

338 Action Potential of Lateral Geniculate Neurons at Low Threshold Currents: Simulation Study

Authors: Faris Tarlochan, Siva Mahesh Tangutooru

Abstract:

The Lateral Geniculate Nucleus (LGN) is the relay center in the visual pathway, as it receives most of the input information from retinal ganglion cells (RGC) and sends it to the visual cortex. Low-threshold calcium currents (IT) at the membrane are the unique indicator characterizing the firing functionality that LGN neurons gain from RGC input. The morphologies of the LGN neurons were developed according to LGN functional requirements, such as the functional mapping of RGC to LGN. In neurological disorders like glaucoma, the mapping between RGC and LGN is disconnected, and hence stimulating the LGN electrically using deep brain electrodes can restore LGN functionality. A computational model was developed for simulating LGN neurons with three predominant morphologies, each representing a different functional mapping of RGC to LGN. The firing of action potentials at the LGN neuron due to IT was characterized by varying the stimulation parameters, morphological parameters and orientation. A wide range of stimulation parameters (stimulus amplitude, duration and frequency) represents the various strengths of the electrical stimulation, with different morphological parameters (soma size, dendrite size and structure). The orientation (0-180°) of the LGN neuron with respect to the stimulating electrode represents the angle at which the extracellular deep brain stimulation towards the LGN neuron is performed. A reduced dendrite structure was used in the model, following the Bush-Sejnowski algorithm, to decrease the computational time while conserving the input resistance and total surface area. The major finding is that an input potential of 0.4 V is required to produce an action potential in an LGN neuron placed at a distance of 100 μm from the electrode. From this study, it can be concluded that the neuroprostheses under design would need to be capable of inducing at least 0.4 V to produce action potentials in the LGN.
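
The 0.4 V threshold at 100 μm can be put in context with the standard point-source approximation for the extracellular potential of a monopolar electrode, V = I / (4πσr). The sketch below inverts that relation to estimate the stimulus current such a potential would correspond to; the conductivity value is an assumed typical figure for grey matter, not a parameter reported in the abstract.

```python
import math

sigma = 0.3      # assumed extracellular conductivity of grey matter, S/m
r = 100e-6       # electrode-to-neuron distance, m (100 micrometres, from the study)
V = 0.4          # extracellular potential required at the membrane, V (from the study)

# Point-source approximation: V = I / (4*pi*sigma*r)  =>  I = 4*pi*sigma*r*V
I = 4 * math.pi * sigma * r * V
print(f"Equivalent monopolar stimulus current: {I * 1e6:.1f} uA")
```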

Keywords: Lateral geniculate nucleus, visual cortex, finite element, glaucoma, neuroprostheses.

337 Improving Fake News Detection Using K-means and Support Vector Machine Approaches

Authors: Kasra Majbouri Yazdi, Adel Majbouri Yazdi, Saeid Khodayi, Jingyu Hou, Wanlei Zhou, Saeed Saedy

Abstract:

Fake news and false information are big challenges of all types of media, especially social media. There is a lot of false information, fake likes, views and duplicated accounts as big social networks such as Facebook and Twitter admitted. Most information appearing on social media is doubtful and in some cases misleading. They need to be detected as soon as possible to avoid a negative impact on society. The dimensions of the fake news datasets are growing rapidly, so to obtain a better result of detecting false information with less computation time and complexity, the dimensions need to be reduced. One of the best techniques of reducing data size is using feature selection method. The aim of this technique is to choose a feature subset from the original set to improve the classification performance. In this paper, a feature selection method is proposed with the integration of K-means clustering and Support Vector Machine (SVM) approaches which work in four steps. First, the similarities between all features are calculated. Then, features are divided into several clusters. Next, the final feature set is selected from all clusters, and finally, fake news is classified based on the final feature subset using the SVM method. The proposed method was evaluated by comparing its performance with other state-of-the-art methods on several specific benchmark datasets and the outcome showed a better classification of false information for our work. The detection performance was improved in two aspects. On the one hand, the detection runtime process decreased, and on the other hand, the classification accuracy increased because of the elimination of redundant features and the reduction of datasets dimensions.
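
A minimal sketch of the four-step pipeline described above (compute feature similarities, cluster the features, pick representatives from each cluster, then classify with an SVM) might look like the following. It clusters features by their value vectors across samples, keeps the feature closest to each cluster centre, and trains an SVM on the reduced set; the random data and parameter choices are placeholders, not the authors' exact configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def cluster_based_feature_selection(X, n_clusters=20, random_state=0):
    """Steps 1-3: cluster features by similarity and keep one representative per cluster."""
    feats = X.T                                   # each feature described by its sample values
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit(feats)
    selected = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(feats[members] - km.cluster_centers_[c], axis=1)
        selected.append(members[np.argmin(d)])    # representative closest to cluster centre
    return sorted(selected)

# Placeholder data: rows = news items (e.g. TF-IDF features), labels 1 = fake, 0 = real.
rng = np.random.default_rng(0)
X = rng.random((500, 200))
y = rng.integers(0, 2, 500)

cols = cluster_based_feature_selection(X)
X_tr, X_te, y_tr, y_te = train_test_split(X[:, cols], y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)           # Step 4: classify on the reduced feature set
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```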

Keywords: Fake news detection, feature selection, support vector machine, K-means clustering, machine learning, social media.

336 Design and Development of a Mechanical Force Gauge for the Square Watermelon Mold

Authors: M. Malek Yarand, H. Saebi Monfared

Abstract:

This study aimed at designing and developing a mechanical force gauge for the square watermelon mold for the first time. It also introduces the characteristics of the square watermelon and its production limitations. The performance of the mechanical force gauge and the product itself are also described. There are three main gauge designs: a) a hydraulic gauge, b) a strain gauge, and c) a mechanical gauge. The advantage of the hydraulic design is that it instantly displays the pressure, and thus the force, exerted by the melon. However, considering its inability to measure forces in all directions, complicated development, high cost, possible hydraulic fluid leaks into the fruit chamber and the possible influence of increased ambient temperature on the fluid pressure, the development of this gauge was ruled out. The second choice was to calculate the pressure from the force measured directly by a strain gauge. The main advantage of strain gauges over spring types is their high measurement precision; but because the working range of the strain gauge does not conform to watermelon growth, the calculations ran into problems. Finally, the mechanical pressure gauge has several advantages, including the ability to measure forces and pressures on the mold surface during melon growth; the ability to display the peak forces; the ability to produce a melon growth graph thanks to its continuous force measurements; the conformity of its manufacturing materials with the physical conditions required for melon growth; good ventilation; the ability to let sunlight reach the melon rind (no yellowish skin and quality loss); fast and straightforward calibration; no damage to the product during assembly and disassembly; visual inspection of the product within the mold; applicability to all growth environments (field, greenhouses, etc.); a simple process; low costs and so forth.

Keywords: Mechanical Force Gauge, Mold, Reshaped Fruit, Square Watermelon.

335 Distribution of Macrobenthic Polychaete Families in Relation to Environmental Parameters in North West Penang, Malaysia

Authors: Mohammad Gholizadeh, Khairun Yahya, Anita Talib, Omar Ahmad

Abstract:

The distribution of macrobenthic polychaetes along the coastal waters of Penang National Park was surveyed to estimate the effect of various environmental parameters at three stations (200 m, 600 m and 1200 m from the shoreline) during six sampling months, from June 2010 to April 2011. The use of polychaetes in descriptive ecology is reviewed in the light of this investigation, particularly concerning soft-bottom habitats. Polychaetes, often linked to the notion of opportunistic species able to proliferate after an enrichment in organic matter, have played a significant role particularly with regard to affected soft-bottom habitats. The objective of this survey was to investigate different environmental stresses on the soft-bottom polychaete community along Teluk Ketapang and Pantai Acheh (Penang National Park) over a one-year period. Variations in the polychaete community were evaluated using univariate and multivariate methods. The results of the PCA analysis displayed a positive relationship between macrobenthic community structure and environmental parameters such as sediment particle size and organic matter in the coastal water. A total of 604 individuals were examined and grouped into 23 families. The family Nereidae was the most abundant (22.68%), followed by Spionidae (22.02%), Hesionidae (12.58%), Nephtyidae (9.27%) and Orbiniidae (8.61%). It is noticeable that good results can only be obtained on the basis of good taxonomic resolution. We propose that, in monitoring surveys, operative time could be optimized not only by working at a higher taxonomic level on the entire macrobenthic data set, but also by choosing an especially indicative group and working at a lower taxonomic level.
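
To illustrate the kind of ordination used here, a minimal PCA sketch relating samples to environmental variables is given below; the column names (organic matter, grain size, etc.) and the data values are placeholders standing in for the measured parameters, not the study's data.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder environmental matrix: rows = station/month samples, columns = parameters.
env = pd.DataFrame(
    np.random.default_rng(1).random((18, 4)),
    columns=["organic_matter_pct", "mean_grain_size_um", "salinity_psu", "temperature_C"],
)

pca = PCA(n_components=2).fit(StandardScaler().fit_transform(env))
print("explained variance ratio:", pca.explained_variance_ratio_)
print("loadings (PC1, PC2):")
print(pd.DataFrame(pca.components_.T, index=env.columns, columns=["PC1", "PC2"]))
```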

Keywords: Polychaete families, environment parameters, Bioindicators, Pantai Acheh, Teluk Ketapang.

334 Modeling the Fischer-Tropsch Reaction In a Slurry Bubble Column Reactor

Authors: F. Gholami, M. Torabi Angaji, Z. Gholami

Abstract:

Fischer-Tropsch synthesis is one of the most important catalytic reactions for converting synthesis gas to light and heavy hydrocarbons. One of the main issues is selecting the type of reactor. The slurry bubble column reactor is a suitable choice for Fischer-Tropsch synthesis because of its good heat- and mass-transfer characteristics, high catalyst durability, and low maintenance and repair costs. The most common catalysts for Fischer-Tropsch synthesis are iron-based and cobalt-based; the advantage of one over the other depends on which type of hydrocarbons is to be produced. In this study, Fischer-Tropsch synthesis is modeled with iron and cobalt catalysts in a slurry bubble column reactor, considering the mass and momentum balances and the effect of the hydrodynamic relations on reactor behavior. Profiles of reactant conversion and reactant concentration in the gas and liquid phases were determined as functions of residence time in the reactor. The effects of temperature, pressure, liquid velocity, reactor diameter, catalyst diameter, gas-liquid and liquid-solid mass transfer coefficients and kinetic coefficients on the reactant conversion have been studied. With a 5% increase in liquid velocity (with the iron catalyst), H2 conversion increases by about 6% and CO conversion by about 4%; with an 8% increase in liquid velocity (with the cobalt catalyst), H2 conversion increases by about 26% and CO conversion by about 4%. With a 20% increase in the gas-liquid mass transfer coefficient, H2 conversion increases by about 12% and CO conversion by about 10% with the iron catalyst, while with the cobalt catalyst H2 conversion increases by about 10% and CO conversion by about 6%. The results show that the process is sensitive to the gas-liquid mass transfer coefficient and that the optimum operating condition occurs at the maximum possible liquid velocity. This velocity must be greater than the minimum fluidization velocity and less than the terminal velocity, so that catalyst particles are prevented from leaving the fluidized bed.
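
As a highly simplified illustration of the kind of axial profile calculation described above, the sketch below integrates plug-flow balances for CO in the gas and liquid phases with gas-liquid mass transfer and a pseudo-first-order consumption term. All coefficients are arbitrary placeholder values, and the real model in the paper also covers H2, momentum balances and hydrodynamic correlations; this is only a structural sketch.

```python
from scipy.integrate import solve_ivp

# Placeholder parameters (illustrative only, not the paper's values)
u_g, u_l = 0.10, 0.01     # superficial gas / liquid velocities, m/s
kLa      = 0.30           # gas-liquid mass transfer coefficient, 1/s
H        = 2.5            # dimensionless Henry coefficient (gas/liquid)
k_rxn    = 0.05           # pseudo-first-order consumption rate in the slurry, 1/s
Cg0      = 10.0           # inlet CO concentration in the gas, mol/m3

def balances(z, y):
    Cg, Cl = y
    transfer = kLa * (Cg / H - Cl)          # gas-to-liquid flux driving force
    dCg = -transfer / u_g                   # gas-phase plug-flow balance
    dCl = (transfer - k_rxn * Cl) / u_l     # liquid-phase balance with reaction
    return [dCg, dCl]

sol = solve_ivp(balances, (0.0, 5.0), [Cg0, 0.0])   # integrate over a 5 m column height
Cg_out = sol.y[0, -1]
print(f"CO conversion (gas phase): {(Cg0 - Cg_out) / Cg0:.2%}")
```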

Keywords: Modeling, Fischer-Tropsch Synthesis, Slurry Bubble Column Reactor.

333 Evaluating the Capability of the Flux-Limiter Schemes in Capturing the Turbulence Structures in a Fully Developed Channel Flow

Authors: Mohamed Elghorab, Vendra C. Madhav Rao, Jennifer X. Wen

Abstract:

Turbulence modelling is still evolving, and efforts are ongoing to improve and develop numerical methods to simulate real turbulence structures using empirical and experimental information. Monotonically integrated large eddy simulation (MILES) is an attractive approach for modelling turbulence in high-Re flows; it is based on solving the unfiltered flow equations with no explicit sub-grid scale (SGS) model. In the current work, this approach has been used, and the action of the SGS model has been included implicitly through the intrinsic nonlinear high-frequency filters built into the convection discretization schemes. The MILES solver is developed using the open-source CFD libraries of OpenFOAM. The role of flux-limiter schemes, namely Gamma, superbee, van Albada and van Leer, is studied in predicting turbulent statistical quantities for a fully developed channel flow with a friction Reynolds number Reτ = 180, and the numerical predictions are compared with well-established Direct Numerical Simulation (DNS) results for wall-generated turbulence. It is inferred from the numerical predictions that the Gamma, van Leer and van Albada limiters produce more diffusion and overpredict the velocity profiles, while the superbee scheme reproduces velocity profiles and turbulence statistical quantities in good agreement with the reference DNS data in the streamwise direction, although it deviates slightly in the spanwise and wall-normal directions. The simulation results are further discussed in terms of the turbulence intensities and Reynolds stresses averaged in time and space to draw conclusions on the performance of the flux-limiter schemes in the OpenFOAM context.
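
For readers less familiar with the limiters compared here, the sketch below collects the standard superbee, van Leer and van Albada limiter functions and applies them in a simple 1D flux-limited advection update; the OpenFOAM Gamma limiter is omitted because its blending depends on a user coefficient. This is an independent illustration of TVD limiting, not the MILES solver used in the study.

```python
import numpy as np

# Classical flux-limiter functions psi(r), r = ratio of consecutive solution gradients.
limiters = {
    "superbee":   lambda r: np.maximum(0.0, np.maximum(np.minimum(2 * r, 1.0),
                                                       np.minimum(r, 2.0))),
    "van Leer":   lambda r: (r + np.abs(r)) / (1.0 + np.abs(r)),
    "van Albada": lambda r: np.maximum(0.0, (r + r**2) / (1.0 + r**2)),
}

def advect(u, c, psi, steps):
    """Upwind flux plus limited anti-diffusive correction (c = CFL number, c <= 1)."""
    for _ in range(steps):
        du = np.diff(u)                                   # u[i+1] - u[i]
        denom = np.where(du[1:] == 0.0, 1e-12, du[1:])
        r = du[:-1] / denom                               # gradient ratio at interior faces
        face = u[1:-1] + 0.5 * psi(r) * (1 - c) * du[1:]  # limited face values
        flux = np.concatenate(([u[0]], face, [u[-1]]))    # upwind at the two boundary faces
        u[1:] = u[1:] - c * np.diff(flux)
    return u

x = np.linspace(0, 1, 200)
u0 = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)            # advected step profile
for name, psi in limiters.items():
    print(name, "max after transport:", round(float(advect(u0.copy(), 0.5, psi, 100).max()), 3))
```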

Keywords: Flux limiters, MILES, OpenFOAM, turbulence structures, TVD schemes.

332 Nanomaterial Based Electrochemical Sensors for Endocrine Disrupting Compounds

Authors: Gaurav Bhanjana, Ganga Ram Chaudhary, Sandeep Kumar, Neeraj Dilbaghi

Abstract:

Main sources of endocrine disrupting compounds in the ecosystem are hormones, pesticides, phthalates, flame retardants, dioxins, personal-care products, coplanar polychlorinated biphenyls (PCBs), bisphenol A, and parabens. These endocrine disrupting compounds are responsible for learning disabilities, brain development problems, deformations of the body, cancer, reproductive abnormalities in females and decreased sperm count in human males. Although discharge of these chemical compounds into the environment cannot be stopped, yet their amount can be retarded through proper evaluation and detection techniques. The available techniques for determination of these endocrine disrupting compounds mainly include high performance liquid chromatography (HPLC), mass spectroscopy (MS) and gas chromatography-mass spectrometry (GC–MS). These techniques are accurate and reliable but have certain limitations like need of skilled personnel, time consuming, interference and requirement of pretreatment steps. Moreover, these techniques are laboratory bound and sample is required in large amount for analysis. In view of above facts, new methods for detection of endocrine disrupting compounds should be devised that promise high specificity, ultra sensitivity, cost effective, efficient and easy-to-operate procedure. Nowadays, electrochemical sensors/biosensors modified with nanomaterials are gaining high attention among researchers. Bioelement present in this system makes the developed sensors selective towards analyte of interest. Nanomaterials provide large surface area, high electron communication feature, enhanced catalytic activity and possibilities of chemical modifications. In most of the cases, nanomaterials also serve as an electron mediator or electrocatalyst for some analytes.

Keywords: Sensors, endocrine disruptors, nanoparticles, electrochemical, microscopy.

331 Preparation, Characterisation, and Measurement of the in vitro Cytotoxicity of Mesoporous Silica Nanoparticles Loaded with Cytotoxic Pt(II) Oxadiazoline Complexes

Authors: G. Wagner, R. Herrmann

Abstract:

Cytotoxic platinum compounds play a major role in the chemotherapy of a large number of human cancers. However, due to the severe side effects for the patient and other problems associated with their use, there is a need for the development of more efficient drugs and new methods for their selective delivery to the tumours. One way to achieve the latter could be in the use of nanoparticular substrates that can adsorb or chemically bind the drug. In the cell, the drug is supposed to be slowly released, either by physical desorption or by dissolution of the particle framework. Ideally, the cytotoxic properties of the platinum drug unfold only then, in the cancer cell and over a longer period of time due to the gradual release. In this paper, we report on our first steps in this direction. The binding properties of a series of cytotoxic Pt(II) oxadiazoline compounds to mesoporous silica particles has been studied by NMR and UV/vis spectroscopy. High loadings were achieved when the Pt(II) compound was relatively polar, and has been dissolved in a relatively nonpolar solvent before the silica was added. Typically, 6-10 hours were required for complete equilibration, suggesting the adsorption did not only occur to the outer surface but also to the interior of the pores. The untreated and Pt(II) loaded particles were characterised by C, H, N combustion analysis, BET/BJH nitrogen sorption, electron microscopy (REM and TEM) and EDX. With the latter methods we were able to demonstrate the homogenous distribution of the Pt(II) compound on and in the silica particles, and no Pt(II) bulk precipitate had formed. The in vitro cytotoxicity in a human cancer cell line (HeLa) has been determined for one of the new platinum compounds adsorbed to mesoporous silica particles of different size, and compared with the corresponding compound in solution. The IC50 data are similar in all cases, suggesting that the release of the Pt(II) compound was relatively fast and possibly occurred before the particles reached the cells. Overall, the platinum drug is chemically stable on silica and retained its activity upon prolonged storage.

Keywords: Cytotoxicity, mesoporous silica, nanoparticles, platinum compounds.

330 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, the security of identification systems can be easily attacked by various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs) based on the types of loss functions and optimizers. The types of CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various loss optimizers which include Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We realize that choosing the correct loss function for each model is crucial since different loss functions lead to different errors on the same evaluation. By using a subset of the Livdet 2017 database, we validate our approach to compare the generalization power. It is important to note that we use a subset of LiveDet and the database is the same across all training and testing for each model. This way, we can compare the performance, in terms of generalization, for the unseen data across all different models. The best CNN (AlexNet) with the appropriate loss function and optimizers result in more than 3% of performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also contains the models with high accuracy associated with parameters and mean average error rates to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has less complexity over other CNN models, it is proven to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied in our final model.
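
One possible realisation of the loss-function/optimizer grid search described above is sketched below using Keras; the tiny CNN is only a stand-in for AlexNet/VGGNet/ResNet, the random arrays are placeholders for LivDet 2017 images, and Center Loss is omitted because it is not a built-in Keras loss.

```python
import numpy as np
from tensorflow import keras

def small_cnn(input_shape=(64, 64, 1), n_classes=2):
    """Tiny stand-in CNN (the study used AlexNet, VGGNet and ResNet)."""
    return keras.Sequential([
        keras.Input(shape=input_shape),
        keras.layers.Conv2D(16, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Conv2D(32, 3, activation="relu"),
        keras.layers.GlobalAveragePooling2D(),
        keras.layers.Dense(n_classes, activation="softmax"),
    ])

# Placeholder live/spoof data (LivDet 2017 images would be loaded here instead).
x = np.random.rand(200, 64, 64, 1).astype("float32")
y = keras.utils.to_categorical(np.random.randint(0, 2, 200), 2)

losses = ["categorical_crossentropy", "categorical_hinge", "cosine_similarity"]
optimizers = ["adam", "sgd", "rmsprop", "adadelta", "adagrad", "nadam"]

for loss in losses:
    for opt in optimizers:
        model = small_cnn()
        model.compile(optimizer=opt, loss=loss, metrics=["accuracy"])
        hist = model.fit(x, y, epochs=1, batch_size=32, verbose=0)
        print(f"{loss:>24s} + {opt:<8s} acc={hist.history['accuracy'][-1]:.3f}")
```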

Keywords: Anti-spoofing, CNN, fingerprint recognition, loss function, optimizer.

329 Making Food Science Education and Research Activities More Attractive for University Students and Food Enterprises by Utilizing Open Innovative Space Approach

Authors: A-M. Saarela

Abstract:

At the Savonia University of Applied Sciences (UAS), curriculum and studies have been improved by applying an Open Innovation Space approach (OIS). It is based on multidisciplinary action learning. The key elements of OIS-ideology are work-life orientation, and student-centric communal learning. In this approach, every participant can learn from each other and innovations will be created. In this social innovation educational approach, all practices are carried out in close collaboration with enterprises in real-life settings, not in classrooms. As an example, in this paper, Savonia UAS’s Future Food RDI hub (FF) shows how OIS practices are implemented by providing food product development and consumer research services for enterprises in close collaboration with academicians, students and consumers. In particular one example of OIS experimentation in the field is provided by a consumer research carried out utilizing verbal analysis protocol combined with audiovisual observation (VAP-WAVO). In this case, all co-learners were acting together in supermarket settings to collect the relevant data for a product development and the marketing department of a company. The company benefitted from the results obtained, students were more satisfied with their studies, educators and academicians were able to obtain good evidence for further collaboration as well as renewing curriculum contents based on the requirements of working life. In addition, society will benefit over time as young university adults find careers more easily through their OIS related food science studies. Also this knowledge interaction model re-news education practices and brings working-life closer to educational research institutes.

Keywords: Collaboration, education, food science, industry, knowledge transfer, RDI, student.

328 Operational Analysis of Urban Intelligent Transportation System and Strategies for Future Development - Taking Calling Service of Taxi in Wuhan as an Example

Authors: Wang Xu, Yao Yangyang, Lin Ying, Wang Zhenzhen

Abstract:

Intelligent Transportation Systems integrate various modern advanced technologies into the ground transportation system, and they will be the goal of urban transport systems in the future because of their comprehensive effects. However, they also bring some problems, such as project performance assessment, fairness among benefiting groups, and fund management, which are directly related to their operation and implementation. Wuhan has difficulties in organizing transportation because of its natural features (rivers and lakes); therefore, the taxi calling service plays an important role in its transportation. This paper studies the taxi calling service in Wuhan, based on quantitative and qualitative analysis. It analyzes the service's operations management systematically, including its business model, finance, usage and user evaluation. As for the business model, the government leads the operation at the initial stage, and a third party dominates the operation at the mature stage, which not only eases the pressure on the third party and benefits the spread of the calling service at the initial stage, but also alleviates the financial pressure on the government and improves the efficiency of the operation at the mature stage. As for finance, the analysis shows that this service brings a heavy financial burden for equipment, but this will be alleviated in the future as the service spreads. As for the usage analysis, data comparison shows that this service can bring some benefits for taxi drivers, and the temporal and spatial distribution of usage has certain features. As for user evaluation, the paper analyzes the user groups and their reasons for choosing the service. Finally, according to the analysis above, the paper puts forward the potentials, limitations, and future development strategies for the service.

Keywords: Assessment, Calling service of taxi, Operations management, Strategies, Using groups.

327 Introductory Design Optimisation of a Machine Tool using a Virtual Machine Concept

Authors: Johan Wall, Johan Fredin, Anders Jönsson, Göran Broman

Abstract:

Designing modern machine tools is a complex task. A simulation tool to aid the design work, a virtual machine, has therefore been developed in earlier work. The virtual machine considers the interaction between the mechanics of the machine (including structural flexibility) and the control system. This paper exemplifies the usefulness of the virtual machine as a tool for product development. An optimisation study is conducted aiming at improving the existing design of a machine tool regarding weight and manufacturing accuracy at maintained manufacturing speed. The problem can be categorised as constrained multidisciplinary multiobjective multivariable optimisation. Parameters of the control system and geometric quantities of the machine are used as design variables. This results in a mix of continuous and discrete variables, and an optimisation approach using a genetic algorithm is therefore deployed. The accuracy objective is evaluated according to international standards. The complete system model shows non-deterministic behaviour. A strategy to handle this, based on statistical analysis, is suggested. The weight of the main moving parts is reduced by more than 30 per cent and the manufacturing accuracy is improved by more than 60 per cent compared to the original design, with no reduction in manufacturing speed. It is also shown that interaction effects exist between the mechanics and the control, i.e. this improvement would most likely not have been possible with a conventional sequential design approach within the same time, cost and general resource frame. This indicates the potential of the virtual machine concept for contributing to improved efficiency of both complex products and the development process for such products. Companies incorporating such advanced simulation tools in their product development could thus improve their own competitiveness as well as contribute to improved resource efficiency of society at large.
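
Because the design variables mix continuous control parameters with discrete geometric choices, a genetic algorithm was used. The sketch below shows one simple way such a mixed encoding can be handled (real-valued genes mutated with Gaussian noise, discrete genes resampled from their allowed set); the variable names, bounds and the toy objective are hypothetical stand-ins, not the machine-tool model.

```python
import random

# Hypothetical mixed design vector: two continuous control gains + one discrete size choice.
CONT_BOUNDS = [(0.1, 10.0), (0.0, 5.0)]      # e.g. controller gains
DISCRETE_SET = [20, 30, 40, 50]              # e.g. a structural cross-section size, mm

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in CONT_BOUNDS] + [random.choice(DISCRETE_SET)]

def fitness(ind):
    """Toy objective standing in for the weight/accuracy evaluation of the virtual machine."""
    g1, g2, size = ind
    return (g1 - 3.0) ** 2 + (g2 - 1.5) ** 2 + 0.01 * size   # minimise

def mutate(ind, rate=0.3):
    out = ind[:]
    for i, (lo, hi) in enumerate(CONT_BOUNDS):
        if random.random() < rate:               # continuous gene: Gaussian perturbation
            out[i] = min(hi, max(lo, out[i] + random.gauss(0, 0.3 * (hi - lo))))
    if random.random() < rate:                   # discrete gene: resample from allowed set
        out[-1] = random.choice(DISCRETE_SET)
    return out

pop = [random_individual() for _ in range(40)]
for _ in range(50):
    pop.sort(key=fitness)
    parents = pop[:10]                           # truncation selection
    pop = parents + [mutate(random.choice(parents)) for _ in range(30)]

print("best design:", min(pop, key=fitness))
```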

Keywords: Machine tools, Mechatronics, Non-deterministic, Optimisation, Product development, Virtual machine

326 Influence of a Company’s Dynamic Capabilities on Its Innovation Capabilities

Authors: Lovorka Galetic, Zeljko Vukelic

Abstract:

The advanced concepts of strategic and innovation management in the sphere of company dynamic and innovation capabilities, and achieving their mutual alignment and a synergy effect, are important elements in business today. This paper analyses the theory and empirically investigates the influence of a company's dynamic capabilities on its innovation capabilities. A new multidimensional model of dynamic capabilities is presented, consisting of five factors appropriate to real-time requirements, while innovation capabilities are considered pursuant to the official OECD and Eurostat standards. After the examination of dynamic and innovation capabilities indicated their theoretical links, an empirical study testing the model and examining the influence of a company's dynamic capabilities on its innovation capabilities showed significant results. In the study, a research model was posed to relate company dynamic and innovation capabilities. One side of the model features the variables that are the determinants of dynamic capabilities defined through their factors, while the other side features the determinants of innovation capabilities pursuant to the official standards. With regard to the research model, five hypotheses were set. The study was performed in late 2014 on a representative sample of large and very large Croatian enterprises with a minimum of 250 employees. The research instrument was a questionnaire administered to company top management. For both variables, the position of the company was assessed in comparison to industry competitors on a five-point scale. In order to test the hypotheses, correlation tests were performed to determine whether there is a correlation between each individual factor of company dynamic capabilities and the existence of its innovation capabilities, in line with the research model. The results indicate a strong correlation between a company's possession of dynamic capabilities, in terms of the factors of the new multidimensional model presented in this paper, and its possession of innovation capabilities. Based on the results, all five hypotheses were accepted. Ultimately, it was concluded that there is a strong association between the dynamic and innovation capabilities of a company.

Keywords: Dynamic capabilities, innovation capabilities, competitive advantage, business results.

325 QR Technology to Automate Health Condition Detection Payment System: A Case Study in Schools of the Kingdom of Saudi Arabia

Authors: Amjad Alsulami, Farah Albishri, Kholod Alzubidi, Lama Almehemadi, Salma Elhag

Abstract:

Food allergy is a common and rising problem among children. Many students have their first allergic reaction at school; one such reaction is anaphylaxis, which can be fatal. This study found that several schools' processes lacked safety regulations and information on how to handle allergy issues and chronic diseases like diabetes, and that students were not supervised or monitored during the cafeteria purchasing process. Academic institutions make no obvious prevention effort when students purchase food containing allergens or food that negatively impacts the health status of students who suffer from chronic diseases. The stability of students' health must be maintained because it greatly affects their performance and educational achievement. To address this issue, this paper uses business process reengineering to propose the automation of the whole food-purchasing process, which will aid in detecting and avoiding allergic occurrences and preventing any side effects from eating foods that conflict with students' health. This may be achieved by designing a smart card with an embedded QR code that reveals which foods cause an allergic reaction in a student. A survey was distributed to determine and examine how cafeterias handle allergic children and whether any management or policy is applied in the schools. The survey findings indicate that the integration of QR technology into the food purchasing process would improve health condition detection. Families agreed that the suggested solution would be advantageous because it ensures their children avoid foods they are not allowed to eat. Moreover, by analyzing and simulating the as-is process and the suggested process, the results demonstrate an improvement in both quality and time.
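
A minimal sketch of how a student's allergy profile might be packed into a QR code for such a smart card is given below, using the third-party `qrcode` Python package. The field names and the idea of storing the profile as JSON are illustrative assumptions, not the system's actual data format; a production system would more likely encode only an ID that is looked up in a secure database.

```python
import json
import qrcode   # third-party package: pip install qrcode[pil]

# Hypothetical student health profile (illustrative fields only).
profile = {
    "student_id": "S-10482",
    "allergies": ["peanut", "egg"],
    "chronic_conditions": ["type 1 diabetes"],
}

img = qrcode.make(json.dumps(profile))     # encode the profile as a QR code image
img.save("student_S-10482_qr.png")
print("QR card image written; the cafeteria scanner would decode it and flag conflicting foods.")
```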

Keywords: QR code, smart card, food allergies, Business Process reengineering, health condition detection.

324 Lung Cancer Detection and Multi Level Classification Using Discrete Wavelet Transform Approach

Authors: V. Veeraprathap, G. S. Harish, G. Narendra Kumar

Abstract:

Uncontrolled growth of abnormal cells in the lung in the form of a tumor can be either benign (non-cancerous) or malignant (cancerous). Patients with Lung Cancer (LC) have an average life expectancy of about five years, provided there is timely diagnosis, detection and prediction, which reduces the need for treatment options such as risky invasive surgery and increases the survival rate. Computed Tomography (CT), Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI) are commonly used for earlier detection of cancer. A Gaussian filter along with a median filter is used for smoothing and noise removal, and Histogram Equalization (HE) for image enhancement gives the best results without requiring further expert opinions. The lung cavities are extracted, the background other than the two lung cavities is completely removed, and the right and left lungs are segmented separately. Region-property measurements (area, perimeter, diameter, centroid and eccentricity) are taken for the segmented tumor image, while texture is characterized by Gray-Level Co-occurrence Matrix (GLCM) functions; the extracted features of the Region of Interest (ROI) are given as input to the classifier. Two levels of classification are employed: K-Nearest Neighbor (KNN) is used for determining the patient condition as normal or abnormal, while an Artificial Neural Network (ANN) is used for identifying the cancer stage. The Discrete Wavelet Transform (DWT) algorithm is used for the main feature extraction, leading to the best efficiency. The developed technology shows encouraging results for real-time information and online detection for future research.
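
A compact sketch of the feature pipeline described above (wavelet decomposition, GLCM texture measures, then a first-level KNN normal/abnormal decision) is shown below; it uses PyWavelets and scikit-image on random placeholder images rather than CT slices, and the second-level ANN staging step is omitted for brevity.

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def dwt_glcm_features(img_u8):
    """Single-level 2-D DWT subband energies plus GLCM texture measures for one 8-bit ROI."""
    cA, (cH, cV, cD) = pywt.dwt2(img_u8.astype(float), "haar")
    wavelet_energy = [np.mean(np.abs(b)) for b in (cA, cH, cV, cD)]
    glcm = graycomatrix(img_u8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    texture = [graycoprops(glcm, p)[0, 0]
               for p in ("contrast", "homogeneity", "energy", "correlation")]
    return wavelet_energy + texture

# Placeholder "segmented lung ROI" images and labels (0 = normal, 1 = abnormal).
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(60, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 2, size=60)

X = np.array([dwt_glcm_features(im) for im in images])
knn = KNeighborsClassifier(n_neighbors=5).fit(X[:40], labels[:40])   # first-level classifier
print("held-out accuracy (placeholder data):", knn.score(X[40:], labels[40:]))
```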

Keywords: ANN, DWT, GLCM, KNN, ROI, artificial neural networks, discrete wavelet transform, gray-level co-occurrence matrix, k-nearest neighbor, region of interest.

323 Cold Flow Investigation of Primary Zone Characteristics in Combustor Utilizing Axial Air Swirler

Authors: Yehia A. Eldrainy, Mohammad Nazri Mohd. Jaafar, Tholudin Mat Lazim

Abstract:

This paper presents a cold-flow simulation study of a small gas turbine combustor, performed on a laboratory-scale test-rig geometry. The main objective of this investigation is to obtain physical insight into the main vortex, which is responsible for the efficient mixing of fuel and air. Such models are necessary for the prediction and optimization of real gas turbine combustors. The air swirler can control combustor performance by assisting in the fuel-air mixing process and by producing a recirculation region which can act as a flame holder and influences residence time. Thus, proper selection of a swirler is needed to enhance combustor performance and to reduce NOx emissions. Three different axial air swirlers were used based on their vane angles, i.e., 30°, 45°, and 60°. Three-dimensional, viscous, turbulent, isothermal flow characteristics of the combustor model operating at room temperature were simulated via a Reynolds-Averaged Navier-Stokes (RANS) code. The model geometry was created using a solid modeller, and the meshing was done using the GAMBIT preprocessing package. Finally, the solution and analysis were carried out in the FLUENT solver. This serves to demonstrate the capability of the code for the design and analysis of real combustors. The effects of the swirlers and of the mass flow rate were examined. Details of the complex flow structure, such as vortices and recirculation zones, were obtained from the simulation model. The computational model predicts a major recirculation zone in the central region immediately downstream of the fuel nozzle and a second recirculation zone in the upstream corner of the combustion chamber. It is also shown that changes in swirler angle have significant effects on the combustor flow field as well as on pressure losses.
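
For reference, the geometric swirl number commonly used to characterise axial vane swirlers can be computed from the vane angle and the hub-to-tip diameter ratio; the sketch below evaluates it for the three vane angles studied, under an assumed hub-to-tip ratio of 0.5 (a value not stated in the abstract).

```python
import math

def swirl_number(vane_angle_deg, hub_tip_ratio):
    """Geometric swirl number of an axial vane swirler (flat-vane approximation)."""
    x = hub_tip_ratio
    return (2.0 / 3.0) * ((1 - x**3) / (1 - x**2)) * math.tan(math.radians(vane_angle_deg))

for angle in (30, 45, 60):
    print(f"{angle} deg vanes: S = {swirl_number(angle, hub_tip_ratio=0.5):.2f}")
```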

Keywords: Cold flow, numerical simulation, combustor, turbulence, axial swirler.

322 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite

Authors: F. Lazzeri, I. Reiter

Abstract:

Energy production optimization has been traditionally very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there are a large number of relevant variables that must be considered, and several strategies have been used to deal with this complex problem. This is especially true also in microgrids where many elements have to adjust their performance depending on the future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R and web services built and deployed with different components of Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that enables to easily build, deploy, and share predictive analytics solutions; SQL database, a Microsoft database service for app developers; and PowerBI, a suite of business analytics tools to analyze data and share insights. Our results show that Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful to predict hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity and dew point) can play a crucial role in improving the accuracy of the forecasting solution. Data cleaning and feature engineering methods performed in R and different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile and ARIMA) will be presented, and results and performance metrics discussed.
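
The three experiments in the paper were built in R and deployed through Azure Machine Learning; as an independent illustration of the same idea (tree-ensemble quantile regression of hourly load on lagged demand and weather features), a short scikit-learn sketch is given below. It is not the authors' pipeline: the Fast Forest Quantile module is approximated here by gradient boosting with a quantile loss, and the data frame columns are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical hourly microgrid dataset: demand plus weather features (placeholder values).
rng = np.random.default_rng(0)
n = 24 * 90                                                  # 90 days of hourly records
df = pd.DataFrame({
    "hour": np.tile(np.arange(24), n // 24),
    "temperature": 15 + 10 * rng.random(n),
    "wind": 5 * rng.random(n),
    "humidity": 40 + 40 * rng.random(n),
    "dew_point": 5 + 10 * rng.random(n),
})
df["load_kw"] = (50 + 2 * df["temperature"]
                 + 5 * np.sin(df["hour"] / 24 * 2 * np.pi)
                 + rng.normal(0, 3, n))
df["load_lag_24h"] = df["load_kw"].shift(24)                 # simple feature engineering
df = df.dropna()

X = df[["hour", "temperature", "wind", "humidity", "dew_point", "load_lag_24h"]]
y = df["load_kw"]

# Median (0.5-quantile) forecaster, analogous in spirit to Fast Forest Quantile regression.
model = GradientBoostingRegressor(loss="quantile", alpha=0.5, n_estimators=200)
model.fit(X[:-24], y[:-24])                                  # hold out the last day
print("next-day median forecast (first 5 hours):", model.predict(X[-24:])[:5].round(1))
```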

Keywords: Time-series, features engineering methods for forecasting, energy demand forecasting, Azure machine learning.

321 Life Table and Reproductive Table Parameters of Scolothrips Longicornis (Thysanoptera: Thripidae) as a Predator of Two-Spotted Spider Mite, Tetranychus Turkestani (Acari: Tetranychidae)

Authors: Mehdi Gheibi, Shahram Hesami

Abstract:

Scolothrips longicornis Priesner is one of the important predators of tetranychid mites, with a wide distribution throughout Iran. Life table and population growth parameters of S. longicornis feeding on the two-spotted spider mite, Tetranychus turkestani Ugarov & Nikolski, were investigated under laboratory conditions (26±1 ºC, 65±5% R.H. and 16L:8D). To carry out these experiments, S. longicornis colonies reared on cowpea infested with T. turkestani were prepared. Eggs less than 24 hours old were selected and reared. The emerged larvae fed directly on cowpea leaf discs infested with T. turkestani. Thirty 24-hour-old females of S. longicornis were selected and released on infested leaf discs. They were transferred daily to a new leaf disc and the eggs laid were counted. The experiment continued until the last thrips had died. The results showed that the mean age at death of the adult female thrips was between 21 and 25 days, which is nearly equal to the life expectancy (ex) at the time of adult eclosion. Parameters of the reproductive table, including the gross reproductive rate, net reproductive rate, intrinsic rate of natural increase and finite rate of increase, were 48.91, 37.63, 0.26 and 2.3, respectively. Mean eggs per female per day, mean fertile eggs per female per day, gross hatch rate, mean net age-specific fertility, mean net age-specific fecundity, net fertility rate and net fecundity rate were 2.23, 1.76, 0.87, 13.87, 14.26, 69.1 and 78.5, respectively. The sex ratio of the offspring was also recorded daily. The highest sex ratio for females was 0.88 on the first day of oviposition. The sex ratio decreased gradually and fell below 0.46 after day 26, when the oviposition rate declined. It therefore seems that maintaining a rearing culture of the predatory thrips for mass rearing later than 26 days after egg-laying commences is not profitable.
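
For readers unfamiliar with how such demographic statistics are obtained from age-specific survival (lx) and fecundity (mx) schedules, the short sketch below computes the gross reproductive rate, net reproductive rate (R0), mean generation time, intrinsic rate of increase r (from the Euler-Lotka equation) and finite rate of increase λ = e^r; the lx/mx values are placeholder numbers, not the study's data.

```python
import numpy as np
from scipy.optimize import brentq

# Placeholder daily schedules: x = age (days), lx = survivorship, mx = female offspring/female.
x  = np.arange(0, 30)
lx = np.clip(1.0 - 0.03 * x, 0, None)
mx = np.where((x >= 10) & (x <= 26), 2.0, 0.0)

GRR = mx.sum()                                   # gross reproductive rate
R0  = (lx * mx).sum()                            # net reproductive rate
T   = (x * lx * mx).sum() / R0                   # mean generation time

# Euler-Lotka equation: sum over ages of exp(-r*x) * lx * mx = 1  ->  solve for r
euler_lotka = lambda r: (np.exp(-r * x) * lx * mx).sum() - 1.0
r = brentq(euler_lotka, 1e-6, 2.0)
lam = np.exp(r)                                  # finite rate of increase

print(f"GRR={GRR:.2f}  R0={R0:.2f}  T={T:.2f} d  r={r:.3f}/d  lambda={lam:.3f}/d")
```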

Keywords: Tetranychus, Scolothrips, Demography, Life table, Reproductive table

320 Knowledge Management Strategies within a Corporate Environment of Papers

Authors: Daniel J. Glauber

Abstract:

Knowledge transfer between personnel, supported by a strategic approach to knowledge management, could improve an organization's competitive advantage in the marketplace. A lack of information sharing between personnel can create knowledge transfer gaps and restrict decision-making processes. Knowledge transfer between personnel can potentially improve information sharing when based on an implemented knowledge management strategy. An organization's capacity to gain more knowledge is aligned with the organization's prior or existing captured knowledge. This case study attempted to understand the overall influence of a knowledge management strategy (KMS) within the corporate environment and the knowledge exchange between personnel. The significance of this study was to help understand how organizations can improve the return on investment (ROI) of a knowledge management strategy within a knowledge-centric organization. A qualitative descriptive case study was the research design selected for this study. Developing a knowledge management strategy acceptable at all levels of the organization requires cooperation in support of a common organizational goal, working with management and executive members to develop a protocol in which knowledge transfer becomes a standard practice in multiple tiers of the organization. The knowledge transfer process can be made measurable by focusing on specific elements of the organizational process, including personnel transition, to help reduce the time required to understand the job. The organization studied in this research acknowledged the need for improved knowledge management activities to help organize, retain, and distribute information throughout the workforce. Data produced from the study indicate three main themes identified by the participants: information management, organizational culture, and knowledge sharing within the workforce. These themes indicate a possible connection between an organization's KMS, the organization's culture, knowledge sharing, and knowledge transfer.

Keywords: Knowledge management strategies, knowledge transfer, knowledge management, knowledge capacity.

319 LTE Performance Analysis in the City of Bogota Northern Zone for Two Different Mobile Broadband Operators over Qualipoc

Authors: Víctor D. Rodríguez, Edith P. Estupiñán, Juan C. Martínez

Abstract:

The evolution of mobile broadband technologies has allowed user download rates to increase for the current services. The evaluation of technical parameters at the link level is of vital importance to validate the quality and veracity of the connection, thus avoiding large losses of data, time and productivity. Some of these failures may occur between the eNodeB (Evolved Node B) and the user equipment (UE), so the link between the end device and the base station can be observed. LTE (Long Term Evolution) is considered one of the IP-oriented mobile broadband technologies that work stably for data and for VoIP (Voice over IP) on devices that support it. This research presents a technical analysis of the connection and channeling processes between the UE and the eNodeB using the TAC (Tracking Area Code) variables, together with an analysis of performance variables (throughput, Signal to Interference and Noise Ratio (SINR)). Three measurement scenarios were proposed in the city of Bogotá using QualiPoc, in which two operators were evaluated (Operator 1 and Operator 2). Once the data were obtained, an analysis of the variables was performed, determining that the data obtained in the different transmission modes vary depending on the parameters BLER (Block Error Rate), throughput and SNR (Signal-to-Noise Ratio). For both operators, differences in transmission modes were detected, and this is reflected in the quality of the signal. In addition, because the two operators work at different frequencies, it can be seen that Operator 1, despite having spectrum in Band 7 (2600 MHz) like Operator 2, is reassigning traffic to a lower band, AWS (1700 MHz); the difference in signal quality with respect to the data connections established by Operator 2, and the difference found in the transmission modes assigned by the eNodeB of Operator 1, are remarkable.
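
To relate the two measured quantities, a rough upper bound on achievable LTE throughput can be estimated from SINR using the Shannon capacity of the allocated bandwidth; the sketch below does this for a 20 MHz carrier, and the implementation-margin factor is an assumed value, since real throughput also depends on the transmission mode, modulation and coding scheme, and BLER discussed above.

```python
import math

def shannon_throughput_mbps(sinr_db, bandwidth_hz=20e6, implementation_factor=0.75):
    """Rough LTE throughput bound: eta * B * log2(1 + SINR)."""
    sinr = 10 ** (sinr_db / 10)                  # dB -> linear
    return implementation_factor * bandwidth_hz * math.log2(1 + sinr) / 1e6

for sinr_db in (0, 10, 20, 30):
    print(f"SINR {sinr_db:>2} dB -> ~{shannon_throughput_mbps(sinr_db):.0f} Mbps upper bound")
```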

Keywords: BLER, LTE, Network, Qualipoc, SNR.

318 Microbial Assessment of Dairy Byproducts in Albania as a Basis for Consumer Safety

Authors: Klementina Puto, Ermelinda Nexhipi, Evi Llaka

Abstract:

Dairy by-products, due to their composition, are a fairly good environment for the growth of microorganisms. Microbial populations have a significant impact on the production of cheese, butter, yogurt, etc. in terms of their organoleptic quality, and at the same time some microorganisms also cause their spoilage. In this paper, the microbiological contamination of soft cheese, butter and yogurt produced in the country (domestic) and imported is assessed as an indicator of hygiene with an impact on public health. The study extended from September 2018 to June 2019 and was divided into three periods: September-December, January-March, and April-June. During this study, a total of 120 samples were analyzed, of which 60 were samples of locally produced cheese and butter, and 60 were samples of imported soft cheese and butter. The microbial indicators analyzed were Staphylococcus aureus and E. coli. Analyses were conducted at the Food Safety Laboratory (FSIV) in Tirana in accordance with EU Regulation 2073/2005. Sampling was performed according to the specific international standards for these products (ISO 6887 and ISO 8261). Sampling and transport of samples were done under sterile conditions. Also, samples were coded to preserve the anonymity of the subjects. After the analysis, the domestically produced soft cheeses were found to be more contaminated with S. aureus and E. coli than the imported products. Meanwhile, the imported butter samples analyzed were within the norms, compared to the domestic ones. Based on the results, it was concluded that the microbial quality of the samples of cheese, butter and yogurt analyzed remains a real hygiene problem in Albania. The study will also serve business operators in Albania in improving their practices to ensure good hygiene on the basis of the HACCP plan and to provide a guarantee of consumer health.

Keywords: Consumer, health, dairy, by-products, microbial.

317 The Nuclear Energy Museum in Brazil: Creative Solutions to Transform Science Education into Meaningful Learning

Authors: Denise Levy, Helen J. Khoury

Abstract:

Nuclear technology is a controversial issue among a great share of the Brazilian population. Misinformation and common wrong beliefs confuse public’s perceptions and the scientific community is expected to offer a wider perspective on the benefits and risks resulting from ionizing radiation in everyday life. Attentive to the need of new approaches between science and society, the Nuclear Energy Museum, in northeast Brazil, is an initiative created to communicate the growing impact of the beneficial applications of nuclear technology in medicine, industry, agriculture and electric power generation. Providing accessible scientific information, the museum offers a rich learning environment, making use of different educational strategies, such as films, interactive panels and multimedia learning tools, which not only increase the enjoyment of visitors, but also maximize their learning potential. Developed according to modern active learning instructional strategies, multimedia materials are designed to present the increasingly role of nuclear science in modern life, transforming science education into a meaningful learning experience. In year 2016, nine different interactive computer-based activities were developed, presenting curiosities about ionizing radiation in different landmarks around the world, such as radiocarbon dating works in Egypt, nuclear power generation in France and X-radiography of famous paintings in Italy. Feedback surveys have reported a high level of visitors’ satisfaction, proving the high quality experience in learning nuclear science at the museum. The Nuclear Energy Museum is the first and, up to the present time, the only permanent museum in Brazil devoted entirely to nuclear science.

Keywords: Nuclear technology, multimedia learning tools, science museum, society and education.

316 Identification of Complex Sense-antisense Gene's Module on 17q11.2 Associated with Breast Cancer Aggressiveness and Patient's Survival

Authors: O. Grinchuk, E. Motakis, V. Kuznetsov

Abstract:

A sense-antisense gene pair (SAGP) is a pair of two oppositely transcribed genes sharing a common region on a chromosome. In mammalian genomes, SAGPs can be organized in more complex sense-antisense gene architectures (CSAGA) in which at least one gene shares loci with two or more antisense partners. Many dozens of CSAGAs can be found in the human genome. However, CSAGAs have not been systematically identified and characterized in the context of their role in human diseases, including cancers. In this work we characterize the structural-functional properties of a cluster of five genes - TMEM97, IFT20, TNFAIP1, POLDIP2 and TMEM199 - termed the TNFAIP1/POLDIP2 module. This cluster is organized as a CSAGA in cytoband 17q11.2. Affymetrix U133A&B expression data from two large cohorts of breast cancer patients (410 patients in total) and patient survival data were used. For both studied cohorts, we demonstrate (i) strong and reproducible transcriptional co-regulatory patterns of the genes of the TNFAIP1/POLDIP2 module in breast cancer cell subtypes, and significant associations of the TNFAIP1/POLDIP2 CSAGA with (ii) amplification of the CSAGA region in breast cancer, (iii) cancer aggressiveness (e.g. genetic grades) and (iv) disease-free patient survival. Moreover, gene pairs of this module demonstrate a strong synergetic effect in the prognosis of the time of breast cancer relapse. We suggest that the TNFAIP1/POLDIP2 cluster can be considered a novel type of structural-functional gene module in the human genome.
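
The disease-free survival associations reported here are of the kind typically assessed with Kaplan-Meier curves and a log-rank test; a minimal sketch using the `lifelines` package is given below with placeholder expression-defined groups, since the actual Affymetrix data and grouping rule are not reproduced in the abstract.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Placeholder cohort: disease-free survival time (months), relapse event flag,
# and a high/low group defined e.g. by TNFAIP1/POLDIP2 module expression.
rng = np.random.default_rng(0)
time = rng.exponential(60, 410)
event = rng.integers(0, 2, 410)
high_expr = rng.integers(0, 2, 410).astype(bool)

kmf = KaplanMeierFitter()
kmf.fit(time[high_expr], event[high_expr], label="module high")
print("median disease-free survival (high group):", kmf.median_survival_time_)

result = logrank_test(time[high_expr], time[~high_expr],
                      event_observed_A=event[high_expr],
                      event_observed_B=event[~high_expr])
print("log-rank p-value:", result.p_value)
```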

Keywords: Sense-antisense gene pair, complex genome architecture, TMEM97, IFT20, TNFAIP1, POLDIP2, TMEM199, 17q11.2, breast cancer, transcription regulation, survival analysis, prognosis.

315 Probiotic Potential and Antimicrobial Activity of Enterococcus faecium Isolated from Chicken Caecal and Fecal Samples

Authors: Salma H. Abu Hafsa, A. Mendonca, B. Brehm-Stecher, A. A. Hassan, S. A. Ibrahim

Abstract:

Enterococci are important inhabitants of the animal intestine and are widely used in probiotic products. A probiotic strain is expected to possess several desirable properties in order to exert beneficial effects. Therefore, the objective of this study was to isolate, characterize and identify Enterococcus sp. from chicken cecal and fecal samples and to determine their potential probiotic properties. Enterococci were isolated from the ceca and feces of thirty-three clinically healthy chickens from a local farm. In vitro studies were performed to assess the antibacterial activity of the isolated LAB (using agar well diffusion and a cell-free supernatant broth technique against Salmonella enterica serotype Enteritidis), survival in acidic conditions, resistance to bile salts, and survival in simulated gastric juice at pH 2.5. Isolates were identified by biochemical carbohydrate fermentation patterns using API 50 CHL and API ZYM kits and by sequencing of 16S rDNA. An isolate belonging to the species E. faecium exhibited an inhibitory effect against S. Enteritidis, producing a clear zone of 10.30 mm or greater, and was able to grow in the co-culture medium while at the same time inhibiting the growth of S. Enteritidis. In addition, E. faecium exhibited significant resistance under highly acidic conditions at pH 2.5 for 8 h, survived well in 0.2% bile salt for 24 h, and showed the ability to survive in simulated gastric juice at pH 2.5. Based on these results, the E. faecium isolate fulfills some of the criteria to be considered a probiotic strain and could therefore be used as a feed additive with good potential for controlling S. Enteritidis in chickens. However, in vivo studies are needed to determine the safety of the strain.

Keywords: Acid tolerance, antimicrobial activity, Enterococcus faecium, probiotic.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2888
314 Meta Model Based EA for Complex Optimization

Authors: Maumita Bhattacharya

Abstract:

Evolutionary algorithms are population-based, stochastic search techniques widely used as efficient global optimizers. However, many real-life optimization problems require finding optimal solutions to complex, high-dimensional, multimodal problems involving computationally very expensive fitness function evaluations. The use of evolutionary algorithms in such problem domains is thus practically prohibitive. An attractive alternative is to build meta models, i.e. approximations of the actual fitness functions to be evaluated. These meta models are orders of magnitude cheaper to evaluate than the actual function. Many regression and interpolation tools are available to build such meta models. This paper briefly discusses the architectures and use of such meta-modeling tools in an evolutionary optimization context. We further present two evolutionary algorithm frameworks that use meta models for fitness function evaluation. The first framework, the Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model [14], reduces computation time by controlled use of meta models (in this case an approximate model generated by support vector machine regression) to partially replace actual function evaluations with approximate ones. However, the underlying assumption in DAFHEA is that the training samples for the meta model are generated from a single uniform model, which does not account for uncertain scenarios involving noisy fitness functions. The second model, DAFHEA-II, an enhanced version of the original DAFHEA framework, incorporates a multiple-model based learning approach for the support vector machine approximator to handle noisy functions [15]. Empirical results obtained by evaluating the frameworks on several benchmark functions demonstrate their efficiency.
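The controlled-evaluation idea described above can be illustrated with a minimal sketch of a generic surrogate-assisted evolutionary loop: a support vector regression model is trained on individuals already evaluated with the true fitness function, the surrogate pre-screens offspring, and only the most promising candidates are re-evaluated exactly. This is an illustration under assumed settings (sphere test function, population size, screening fraction), not the DAFHEA algorithm itself.

```python
# Hedged sketch of a surrogate-assisted (meta-model based) EA, not the DAFHEA algorithm.
import numpy as np
from sklearn.svm import SVR

def true_fitness(x):
    """Stand-in for an expensive objective (sphere function, minimization)."""
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, pop_size, generations, exact_frac = 10, 40, 30, 0.25

# Initial population evaluated exactly; these points also train the surrogate.
pop = rng.uniform(-5, 5, size=(pop_size, dim))
fit = np.array([true_fitness(x) for x in pop])
archive_X, archive_y = pop.copy(), fit.copy()

for gen in range(generations):
    surrogate = SVR(kernel="rbf", C=10.0).fit(archive_X, archive_y)

    # Generate offspring by Gaussian mutation of randomly chosen parents.
    parents = pop[rng.integers(0, pop_size, size=pop_size)]
    offspring = parents + rng.normal(scale=0.5, size=parents.shape)

    # Pre-screen offspring with the cheap surrogate; evaluate only the best exactly.
    predicted = surrogate.predict(offspring)
    n_exact = max(1, int(exact_frac * pop_size))
    promising = np.argsort(predicted)[:n_exact]
    exact_vals = np.array([true_fitness(offspring[i]) for i in promising])

    # Update the training archive and keep the best individuals found so far.
    archive_X = np.vstack([archive_X, offspring[promising]])
    archive_y = np.concatenate([archive_y, exact_vals])
    pop = np.vstack([pop, offspring[promising]])
    fit = np.concatenate([fit, exact_vals])
    keep = np.argsort(fit)[:pop_size]
    pop, fit = pop[keep], fit[keep]

print("best fitness after", generations, "generations:", fit.min())
```

The key saving is that only a fraction of offspring per generation (here 25%) triggers an exact evaluation; the rest are filtered out using the surrogate alone.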

Keywords: Meta model, Evolutionary algorithm, Stochastic technique, Fitness function, Optimization, Support vector machine.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2066
313 Implementing Education 4.0 Trends in Language Learning

Authors: Luz Janeth Ospina M.

Abstract:

The fourth industrial revolution is substantially changing the role of education and, therefore, the role of instructors and learners at all levels. Education 4.0 is an imminent response to the needs of a globalized world where humans and technology are being aligned to enable endless possibilities, among them the need for students, as digital natives, to communicate effectively in at least one language besides their mother tongue, and also the requirement of further developing their own. This is an exploratory study in which a control group (N = 21), all university-level students of Spanish as a foreign language, responded, after taking a Spanish class, to an online questionnaire about the engagement, atmosphere, and environment in which their course was delivered. The aspects considered in the survey related to the instructor’s teaching style, including: (a) active, hands-on learning; (b) flexibility for in-class activities, easily switching between small group work, individual work, and whole-class discussion; and (c) integrating technology into the classroom. Strongly believing in these principles, the instructor deliberately taught the course in a SCALE-UP room, as such a room can facilitate a positive and encouraging learning environment. These aspects are trends related to Education 4.0 and have become integral to the instructor’s pedagogical stance, which calls for a constructive-affective role instead of a transmissive one. As expected in a learning environment designed to (a) foster student engagement and (b) improve student outcomes, the subjects were highly engaged, which was partly due to that environment. An overwhelming majority of students (all but one) agreed or strongly agreed that the atmosphere and the environment were ideal. The outcomes of this study are relevant and indicate that it is about time for teachers to build a meaningful connection between humans and technology. We should see the trends of Education 4.0 not as a threat but as practices that should be in the hands of critical and creative instructors whose pedagogical stance responds to the needs of learners in the 21st century.

Keywords: Active learning, education 4.0, higher education, pedagogical stance.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 699
312 Seismic Fragility Assessment of Strongback Steel Braced Frames Subjected to Near-Field Earthquakes

Authors: Mohammadreza Salek Faramarzi, Touraj Taghikhany

Abstract:

In this paper, the seismic fragility of a recently developed hybrid structural system, known as the strongback system (SBS), is assessed. In this system, an elastic vertical truss is formed to mitigate the occurrence of a soft-story mechanism and to improve the distribution of story drifts over the height of the structure. The strengthened members of the braced span are designed to remain substantially elastic at levels of excitation where soft-story mechanisms are likely to occur and to impose a nearly uniform story drift distribution. Because of the distinctive characteristics of near-field ground motions, it is necessary to study the effect of these records on the seismic performance of the SBS. To this end, a set of 56 near-field ground motion records suggested by the FEMA P695 methodology is used. For the fragility assessment, nonlinear dynamic analyses are carried out in OpenSEES following the procedure recommended in the HAZUS technical manual. Four damage states are considered: slight, moderate, extensive, and complete damage (collapse). To evaluate each damage state, the inter-story drift ratio and floor acceleration are used as engineering demand parameters. Further, to extend the evaluation of the collapse state, a different collapse criterion suggested in FEMA P695 is applied. It is concluded that the SBS can significantly increase the collapse capacity and consequently decrease the collapse risk of the structure during its lifetime. Comparing the observed mean annual frequency (MAF) of exceedance of each damage state against the allowable values given in performance-based design methods, it is found that the elastic vertical truss effectively improves the structural response.
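The fragility step described above can be illustrated with a minimal sketch of one common approach (maximum-likelihood fitting of a lognormal fragility curve to exceedance counts from nonlinear dynamic analyses); this is offered as a generic illustration rather than the study's exact HAZUS-based procedure, and the intensity levels, record counts, and exceedance counts below are hypothetical placeholders.

```python
# Hedged sketch: maximum-likelihood fit of a lognormal fragility curve
# P(exceed damage state | IM) from exceedance counts at several intensity levels.
# All numbers below are hypothetical placeholders, not results from the study.
import numpy as np
from scipy.stats import norm, binom
from scipy.optimize import minimize

im = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])    # intensity levels, e.g. Sa (g)
n_records = np.array([56, 56, 56, 56, 56, 56])   # analyses per intensity level
n_exceed = np.array([2, 9, 21, 35, 47, 53])      # analyses exceeding the drift limit

def neg_log_likelihood(params):
    """Negative binomial log-likelihood of a lognormal fragility (median theta, dispersion beta)."""
    theta, beta = params
    if theta <= 0 or beta <= 0:
        return np.inf
    p = norm.cdf(np.log(im / theta) / beta)
    return -np.sum(binom.logpmf(n_exceed, n_records, p))

res = minimize(neg_log_likelihood, x0=[0.6, 0.4], method="Nelder-Mead")
theta_hat, beta_hat = res.x
print(f"median capacity ~ {theta_hat:.2f} g, dispersion ~ {beta_hat:.2f}")

# Probability of exceeding the damage state at, e.g., Sa = 0.9 g:
print("P(exceed | Sa = 0.9 g) =", norm.cdf(np.log(0.9 / theta_hat) / beta_hat))
```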

Keywords: Strongback System, Near-fault, Seismic fragility, Uncertainty, IDA, Probabilistic performance assessment.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 573
311 Effect of Impact Angle on Erosive Abrasive Wear of Ductile and Brittle Materials

Authors: Ergin Kosa, Ali Göksenli

Abstract:

Erosion and abrasion are wear mechanisms that reduce the lifetime of machine elements such as valves, pumps, and pipe systems. Both mechanisms act at the same time, causing a synergy effect that leads to rapid damage of the surface. Several parameters affect the erosive-abrasive wear rate. In this study, the effect of particle impact angle on the wear rate and wear mechanism of ductile and brittle materials was investigated. A new slurry pot was designed for the experimental investigation. Silica sand with a particle size of 200-500 μm was used as the abrasive. All tests were carried out in a sand-water mixture of 20% concentration for four hours. The impact velocity of the particles was 4.76 m/s. Steel St 37, with a Vickers hardness number (VHN) of 245, was used as the ductile material and quenched St 37, with 510 VHN, as the brittle material. After the wear tests, the morphology of the eroded surfaces was investigated with a scanning electron microscope for a better understanding of the wear mechanisms acting at different impact angles. The results indicated that the wear rate of the ductile material was higher than that of the brittle material. The maximum wear rate of the ductile material was observed at a particle impact angle of 30° and decreased as the attack angle increased further. The maximum wear rate of the brittle material occurred at an impact angle of 45° and decreased up to 90°. Ploughing was the dominant wear mechanism for the ductile material. Microcracks, which act as nucleation centers for crater formation, were detected on the surface of the ductile material; at attack angles above 30°, the number of craters decreased while their depth increased. A deformation wear mechanism was observed for the brittle material, for which the number and depth of pits decreased at impact angles above 45°. It is concluded that the wear rate cannot be directly related to the particle impact angle alone, because ductile and brittle materials react differently.

Keywords: Erosive wear, particle impact angle, silica sand, wear rate, ductile-brittle material.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 3023