Search results for: Galerkin finite element method
691 Web-Content Analysis of the Major Spanish Tourist Destinations Evaluation by Russian Tourists
Authors: Natalia Polkanova, Sergey Kazakov
Abstract:
In the second decade of the 21st century, the attractiveness of a tourism destination is becoming increasingly important for destination management. Competition in the tourism market is moving from ordinary service quality to the provision of an unforgettable emotional experience for tourists. The main purpose of the present study is to identify how tourism destinations are perceived, based on a number of factors related to their tourist attractiveness. The content analysis method was used to analyze the online tourist feedback data that are abundantly available in social media and on travel-related sites. The collected data made it possible to obtain the information necessary to understand the perceived attractiveness of the destinations and the key destination appeal factors that matter to Russian leisure travelers. The results of the present study identify the key attractiveness factors, or destination ‘properties’, that proved to be the most important for Russian leisure tourists. The study targeted five main Spanish tourism destinations, which were initially determined through in-depth interviews with a number of Russian nationals who had visited Spain at least once. The research results can be useful for the Spanish Tourism Organization representative office in Russia, as well as for other national tourism organizations, in promoting their respective destinations to Russian travelers by focusing on the main attractiveness factors identified in this study.
Keywords: Tourism destination, destination attractiveness, destination competitiveness, content analysis, unstructured image.
690 Analysis of Thermoelectric Coolers as Energy Harvesters for Low Power Embedded Applications
Authors: Yannick Verbelen, Sam De Winne, Niek Blondeel, Ann Peeters, An Braeken, Abdellah Touhafi
Abstract:
The growing popularity of solid-state thermoelectric devices in cooling applications has sparked an increasing diversity of thermoelectric coolers (TECs) on the market, commonly known as “Peltier modules”. They can also be used as generators, converting a temperature difference into electric power, and opportunities are plentiful to use these devices as thermoelectric generators (TEGs) to supply energy to low-power, autonomous embedded electronic applications. Their adoption as energy harvesters in this new domain of usage is obstructed by the complex thermoelectric models commonly associated with TEGs. Low-cost TECs for the consumer market lack the parameters required by these models because they are not intended for this mode of operation, which calls for an alternative method to obtain electric power estimations under specific operating conditions. The test setup implemented in this paper is specifically designed for benchmarking commercial, off-the-shelf TECs for use as energy harvesters in domestic environments: applications with limited temperature differences and limited available space. Its usefulness is demonstrated by testing and comparing single- and multi-stage TECs of different sizes. The effect of a boost converter stage on the thermoelectric end-to-end efficiency is also discussed.
Keywords: Thermoelectric cooler, TEC, complementary balanced energy harvesting, step-up converter, DC/DC converter, embedded systems, energy harvesting, thermal harvesting.
689 Numerical Modelling of Dust Propagation in the Atmosphere of Tbilisi City in Case of Western Background Light Air
Authors: N. Gigauri, V. Kukhalashvili, A. Surmava, L. Intskirveli, L. Gverdtsiteli
Abstract:
Tbilisi, a large city of the South Caucasus, is a junction point connecting Asia and Europe, Russia and the republics of Asia Minor. Over the last years, its atmosphere has experienced an increasing anthropogenic load. A numerical modeling method is used to study Tbilisi's atmospheric air pollution. By means of a 3D non-linear, non-steady numerical model, the peculiarities of the city's atmospheric pollution during background western light air are investigated. Spatial and temporal changes of the dust concentration are determined. The zones of high, average and low pollution, dust accumulation areas, transfer directions, etc. are identified. The numerical modeling shows that the process of air pollution by dust proceeds in four stages, which depend on the intensity of motor traffic, the micro-relief of the city, and the location of the city's main roads. In the interval 06:00-09:00 the dust concentrations grow intensively, from 09:00 to 15:00 they remain constant or decrease weakly, from 18:00 to 21:00 they increase, and from 21:00 to 06:00 they decrease. The highly polluted areas are located in the vicinity of the city center and at some peripheral territories of the city, where the maximum dust concentration at 9 PM is equal to twice the maximum allowable concentration. Similar investigations conducted for various meteorological situations will enable us to compile a map of background urban pollution and to elaborate practical measures for ambient air protection.
Keywords: Numerical modelling, source of pollution, dust propagation, western light air.
688 Tide Contribution in the Flood Event of Jeddah City: Mathematical Modelling and Different Field Measurements of the Groundwater Rise
Authors: Aïssa Rezzoug
Abstract:
This paper aims to bring new elements demonstrating that the tide causes the groundwater to rise in the shoreline band on which the urban areas occur, especially in the western coastal cities of the Kingdom of Saudi Arabia such as Jeddah. The reason for the recent inundation events in Jeddah was the groundwater rise in the city coupled, at the same time, with a strong precipitation event. This paper illustrates the tide's contribution to significantly increasing the groundwater level. It shows that the internal groundwater recharge within the urban area is caused not only by the excess water supply coming from surrounding areas due to human activity, together with the lack of a sufficient and efficient sewage system, but also by the tide effect. The research study follows a quantitative method to assess groundwater level rise risks through many in-situ measurements and mathematical modelling. The proposed approach highlights that the groundwater level in the urban areas of the city on the shoreline band reaches the high tide level without considering any input from precipitation. Despite the small tide in the Red Sea compared to other oceanic coasts, the groundwater level is considerably enhanced by the tide from the seaside and by the freshwater table from the landside of the city. Under these conditions, the groundwater level becomes high in the city and prevents the soil from evacuating quickly enough the surface flow caused by the storm event, as was observed in the last historical flood catastrophe of Jeddah in 2009.
Keywords: Flood, groundwater rise, Jeddah, tide.
687 Investigation of Compressive Strength of Slag-Based Geopolymer Concrete Incorporated with Rice Husk Ash Using 12M Alkaline Activator
Authors: Festus A. Olutoge, Ahmed A. Akintunde, Anuoluwapo S. Kolade, Aaron A. Chadee, Jovanca Smith
Abstract:
The compressive strength of geopolymer concrete (GPC) incorporating rice husk ash (RHA) and ground granulated blast furnace slag (GGBFS) was investigated; such concrete may have potential in the construction industry to replace Portland limestone cement (PLC) concrete. The sustainable construction binders used were GGBFS and RHA, and a solution of sodium hydroxide (NaOH) and sodium silicate gel (Na2SiO3) was used as the 12-molar alkaline activator. Five GPC mixes comprising fine aggregates, coarse aggregates, GGBFS and RHA, and the alkaline solution in the ratio 2:2.5:1:0.5, respectively, were prepared to achieve grade 40 concrete, and PLC was substituted with GGBFS and RHA in the ratios of 0:100, 25:75, 50:50, 75:25, and 100:0. A control mix was also prepared, comprising 100% water and 100% PLC as the cementitious material. The GPC mixes were thermally cured at 60-80 ºC in an oven for approximately 24 h. After curing for 7 and 28 days, the compressive strength test results of the hardened GPC samples showed that GPC-Mix #3, comprising 50% GGBFS and 50% RHA, was the most efficient geopolymer mix. The mix had compressive strengths of 35.71 MPa and 47.26 MPa, which are 19.87% and 8.69% higher than those of the PLC concrete samples (29.79 MPa and 43.48 MPa) after 7 and 28 days, respectively. Therefore, GPC containing GGBFS incorporated with RHA is an efficient means of decreasing the use of PLC in conventional concrete production and reducing the high amounts of CO2 emitted into the atmosphere by the construction industry.
Keywords: Alkaline solution, cementitious material, geopolymer concrete, ground granulated blast furnace slag, rice husk ash.
686 Spacecraft Neural Network Control System Design using FPGA
Authors: Hanaa T. El-Madany, Faten H. Fahmy, Ninet M. A. El-Rahman, Hassen T. Dorrah
Abstract:
Designing and implementing intelligent systems has become a crucial factor for the innovation and development of better space technology products. A neural network is a parallel system capable of resolving paradigms that linear computing cannot. A field programmable gate array (FPGA) is a digital device that offers reprogrammability and robust flexibility. For neural-network-based instrument prototypes in real-time applications, conventional application-specific VLSI neural chip design suffers from limitations in time and cost. With low-precision artificial neural network designs, FPGAs offer higher speed and smaller size for real-time applications than VLSI and DSP chips, so many researchers have made great efforts to realize neural networks (NNs) using FPGA techniques. In this paper, ANN and FPGA techniques are briefly introduced. Hardware Description Language (VHDL) code is proposed to implement ANNs, and simulation results with floating-point arithmetic are presented. Synthesis results for the ANN controller are developed using Precision RTL. The proposed VHDL implementation provides a flexible, fast method and a high degree of parallelism for implementing ANNs. The implementation of a multi-layer NN using lookup tables (LUTs) reduces the resource utilization and the execution time.
Keywords: Spacecraft, neural network, FPGA, VHDL.
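As an illustration of the LUT-based activation idea mentioned above (not the authors' VHDL), the following Python sketch builds a fixed-point sigmoid lookup table and applies it to one small feed-forward layer. The table size, word width and layer dimensions are illustrative assumptions; in hardware the same table would sit in block RAM, which is where the reported resource savings come from.

```python
import numpy as np

def build_sigmoid_lut(n_entries=256, x_min=-8.0, x_max=8.0, frac_bits=8):
    """Precompute a quantized sigmoid table, as an FPGA block RAM would hold."""
    xs = np.linspace(x_min, x_max, n_entries)
    ys = 1.0 / (1.0 + np.exp(-xs))
    # store outputs as fixed-point integers with `frac_bits` fractional bits
    return np.round(ys * (1 << frac_bits)).astype(np.int32)

def lut_sigmoid(x, lut, x_min=-8.0, x_max=8.0, frac_bits=8):
    """Activation via table lookup: clamp, index, and rescale back to float."""
    idx = np.clip(((x - x_min) / (x_max - x_min) * (len(lut) - 1)).astype(int),
                  0, len(lut) - 1)
    return lut[idx] / float(1 << frac_bits)

# Illustrative 3-input, 4-neuron layer with arbitrary weights.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(4, 3)), rng.normal(size=4)
x = np.array([0.5, -1.2, 0.3])
lut = build_sigmoid_lut()
print(lut_sigmoid(W @ x + b, lut))          # LUT-based activations
print(1.0 / (1.0 + np.exp(-(W @ x + b))))   # exact sigmoid, for comparison
```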
685 Long-Term Effect of Breastfeeding in Preschooler’s Psychomotor Development
Authors: Aurela Saliaj, Majlinda Zahaj, Bruna Pura
Abstract:
Background: Breast milk may impact early brain development, with potentially important biological, medical and social implications. There is an important discussion about the duration of breastfeeding needed to consolidate development and about how breastfeeding affects children's psychomotor development, both in the first year of life and in the following periods. Some special fats (LC-PUFA) contained in breast milk play a key role in the brain's maturation and in cognitive development and social skills. The capacities created during the breastfeeding period unfold throughout the whole lifespan. Aim of the study: In our research, we have studied the effect of breastfeeding on preschoolers' psychomotor assessment. Method: This study was conducted on a sample of 158 preschool children in Vlorë, Albania. We measured the psychometric parameters of the preschoolers with the ASQ-3 (Ages & Stages Questionnaires, Third Edition). The studied sample was divided into three groups according to breastfeeding duration (3, 6 and 12 months). Results: Children breastfed for only 3 months have definitely lower psychometric scores compared to those breastfed for 6 or more months (217 versus 239 ASQ-3 scores, respectively). Children breastfed for six and twelve months have progressively higher odds of a high level of psychomotor development compared to those breastfed for only 3 months. The psychometric domains most affected by short breastfeeding are Communication and Gross Motor. Conclusion: This leads to the conclusion that breastfeeding for at least 6 months is necessary to ensure high psychomotor parameters during childhood.
Keywords: Breastfeeding, preschoolers, psycho-motor development, psycho-motor domains.
684 Road Traffic Accidents Analysis in Mexico City through Crowdsourcing Data and Data Mining Techniques
Authors: Gabriela V. Angeles Perez, Jose Castillejos Lopez, Araceli L. Reyes Cabello, Emilio Bravo Grajales, Adriana Perez Espinosa, Jose L. Quiroz Fabian
Abstract:
Road traffic accidents are among the principal causes of traffic congestion, causing human losses, damage to health and the environment, economic losses and material damage. Traditional studies of road traffic accidents in urban zones represent a very high investment of time and money and, additionally, their results are not current. Nowadays, however, in many countries the crowdsourced GPS-based traffic and navigation apps have emerged as an important, low-cost source of information for studies of road traffic accidents and the urban congestion they cause. In this article we identified the zones, roads and specific times in Mexico City (CDMX) in which the largest numbers of road traffic accidents were concentrated during 2016. We built a database compiling information obtained from the social network known as Waze. The methodology employed was Knowledge Discovery in Databases (KDD) for the discovery of patterns in the accident reports, using data mining techniques with the help of Weka. The selected algorithms were Expectation Maximization (EM), to obtain the ideal number of clusters for the data, and k-means as the grouping method. Finally, the results were visualized with the Geographic Information System QGIS.
Keywords: Data mining, K-means, road traffic accidents, Waze, Weka.
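To make the clustering workflow concrete, the sketch below mirrors the described pipeline, EM to select the number of clusters followed by k-means for grouping, using scikit-learn rather than Weka. The coordinate array is a synthetic stand-in for the Waze-derived accident reports.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

# Placeholder for the Waze accident reports: (latitude, longitude) pairs.
rng = np.random.default_rng(42)
coords = np.vstack([rng.normal(loc=c, scale=0.02, size=(200, 2))
                    for c in [(19.43, -99.13), (19.36, -99.18), (19.48, -99.10)]])

# EM (Gaussian mixture) fitted for several candidate k; BIC picks the best.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(coords).bic(coords)
        for k in range(2, 8)}
best_k = min(bics, key=bics.get)

# k-means grouping with the chosen number of clusters.
labels = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(coords)
print("chosen k:", best_k, "cluster sizes:", np.bincount(labels))
```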
683 Automatic Fluid-Structure Interaction Modeling and Analysis of Butterfly Valve Using Python Script
Authors: N. Guru Prasath, Sangjin Ma, Chang-Wan Kim
Abstract:
A butterfly valve is a quarter-turn valve used to control the flow of a fluid through a section of pipe. Butterfly valves are used in a wide range of applications such as water distribution, sewage, and oil and gas plants. In particular, butterfly valves with larger diameters find immense application in hydro power plants to control the fluid flow. Given the cost and size constraints of running a laboratory setup, large-diameter valves are mostly studied by computational methods, which are the best and least expensive solution. CFD and FEM software are used to perform the large-scale fluid and structural valve analyses, respectively. To perform such analyses on a butterfly valve, the CAD model has to be recreated and meshed in conventional software for every set of valve dimensions, which is a time-consuming process. To overcome that issue, a Python code was created that produces the complete pre-processing setup automatically in the Salome software. Specifying the dimensions of the model directly in the Python code makes the running time comparatively lower and provides an easier way to perform the analysis of the valve. Hence, in this paper, an attempt is made to study the fluid-structure interaction (FSI) of butterfly valves by varying the valve angles and dimensions using a Python code in the pre-processing software, and results are produced.
Keywords: Butterfly valve, fluid-structure interaction, automatic CFD analysis, flow coefficient.
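The time saving described above comes from driving pre-processing with a parameterized script rather than rebuilding the CAD model by hand for every case. The sketch below shows only that general pattern: it sweeps assumed valve diameters and disc angles and writes one parameter file per case for a batch pre-processing script to read. The file layout, mesh-sizing rule and the downstream script name are hypothetical; the authors' actual Salome API calls are not reproduced here.

```python
import itertools
import json
from pathlib import Path

# Hypothetical design space for the sweep (not the paper's actual values).
diameters_mm = [300, 500, 800]
disc_angles_deg = [10, 30, 45, 60, 90]

out_dir = Path("valve_cases")
out_dir.mkdir(exist_ok=True)

for i, (d, angle) in enumerate(itertools.product(diameters_mm, disc_angles_deg)):
    case = {
        "case_id": i,
        "pipe_diameter_mm": d,
        "disc_angle_deg": angle,
        "mesh_size_mm": d / 50.0,   # illustrative mesh sizing rule
    }
    # Each JSON file would be read by a Salome batch script
    # (e.g. a hypothetical prepare_valve_case.py) to build geometry and mesh.
    (out_dir / f"case_{i:03d}.json").write_text(json.dumps(case, indent=2))

print(f"wrote {len(list(out_dir.glob('*.json')))} case definitions")
```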
682 Investigating the Transformer Operating Conditions for Evaluating the Dielectric Response
Authors: Jalal M. Abdallah
Abstract:
This paper presents an experimental investigation of the transformer dielectric response and the water content of the solid insulation. The dielectric response was measured using the hybrid Frequency Domain Spectroscopy and Polarization Current measurement method (FDS & PC). The calculation of the water content in paper is based on the water content in oil and the obtained equilibrium curves. Reference measurements were performed under equilibrium conditions for the water content in the oil and paper of the transformer at different stable temperatures (25, 50, 60 and 70 °C) in order to prepare references for evaluating the insulation behaviour under non-equilibrium conditions. Further measurements were performed in different simulated normal working modes of transformer operation at the same temperatures as the equilibrium conditions. The obtained results show that when the transformer temperature is much higher than the ambient temperature, the transformer temperature decreases immediately after disconnecting the transformer from the network, and this temperature reduction influences the insulation condition during the measuring process. In addition to the oil temperature at the places near the sensors, the temperature uniformity in the transformer, which can be changed by a large change in the transformer load before the measuring time, influences the result. The investigations have shown the extreme influence on the results of the time between disconnecting the transformer and beginning the measurements. Online monitoring of the water content in paper was carried out on the basis of online monitoring of the oil water content and the obtained equilibrium curves; these measurements were performed continuously for about 50 days, without any disconnection, in the prepared adiabatic room.
Keywords: Conductivity, moisture, temperature, oil-paper insulation, online monitoring, water content in oil.
681 A Novel Method to Manufacture Superhydrophobic and Insulating Polyester Nanofibers via a Meso-Porous Aerogel Powder
Authors: Z. Mazrouei-Sebdani, A. Khoddami, H. Hadadzadeh, M. Zarrebini
Abstract:
In this research, a waterglass-based aerogel powder was prepared by a sol-gel process and ambient pressure drying. Motivated by its limited dust release, the aerogel powder was introduced into the PET electrospinning solution in an attempt to create the bulk and surface structure required for the nanofibers to improve their hydrophobic and insulation properties. The samples were evaluated by measuring density, porosity, contact angle and heat transfer, and by FTIR, BET and SEM. According to the results, a porous silica aerogel powder was fabricated with a mean pore diameter of 24 nm and a contact angle of 145.9º. The results indicated the usefulness of the aerogel powder confined in the nanofibers for controlling surface roughness and producing superhydrophobic nanowebs with a water contact angle of 147º. This can be attributed to a multi-scale surface roughness created by the nanoweb structure itself and by the surface irregularity of the nanofibers in the presence of the aerogels, while a layer of fluorocarbon created a low surface energy. The wettability of a solid substrate is an important property that is controlled by both the chemical composition and the geometry of the surface. Also, a decreasing trend in heat transfer was observed, from 22% for the nanofibers without any aerogel powder to 8% for the nanofibers with 4% aerogel powder. The development of thermal insulating materials has become increasingly important in view of fossil energy depletion and global warming, which call for more demanding energy-saving practices.
Keywords: Superhydrophobicity, Insulation, Sol-gel, Surface energy, Roughness.
680 Daily Probability Model of Storm Events in Peninsular Malaysia
Authors: Mohd Aftar Abu Bakar, Noratiqah Mohd Ariff, Abdul Aziz Jemain
Abstract:
Storm Event Analysis (SEA) provides a method for defining rainfall events as storms, where each storm has its own amount and duration. By modelling the daily probability of different types of storms, the onset, offset and cycle of rainfall seasons can be determined and investigated. Furthermore, researchers in the field of meteorology will be able to study the dynamical characteristics of rainfall and make predictions for future reference. In this study, four categories of storms (short, intermediate, long and very long) are introduced based on the length of the storm duration. Daily probability models of storms are built for these four categories of storms in Peninsular Malaysia. The models are constructed by using the Bernoulli distribution and by applying linear regression on the first Fourier harmonic equation. From the models obtained, it is found that the daily probability of storms in the eastern part of Peninsular Malaysia shows a unimodal pattern, with a high probability of rain beginning at the end of the year and lasting until early the next year. This is very likely due to the Northeast monsoon season, which occurs from November to March every year. Meanwhile, short and intermediate storms in the other regions of Peninsular Malaysia experience a bimodal cycle due to the two inter-monsoon seasons. Overall, these models indicate that Peninsular Malaysia can be divided into four distinct regions based on the daily pattern of the probability of various storm events.
Keywords: Daily probability model, monsoon seasons, regions, storm events.
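A minimal sketch of the stated construction, a Bernoulli daily-occurrence indicator regressed on the first Fourier harmonic of the day of year, is given below on synthetic data; it is not the authors' code or data set.

```python
import numpy as np

# Synthetic daily record: 1 if a storm of a given category occurred, else 0.
rng = np.random.default_rng(1)
day = np.arange(1, 366)
true_p = 0.3 + 0.2 * np.cos(2 * np.pi * (day - 330) / 365)  # peaks near year end
occurred = rng.binomial(1, true_p)

# Linear regression of the Bernoulli indicator on the first Fourier harmonic.
X = np.column_stack([np.ones_like(day, dtype=float),
                     np.cos(2 * np.pi * day / 365),
                     np.sin(2 * np.pi * day / 365)])
beta, *_ = np.linalg.lstsq(X, occurred, rcond=None)
p_hat = np.clip(X @ beta, 0.0, 1.0)   # fitted daily storm probability

print("coefficients:", np.round(beta, 3))
print("fitted probability on day 1 and day 180:",
      round(p_hat[0], 3), round(p_hat[179], 3))
```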
679 A Software Framework for Predicting Oil-Palm Yield from Climate Data
Authors: Mohd. Noor Md. Sap, A. Majid Awan
Abstract:
Intelligent systems based on machine learning techniques, such as classification and clustering, are gaining widespread popularity in real-world applications. This paper presents work on developing a software system for predicting crop yield, for example oil-palm yield, from climate and plantation data. At the core of our system is a method for unsupervised partitioning of data for finding spatio-temporal patterns in climate data using kernel methods, which offer the strength to deal with complex data. This work draws inspiration from the notion that a non-linear transformation of the data into some high-dimensional feature space increases the possibility of linear separability of the patterns in the transformed space and therefore simplifies the exploration of the associated structure in the data. Kernel methods implicitly perform a non-linear mapping of the input data into a high-dimensional feature space by replacing the inner products with an appropriate positive definite function. In this paper we present a robust weighted kernel k-means algorithm incorporating spatial constraints for clustering the data. The proposed algorithm can effectively handle noise, outliers and auto-correlation in the spatial data, allowing effective and efficient data analysis by exploring patterns and structures in the data, and thus can be used for predicting oil-palm yield by analyzing the various factors affecting the yield.
Keywords: Pattern analysis, clustering, kernel methods, spatial data, crop yield.
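For orientation, the sketch below implements the standard weighted kernel k-means loop on a precomputed RBF Gram matrix; the paper's spatial-constraint term and weighting scheme are not reproduced, and the two-blob data set is purely illustrative.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def weighted_kernel_kmeans(K, k, w=None, n_iter=50, seed=0):
    """Weighted kernel k-means on a precomputed kernel matrix K.
    Spatial constraints could be folded into K or w; only the core loop is shown."""
    n = K.shape[0]
    w = np.ones(n) if w is None else np.asarray(w, float)
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=n)
    for _ in range(n_iter):
        dist = np.empty((n, k))
        for c in range(k):
            mask = labels == c
            if not mask.any():               # re-seed an empty cluster
                mask[rng.integers(0, n)] = True
            wc = w[mask]
            sw = wc.sum()
            # ||phi(x_i) - m_c||^2 up to the constant K_ii term
            second = (K[:, mask] * wc).sum(1) / sw
            third = (wc[:, None] * wc[None, :] * K[np.ix_(mask, mask)]).sum() / sw**2
            dist[:, c] = -2.0 * second + third
        new_labels = dist.argmin(1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# Two noisy spatial blobs as a stand-in for climate grid points.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
labels = weighted_kernel_kmeans(rbf_kernel(X), k=2)
print("cluster sizes:", np.bincount(labels))
```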
678 Automatic Tuning for a Systemic Model of Banking Originated Losses (SYMBOL) Tool on Multicore
Authors: Ronal Muresano, Andrea Pagano
Abstract:
Nowadays, mathematical/statistical applications are developed with more complexity and accuracy. However, this precision and complexity mean that applications need more computational power in order to execute faster. In this sense, multicore environments play an important role in improving and optimizing the execution time of these applications. These environments allow the inclusion of more parallelism inside the node. However, taking advantage of this parallelism is not an easy task, because we have to deal with problems such as core communications, data locality, memory sizes (cache and RAM), synchronizations, data dependencies in the model, etc. These issues become more important when we wish to improve the application's performance and scalability. Hence, this paper describes an optimization method, developed for the Systemic Model of Banking Originated Losses (SYMBOL) tool of the European Commission, which is based on analyzing the application's weaknesses in order to exploit the advantages of the multicore. All these improvements are done in an automatic and transparent manner with the aim of improving the performance metrics of our tool. Finally, experimental evaluations show the effectiveness of our new optimized version, in which we have achieved a considerable improvement in the execution time: the time has been reduced by around 96% in the best case tested, between the original serial version and the automatic parallel version.
Keywords: Algorithm optimization, Bank Failures, OpenMP, Parallel Techniques, Statistical tool.
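The SYMBOL optimization itself relies on OpenMP in the tool's native implementation; the Python sketch below only illustrates the underlying pattern of distributing independent simulation trials across cores, with a made-up trial function standing in for the model.

```python
import multiprocessing as mp
import random
import time

def simulate_trial(seed):
    """Stand-in for one independent Monte Carlo trial of a SYMBOL-like model:
    draw bank losses and report the aggregate (illustrative only)."""
    rng = random.Random(seed)
    losses = [max(0.0, rng.gauss(0.0, 1.0)) for _ in range(10_000)]
    return sum(losses)

def run(n_trials, workers):
    with mp.Pool(processes=workers) as pool:
        return pool.map(simulate_trial, range(n_trials))

if __name__ == "__main__":
    for workers in (1, mp.cpu_count()):
        t0 = time.perf_counter()
        results = run(200, workers)
        print(f"{workers:2d} worker(s): {time.perf_counter() - t0:.2f} s, "
              f"mean aggregate loss {sum(results) / len(results):.1f}")
```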
677 Optimization of Process Parameters for Friction Stir Welding of Cast Alloy AA7075 by Taguchi Method
Authors: Dhairya Partap Sing, Vikram Singh, Sudhir Kumar
Abstract:
This investigation proposes the friction stir welding technique to solve fusion welding problems. The objectives of this investigation are the fabrication of an AA7075-10 wt.% silicon carbide (SiC) aluminum metal matrix composite and the optimization of the process parameters for friction stir welding of the AA7075-10 wt.% SiC composites. The composites were prepared by the mechanical stir casting process. Experiments were performed with four process parameters, namely tool rotational speed, weld speed, axial force and tool geometry, considering three levels of each. The quality characteristic considered is the joint efficiency (JE). The welding experiments were conducted using an L27 orthogonal array. The orthogonal array and design of experiments were used to find the welding parameters that give the optimal JE. The joints fabricated using a rotational speed of 1500 rpm, a welding speed of 1.3 mm/s, an axial force of 7 kN and a square tool geometry give the best possible results. The experimental results reveal that the tool rotation speed, welding speed and axial force are the significant process parameters affecting the welding performance. The predicted optimal value of the percentage JE is 95.621. Confirmation tests have also been done to verify the results.
Keywords: Metal matrix composite, axial force, joint efficiency, rotational speed, traverse speed, tool geometry.
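The Taguchi evaluation step can be illustrated as follows: compute a larger-the-better signal-to-noise ratio for each run of the orthogonal array and compare the mean S/N per factor level. The table fragment below uses placeholder values, not the paper's L27 data.

```python
import numpy as np
import pandas as pd

# Illustrative fragment of an L27-style table: factor levels and measured
# joint efficiency (%). Values are placeholders, not the paper's data.
df = pd.DataFrame({
    "rpm":   [900, 900, 900, 1200, 1200, 1200, 1500, 1500, 1500],
    "feed":  [0.7, 1.0, 1.3, 0.7, 1.0, 1.3, 0.7, 1.0, 1.3],
    "force": [5, 6, 7, 6, 7, 5, 7, 5, 6],
    "JE":    [78.2, 80.1, 83.5, 82.0, 85.4, 84.1, 88.3, 90.2, 95.6],
})

# Larger-the-better signal-to-noise ratio for each run.
df["SN"] = -10.0 * np.log10(1.0 / df["JE"] ** 2)

# Mean S/N per factor level; the level with the highest mean S/N is preferred.
for factor in ["rpm", "feed", "force"]:
    print(f"\nMean S/N by {factor}:")
    print(df.groupby(factor)["SN"].mean().round(2))
```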
676 Vitamin D Deficiency and Insufficiency in Postmenopausal Women with Obesity
Authors: Vladyslav Povoroznyuk, Anna Musiienko, Nataliia Dzerovych, Roksolana Povoroznyuk, Oksana Ivanyk
Abstract:
Deficiency and insufficiency of vitamin D is a pandemic of the 21st century. Patients with obesity have a lower level of vitamin D, but the literature data are contradictory. The purpose of this study is to investigate vitamin D deficiency and insufficiency in postmenopausal women with obesity. We examined 1007 women aged 50-89 years. The mean age was 65.74±8.61 years; the mean height was 1.61±0.07 m; the mean weight was 70.65±13.50 kg; the mean body mass index was 27.27±4.86 kg/m2, and the mean serum 25(OH)D level was 26.00±12.00 nmol/l. The women were divided into the following six groups depending on body mass index: group I, 338 women with normal body weight; group II, 16 women with insufficient body weight; group III, 382 women with excessive body weight; group IV, 199 women with obesity of class I; group V, 60 women with obesity of class II; and group VI, 12 women with obesity of class III. The level of 25(OH)D in serum was measured by means of an electrochemiluminescent method on an Elecsys 2010 analyzer (Roche Diagnostics, Germany) with cobas test systems. Of the examined women, 34.4% have deficiency of vitamin D and 31.4% insufficiency. Women with obesity of class I (23.60±10.24 ng/ml) and obesity of class II (22.38±10.34 ng/ml) had significantly lower levels of 25(OH)D compared to women with normal body weight (28.24±12.99 ng/ml), p=0.00003. In women with obesity, BMI significantly influences the vitamin D level, and this influence does not depend on the season.
Keywords: Obesity, body mass index, vitamin D deficiency/insufficiency, postmenopausal women, age.
675 X-Ray Intensity Measurement Using Frequency Output Sensor for Computed Tomography
Authors: R. M. Siddiqui, D. Z. Moghaddam, T. R. Turlapati, S. H. Khan, I. Ul Ahad
Abstract:
The quality of the 2D and 3D cross-sectional images produced by computed tomography primarily depends on the precision of the primary and secondary X-ray intensity detection. The traditional method of primary intensity detection is prone to errors. Recently, an X-ray intensity measurement system with smart X-ray sensors, able to detect the primary X-ray intensity unerringly, was developed by our group. In this study a new smart X-ray sensor is developed using the light-to-frequency converter TSL230 from Texas Instruments, which has numerous advantages in terms of noiseless data acquisition and transmission. The TSL230 is based on a silicon photodiode, which converts the incoming X-ray radiation into a proportional current signal. A current-to-frequency converter is attached to this photodiode on a single monolithic CMOS integrated circuit, which provides a frequency count proportional to the incoming current signal in the form of a pulse train. The frequency count is delivered to the PICDEM FS USB board with a PIC18F4550 microcontroller mounted on it. With highly compact electronic hardware, this demo board efficiently reads the smart sensor output data. The frequency output approach overcomes the non-linear behavior of sensors with analog output; thus un-attenuated X-ray intensities can be measured precisely and better normalization can be acquired in order to attain high resolution.
Keywords: Computed tomography, detector technology, X-ray intensity measurement.
674 Quantifying the Methods of Monitoring Timers in Electric Water Heater for Grid Balancing on Demand Side Management: A Systematic Mapping Review
Authors: Yamamah Abdulrazaq, Lahieb A. Abrahim, Samuel E. Davies, Iain Shewring
Abstract:
An electric water heater (EWH) is a powerful appliance that uses electricity in residential, commercial and industrial settings, and the ability to control EWHs properly will result in cost savings and the prevention of blackouts on the national grid. This article discusses the usage of timers in EWH control strategies for demand-side management (DSM). To the authors' knowledge, no systematic mapping review focusing on the utilization of EWH control strategies in DSM has yet been conducted. Consequently, the purpose of this research is to identify and examine the main papers exploring EWH procedures in DSM by quantifying and categorizing information with regard to publication year and source, kind of method, and source of data for monitoring control techniques. In order to answer the research questions, a total of 31 publications published between 1999 and 2023 were selected based on specific inclusion and exclusion criteria. The data indicate that direct load control (DLC) has been somewhat more prevalent than indirect load control (ILC). Additionally, mixed methods are much less common than the other techniques, and the proportion of real-time data (RTD) to non-real-time data (NRTD) is about equal.
Keywords: Demand side management, direct load control, electric water heater, indirect load control, non-real-time data, real time data.
673 Effect of Buoyancy Ratio on Non-Darcy Mixed Convection in a Vertical Channel: A Thermal Non-equilibrium Approach
Authors: Manish K. Khandelwal, P. Bera, A. Chakrabarti
Abstract:
This article presents a numerical study of double-diffusive mixed convection in a vertical channel filled with a porous medium, using a thermal non-equilibrium model. The flow is assumed to be fully developed, unidirectional and steady. The controlling parameters are the thermal Rayleigh number (RaT), the Darcy number (Da), the Forchheimer number (F), the buoyancy ratio (N), the inter-phase heat transfer coefficient (H), and the porosity-scaled thermal conductivity ratio (γ). The Brinkman-extended non-Darcy model is considered. The governing equations are solved by the spectral collocation method. The main emphasis is placed on the flow profiles as well as the heat and solute transfer rates when the two diffusive components, expressed through the buoyancy ratio, act in favor of (or against) each other and the solid matrix and fluid are in thermal non-equilibrium. The results show that, for aiding flow (RaT = 1000), the heat transfer rate of the fluid (Nuf) increases up to a certain value of H, beyond which it decreases smoothly and converges to a constant, whereas for opposing flow (RaT = -1000) the result is the same for N = 0 and 1. The variation of Nuf in the (N, Nuf)-plane shows a sinusoidal pattern for RaT = -1000. For both cases (aiding and opposing), increasing N destabilizes the flow by introducing a point of inflection or flow separation in the velocity profile. Overall, the buoyancy force has a significant impact on non-Darcy mixed convection under LTNE conditions.
Keywords: Buoyancy ratio, mixed convection, non-Darcy model, thermal non-equilibrium.
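For readers unfamiliar with the formulation, a schematic dimensional momentum balance of the Brinkman-Forchheimer-extended Darcy type with thermal and solutal buoyancy is shown below for fully developed flow u(y) under the Boussinesq approximation; the paper's exact non-dimensional equations in RaT, Da, F, N, H and γ, and the two-temperature (LTNE) energy equations with their interphase exchange term, are not reproduced here.

```latex
0 = -\frac{dp}{dx}
    + \mu_{\mathrm{eff}}\,\frac{d^{2}u}{dy^{2}}
    - \frac{\mu}{K}\,u
    - \frac{\rho\,c_F}{\sqrt{K}}\,|u|\,u
    + \rho g\left[\beta_T\,(T_f - T_0) + \beta_C\,(C - C_0)\right]
```

The buoyancy ratio N compares the solutal and thermal contributions in the bracketed buoyancy term, which is why its sign and magnitude control whether the two diffusive components aid or oppose each other.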
672 MHD Chemically Reacting Viscous Fluid Flow towards a Vertical Surface with Slip and Convective Boundary Conditions
Authors: Ibrahim Yakubu Seini, Oluwole Daniel Makinde
Abstract:
MHD chemically reacting viscous fluid flow towards a vertical surface with slip and convective boundary conditions has been studied. The temperature and the chemical species concentration of the surface and the velocity of the external flow are assumed to vary linearly with the distance from the vertical surface. The governing differential equations are modeled and transformed into systems of ordinary differential equations, which are then solved numerically by a shooting method. The effects of various parameters on the heat and mass transfer characteristics are discussed. Graphical results are presented for the velocity, temperature and concentration profiles, whilst the skin-friction coefficient and the rates of heat and mass transfer near the surface are presented in tables and discussed. The results reveal that increasing the strength of the magnetic field increases the skin-friction coefficient and the rates of heat and mass transfer toward the surface. The velocity profiles are increased towards the surface due to the presence of the Lorentz force, which attracts the fluid particles near the surface. The chemical reaction is seen to thin the concentration boundary layer near the surface due to the destructive chemical reaction occurring there.
Keywords: Boundary layer, surface slip, MHD flow, chemical reaction, heat transfer, mass transfer.
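As a pointer to how the shooting method works in this setting, the sketch below applies it to the classical Blasius similarity equation with SciPy: guess the missing wall condition, integrate outward, and adjust the guess until the far-field condition is met. The paper's coupled MHD momentum, energy and concentration equations are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

ETA_INF = 10.0   # "numerical infinity" for the far-field boundary condition

def rhs(eta, y):
    # y = [f, f', f'']; Blasius-type similarity equation f''' + 0.5 f f'' = 0
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def far_field_residual(s):
    """Integrate with guessed wall shear f''(0) = s and return f'(inf) - 1."""
    sol = solve_ivp(rhs, [0.0, ETA_INF], [0.0, 0.0, s], rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0

# Shooting: find the wall value f''(0) that satisfies the outer condition.
s_star = brentq(far_field_residual, 0.1, 1.0)
print(f"f''(0) = {s_star:.5f}")   # classical Blasius value, about 0.33206
```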
671 Ideological Framing in Television News: The Case of “Settlement Process”
Authors: Mete Kazaz, Birol Gülnar
Abstract:
Television news has gained a new dimension in terms of ideological approaches as a result of factors such as globalization, cross-monopolization and the presence of international companies, and certain strategies have been developed at the production, presentation and distribution stages of news. In this study, television news about a process called the “settlement process” was investigated. In this framework, news about the settlement process on the TV channels TRT 1, ATV, FOX TV, NTV, HABERTÜRK, TRT HABER and STV was investigated using the content analysis method in terms of the strategies of ideology construction, the attitude towards the party in power, the attitude towards the parties in opposition, and the attitude towards the BDP (Peace and Democracy Party) and Imrali (the island where Abdullah Ocalan, the head of the PKK, is kept). The aforementioned TV channels were selected randomly from three groups in order to reveal the representational capacity of commercial, news and public channels. The study covers 557 news items broadcast in the main news bulletins between the dates of 15 March 2013 and 15 March 2013. While there was a positive attitude towards the government in a sizable portion of the news about the settlement process (63.6%), 25.3% of the news was impartial towards the government and 11.3% had a negative attitude. On the other hand, there was a negative attitude towards the opposition in a considerable portion of the news about the settlement process (56.1%); 35.9% of the news was impartial towards the opposition, whereas 8.0% had a positive attitude. Of the news about the settlement process, 34.9% used the legitimization strategy from among the ideology construction strategies, 22.8% used the unification strategy, 15.7% the reification strategy, 15.6% the fragmentation strategy, and 11% the concealment/mystification strategy.
Keywords: Attitude, Ideological Framing, Television News.
670 A Comparative Study of Fine Grained Security Techniques Based on Data Accessibility and Inference
Authors: Azhar Rauf, Sareer Badshah, Shah Khusro
Abstract:
This paper analyzes different techniques for the fine-grained security of relational databases with respect to two variables: data accessibility and inference. Data accessibility measures the amount of data available to the users after applying a security technique to a table. Inference is the proportion of information leakage after suppressing a cell containing secret data. A row containing a suppressed secret cell can become a security threat if an intruder generates useful information from the related visible information of the same row. This paper measures the data accessibility and inference associated with row-, cell-, and column-level security techniques. Cell-level security offers the greatest data accessibility, as it suppresses secret data only, but on the other hand it carries a high probability of inference. Row- and column-level security techniques have the least data accessibility and inference. This paper introduces the cell-plus-innocent security technique, which utilizes the cell-level security method but also suppresses some innocent data to persuade an intruder that a suppressed cell may not necessarily contain secret data. Four variations of the technique, namely cell plus innocent 1/4, cell plus innocent 2/4, cell plus innocent 3/4, and cell plus innocent 4/4, have been introduced to suppress innocent data equal to 1/4, 2/4, 3/4, and 4/4 of the amount of the true secret data inside the database. Results show that the new technique offers better control over data accessibility and inference compared to the state-of-the-art security techniques. The paper further discusses combinations of the techniques to be used together and shows that the cell plus innocent 1/4, 2/4, and 3/4 techniques can be used as a replacement for cell-level security.
Keywords: Fine Grained Security, Data Accessibility, Inference, Row, Cell, Column Level Security.
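The two metrics can be illustrated with a small simulation. In the sketch below, data accessibility is taken as the share of cells left visible and the inference risk is proxied by the probability that a suppressed cell actually holds secret data, which the cell-plus-innocent variants dilute; these metric definitions and the 5% secret-cell table are an illustrative interpretation, not the paper's exact formulas.

```python
import numpy as np

def cell_plus_innocent(secret_mask, innocent_fraction, seed=0):
    """Suppress every secret cell plus a share of innocent cells equal to
    `innocent_fraction` of the number of secret cells (an interpretation of the
    'cell plus innocent x/4' scheme; not the paper's exact procedure)."""
    rng = np.random.default_rng(seed)
    suppressed = secret_mask.copy()
    innocent_idx = np.flatnonzero(~secret_mask.ravel())
    n_extra = int(round(innocent_fraction * secret_mask.sum()))
    extra = rng.choice(innocent_idx, size=min(n_extra, innocent_idx.size),
                       replace=False)
    suppressed.ravel()[extra] = True
    return suppressed

rng = np.random.default_rng(7)
secret = rng.random((200, 6)) < 0.05           # ~5% of cells hold secret data

for frac, name in [(0.0, "cell level"), (0.25, "cell + innocent 1/4"),
                   (0.5, "cell + innocent 2/4"), (1.0, "cell + innocent 4/4")]:
    sup = cell_plus_innocent(secret, frac)
    accessibility = 1.0 - sup.mean()           # share of cells still visible
    inference = secret.sum() / sup.sum()       # P(suppressed cell is secret)
    print(f"{name:22s} accessibility={accessibility:.3f}  "
          f"inference proxy={inference:.3f}")
```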
669 Construction 4.0: The Future of the Construction Industry in South Africa
Authors: Temidayo. O. Osunsanmi, Clinton Aigbavboa, Ayodeji Oke
Abstract:
The construction industry is a renowned latecomer to the efficiency offered by the adoption of information technology, whereas the banking, manufacturing and retailing industries have keyed into the future by using digitization and information technology as a new approach for ensuring competitive gain and efficiency. The construction industry has yet to fully realize similar benefits because the adoption of ICT is still at the infancy stage, with a major concentration on the use of software. Thus, this study evaluates the awareness and readiness of construction professionals towards embracing a full digitalization of the construction industry through Construction 4.0. The term 'Construction 4.0' was coined from the Industry 4.0 concept, which is regarded as the fourth industrial revolution and originated in Germany. A questionnaire was utilized for sourcing data and was distributed to practicing construction professionals through a convenience sampling method. Using SPSS v24, the hypotheses posed were tested with the Mann-Whitney test. The result revealed that there are no differences between the consulting and contracting organizations in their readiness for adopting Construction 4.0 concepts in the construction industry. Using factor analysis, the study discovers that adopting Construction 4.0 will improve the performance of the construction industry with regard to cost and time savings and will also create sustainable buildings. In conclusion, the study determined that construction professionals have a low awareness of Construction 4.0 concepts. The study recommends an increase in the awareness of Construction 4.0 concepts through seminars, workshops and training, while construction professionals should take hold of the benefits of adopting Construction 4.0 concepts. The study contributes to the roadmap for the implementation of Construction 4.0 concepts in the South African construction industry.
Keywords: Building information technology, Construction 4.0, Industry 4.0, Smart Site.
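A minimal sketch of the Mann-Whitney comparison between the two professional groups is shown below with SciPy; the Likert-style scores are synthetic placeholders, since the study's survey responses are not reproduced here.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Placeholder 5-point Likert readiness scores for the two professional groups.
rng = np.random.default_rng(11)
consulting = rng.integers(1, 6, size=40)
contracting = rng.integers(1, 6, size=45)

u_stat, p_value = mannwhitneyu(consulting, contracting, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
if p_value >= 0.05:
    print("No evidence of a difference in readiness between the two groups.")
else:
    print("Readiness differs between the two groups.")
```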
668 A Novel VLSI Architecture for Image Compression Model Using Low Power Discrete Cosine Transform
Authors: Vijaya Prakash.A.M, K.S.Gurumurthy
Abstract:
In image processing, image compression can improve the performance of digital systems by reducing the cost and time of image storage and transmission without significant reduction of image quality. This paper describes a hardware architecture of a low-complexity Discrete Cosine Transform (DCT) for image compression [6]. In this DCT architecture, common computations are identified and shared to remove redundant computations in the DCT matrix operation. Vector processing is used for the implementation of the DCT. This reduction in the computational complexity of the 2D DCT reduces power consumption. The 2D DCT is performed on an 8x8 matrix using two 1-dimensional DCT blocks and a transposition memory [7]. The inverse discrete cosine transform (IDCT) is performed to obtain the image matrix and reconstruct the original image. The proposed image compression algorithm is comprehended using MATLAB code. The VLSI design of the architecture is implemented using Verilog HDL. The proposed hardware architecture for image compression employing the DCT was synthesized using RTL Compiler and mapped using 180 nm standard cells. The simulation is done using ModelSim, and the simulation results from MATLAB and Verilog HDL are compared. Detailed analysis of power and area was done using RTL Compiler from Cadence. The power consumption of the DCT core is reduced to 1.027 mW with minimum area [1].
Keywords: Discrete Cosine Transform (DCT), Inverse Discrete Cosine Transform (IDCT), Joint Photographic Experts Group (JPEG), low power design, Very Large Scale Integration (VLSI).
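The row-column decomposition at the heart of the architecture can be checked in a few lines: a 1D DCT over the rows, a transposition, and a second 1D DCT reproduce the 2D transform. The floating-point NumPy/SciPy sketch below illustrates this structure only; it is not the fixed-point Verilog implementation.

```python
import numpy as np
from scipy.fft import dct, idct

def dct2_via_1d(block):
    """2D DCT-II of an 8x8 block as two passes of 1D DCT with a transposition
    in between, mirroring the row-processor / transpose-memory / column-processor
    structure of the hardware architecture."""
    rows = dct(block, type=2, norm="ortho", axis=1)      # 1D DCT along rows
    return dct(rows.T, type=2, norm="ortho", axis=1).T   # transpose, 1D DCT, transpose back

def idct2_via_1d(coeffs):
    rows = idct(coeffs, type=2, norm="ortho", axis=1)
    return idct(rows.T, type=2, norm="ortho", axis=1).T

block = np.arange(64, dtype=float).reshape(8, 8)         # illustrative 8x8 pixel block
coeffs = dct2_via_1d(block)
restored = idct2_via_1d(coeffs)
print("max reconstruction error:", np.abs(restored - block).max())
```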
667 Design of Smart Urban Lighting by Using Social Sustainability Approach
Authors: Mohsen Noroozi, Maryam Khalili
Abstract:
Creating cities, objects and spaces that are economically, environmentally and socially sustainable, and which meet the challenges of social interaction and generational change, will be one of the biggest tasks of designers. Social sustainability is about how individuals, communities and societies live with each other and set out to achieve the objectives of the development model that they have chosen for themselves. Urban lighting, as one of the most important elements of urban furniture that people constantly interact with in public spaces, can be a significant object for designers. Making urban lighting intelligent through the Internet of Things makes it more interactive in public environments. It can encourage individuals to carry out appropriate behaviors and provides them with social awareness through new interactions. The greatest strength of this technology is its strong impact on many aspects of everyday life and users' behaviors. The analytical phase of the research is based on a multiple-method survey strategy. The smart lighting proposed in this paper is an urban lighting design based on results obtained from a collective point of view on social sustainability. Referring to behavioral design methods, the social behaviors of the people were studied. The data show that people demand a deeper experience of social participation, perceived safety and energy saving, with a meaningful use of interactive and colourful lighting effects. By using intelligent technology, some suggestions are provided in the field of future lighting to consider new forms of social sustainability.
Keywords: Behavior model, internet of things, social sustainability, urban lighting.
666 A New Composition Method of Admissible Support Vector Kernel Based on Reproducing Kernel
Authors: Wei Zhang, Xin Zhao, Yi-Fan Zhu, Xin-Jian Zhang
Abstract:
The kernel function, which allows the formulation of nonlinear variants of any algorithm that can be cast in terms of dot products, has made Support Vector Machines (SVMs) successful in many fields, e.g. classification and regression. The importance of the kernel has motivated many studies on its composition. It is well known that the reproducing kernel (R.K.) is a useful kernel function possessing many properties, e.g. positive definiteness, the reproducing property, and the ability to compose complex R.K.s by simple operations. There are two popular ways to compute an R.K. with an explicit form. One is to construct and solve a specific differential equation with boundary values, whose handicap is that it cannot yield a unified form of the R.K. The other uses a piecewise integral of the Green function associated with a differential operator L. The latter benefits the computation of an R.K. with a unified explicit form and theoretical analysis, but there are relatively few studies and practical computations of it. In this paper, a new algorithm for computing an R.K. is presented. It can obtain the unified explicit form of the R.K. in a general reproducing kernel Hilbert space. It avoids constructing and solving complex differential equations manually and enables an automatic, flexible and rigorous computation for a more general RKHS. In order to validate that the R.K. computed by the algorithm can be used well in SVMs, some illustrative examples and a comparison between the R.K. and the Gaussian kernel (RBF) in support vector regression are presented. The results show that the performance of the R.K. is close or slightly superior to that of the RBF.
Keywords: admissible support vector kernel, reproducing kernel, reproducing kernel Hilbert space, Green function, support vector regression
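As a sketch of the validation step, the snippet below compares support vector regression with an externally supplied (precomputed) kernel against the built-in RBF kernel in scikit-learn. The placeholder Laplacian kernel stands in for a reproducing kernel computed by the paper's algorithm, and the toy data are illustrative.

```python
import numpy as np
from sklearn.svm import SVR

# Toy regression task standing in for the paper's illustrative examples.
rng = np.random.default_rng(5)
X = np.sort(rng.uniform(-3, 3, size=(80, 1)), axis=0)
y = np.sinc(X).ravel() + 0.05 * rng.normal(size=80)

def custom_kernel(A, B, gamma=1.0):
    """Stand-in positive-definite kernel; a reproducing kernel computed by the
    paper's algorithm would be plugged in here instead (this one is a simple
    Laplacian kernel, used only to show the precomputed-kernel mechanics)."""
    d = np.abs(A[:, None, 0] - B[None, :, 0])
    return np.exp(-gamma * d)

# SVR with a precomputed Gram matrix versus the built-in RBF kernel.
K_train = custom_kernel(X, X)
svr_custom = SVR(kernel="precomputed", C=10.0).fit(K_train, y)
svr_rbf = SVR(kernel="rbf", C=10.0, gamma=1.0).fit(X, y)

X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
K_test = custom_kernel(X_test, X)
print("custom kernel predictions:", np.round(svr_custom.predict(K_test), 3))
print("RBF kernel predictions:   ", np.round(svr_rbf.predict(X_test), 3))
```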
665 Design for Manufacturability and Concurrent Engineering for Product Development
Authors: Alemu Moges Belay
Abstract:
In the 1980s, companies began to feel the effect of three major influences on their product development: newer and innovative technologies, increasing product complexity, and larger organizations; companies were therefore forced to look for new product development methods. This paper focuses on two of these new product development methods, Design for Manufacturability (DFM) and Concurrent Engineering (CE). The aim of this paper is to examine and analyze different product development methods, specifically Design for Manufacturability and Concurrent Engineering. Companies can benefit by minimizing the product life cycle and cost and by meeting the delivery schedule. This paper also presents simplified models that can be modified and used by different companies based on their objectives and requirements. The methodology followed in this research is case studies: two companies were taken and analysed with respect to their product development process. Historical data and interviews were collected from these companies; in addition, a survey of the literature and previous research on similar topics was conducted during this research. This paper also presents an implementation cost-benefit analysis and estimates the implementation time. From this research, it has been found that the two companies did not achieve the delivery time to the customer: some of the most frequently produced products were analyzed, and 50% to 80% of their products were not delivered on time. The companies follow the traditional way of product development, that is, the sequential design and production method, which highly affects time to market. In the case study it was found that, by implementing these new methods and by forming multidisciplinary teams in design and quality inspection, the company can reduce the workflow steps from 40 to 30.
Keywords: Design for manufacturability, Concurrent Engineering, Time-to-Market, Product development
664 Prediction of Cutting Tool Life in Drilling of Reinforced Aluminum Alloy Composite Using a Fuzzy Method
Authors: Mohammed T. Hayajneh
Abstract:
Machining of metal matrix composites (MMCs) is a very significant process, and the behaviour of MMCs during different machining processes has been a main problem that draws many researchers to investigate their characteristics. The poor machining properties of hard-particle-reinforced MMCs make the drilling process a rather interesting task. Unlike the drilling of conventional materials, many problems can be seriously encountered during the drilling of MMCs, such as tool wear and cutting forces. Cutting tool wear is a very significant concern in industry: it not only influences the quality of the drilled hole, but also affects the cutting tool life. Predicting the cutting tool life during drilling is essential for optimizing the cutting conditions. However, the relationship between tool life and cutting conditions, tool geometrical factors and workpiece material properties has not yet been established by any machining theory. In this research work, a fuzzy subtractive clustering system has been used to model the cutting tool life in drilling of an Al2O3 particle-reinforced aluminum alloy composite, in order to investigate the effect of the cutting conditions on cutting tool life. This investigation can help in controlling and optimizing the cutting conditions when the process parameters are adjusted. The model built for predicting the tool life uses the drill diameter, cutting speed and cutting feed rate as input data. The validity of the model was confirmed by examinations under various cutting conditions. Experimental results have shown the efficiency of the model in predicting cutting tool life.
Keywords: Composite, fuzzy, tool life, wear.
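A compact sketch of subtractive clustering (Chiu's potential-reduction scheme), which typically seeds the rule premises of a fuzzy model like the one described, is given below; the radii and acceptance ratio are common defaults and the drilling-parameter data are synthetic placeholders, not the paper's measurements.

```python
import numpy as np

def subtractive_clustering(X, ra=0.5, rb=None, accept_ratio=0.15):
    """Chiu-style subtractive clustering on data normalized to [0, 1].
    Returns the selected cluster centres (typical default parameters, not the
    paper's settings)."""
    rb = rb if rb is not None else 1.5 * ra
    Xn = (X - X.min(0)) / (np.ptp(X, 0) + 1e-12)
    d2 = ((Xn[:, None, :] - Xn[None, :, :]) ** 2).sum(-1)
    potential = np.exp(-4.0 * d2 / ra**2).sum(1)

    centers = []
    p_first = potential.max()
    while True:
        i = int(potential.argmax())
        if potential[i] < accept_ratio * p_first or len(centers) >= len(X):
            break
        centers.append(X[i])
        # Revise potentials: points near the new centre lose influence.
        potential = potential - potential[i] * np.exp(-4.0 * d2[i] / rb**2)

    return np.array(centers)

# Illustrative inputs: (drill diameter mm, cutting speed m/min, feed mm/rev).
rng = np.random.default_rng(2)
data = np.vstack([rng.normal((6, 30, 0.1), (0.5, 3, 0.01), (40, 3)),
                  rng.normal((10, 60, 0.2), (0.5, 3, 0.01), (40, 3))])
print("cluster centres:\n", subtractive_clustering(data).round(2))
```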
663 Enhancement of Rice Straw Composting Using UV Induced Mutants of Penicillium Strain
Authors: T. N. M. El Sebai, A. A.Khattab, Wafaa M. Abd-El Rahim, H. Moawad
Abstract:
Fungal mutant strains that produce cellulase and xylanase enzymes induced a high degree of hydrolysis and enhanced the degradation of rice straw. The mutants were obtained by exposing a Penicillium strain to UV-light treatments. Screening and selection after the UV-light treatment were carried out using the cellulolytic and xylanolytic clear-zone method to select hypercellulolytic and hyperxylanolytic mutants. These mutants were evaluated for their cellulase and xylanase enzyme production as well as their ability to biodegrade rice straw. The mutant 12 UV/1 produced 306.21% and 209.91% of the cellulase and xylanase, respectively, of the original wild-type strain, and showed a high capacity for rice straw degradation. The effectiveness of the tested mutant strain and that of the wild strain were compared with respect to enhancing the composting process of a rice straw and animal manure mixture. The results obtained showed that the compost produced from the mixture inoculated with the mutant strain (12 UV/1) was the best, compared to the wild strain and the un-inoculated mixture. Analysis of the composted materials showed that the characteristics of the produced compost were close to those of a high-quality standard compost. The results obtained in the present work suggest that the combination of rice straw and animal manure could be used to enhance the composting process of rice straw, particularly when applied with a fungal decomposer accelerating the composting process.
Keywords: Rice straw, composting, UV mutants, Penicillium.
662 The Analyses of July 15 Coup Attempt through the Turkish Press
Authors: Yasemin Gülşen Yılmaz, Süleyman Hakan Yılmaz, Muhammet Erbay
Abstract:
Military interventions have an important place in Turkish political history. Military interventions are commonly called coups in society. By coup we mean that the armed forces seize political power, either through a group of officers in the army or through the chain of command. Coups not only weaken but also suspend democracy in a country, and every coup period has created its own victims. The two military coups which took place on May 27, 1960 and September 12, 1980 are the most important ones in terms of political and social effect in Turkish political history. Apart from these, March 12, 1971, February 28, 1997 and the April 27, 2007 e-memorandum are the periods when the Army submitted a memorandum and intervened in the political government indirectly. Besides the memorandums and coups, many coup attempts have also been experienced in Turkish political history. In this study, we examined the coup attempted by FETO's military members on the evening of July 15, 2016 from the point of view of the Turkish press. The Cumhuriyet, Haber Türk, Hürriyet, Milliyet, Sabah, Star, Yeni Akit and Yeni Şafak newspapers, which have different publication policies, were examined within the scope of the study. The first pages of the newspapers dated July 16, 2016 were examined using the content analysis method. The headlines, news items, news headlines and the visual materials used for the news were examined, and the collected data were analysed.
Keywords: July 15, news, military coup, press.