Search results for: software modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8007

2007 Influence of Foundation Size on Seismic Response of Mid-rise Buildings Considering Soil-Structure-Interaction

Authors: Quoc Van Nguyen, Behzad Fatahi, Aslan S. Hokmabadi

Abstract:

Performance-based seismic design is a modern approach to earthquake-resistant design, shifting the emphasis from “strength” to “performance”. Soil-Structure Interaction (SSI) can influence the performance level of structures significantly. In this paper, a fifteen-storey moment-resisting frame sitting on a shallow foundation (footing) of different sizes is simulated numerically using ABAQUS software. The developed three-dimensional numerical simulation accounts for the nonlinear behaviour of the soil medium by considering the variation of soil stiffness and damping as a function of the shear strain developed in the soil elements during the earthquake. An elastic-perfectly plastic model is adopted to simulate the piles and structural elements. Quiet boundary conditions are assigned to the numerical model, and appropriate interface elements, capable of modelling sliding and separation between the foundation and soil elements, are considered. Numerical results in terms of base shear, lateral deformations, and inter-storey drifts of the structure are compared for soil-structure interaction systems with different foundation sizes as well as the fixed-base condition (excluding SSI). It can be concluded that conventional design procedures excluding SSI may result in an aggressive (unconservative) design. Moreover, the size of the foundation can influence the dynamic characteristics and seismic response of the building due to SSI and should therefore be given careful consideration in order to ensure a safe and cost-effective seismic design.
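
For reference, soil and structural damping in such time-domain analyses is commonly introduced through the Rayleigh (mass- and stiffness-proportional) formulation flagged in the keywords. The generic textbook form, with a single target damping ratio ξ matched at two control frequencies, is shown below; these are not coefficients reported by the authors.

$$\mathbf{C} = \alpha\,\mathbf{M} + \beta\,\mathbf{K}, \qquad \alpha = \xi\,\frac{2\,\omega_i\,\omega_j}{\omega_i + \omega_j}, \qquad \beta = \xi\,\frac{2}{\omega_i + \omega_j}$$

where ω_i and ω_j are the two control circular frequencies at which the damping ratio ξ is matched.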

Keywords: soil-structure-interaction, seismic response, shallow foundation, ABAQUS, Rayleigh damping

Procedia PDF Downloads 494
2006 Potential Effects of Climate Change on Streamflow, Based on the Occurrence of Severe Floods in Kelantan, East Coasts of Peninsular Malaysia River Basin

Authors: Muhd. Barzani Gasim, Mohd. Ekhwan Toriman, Mohd. Khairul Amri Kamarudin, Azman Azid, Siti Humaira Haron, Muhammad Hafiz Md. Saad

Abstract:

Malaysia is a country in Southeast Asia that is constantly exposed to flooding and landslides. These disasters have caused problems such as loss of property, loss of life, and hardship for the people involved. The problem arises as climate change disrupts regional hydrological cycles and thereby increases streamflow. The aim of the study is to characterize hydrologic processes on the east coast of Peninsular Malaysia, especially in the Kelantan Basin, with the model parameterized to account for the spatial and temporal variability of basin characteristics and their responses to climate variability. For hydrological modeling of the basin, the Soil and Water Assessment Tool (SWAT) is driven by basin attributes such as relief, soil type and land use, together with historical daily time series of climate and river flow. Interpretation of Landsat imagery for land use is also applied in this study. By combining the SWAT and climate models, the system is used to predict increases in future-scenario precipitation, surface runoff, recharge and total water yield. The model successfully supports the basin analysis, as demonstrated by visual inspection of hydrographs and by good estimates of the minimum and maximum flows and the severe floods observed during the calibration and validation periods.
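
For context, SWAT's surface-runoff component is most commonly based on the SCS curve-number method; the standard relation (a generic reference, not a calibrated result of this study) is:

$$Q_{surf} = \frac{(P - 0.2S)^2}{P + 0.8S}\quad\text{for } P > 0.2S, \qquad S = 25.4\left(\frac{1000}{CN} - 10\right)$$

where Q_surf is surface runoff (mm), P is daily rainfall (mm), S is the retention parameter (mm) and CN is the curve number reflecting soil type, land use and antecedent moisture.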

Keywords: east coasts of Peninsular Malaysia, Kelantan river basin, minimum and maximum flows, severe floods, SWAT model

Procedia PDF Downloads 251
2005 Exploring the Factors Affecting the Presence of Farmers’ Markets in Rural British Columbia

Authors: Amirmohsen Behjat, Aleck Ostry, Christina Miewald, Bernie Pauly

Abstract:

Farmers’ markets have become important healthy food suppliers in both rural communities and urban settings. Farmers’ markets are evolving, and their number has increased rapidly in the past decade. Despite this increase, the distribution of farmers’ markets is not even across different areas. The main goal of this study is to explore the socioeconomic, geographic, and demographic variables that affect the establishment of farmers’ markets in rural communities in British Columbia (BC). Data on available farmers’ markets in rural areas were collected from the BC Association of Farmers’ Markets and spatially joined to the BC map at the Dissemination Area (DA) level using ArcGIS software, linking each farmers’ market to the community it serves. Then, in order to investigate in which rural communities farmers’ markets tend to operate, a binary logistic regression analysis was performed with the availability of farmers’ markets at the DA level as the dependent variable and the Deprivation Index (DI), Metro Influence Zone (MIZ) and population as independent variables. The results indicated that the DI and MIZ variables are not statistically significant, whereas population is the only variable that makes a significant contribution to predicting the availability of farmers’ markets in rural BC. Moreover, this study found that farmers’ markets usually do not operate in rural food deserts where other healthy food providers such as supermarkets and grocery stores are non-existent. In conclusion, the presence of farmers’ markets is not associated with the socioeconomic and geographic characteristics of rural communities in BC; rather, farmers’ markets tend to operate in more populated rural communities.
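
As an illustration of the kind of model described above, the sketch below fits a binary logistic regression of farmers' market presence on a deprivation index, metro influence zone and population. The column names and data file are hypothetical; the authors' actual workflow used ArcGIS-joined DA-level data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical DA-level table: one row per dissemination area with a 0/1
# indicator for farmers' market presence and the three predictors.
da = pd.read_csv("rural_da_data.csv")  # columns: market, dep_index, miz, population

# Binary logistic regression: market presence ~ deprivation + MIZ + population
model = smf.logit("market ~ dep_index + C(miz) + population", data=da).fit()
print(model.summary())                # Wald tests show which predictors are significant
print(model.get_margeff().summary())  # average marginal effects aid interpretation
```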

Keywords: farmers’ markets, socioeconomic and demographic variables, metro influence zone, logistic regression, ArcGIS

Procedia PDF Downloads 180
2004 Modeling Socioeconomic and Political Dynamics of Terrorism in Pakistan

Authors: Syed Toqueer, Omer Younus

Abstract:

Terrorism has emerged as a global menace, with Pakistan among the most adversely affected states. The motive behind this study is therefore to empirically establish the linkage of terrorism with socio-economic factors (uneven income distribution, poverty and unemployment) and political factors, so that a policy recommendation can be put forth to better approach this issue in Pakistan. For this purpose, the study employs two competing models, namely a distributed lag model and OLS, so that the findings may be consolidated comprehensively over the reference period 1984-2012. The findings of both models indicate that Pakistan's uneven income distribution, measured through GDP per capita, is a contributing factor towards terrorism. This supports the hypothesis that the immiserizing modernization theory applies to Pakistan, where the underprivileged are marginalized. The results also suggest that the other socio-economic variables (poverty, unemployment and consumer confidence) can reduce the severity of terrorism once these conditions are improved. The rationale of opportunity cost underlies this argument: poor employment conditions and poverty reduce the opportunity cost for individuals recruited by terrorist organizations, since economic returns are considerably low, thereby increasing the supply of volunteers and subsequently the intensity of terrorism. The argument that political freedom lowers terrorism holds true: the more people are politically repressed, the more they turn to alternative and illegal means to make their voices heard. The argument that a politically transitioning economy faces more terrorism is also found applicable to Pakistan. Finally, the study contributes to the ongoing debate on which of the two sets of factors is more significant in relation to terrorism by suggesting that socio-economic factors are the primary causes of terrorism in Pakistan.
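
To make the two competing specifications concrete, the sketch below estimates a static OLS model and a finite distributed-lag variant with statsmodels. The variable names, lag length and data file are illustrative assumptions, not the authors' exact specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical annual series, 1984-2012: terrorism incidents and socio-economic covariates.
df = pd.read_csv("pakistan_terrorism_1984_2012.csv")

# Static OLS benchmark
ols = smf.ols("incidents ~ gdp_per_capita + unemployment + poverty", data=df).fit()

# Finite distributed-lag model: add one- and two-year lags of the regressors
for var in ["gdp_per_capita", "unemployment", "poverty"]:
    df[f"{var}_l1"] = df[var].shift(1)
    df[f"{var}_l2"] = df[var].shift(2)

dlm = smf.ols(
    "incidents ~ gdp_per_capita + gdp_per_capita_l1 + gdp_per_capita_l2"
    " + unemployment + unemployment_l1 + unemployment_l2"
    " + poverty + poverty_l1 + poverty_l2",
    data=df.dropna(),
).fit()

print(ols.summary())
print(dlm.summary())
```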

Keywords: terrorism, socioeconomic conditions, political freedom, distributed lag model, ordinary least squares

Procedia PDF Downloads 311
2003 Optimum Structural Wall Distribution in Reinforced Concrete Buildings Subjected to Earthquake Excitations

Authors: Nesreddine Djafar Henni, Akram Khelaifia, Salah Guettala, Rachid Chebili

Abstract:

Reinforced concrete shear walls and vertical plate-like elements play a pivotal role in efficiently managing a building's response to seismic forces. This study investigates how the performance of reinforced concrete buildings equipped with shear walls featuring different shear wall-to-frame stiffness ratios aligns with the requirements stipulated in the Algerian seismic code RPA99v2003, particularly in high-seismicity regions. Seven distinct 3D finite element models are developed and evaluated through nonlinear static analysis. Engineering Demand Parameters (EDPs) such as lateral displacement, inter-story drift ratio, shear force, and bending moment along the building height are analyzed. The findings reveal two predominant categories of induced responses: force-based and displacement-based EDPs. Furthermore, as the shear wall-to-frame ratio increases, there is a concurrent increase in force-based EDPs and a decrease in displacement-based ones. Examining the distribution of shear walls from both force and displacement perspectives, model G, with the highest stiffness ratio and stiffness concentrated at the building's center, intensifies the induced forces. This configuration necessitates additional reinforcement, leading to a conservative design approach. Conversely, model C, with the lowest stiffness ratio and stiffness distributed towards the periphery, minimizes the induced shear forces and bending moments, representing an optimal scenario with maximal performance and minimal strength requirements.
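
For clarity, the inter-story drift ratio used as a displacement-based EDP follows its standard definition (not a study-specific formula):

$$\text{IDR}_i = \frac{u_i - u_{i-1}}{h_i}$$

where u_i is the lateral displacement at floor i and h_i is the height of story i.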

Keywords: dual RC buildings, RC shear walls, modeling, static nonlinear pushover analysis, optimization, seismic performance

Procedia PDF Downloads 41
2002 The Impact of Surface Roughness and PTFE/TiF3/FeF3 Additives in Plain ZDDP Oil on the Friction and Wear Behavior Using Thermal and Tribological Analysis under Extreme Pressure Condition

Authors: Gabi N. Nehme, Saeed Ghalambor

Abstract:

The use of titanium fluoride and iron fluoride (TiF3/FeF3) catalysts in combination with polytetrafluoroethylene (PTFE) in plain zinc dialkyldithiophosphate (ZDDP) oil is important for the study of engine tribocomponents and is increasingly a strategy to improve the formation of tribofilm and to provide low friction and excellent wear protection in reduced-phosphorus plain ZDDP oil. The influence of surface roughness and the concentration of TiF3/FeF3/PTFE were investigated using bearing steel samples dipped in the lubricant solution at 100°C for two different heating durations. This paper addresses the effects on the water drop contact angle of different surface finishes after treating them with different lubricant combinations. The calculated water drop contact angles were analyzed using Design of Experiments (DOE) software, and it was determined that a 0.05 μm Ra surface roughness would provide an excellent TiF3/FeF3/PTFE coating for antiwear resistance, as reflected in the scanning electron microscopy (SEM) images and the tribological testing under extreme pressure conditions. Both friction and wear performance depend greatly on the PTFE and catalysts in plain ZDDP oil with 0.05% phosphorus and on the surface finish of the bearing steel. The friction- and wear-reducing effects observed in the tribological tests indicated a better micro-lubrication effect for the 0.05 μm Ra surface roughness treated at 100°C for 24 hours compared to the 0.1 μm Ra surface roughness given the same treatment.

Keywords: scanning electron microscopy, ZDDP, catalysts, PTFE, friction, wear

Procedia PDF Downloads 342
2001 Analyzing the Effect of Materials’ Selection on Energy Saving and Carbon Footprint: A Case Study Simulation of Concrete Structure Building

Authors: M. Kouhirostamkolaei, M. Kouhirostami, M. Sam, J. Woo, A. T. Asutosh, J. Li, C. Kibert

Abstract:

Construction is one of the most energy-consuming activities in the urban environment and results in a significant amount of greenhouse gas emissions around the world; the impact of the construction industry on global warming is therefore undeniable. Reducing building energy consumption and mitigating carbon production can slow the rate of global warming. The purpose of this study is to determine the amount of energy consumption and carbon dioxide production during the operation phase and the impact of using new building shells on energy saving and carbon footprint. A residential building with a reinforced concrete structure in Babolsar, Iran, is selected as the case study. DesignBuilder software has been used to simulate one year of building operation and calculate the amount of carbon dioxide production and energy consumption in the operation phase of the building. The initial results show the building uses 61,750 kWh of energy each year. Computer simulation is then used to analyze the effect of changing the building shell (using XPS polystyrene insulation and new electrochromic windows), as well as changing the type of lighting, on the reduction of energy consumption and the resulting carbon dioxide production. The results show that the energy use and carbon production during building operation are reduced by approximately 70% by applying the proposed changes, bringing emissions down to 11,345 kg CO2e/yr. The results of this study help designers and engineers consider the material selection process as one of the most important stages of design for improving the energy performance of buildings.

Keywords: construction materials, green construction, energy simulation, carbon footprint, energy saving, concrete structure, DesignBuilder

Procedia PDF Downloads 183
2000 Demand for Care in Primary Health Care in the Governorate of Ariana: Results of a Survey in Ariana Primary Health Care and Comparison with the Last 30 Years

Authors: Chelly Souhir, Harizi Chahida, Hachaichi Aicha, Aissaoui Sihem, Chahed Mohamed Kouni

Abstract:

Introduction: In Tunisia, few studies have attempted to describe the demand for primary care in a standardized and systematic way. The purpose of this study is to describe the main reasons for seeking care in primary health care, through a survey of the PHC facilities of the Ariana governorate, and to identify their evolutionary trend compared to the last 30 years as reported by studies of the same type. Materials and methods: This is a descriptive cross-sectional study of patients consulting first-line facilities in the governorate of Ariana; their use of care was recorded over two days of the same week in May 2016 in each of these PHC facilities. The same data collection sheet was used in all centres. The information was coded according to the International Classification of Primary Care (ICPC). The data were entered and analyzed with Epi Info 7 software. Results: Our study found that the most common ICPC chapters were respiratory (42%) and digestive (13.2%). In 1996, the most common were respiratory (43.5%) and circulatory (7.8%); in 2000, respiratory (39.6%) and circulatory (10.9%); and in 2002, respiratory (43%) and digestive (10.1%). According to the ICPC, the most frequent conditions in our study were acute angina (19%) and acute bronchitis and bronchiolitis (8%). In 1996, they were tonsillitis (21.6%) and acute bronchitis (7.2%); for Ben Abdelaziz in 2000, tonsillitis (14.5%) followed by acute bronchitis (8.3%); and in 2002, acute angina (15.7%) and acute bronchitis and bronchiolitis (11.2%) were the most common. Conclusion: Acute angina and tonsillitis are the most common conditions in all studies conducted in Tunisia.

Keywords: acute angina, classification of primary care, primary health care, tonsillitis, Tunisia

Procedia PDF Downloads 511
1999 Assessing Denitrification-Disintegration Model’s Efficacy in Simulating Greenhouse Gas Emissions, Crop Growth, Yield, and Soil Biochemical Processes in Moroccan Context

Authors: Mohamed Boullouz, Mohamed Louay Metougui

Abstract:

Accurate modeling of greenhouse gas (GHG) emissions, crop growth, soil productivity, and biochemical processes is crucial given escalating global concerns about climate change and the urgent need to improve agricultural sustainability. This study thoroughly investigates the application of the denitrification-disintegration (DNDC) model in the context of Morocco's unique agro-climate. Our main research hypothesis is that the DNDC model offers an effective and powerful tool for precisely simulating a wide range of significant parameters, including greenhouse gas emissions, crop growth, yield potential, and complex soil biogeochemical processes, consistent with the intricate features of Morocco's agricultural environments. To test this hypothesis, an extensive body of field data was gathered covering Morocco's various agricultural regions and encompassing a range of soil types, climatic factors, and crop varieties. These experimental data sets serve as the foundation for careful model calibration and subsequent validation, ensuring the accuracy of the simulation results. In conclusion, the prospective research findings add to the global conversation on climate-resilient agricultural practices while promoting sustainable agricultural models in Morocco. The recognition of the DNDC model as a potent simulation tool tailored to Moroccan conditions may strengthen the ability of policy makers and agricultural actors to make informed decisions that advance not only food security but also environmental stability.

Keywords: greenhouse gas emissions, DNDC model, sustainable agriculture, Moroccan cropping systems

Procedia PDF Downloads 51
1998 Parametric Models of Facade Designs of High-Rise Residential Buildings

Authors: Yuchen Sharon Sung, Yingjui Tseng

Abstract:

High-rise residential buildings have become the mainstream housing pattern in the world's metropolises under the current trend of urbanization. The facades of high-rise buildings are essential elements of the urban landscape. The skins of these facades are important media between the interior and exterior of high-rise buildings: they not only connect users and environments, but also play important functional and aesthetic roles. This research studies the skins of high-rise residential buildings using the methodology of shape grammar to find the rules that determine the combinations of facade patterns, and analyzes the patterns' parameters using the Grasshopper software. We chose a number of facades of high-rise residential buildings as sources to discover the underlying rules and concepts of the generation of facade skins. This research also identifies the rules that influence the composition of facade skins. The items of the facade skins, such as windows, balconies, walls, sun visors and metal grilles, are treated as elements in the system of facade skins. The compositions of these elements are categorized and described by logical rules, and the types of high-rise building facade skins are modelled in Grasshopper. A variety of analyzed patterns can then be applied to other facade skins through this parametric mechanism. Using the patterns established in the models, researchers can analyze each single item to do more detailed tests, and architects can apply each of these items to construct facades for other buildings through various combinations and permutations. The goal of these models is to develop a mechanism to generate prototypes in order to facilitate the generation of various facade skins.
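
A toy sketch of the rule-based idea is given below: a small set of illustrative replacement rules populates a facade grid with wall, window, balcony and grille elements. The rules and element names are invented for illustration only and are not the grammar extracted by the authors (whose models were built in Grasshopper).

```python
import random

# Illustrative facade elements and shape-grammar-style rules (hypothetical, not the
# authors' extracted grammar): each rule rewrites a generic bay into a concrete
# element depending on its floor and column position.
def apply_rules(floor: int, col: int, n_cols: int) -> str:
    if col in (0, n_cols - 1):           # rule 1: edge bays are solid walls
        return "WALL"
    if floor % 2 == 0 and col % 2 == 1:  # rule 2: alternate bays get balconies
        return "BALCONY"
    return random.choice(["WINDOW", "GRILLE"])  # rule 3: infill bays

def generate_facade(n_floors: int = 10, n_cols: int = 6):
    return [[apply_rules(f, c, n_cols) for c in range(n_cols)] for f in range(n_floors)]

for row in reversed(generate_facade()):
    print(" ".join(f"{element:7s}" for element in row))
```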

Keywords: facade skin, Grasshopper, high-rise residential building, shape grammar

Procedia PDF Downloads 496
1997 Analysis of Computer Science Papers Conducted by Board of Intermediate and Secondary Education at Secondary Level

Authors: Ameema Mahroof, Muhammad Saeed

Abstract:

The purpose of this study was to analyze the computer science papers conducted by the Board of Intermediate and Secondary Education with reference to Bloom's taxonomy. The study has two parts. First, the papers conducted by the Board of Intermediate and Secondary Education are analyzed against the basic rules of item construction, especially Bloom's taxonomy (1956). Second, item analysis is carried out to improve the psychometric properties of the test. The sample included the computer science question papers of the higher secondary classes (XI-XII) for the years 2011 and 2012. For item analysis, data were collected from 60 students through convenience sampling. Findings of the study revealed that in the papers set by the Board of Intermediate and Secondary Education the maximum focus was on the knowledge and understanding levels, with very little focus on application, analysis, and synthesis. Furthermore, the item analysis of the question paper reveals that the item difficulties did not reflect a balanced paper: a few items were very difficult, while most were too easy (measuring knowledge and understanding abilities). Likewise, most of the items did not truly discriminate between high and low achievers, and four items were even negatively discriminating. The researchers also analyzed the items of the paper using the ConQuest software. These results show that the papers conducted by the Board of Intermediate and Secondary Education were not well constructed. It is recommended that paper setters be trained in developing question papers that measure various cognitive abilities, so that a good computer science paper assesses all of students' cognitive abilities.
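
The classical indices referred to above can be computed directly from a scored response matrix. The sketch below, using a made-up 0/1 matrix in place of the study's actual responses, calculates item difficulty and an upper-lower discrimination index of the kind used to flag negatively discriminating items.

```python
import numpy as np

# Hypothetical scored responses: 60 students (rows) x 20 items (columns), 1 = correct.
rng = np.random.default_rng(0)
scores = rng.integers(0, 2, size=(60, 20))

totals = scores.sum(axis=1)
difficulty = scores.mean(axis=0)           # proportion correct per item (p-value)

# Upper-lower (27%) discrimination index: D = p_upper - p_lower
k = int(round(0.27 * scores.shape[0]))
order = np.argsort(totals)
lower, upper = scores[order[:k]], scores[order[-k:]]
discrimination = upper.mean(axis=0) - lower.mean(axis=0)

for i, (p, d) in enumerate(zip(difficulty, discrimination), start=1):
    flag = " <- negative discrimination" if d < 0 else ""
    print(f"Item {i:2d}: difficulty={p:.2f}, discrimination={d:.2f}{flag}")
```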

Keywords: Bloom’s taxonomy, question paper, item analysis, cognitive domain, computer science

Procedia PDF Downloads 136
1996 Modeling of Bipolar Charge Transport through Nanocomposite Films for Energy Storage

Authors: Meng H. Lean, Wei-Ping L. Chu

Abstract:

The effects of ferroelectric nanofiller size, shape, loading, and polarization on bipolar charge injection, transport, and recombination through amorphous and semicrystalline polymers are studied. A 3D particle-in-cell model extends the classical electrical double layer representation to treat ferroelectric nanoparticles. Metal-polymer charge injection assumes Schottky emission and Fowler-Nordheim tunneling, migration through field-dependent Poole-Frenkel mobility, and recombination with Monte Carlo selection based on collision probability. A boundary integral equation method is used for the solution of the Poisson equation, coupled with a second-order predictor-corrector scheme for robust time integration of the equations of motion. The stability criterion of the explicit algorithm conforms to the Courant-Friedrichs-Lewy limit. Trajectories for charge that makes it through the film are curvilinear paths that meander through the interspaces. Results indicate that charge transport behavior depends on nanoparticle polarization, with anti-parallel orientation showing the highest leakage conduction and the lowest level of charge trapping in the interaction zone. The simulation prediction of a size range of 80 to 100 nm to minimize attachment and maximize conduction is validated by theory. Attached charge fractions go from 2.2% to 97% as nanofiller size is decreased from 150 nm to 60 nm. The computed conductivity of 0.4 x 10^-14 S/cm is in agreement with published data for plastics. Charge attachment increases with spheroids due to the increase in surface area, especially for oblate spheroids, showing the influence of larger cross-sections. Charge attachment to nanofillers and nanocrystallites increases with vol.% loading or degree of crystallinity, and saturates at about 40 vol.%.
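
For reference, the injection and transport mechanisms named above have standard textbook forms (generic expressions, not model-specific fits from this work). Schottky (thermionic) emission over a field-lowered barrier and the Poole-Frenkel field-dependent mobility can be written as:

$$J_{Sch} = A^{*}T^{2}\exp\!\left[-\frac{q\left(\phi_B - \sqrt{qE/4\pi\varepsilon}\right)}{k_B T}\right], \qquad \mu(E) = \mu_0 \exp\!\left(\frac{\beta_{PF}\sqrt{E}}{k_B T}\right),\quad \beta_{PF}=\sqrt{\frac{q^{3}}{\pi\varepsilon}}$$

where A* is the effective Richardson constant, φ_B the injection barrier, E the local electric field, and ε the permittivity of the polymer.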

Keywords: nanocomposites, nanofillers, electrical double layer, bipolar charge transport

Procedia PDF Downloads 337
1995 Photoelastic Analysis and Finite Elements Analysis of a Stress Field Developed in a Double Edge Notched Specimen

Authors: A. Bilek, M. Beldi, T. Cherfi, S. Djebali, S. Larbi

Abstract:

Finite element analysis and photoelasticity are used to determine the stress field developed in a double edge notched specimen loaded in tension. The specimen is cut from a birefringent plate. Experimental isochromatic fringes are obtained with circularly polarized light at the analyzer of a regular polariscope. The fringes represent the loci of points of equal maximum shear stress. In order to obtain the stress values corresponding to the fringe orders recorded in the notched specimen, particularly in the neighborhood of the notches, a calibration disc made of the same material is loaded in compression along its diameter to determine the photoelastic fringe value. This fringe value is also used in the finite element solution to obtain the simulated photoelastic fringes, the isochromatics as well as the isoclinics. A color scale is used by the software to represent the simulated fringes over the whole model. The stress concentration factor can be readily obtained at the notches. Good agreement is obtained between the experimental and the simulated fringe patterns and between the graphs of the shear stress, particularly in the neighborhood of the notches. The purpose of this paper is to show that the isochromatic and isoclinic fringe patterns in a stressed model can be obtained rapidly and accurately by finite element analysis, as the experimental procedure can be time-consuming. Stress fields can therefore be analyzed in three-dimensional models as long as the meshing and the boundary conditions are properly set in the program.
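
The conversion from fringe order to stress relies on the standard stress-optic law, reproduced here for clarity:

$$\sigma_1 - \sigma_2 = \frac{N f_\sigma}{h}, \qquad \tau_{max} = \frac{\sigma_1 - \sigma_2}{2} = \frac{N f_\sigma}{2h}$$

where N is the isochromatic fringe order, f_σ the material fringe value obtained from the calibration disc, and h the model thickness.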

Keywords: isochromatic fringe, isoclinic fringe, photoelasticity, stress concentration factor

Procedia PDF Downloads 217
1994 Flow Field Analysis of Different Intake Bump (Compression Surface) Configurations on a Supersonic Aircraft

Authors: Mudassir Ghafoor, Irsalan Arif, Shuaib Salamat

Abstract:

This paper presents the modeling and analysis of different intake bump (compression surface) configurations and a comparison with an existing supersonic aircraft having a bump intake configuration. Many successful aircraft have shown that a Diverterless Supersonic Inlet (DSI), as compared to a conventional intake, can reduce weight, complexity and maintenance cost. The research is divided into two parts. In the first part, four different intake bumps are modeled for comparative analysis, keeping the outer perimeter dimensions of the fighter aircraft consistent, and characteristics such as flow behavior, boundary layer diversion and pressure recovery are analyzed. In the second part, the modeled bumps are integrated with the intake duct for performance analysis and comparison with existing supersonic aircraft data. The bumps are named uniform large (Config 1), uniform small (Config 2), uniform sharp (Config 3) and non-uniform (Config 4) based on their geometric features. Analysis is carried out at different Mach numbers to examine flow behavior in the subsonic and supersonic regimes. Flow behavior, boundary layer diversion and pressure recovery are examined for each bump configuration, and a comparative study is carried out. The analysis reveals that at subsonic speed, Config 1 and Config 2 give pressure recoveries similar to the diverterless supersonic intake, but the difference in pressure recoveries becomes significant at supersonic speed. It was concluded that Config 1 gives better results than Config 3, and that the higher-amplitude bump (Config 1) is preferred over the lower ones (Config 2 and 4). It was also observed that the maximum height of the bump is best placed near the cowl lip of the intake duct.
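
The pressure recovery compared across configurations is the usual ratio of total pressure delivered to the engine face to the freestream total pressure (the standard definition, not a value specific to this study):

$$\eta_{PR} = \frac{P_{t,2}}{P_{t,0}}$$

where P_{t,2} is the (mass-averaged) total pressure at the compressor entry plane and P_{t,0} the freestream total pressure.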

Keywords: bump intake, boundary layer, computational fluid dynamics, diverter-less supersonic inlet

Procedia PDF Downloads 232
1993 Mental Contrasting with Implementation Intentions: A Metacognitive Strategy on Educational Context

Authors: Paula Paulino, Alzira Matias, Ana Margarida Veiga Simão

Abstract:

Self-regulated learning (SRL) directs students in analyzing proposed tasks, setting goals and designing plans to achieve those goals. The literature has suggested a metacognitive strategy for goal attainment known as Mental Contrasting with Implementation Intentions (MCII). This strategy involves Mental Contrasting (MC), in which a significant goal and an obstacle are identified, and Implementation Intentions (II), in which an "if... then…" plan is conceived and operationalized to overcome that obstacle. The present study assesses the MCII process and whether it promotes students' commitment towards learning goals during school tasks in science subjects. In this investigation, we studied the MCII strategy in the systemic context of the classroom. Fifty-six students from middle school and secondary education attending a public school in Lisbon (Portugal) participated in the study. The MCII strategy was explicitly taught in a procedure that included metacognitive modeling, guided practice and autonomous practice of the strategy. Students were instructed to mentally contrast a goal they wanted to achieve with a possible obstacle to achieving it, and then to formulate plans to overcome the identified obstacle. The preliminary results suggest that the MCII metacognitive strategy, applied to the school context, leads to more sophisticated reflections, the promotion of learning goals and the elaboration of more complex and specific self-regulated plans. Further, students achieved better results on school tests and worksheets after practicing the strategy. This study has important implications, since MCII has been related to improved outcomes and increased attendance. Additionally, MCII seems to be an innovative process that captures students' efforts to learn and enhances self-efficacy beliefs during learning tasks.

Keywords: implementation intentions, learning goals, mental contrasting, metacognitive strategy, self-regulated learning

Procedia PDF Downloads 223
1992 Automated Natural Hazard Zonation System with Internet-SMS Warning: Distributed GIS for Sustainable Societies Creating Schema and Interface for Mapping and Communication

Authors: Devanjan Bhattacharya, Jitka Komarkova

Abstract:

The research describes the implementation of a novel, stand-alone system for dynamic hazard warning. The system uses existing infrastructure already in place, such as mobile networks and a laptop/PC, plus a small software installation. The geospatial datasets are maps of the region, which are likewise inexpensive; hence little investment is needed, and the warning reaches everyone with a mobile phone. A novel architecture for hazard assessment and warning is introduced in which major ICT technologies are interfaced to give a WebGIS-based, dynamic, real-time geohazard warning communication system. Existing technologies are combined in a novel architectural design to address a neglected domain through dynamically updatable WebGIS-based warning communication. The work thus presents a new architecture that addresses hazard warning in a sustainable and user-friendly manner, coupling hazard zonation and hazard warning procedures into a single system, with a generalized architecture for handling a range of geohazards. The developmental work presented here can be summarized as: the development of an internet-SMS-based automated geohazard warning communication system; the integration of a warning communication system with a hazard evaluation system; the interfacing of different open-source technologies in the design and development of the warning system; the modularization of these technologies; and automated data creation, transformation and dissemination over different interfaces. The architecture of the developed warning system is functionally automated and generalized enough to be used for any hazard, while setup requirements have been kept to a minimum.
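
As a purely illustrative sketch of the dissemination step (not the authors' implementation, and using a hypothetical SMS-gateway endpoint and phone number), a warning could be pushed once a monitored location falls inside a hazard zone:

```python
import requests

# Hypothetical gateway URL and recipients; a real deployment would use the
# operator's SMS gateway or a WebGIS-triggered service.
SMS_GATEWAY = "https://sms-gateway.example.org/send"

def point_in_zone(lat, lon, zone):
    """Crude bounding-box test standing in for a real GIS hazard-zone query."""
    return zone["lat_min"] <= lat <= zone["lat_max"] and zone["lon_min"] <= lon <= zone["lon_max"]

def dispatch_warning(lat, lon, hazard_zones, recipients):
    for zone in hazard_zones:
        if point_in_zone(lat, lon, zone):
            message = f"ALERT: {zone['hazard']} warning for your area ({lat:.3f}, {lon:.3f})."
            for number in recipients:
                requests.post(SMS_GATEWAY, data={"to": number, "text": message}, timeout=10)

zones = [{"hazard": "landslide", "lat_min": 30.2, "lat_max": 30.6, "lon_min": 78.9, "lon_max": 79.3}]
dispatch_warning(30.4, 79.1, zones, ["+910000000000"])
```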

Keywords: geospatial, web-based GIS, geohazard, warning system

Procedia PDF Downloads 389
1991 The Relationship Between Cyberbullying Victimization, Parent and Peer Attachment and Unconditional Self-Acceptance

Authors: Florina Magdalena Anichitoae, Anca Dobrean, Ionut Stelian Florean

Abstract:

Because cyberbullying victimization is an increasing problem affecting more and more children and adolescents around the world, we wanted to take a step forward in analyzing this phenomenon. We therefore examined a set of variables that have not been studied together before, trying to develop another way to view cyberbullying victimization. We tested the effects of mother, father, and peer attachment on adolescents' involvement in cyberbullying as victims through unconditional self-acceptance. Furthermore, we analyzed each subscale of the IPPA-R, the instrument used to measure parent and peer attachment, in relation to cyberbullying victimization through unconditional self-acceptance. We also analyzed whether gender and age could act as moderators in this model. The analysis was performed on 653 adolescents aged 11-17 years from Romania. We used structural equation modeling in the R environment. For the reliability analysis of the IPPA-R subscales, the USAQ, and the Cyberbullying Test, we calculated internal consistency indices, which varied between .68 and .91. We created two models: the first including peer alienation, peer trust, peer communication, self-acceptance and cyberbullying victimization, with CFI=0.97, RMSEA=0.02, 90%CI [0.02, 0.03] and SRMR=0.07; and the second including parental alienation, parental trust, parental communication, self-acceptance and cyberbullying victimization, with CFI=0.97, RMSEA=0.02, 90%CI [0.02, 0.03] and SRMR=0.07. On the one hand, cyberbullying victimization was predicted by peer alienation and peer communication through unconditional self-acceptance, while peer trust directly, significantly, and negatively predicted involvement in cyberbullying. Considering gender and age as moderators, we found that the relationship between unconditional self-acceptance and cyberbullying victimization is stronger in girls, but age does not moderate this relationship. On the other hand, the hypothesis that parental alienation, parental communication, and parental trust predict the degree of cyberbullying victimization through unconditional self-acceptance was not supported; still, we identified direct paths in which parental alienation positively, and parental trust negatively, predicted victimization. Some limitations of this study are discussed at the end.
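
To make the mediation structure concrete, a minimal specification of the peer model is sketched below using Python's semopy package (the authors worked in R; the variable names and data file are illustrative, not the study's actual dataset).

```python
import pandas as pd
from semopy import Model

# Hypothetical wide-format dataset: one row per adolescent.
data = pd.read_csv("cyberbullying_peer_model.csv")

# Mediation structure: peer attachment subscales -> unconditional self-acceptance
# -> cyberbullying victimization, plus a direct path from peer trust.
desc = """
self_acceptance ~ peer_alienation + peer_communication + peer_trust
victimization  ~ self_acceptance + peer_trust
"""

model = Model(desc)
model.fit(data)
print(model.inspect())  # parameter estimates and p-values
```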

Keywords: adolescent, attachment, cyberbullying victimization, parents, peers, unconditional self-acceptance

Procedia PDF Downloads 191
1990 Urban Change Detection and Pattern Analysis Using Satellite Data

Authors: Shivani Jha, Klaus Baier, Rafiq Azzam, Ramakar Jha

Abstract:

In India, people generally migrate from rural to urban areas for better infrastructure, a higher standard of living, good job opportunities and better transport and communication. Unplanned urban development due to this migration causes serious damage to land use, water quality and the available water resources. In the present work, an attempt has been made to use satellite data from different years for urban change detection in the Chennai metropolitan area, along with pattern analysis to generate future scenarios of urban development using buffer zoning in a GIS environment. In the analysis, SRTM (30 m) elevation data and IRS-1C satellite data for the years 1990, 2000, and 2014 are used. The flow accumulation, aspect, flow direction and slope maps developed using the SRTM 30 m data are very useful for finding suitable locations for industrial setups and urban settlements. The normalized difference vegetation index (NDVI) and Principal Component Analysis (PCA) have been used in ERDAS Imagine software for change detection in the land use of the Chennai metropolitan area. It has been observed that the urban area has increased exponentially, with a significant decrease in agricultural and barren lands. However, the water bodies located in the study region are protected and are being used as freshwater for drinking purposes. Using buffer zone analysis in the GIS environment, it has been observed that development has taken place significantly in the south-west direction and will continue to do so in the future.
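
The vegetation index used for the change detection is the standard normalized difference of the near-infrared and red reflectances:

$$NDVI = \frac{\rho_{NIR} - \rho_{Red}}{\rho_{NIR} + \rho_{Red}}$$

Values close to +1 indicate dense vegetation, values near zero indicate built-up or barren surfaces, and negative values typically correspond to water.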

Keywords: urban change, satellite data, the Chennai metropolis, change detection

Procedia PDF Downloads 387
1989 A BIM-Based Approach to Assess COVID-19 Risk Management Regarding Indoor Air Ventilation and Pedestrian Dynamics

Authors: T. Delval, C. Sauvage, Q. Jullien, R. Viano, T. Diallo, B. Collignan, G. Picinbono

Abstract:

In the context of the international spread of COVID-19, the Centre Scientifique et Technique du Bâtiment (CSTB) has led joint research with the French Hauts-de-Seine departmental authorities to analyse the risk in school spaces according to their configuration, ventilation system and spatial segmentation strategy. This paper describes the main results of this joint research. A multidisciplinary team involving experts in indoor air quality/ventilation, pedestrian movement and IT was established to develop a COVID risk analysis tool based on Building Information Modeling (BIM). The work started with a specific analysis of two pilot schools in order to provide the local administration with specifications to minimize the spread of the virus. Different recommendations were published to optimize and validate the use of the ventilation systems and the strategy for student occupancy and student flow segmentation within the building. This COVID expertise has been digitized to enable a quick risk analysis of an entire building, which the public administration can use through a simple user interface implemented in free BIM management software. One of the most interesting results is the ability to dynamically compare different ventilation system scenarios and space occupation strategies inside the BIM model. This concurrent engineering approach provides users with the optimal solution according to both ventilation and pedestrian flow expertise.

Keywords: BIM, knowledge management, system expert, risk management, indoor ventilation, pedestrian movement, integrated design

Procedia PDF Downloads 94
1988 Behavior of Common Philippine-Made Concrete Hollow Block Structures Subjected to Seismic Load Using Rigid Body Spring-Discrete Element Method

Authors: Arwin Malabanan, Carl Chester Ragudo, Jerome Tadiosa, John Dee Mangoba, Eric Augustus Tingatinga, Romeo Eliezer Longalong

Abstract:

Concrete hollow blocks (CHB) are the most commonly used masonry blocks for walls in residential houses, school buildings and public buildings in the Philippines. During the 2013 Bohol earthquake (Mw 7.2), CHB walls proved to be very vulnerable to severe external actions such as strong ground motion. In this paper, a numerical model of CHB structures is proposed, and the seismic behavior of CHB houses is presented. In the modeling, the Rigid Body Spring-Discrete Element Method (RBS-DEM) is used, wherein masonry blocks are discretized into rigid elements connected by nonlinear springs at preselected contact points. The shear and normal stiffnesses of the springs are derived from the material properties of the CHB unit, incorporating the grout and mortar fillings through a volumetric transformation of the dimensions using material ratios. Numerical models of reinforced and unreinforced walls are first subjected to linearly increasing in-plane loading to observe the different failure mechanisms. These wall models are then assembled to form typical model masonry houses and subjected to the El Centro and Pacoima earthquake records. Numerical simulations show that the elastic, failure and collapse behavior of the model houses agree well with shaking table test results. The effectiveness of the method in replicating failure patterns will serve as a basis for improving the design and provides a good basis for strengthening such structures.

Keywords: concrete hollow blocks, discrete element method, earthquake, rigid body spring model

Procedia PDF Downloads 347
1987 Flood Hazard Impact Based on Simulation Model of Potential Flood Inundation in Lamong River, Gresik Regency

Authors: Yunita Ratih Wijayanti, Dwi Rahmawati, Turniningtyas Ayu Rahmawati

Abstract:

Gresik is one of the districts in East Java Province, Indonesia. Gresik Regency has three major rivers, namely the Bengawan Solo River, the Brantas River, and the Lamong River. The Lamong River is a tributary of the Bengawan Solo River. Flood disasters in Gresik Regency are often caused by the overflow of the Lamong River. The losses caused by these floods are very large and detrimental to the affected people. Therefore, to minimize the impact of flooding, preventive action is necessary; before taking such action, however, information is needed on potential inundation areas and water levels at various points. For this reason, a flood simulation model is required. In this study, the simulation was carried out using a Geographic Information System (GIS) method with the help of Global Mapper software. The simulation takes a topographical approach based on Digital Elevation Model (DEM) data, which have been widely used in hydrological research. The results obtained from the flood simulation are the distribution of flood inundation and the water level. The inundation extent serves to determine the area of flooding with reference to the 50-100 year flood, while the water level serves to provide early warning information. Both will be very useful for estimating the losses that future flooding could cause in Gresik Regency, so that the Gresik Regency Regional Disaster Management Agency can take precautions before a flood disaster strikes.
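
One simple, DEM-based way to approximate an inundation extent of the kind described above is a "bathtub" comparison of terrain elevation against a flood water level; the study itself used Global Mapper, so the sketch below is only an assumption of the general approach, with a made-up elevation grid.

```python
import numpy as np

def inundation_mask(dem: np.ndarray, water_level: float) -> np.ndarray:
    """Cells whose ground elevation lies at or below the flood water level."""
    return dem <= water_level

def flood_depth(dem: np.ndarray, water_level: float) -> np.ndarray:
    """Water depth over inundated cells (zero elsewhere)."""
    return np.where(inundation_mask(dem, water_level), water_level - dem, 0.0)

# Hypothetical 5x5 DEM (metres above sea level) and a design flood level of 3.0 m.
dem = np.array([
    [1.0, 1.5, 2.0, 4.0, 5.0],
    [1.2, 1.8, 2.5, 3.8, 4.9],
    [1.1, 2.2, 3.1, 3.9, 5.2],
    [0.9, 1.7, 2.9, 4.2, 5.5],
    [0.8, 1.4, 2.6, 4.5, 6.0],
])
print(flood_depth(dem, water_level=3.0))
```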

Keywords: flood hazard, simulation model, potential inundation, global mapper, Gresik Regency

Procedia PDF Downloads 73
1986 Object Recognition System Operating from Different Type Vehicles Using Raspberry and OpenCV

Authors: Maria Pavlova

Abstract:

Nowadays it is possible to mount a camera on different vehicles such as quadcopters, trains and airplanes. The camera can also be the input sensor in many different systems, which means that object recognition, as an integral part of monitoring and control, can be a key part of most intelligent systems. The aim of this paper is to focus on the object recognition process during vehicle movement. During the vehicle's movement the camera takes pictures of the environment without storing them in a database. When the camera detects an object of interest (for example, a human or an animal), the system saves the picture and sends it to the workstation in real time. This functionality is very useful in emergency or security situations where it is necessary to find a specific object. In another application, the camera can be mounted at a crossroad with little pedestrian traffic; if one or more persons approach the road, the traffic lights turn green so that they can cross. This paper presents a system that addresses these problems. The architecture of the object recognition system includes the camera, the Raspberry Pi platform, a GPS system, a neural network, software and a database. The camera in the system takes the pictures, and object recognition is done in real time using the OpenCV library on the Raspberry Pi. An additional feature of the system is the ability to record the GPS coordinates of the captured object's position. The results of this processing are sent to a remote station, so the location of the specific object is known. Using a neural network, the module can learn to solve problems from incoming data and become part of a larger intelligent system. The present paper focuses on the design and integration of image recognition as a part of smart systems.
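
A minimal sketch of the detection loop described above is shown below, using OpenCV's built-in HOG pedestrian detector as a stand-in for the paper's recognition stage; the GPS read and the remote-station upload are stubbed out, since the system's exact interfaces are not given.

```python
import time
import cv2

def read_gps():
    """Stub for the GPS module; a real system would read from a serial NMEA device."""
    return 42.6977, 23.3219  # hypothetical latitude/longitude

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # on-board camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(boxes) > 0:                      # a person was detected
        lat, lon = read_gps()
        filename = f"detection_{int(time.time())}_{lat:.4f}_{lon:.4f}.jpg"
        cv2.imwrite(filename, frame)        # save only frames containing the object
        # send_to_workstation(filename)     # placeholder for the real-time upload
cap.release()
```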

Keywords: camera, object recognition, OpenCV, Raspberry

Procedia PDF Downloads 209
1985 Investigating the Effect of Brand Equity on Competitive Advantage in the Banking Industry

Authors: Rohollah Asadian Kohestani, Nazanin Sedghi

Abstract:

As the number of banks and financial institutions operating in Iran has increased significantly, attracting and retaining customers and encouraging them to continue using modern banking services have become vital issues. In the current economic conditions of Iran, therefore, banks cannot compete seriously without a deep understanding of consumers and a good fit between banking services and their needs. It should be noted that concepts such as 'brand equity' are defined from the consumers' point of view; however, they also concern shareholders, competitors and other stakeholders of a firm, in addition to the bank and its customers. This study examines the impact of brand equity on competitive advantage in the banking industry, as intense competition between the brands of different banks leads them to pay more attention to their brands. The research is based on Aaker's model and examines the impact of the four dimensions of brand equity on the competitive advantage of private banks in Behshahr city. It is an applied study, and the data analysis was carried out using a descriptive method. Data were collected through a literature review and a questionnaire. Simple random sampling was used to select bank staff, and the questionnaire was distributed among the staff and customers of five private banks: Tejarat, Mellat, Refah K., Ghavamin and Tose’e Ta’avon. The results show a significant relationship between brand equity and competitive advantage. SPSS 16 and LISREL 8.5 software, together with descriptive and inferential statistical methods, were employed to analyze the data and test the hypotheses.

Keywords: brand awareness, brand loyalty, brand equity, competitive advantage

Procedia PDF Downloads 121
1984 Agent-Based Modelling to Improve Dairy-origin Beef Production: Model Description and Evaluation

Authors: Addisu H. Addis, Hugh T. Blair, Paul R. Kenyon, Stephen T. Morris, Nicola M. Schreurs, Dorian J. Garrick

Abstract:

Agent-based modeling (ABM) enables an in silico representation of complex systems and captures agent behavior resulting from interaction with other agents and their environment. This study developed an ABM to represent a pasture-based beef cattle finishing system in New Zealand (NZ) using attributes of the rearer, finisher, and processor, as well as specific attributes of dairy-origin beef cattle. The model was parameterized using values representing 1% of NZ dairy-origin cattle and 10% of rearers and finishers in NZ. The cattle agents consisted of 32% Holstein-Friesian, 50% Holstein-Friesian–Jersey crossbred, and 8% Jersey, with the remainder being other breeds. Rearers and finishers repetitively and simultaneously interacted to determine the type and number of cattle populating the finishing system. Rearers brought in four-day-old spring-born calves and reared them until 60 calves (representing a full truckload) had an average live weight of 100 kg before selling them on to finishers. Finishers mainly obtained weaners from rearers, or directly from dairy farmers when weaner demand exceeded the supply from rearers. Fast-growing cattle were sent for slaughter before the second winter, and the remainder were sent before their third winter. The model finished a higher number of bulls than heifers and steers, although it was 4% lower than the industry-reported value. Holstein-Friesian and Holstein-Friesian–Jersey crossbred cattle dominated the dairy-origin beef finishing system, while Jersey cattle accounted for less than 5% of the total processed beef cattle. Further studies to include retailer and consumer perspectives and other decision alternatives for finishing farms would improve the applicability of the model for decision-making processes.
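
A stripped-down sketch of the rearer-finisher interaction described above is given below. Growth rates, starting weights and time steps are placeholder values, and the real model tracks many more attributes (breed, season, processor decisions); only the 100 kg sale weight and 60-head truckload come from the model description.

```python
import random

WEANER_WEIGHT = 100.0   # kg live weight at which rearers sell (from the model description)
TRUCKLOAD = 60          # calves per sale batch (from the model description)

class Rearer:
    """Rears spring-born calves until a truckload averages 100 kg, then sells to a finisher."""
    def __init__(self, n_calves=120):
        # placeholder starting weight for four-day-old calves
        self.calves = [40.0 + random.uniform(-3, 3) for _ in range(n_calves)]

    def step_week(self):
        self.calves = [w + random.uniform(5, 8) for w in self.calves]  # placeholder weekly gain

    def sell_batch(self):
        batch = sorted(self.calves, reverse=True)[:TRUCKLOAD]
        if len(batch) == TRUCKLOAD and sum(batch) / TRUCKLOAD >= WEANER_WEIGHT:
            for w in batch:
                self.calves.remove(w)
            return batch
        return []

class Finisher:
    """Buys weaners from rearers and finishes them for slaughter."""
    def __init__(self):
        self.cattle = []

    def buy(self, batch):
        self.cattle.extend(batch)

rearer, finisher = Rearer(), Finisher()
for week in range(30):
    rearer.step_week()
    batch = rearer.sell_batch()
    if batch:
        finisher.buy(batch)
        print(f"Week {week}: finisher bought {len(batch)} weaners "
              f"(avg {sum(batch) / len(batch):.0f} kg)")
```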

Keywords: agent-based modelling, dairy cattle, beef finishing, rearers, finishers

Procedia PDF Downloads 81
1983 Accurately Measuring Stress Using Latest Breathing Technology and Its Relationship with Academic Performance

Authors: Farshid Marbouti, Jale Ulas, Julia Thompson

Abstract:

The main sources of stress among college students include changes in sleeping and eating habits, taking on new responsibilities, financial difficulties, exams, meeting new people, career decisions, fear of failure, pressure from parents, the transition to university (especially when it requires leaving home), working with unfamiliar people, trouble with parents, and relationships with the opposite sex. Students use a variety of coping strategies, including talking to family and friends, leisure activities and exercise. The Yerkes–Dodson law indicates that while a moderate amount of stress may be beneficial for performance, excessive stress results in weak performance. In other words, if students are too stressed, they are likely to have low academic performance. In a preliminary study conducted in 2017 with engineering students enrolled in three high-failure-rate classes, the majority of the students stated that they had high levels of stress, mainly for academic, financial, or family-related reasons. As the second stage of the study, the main purpose of this research is to investigate students' level of stress, sources of stress, their relationship with students' demographic background, students' coping strategies, and academic performance. A device is being developed to gather data from students' breathing patterns and measure their stress levels. In addition, all participants are asked to fill out a survey. The survey under development has the following categories: exam stressors, study-related stressors, financial pressures, transition to university, family-related stress, student response to stress, and stress management. After data collection, Structural Equation Modeling (SEM) analysis will be conducted to identify the relationships among students' level of stress, coping strategies, and academic performance.

Keywords: college student stress, coping strategies, academic performance, measuring stress

Procedia PDF Downloads 93
1982 Code Embedding for Software Vulnerability Discovery Based on Semantic Information

Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson

Abstract:

Deep learning methods have seen increasing application to the long-standing security research goal of automatic vulnerability detection in source code. Attention, however, must still be paid to the task of producing vector representations of source code (code embeddings) as input for these deep learning models. Graphical representations of code, most predominantly Abstract Syntax Trees and Code Property Graphs, have recently received some use for this task; however, for very large graphs representing very large code snippets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning the input to only vulnerability-relevant information; however, little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph, at the expense of the information contained in the graph's nodes. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification model. It uses information from the nodes as well as the structure of the code graph in order to select the features that are most indicative of the presence or absence of vulnerabilities. The model is implemented and experimentally tested using the SARD Juliet vulnerability test suite to determine its efficacy. It improves on existing code graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities.
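
To illustrate the general idea of pruning a code graph to vulnerability-relevant nodes before embedding (a generic illustration, not the SCEVD implementation), the sketch below keeps only nodes whose semantic labels match a small, made-up keyword set, together with their neighbours.

```python
import networkx as nx

# Toy code property graph: node attributes carry semantic information (the code token).
g = nx.DiGraph()
g.add_nodes_from([
    (1, {"code": "char buf[10]"}), (2, {"code": "strcpy(buf, input)"}),
    (3, {"code": "int i = 0"}),    (4, {"code": "printf(msg)"}),
])
g.add_edges_from([(1, 2), (3, 2), (2, 4)])

# Hypothetical vulnerability-relevant vocabulary for semantic feature selection.
RELEVANT = {"strcpy", "memcpy", "malloc", "free", "buf"}

def semantically_relevant(node_attrs):
    return any(tok in node_attrs["code"] for tok in RELEVANT)

keep = {n for n, a in g.nodes(data=True) if semantically_relevant(a)}
keep |= {m for n in keep for m in list(g.predecessors(n)) + list(g.successors(n))}

pruned = g.subgraph(keep).copy()
print(sorted(pruned.nodes()))  # the smaller graph passed on to the embedding model
```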

Keywords: code representation, deep learning, source code semantics, vulnerability discovery

Procedia PDF Downloads 142
1981 Evaluation of Virtual Reality for the Rehabilitation of Athlete Lower Limb Musculoskeletal Injury: A Method for Obtaining Practitioner’s Viewpoints through Observation and Interview

Authors: Hannah K. M. Tang, Muhammad Ateeq, Mark J. Lake, Badr Abdullah, Frederic A. Bezombes

Abstract:

Based on a theoretical assessment of the current literature, virtual reality (VR) could help to treat sporting injuries in a number of ways. However, it is important to obtain rehabilitation specialists' perspectives in order to design, develop and validate suitable content for a VR application focused on treatment. A one-day observation and interview study focused on the use of VR for the treatment of lower limb musculoskeletal conditions in athletes was therefore conducted with rehabilitation specialists at St George's Park, the England National Football Centre. The current paper establishes the methods suitable for obtaining practitioners' viewpoints through observation and interview in this context. Particular detail is provided on the method of qualitatively processing interview results using the qualitative data analysis software NVivo in order to produce a narrative of overarching themes. The observations and overarching themes identified could be used as a framework and success criteria for a VR application developed in future research. In conclusion, this work explains the methods deemed suitable for obtaining practitioners' viewpoints through observation and interview. This was required in order to highlight the characteristics and features of a VR application designed to treat lower limb musculoskeletal injuries in athletes, and it can be built upon to direct future work.

Keywords: athletes, lower-limb musculoskeletal injury, rehabilitation, return-to-sport, virtual reality

Procedia PDF Downloads 238
1980 Finite Element Modeling of a Lower Limb Based on the East Asian Body Characteristics for Pedestrian Protection

Authors: Xianping Du, Runlu Miao, Guanjun Zhang, Libo Cao, Feng Zhu

Abstract:

Current vehicle safety standards and human body injury criteria were established based on the biomechanical response of the Euro-American human body, without considering the differences in body anthropometry and injury characteristics among different races, particularly East Asian people with smaller body sizes. The absence of such race-specific design considerations negatively influences the protective performance of safety products for these populations and weakens the accuracy of the derived injury thresholds. To resolve these issues, in this study we aim to develop a race-specific finite element model to simulate the impact response of the lower extremity of a 50th percentile East Asian (Chinese) male. The model was built based on medical images of the leg of an average-size Chinese male and slightly adjusted based on statistical data. The model includes detailed anatomical features and is able to simulate active muscle force. Thirteen biomechanical tests available in the literature were used to validate its biofidelity. Using the validated model, a pedestrian-car impact accident that took place in China was reconstructed computationally. The results show that the newly developed lower leg model performs well in predicting the dynamic response and tibia fracture pattern. An additional comparison of the fracture tolerance of the East Asian and Euro-American lower limbs suggests that the current injury criterion underestimates the degree of injury to the East Asian human body.

Keywords: lower limb, East Asian body characteristics, traffic accident reconstruction, finite element analysis, injury tolerance

Procedia PDF Downloads 276
1979 Organotin (IV) Based Complexes as Promiscuous Antibacterials: Synthesis in vitro, in Silico Pharmacokinetic, and Docking Studies

Authors: Wajid Rehman, Sirajul Haq, Bakhtiar Muhammad, Syed Fahad Hassan, Amin Badshah, Muhammad Waseem, Fazal Rahim, Obaid-Ur-Rahman Abid, Farzana Latif Ansari, Umer Rashid

Abstract:

Five novel triorganotin (IV) compounds have been synthesized and characterized. The tin atom is penta-coordinated, assuming a trigonal-bipyramidal geometry. Using in silico derived parameters, the objective of our study is to design and synthesize promiscuous antibacterials potent enough to combat resistance. Among the various synthesized organotin (IV) complexes, compound 5 was found to be a potent antibacterial agent against various bacterial strains. Further lead optimization of drug-like properties was evaluated through in silico predictions. Data mining and computational analysis were utilized to characterize compound promiscuity, with the aim of reducing drug attrition in antibacterial design. Xanthine oxidase and human glucose-6-phosphatase were found to be the only true-positive off-target hits by the ChEMBL database and other tools utilizing the similarity ensemble approach. Propensity towards the α-3 receptor, human macrophage migration factor and thiazolidinedione were found to be false-positive off-targets, with E-values > 10^-4, for compounds 1, 3, and 4. Further, the positive drug-drug interaction of compound 1 as a uricosuric agent was validated by all databases and by docked protein targets with sequence similarity and compositional matrix alignment via BLAST software. The promiscuity of compound 5 was further confirmed by in silico binding to different antibacterial targets.

Keywords: antibacterial activity, drug promiscuity, ADMET prediction, metallo-pharmaceutical, antimicrobial resistance

Procedia PDF Downloads 491
1978 Contribution of Upper Body Kinematics on Tennis Serve Performance

Authors: Ikram Hussain, Fuzail Ahmad, Tawseef Ahmad Bhat

Abstract:

The tennis serve is one of the most prominent techniques for winning a point. The study aimed to explore the contribution of upper body kinematics to tennis serve performance during the Davis Cup (Oceania Group). Four Indian international tennis players who participated in the Davis Cup held at Indore, India served as the subjects for this study, with mean age 27 ± 4.79 years, mean height 186 ± 6.03 cm, and mean weight 81.25 ± 7.41 kg. The tennis serve was divided into three phases, viz. the preparatory phase, the force generation phase and the follow-through phase. The kinematic data for the study were recorded with a high-speed Canon camcorder at a shutter speed of 1/2000 s and a frame rate of 50 Hz, and were analysed with motion analysis software. Descriptive statistics and the F-test were employed through SPSS version 17.0 for the kinematic parameters under study, computed at the 0.05 level of significance with 46 degrees of freedom. Means, standard deviations and correlation coefficients were also employed to examine the relationship between the upper body kinematic parameters and performance. In the preparatory phase, the analysis revealed no significant effect of the kinematic parameters on performance. However, in the force generation phase, wrist velocity (r = 0.47), torso velocity (r = -0.53) and racket velocity (r = 0.60), and in the follow-through phase, torso acceleration (r = 0.43) and elbow angle (r = -0.48), play a significant role in the performance of the tennis serve. Therefore, players should pay attention to the velocities of these segments when preparing for competition.

Keywords: Davis Cup, kinematics, motion analysis, tennis serve

Procedia PDF Downloads 287