Search results for: normalized compression distance
282 Experimental Study Analyzing the Similarity Theory Formulations for the Effect of Aerodynamic Roughness Length on Turbulence Length Scales in the Atmospheric Surface Layer
Authors: Matthew J. Emes, Azadeh Jafari, Maziar Arjomandi
Abstract:
Velocity fluctuations of shear-generated turbulence are largest in the atmospheric surface layer (ASL) of nominally 100 m depth, which can lead to dynamic effects such as galloping and flutter on small physical structures on the ground when the turbulence length scales and the characteristic length of the structure are of the same order of magnitude. Turbulence length scales are a measure of the average sizes of the energy-containing eddies; they are widely estimated using two-point cross-correlation analysis, converting the temporal lag to a separation distance via Taylor's hypothesis that the convection velocity is equal to the mean velocity at the corresponding height. Profiles of turbulence length scales in the neutrally-stratified ASL, as predicted by Monin-Obukhov similarity theory in Engineering Sciences Data Unit (ESDU) 85020 for single-point data and ESDU 86010 for two-point correlations, are largely dependent on the aerodynamic roughness length. Field measurements have shown that longitudinal turbulence length scales show significant regional variation, whereas length scales of the vertical component show consistent Obukhov scaling from site to site because of the absence of low-frequency components. Hence, the objective of this experimental study is to compare the similarity theory relationships between the turbulence length scales and aerodynamic roughness length with those calculated using the autocorrelations and cross-correlations of field measurement velocity data at two sites: the Surface Layer Turbulence and Environmental Science Test (SLTEST) facility in a desert ASL in Dugway, Utah, USA and the Commonwealth Scientific and Industrial Research Organisation (CSIRO) wind tower in a rural ASL in Jemalong, NSW, Australia. The results indicate that the longitudinal turbulence length scales increase with increasing aerodynamic roughness length, as opposed to the relationships derived by similarity theory correlations in ESDU models. However, the ratio of the turbulence length scales in the lateral and vertical directions to the longitudinal length scales is relatively independent of surface roughness, showing consistent inner-scaling between the two sites and the ESDU correlations. Further, the diurnal variation of wind velocity due to changes in atmospheric stability conditions has a significant effect on the turbulence structure of the energy-containing eddies in the lower ASL.
Keywords: aerodynamic roughness length, atmospheric surface layer, similarity theory, turbulence length scales
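As an illustration of the single-point estimation step described in this abstract, a minimal sketch of an integral length scale computation is given below: the autocorrelation of the velocity record is integrated up to its first zero crossing and the integral time scale is converted to a length with Taylor's hypothesis. The synthetic record, sampling rate, and zero-crossing cutoff are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def autocorr(x, max_lag):
    """Normalized autocorrelation of a 1-D record up to max_lag samples."""
    x = x - x.mean()
    var = np.dot(x, x) / x.size
    return np.array([np.dot(x[:x.size - k], x[k:]) / ((x.size - k) * var)
                     for k in range(max_lag)])

def integral_length_scale(u, fs, max_lag=2000):
    """Longitudinal integral length scale from a velocity record u [m/s]
    sampled at fs [Hz]: integrate the autocorrelation up to its first zero
    crossing, then convert time to distance with Taylor's hypothesis."""
    r = autocorr(u, max_lag)
    zero = np.argmax(r <= 0.0) if np.any(r <= 0.0) else r.size
    T = r[:zero].sum() / fs      # integral time scale [s], rectangle rule
    return u.mean() * T          # Taylor: L = U * T (frozen turbulence)

# illustrative synthetic record: 8 m/s mean wind plus correlated noise
rng = np.random.default_rng(0)
u = 8.0 + np.convolve(rng.standard_normal(20_000), np.ones(200) / 200, "same")
print(f"L_ux ~ {integral_length_scale(u, fs=50.0):.1f} m")
```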
281 A webGIS Methodology to Support Sediments Management in Wallonia
Authors: Nathalie Stephenne, Mathieu Veschkens, Stéphane Palm, Christophe Charlemagne, Jacques Defoux
Abstract:
According to Europe's first River Basin Management Plans (RBMPs), 56% of European rivers failed to achieve the good status targets of the Water Framework Directive (WFD). In Central European countries such as Belgium, even more than 80% of rivers failed to achieve the WFD quality targets. Although the RBMPs should reduce the stressors and improve water body status, their potential to address multiple stress situations is limited due to insufficient knowledge on combined effects, multi-stress, prioritization of measures, impact on ecology and implementation effects. This paper describes a webGIS prototype developed for the Walloon administration to improve the communication and the management of sediment dredging actions carried out in rivers and lakes in the frame of RBMPs. A large number of stakeholders are involved in the management of rivers and lakes in Wallonia. They are in charge of technical aspects (clients and dredging operators, organizations involved in the treatment of waste…), management (managers involved in WFD implementation at the communal, provincial or regional level) or policy making (people responsible for policy compliance or legislation revision). These different kinds of stakeholders need different information and data to cover their duties but have to interact closely at different levels. Moreover, information has to be shared between them to improve the management quality of dredging operations within the ecological system. In the Walloon legislation, leveling dredged sediments on banks requires an official authorization from the administration. This request refers to spatial information such as the official land use map, the cadastral map, and the distance to potential pollution sources. The production of a collective geodatabase can facilitate the management of these authorizations from both sides. The proposed internet system integrates documents, data input, integration of data from disparate sources, map representation, database queries, analysis of monitoring data, presentation of results and cartographic visualization. A prototype web application using the geoviewer API chosen by the Geomatics department of the SPW has been developed and discussed with some potential users to facilitate the communication, the management and the quality of the data. The structure of the paper states the why, what, who and how of this communication tool.
Keywords: sediments, web application, GIS, rivers management
280 Covid-19 Lockdown Experience of Elderly Female as Reflected in Their Artwork
Authors: Liat Shamri-Zeevi, Neta Ram-Vlasov
Abstract:
Today the world as a whole is attempting to cope with COVID-19, which has affected all facets of personal and social life, from country-wide confinement to maintaining social distance and taking protective measures to maintain hygiene. One of the populations facing the most severe restrictions is seniors. Various studies have shown that creativity plays a crucial role in dealing with crisis events. Painting - regardless of media - allows for emotional and cognitive processing of these situations and enables the expression of experiences in a tangible creative way that conveys and endows meaning to the artwork. The current study was conducted in Israel immediately after a 6-week lockdown. It was designed to specifically examine the impact of the COVID-19 pandemic on the quality of life of elderly women as reflected in their artworks. The sample was composed of 21 Israeli women aged 60-90, in good mental health (without diagnosed dementia or Alzheimer's), all of whom were Hebrew-speaking and retired with an extended family, and who indicated that they painted and had engaged in artwork on an ongoing basis throughout the lockdown (from March 12 to May 30, 2020). The participants' artworks were collected, and a semi-structured in-depth interview was conducted that lasted one to two hours. The participants were asked about their feelings during the pandemic and the artworks they produced during this time, and they completed a questionnaire on well-being and mental health. The initial analysis of the interviews and artworks revealed themes related to the specific role of each piece of artwork. The first theme included notions that the artwork was an activity and a framework for doing, which supported positive emotions and provided a sense of vitality during the closure. Most of the participants painted images of nature and growth, which were ascribed concrete and symbolic meaning. The second theme was that the artwork enabled the processing of difficult and/or conflicting emotions related to the situation, including anxiety about death and loneliness, which were symbolically expressed in the artworks, such as images of the Corona virus and respiratory machines. The third theme suggested that the time and space prompted by the lockdown gave the participants time for a gathering together of the self and freed up time for creative activities. Many participants stated that they painted more, and more frequently, during the Corona lockdown. At the conference, additional themes and findings will be presented.
Keywords: Corona virus, artwork, quality of life of elderly
279 Digital Image Correlation: Metrological Characterization in Mechanical Analysis
Authors: D. Signore, M. Ferraiuolo, P. Caramuta, O. Petrella, C. Toscano
Abstract:
Digital Image Correlation (DIC) is a newly developed optical technique that is spreading in all engineering sectors because it allows the non-destructive estimation of the entire surface deformation without any contact with the component under analysis. These characteristics make DIC very appealing in all cases where the global deformation state must be known without using strain gauges, which are the most commonly used measuring devices. DIC is applicable to any material subjected to distortion caused by either thermal or mechanical load, allowing high-definition mapping of displacements and deformations. That is why, in the civil and transportation industries, DIC is very useful for studying the behavior of metallic materials as well as of composite materials. DIC is also used in the medical field for the characterization of the local strain field of vascular tissue surfaces subjected to uniaxial tensile loading. DIC can be carried out in two-dimensional mode (2D DIC) if a single camera is used or in three-dimensional mode (3D DIC) if two cameras are involved. Each point of the test surface framed by the cameras can be associated with a specific pixel of the image, and the coordinates of each point are calculated knowing the relative distance between the two cameras together with their orientation. In both arrangements, when a component is subjected to a load, several images related to different deformation states can be acquired through the cameras. Specific software analyzes the images via the mutual correlation between the reference image (obtained without any applied load) and those acquired during the deformation, giving the relative displacements. In this paper, a metrological characterization of digital image correlation is performed on aluminum and composite targets in both static and dynamic loading conditions by comparison between DIC and strain gauge measurements. In the static test, interesting results have been obtained thanks to an excellent agreement between the two measuring techniques. In addition, the deformation detected by the DIC is compliant with the result of a FEM simulation. In the dynamic test, the DIC was able to follow with good accuracy the periodic deformation of the specimen, giving results coherent with the ones given by FEM simulation. In both situations, it was seen that the DIC measurement accuracy depends on several parameters such as the optical focusing, the parameters chosen to perform the mutual correlation between the images and, finally, the reference points on the image to be analyzed. In the future, the influence of these parameters will be studied, and a method to increase the accuracy of the measurements will be developed in accordance with the requirements of the industries, especially the aerospace one.
Keywords: accuracy, deformation, image correlation, mechanical analysis
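The mutual-correlation step described in this abstract can be illustrated with a minimal zero-normalized cross-correlation (ZNCC) subset search; the subset size, search window, and synthetic pixel shift below are assumptions for illustration, not the algorithm of the commercial DIC software used in the study.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equally sized subsets."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_subset(ref, cur, y, x, half=15, search=10):
    """Integer-pixel displacement of the subset centred at (y, x) in the
    reference image, found by maximizing ZNCC in the deformed image."""
    tpl = ref[y - half:y + half + 1, x - half:x + half + 1]
    best, best_uv = -2.0, (0, 0)
    for dv in range(-search, search + 1):
        for du in range(-search, search + 1):
            win = cur[y + dv - half:y + dv + half + 1,
                      x + du - half:x + du + half + 1]
            c = zncc(tpl, win)
            if c > best:
                best, best_uv = c, (du, dv)
    return best_uv, best

# illustrative test: a random speckle pattern shifted by (u=3, v=2) pixels
rng = np.random.default_rng(1)
ref = rng.random((200, 200))
cur = np.roll(np.roll(ref, 2, axis=0), 3, axis=1)
print(match_subset(ref, cur, y=100, x=100))   # -> ((3, 2), ~1.0)
```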
278 Spatial Mapping of Variations in Groundwater of Taluka Islamkot Thar Using GIS and Field Data
Authors: Imran Aziz Tunio
Abstract:
Islamkot is an underdeveloped sub-district (taluka) in the Tharparkar district of Sindh province, Pakistan, located between latitude 24°25'19.79"N to 24°47'59.92"N and longitude 70° 1'13.95"E to 70°32'15.11"E. Islamkot has an arid desert climate, and the region is generally devoid of perennial rivers, canals, and streams. It is highly dependent on rainfall, which is not considered a reliable surface water source, and groundwater has been the only key source of water for many centuries. To assess groundwater potential, an electrical resistivity survey (ERS) was conducted in Islamkot Taluka. Groundwater investigations comprising 128 vertical electrical soundings (VES) were carried out to determine the groundwater potential and obtain qualitative and quantitative layered resistivity parameters. The PASI Model 16 GL-N Resistivity Meter was used, employing a Schlumberger electrode configuration with half current electrode spacing (AB/2) ranging from 1.5 to 100 m and potential electrode spacing (MN/2) from 0.5 to 10 m. The data were acquired with a maximum current electrode spacing of 200 m. The data processing for the delineation of dune sand aquifers involved the technique of data inversion, and the interpretation of the inversion results was aided by the use of forward modeling. The measured geo-electrical parameters were examined with the Interpex IX1D software, and apparent resistivity curves and synthetic model layered parameters were mapped in the ArcGIS environment using the Inverse Distance Weighting (IDW) interpolation technique. Qualitative interpretation of the VES data shows that the number of geo-electrical layers in the area varies from three to four, with different resistivity values detected. Out of 128 VES model curves, 42 are three-layered and 86 are four-layered. The resistivity of the first subsurface layer (loose surface sand) varied from 16.13 Ωm to 3353.3 Ωm and its thickness from 0.046 m to 17.52 m. The resistivity of the second subsurface layer (semi-consolidated sand) varied from 1.10 Ωm to 7442.8 Ωm and its thickness from 0.30 m to 56.27 m. The resistivity of the third subsurface layer (consolidated sand) varied from 0.00001 Ωm to 3190.8 Ωm and its thickness from 3.26 m to 86.66 m. The resistivity of the fourth subsurface layer (silt and clay) varied from 0.0013 Ωm to 16264 Ωm and its thickness from 13.50 m to 87.68 m. The Dar Zarrouk parameters are: longitudinal unit conductance S from 0.00024 to 19.91 mho; transverse unit resistance T from 7.34 to 40080.63 Ωm²; longitudinal resistivity RS from 1.22 to 3137.10 Ωm; and transverse resistivity RT from 5.84 to 3138.54 Ωm. The ERS data and Dar Zarrouk parameters were mapped, revealing that the study area has groundwater potential in the subsurface.
Keywords: electrical resistivity survey, GIS & RS, groundwater potential, environmental assessment, VES
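The Dar Zarrouk parameters reported above follow directly from the interpreted layer resistivities and thicknesses; a minimal sketch is given below, with made-up example layers rather than the Islamkot models.

```python
import numpy as np

def dar_zarrouk(rho, h):
    """Dar Zarrouk parameters for a layered earth model.
    rho: layer resistivities [ohm-m]; h: layer thicknesses [m]
    (basement excluded)."""
    rho, h = np.asarray(rho, float), np.asarray(h, float)
    H = h.sum()                     # total thickness [m]
    S = (h / rho).sum()             # longitudinal unit conductance [mho]
    T = (h * rho).sum()             # transverse unit resistance [ohm-m^2]
    rho_L = H / S                   # average longitudinal resistivity
    rho_T = T / H                   # average transverse resistivity
    lam = np.sqrt(rho_T / rho_L)    # coefficient of anisotropy
    return dict(S=S, T=T, rho_L=rho_L, rho_T=rho_T, anisotropy=lam)

# hypothetical 3-layer model: loose sand / semi-consolidated / consolidated
print(dar_zarrouk(rho=[850.0, 45.0, 12.0], h=[5.0, 30.0, 60.0]))
```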
277 Modeling Spatio-Temporal Variation in Rainfall Using a Hierarchical Bayesian Regression Model
Authors: Sabyasachi Mukhopadhyay, Joseph Ogutu, Gundula Bartzke, Hans-Peter Piepho
Abstract:
Rainfall is a critical component of climate governing vegetation growth and production, and forage availability and quality for herbivores. However, reliable rainfall measurements are not always available, making it necessary to predict rainfall values for particular locations through time. Predicting rainfall in space and time can be a complex and challenging task, especially where the rain gauge network is sparse and measurements are not recorded consistently for all rain gauges, leading to many missing values. Here, we develop a flexible Bayesian model for predicting rainfall in space and time and apply it to Narok County, situated in southwestern Kenya, using data collected at 23 rain gauges from 1965 to 2015. Narok County encompasses the Maasai Mara ecosystem, the northern-most section of the Mara-Serengeti ecosystem, famous for its diverse and abundant large mammal populations and the spectacular migration of enormous herds of wildebeest, zebra and Thomson's gazelle. The model incorporates geographical and meteorological predictor variables, including elevation, distance to Lake Victoria and minimum temperature. We assess the efficiency of the model by comparing it empirically with the established Gaussian process, kriging, simple linear and Bayesian linear models. We use the model to predict total monthly rainfall and its standard error for all 5 × 5 km grid cells in Narok County. Using the Monte Carlo integration method, we estimate seasonal and annual rainfall and their standard errors for 29 sub-regions in Narok. Finally, we use the predicted rainfall to predict large herbivore biomass in the Maasai Mara ecosystem on a 5 × 5 km grid for both the wet and dry seasons. We show that herbivore biomass increases with rainfall in both seasons. The model can handle data from a sparse network of observations with many missing values and performs at least as well as or better than four established and widely used models on the Narok data set. The model produces rainfall predictions consistent with expectation and in good agreement with the blended station and satellite rainfall values. The predictions are precise enough for most practical purposes. The model is very general and applicable to other variables besides rainfall.
Keywords: non-stationary covariance function, gaussian process, ungulate biomass, MCMC, maasai mara ecosystem
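A minimal Gaussian-process baseline, of the kind the authors compare their hierarchical model against, can be sketched as follows; the squared-exponential kernel, its hyperparameters, and the toy gauge data are assumptions for illustration, not the fitted Narok model.

```python
import numpy as np

def rbf(X1, X2, ell=50.0, sigma=1.0):
    """Squared-exponential covariance between site coordinates [km]."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sigma ** 2 * np.exp(-0.5 * d2 / ell ** 2)

def gp_predict(Xtr, ytr, Xte, noise=0.1):
    """GP posterior mean and standard error at test locations."""
    K = rbf(Xtr, Xtr) + noise ** 2 * np.eye(len(Xtr))
    Ks, Kss = rbf(Xtr, Xte), rbf(Xte, Xte)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytr - ytr.mean()))
    mu = ytr.mean() + Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - (v * v).sum(0) + noise ** 2
    return mu, np.sqrt(var)

# toy example: rainfall at 23 gauges, predicted on a small grid
rng = np.random.default_rng(2)
gauges = rng.uniform(0, 100, size=(23, 2))            # x, y in km
rain = 80 + 0.3 * gauges[:, 0] + rng.normal(0, 5, 23)
grid = np.array([[x, y] for x in (20, 50, 80) for y in (20, 50, 80)])
mu, se = gp_predict(gauges, rain, grid)
print(np.round(mu, 1), np.round(se, 1))
```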
276 Private Coded Computation of Matrix Multiplication
Authors: Malihe Aliasgari, Yousef Nejatbakhsh
Abstract:
The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers refer to a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication of practically large data sets faces computational and memory-related difficulties, which makes it necessary to carry out such operations on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY, a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization, under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y. Non-secure and secure matrix multiplication are studied. We study the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their tasks before the master server can recover the product W. We also consider the problem of secure and private distributed matrix multiplication W = XY, in which the matrix X is confidential, while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by trivially concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers
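A minimal, non-private polynomial-coded sketch of the straggler-tolerance idea above: X is split into m row blocks, each worker receives one evaluation of a matrix polynomial, and the master recovers X·Y from any m worker results by interpolation. The block counts and "straggling" worker indices are invented for illustration; the privacy machinery of the PSGPD scheme is not reproduced here.

```python
import numpy as np

m, n_workers = 4, 7                      # any 4 of 7 results suffice
rng = np.random.default_rng(3)
X = rng.integers(0, 10, (8, 6)).astype(float)
Y = rng.integers(0, 10, (6, 5)).astype(float)

blocks = np.split(X, m)                  # X = [X0; X1; X2; X3]
xs = np.arange(1, n_workers + 1, dtype=float)

# encoding: worker i stores P(x_i) = sum_j X_j * x_i**j
encoded = [sum(Xj * x ** j for j, Xj in enumerate(blocks)) for x in xs]

# each worker computes a small product; pretend workers 2 and 5 straggle
results = {i: enc @ Y for i, enc in enumerate(encoded) if i not in (2, 5)}

# master: interpolate the degree-(m-1) matrix polynomial from any m results
ids = list(results)[:m]
V = np.vander(xs[ids], m, increasing=True)       # Vandermonde system
coeffs = np.linalg.solve(V, np.stack([results[i].ravel() for i in ids]))
W = np.vstack([c.reshape(blocks[0].shape[0], Y.shape[1]) for c in coeffs])
assert np.allclose(W, X @ Y)
print("recovered X@Y from", len(ids), "of", n_workers, "workers")
```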
275 Digimesh Wireless Sensor Network-Based Real-Time Monitoring of ECG Signal
Authors: Sahraoui Halima, Dahani Ameur, Tigrine Abedelkader
Abstract:
DigiMesh technology represents a pioneering advancement in wireless networking, offering cost-effective and energy-efficient capabilities. Its inherent simplicity and adaptability facilitate the seamless transfer of data between network nodes, extending the range and ensuring robust connectivity through autonomous self-healing mechanisms. In light of these advantages, this study introduces a medical platform built on DigiMesh wireless network technology, characterized by low power consumption, immunity to interference, and user-friendly operation. The primary application of this platform is the real-time, long-distance monitoring of electrocardiogram (ECG) signals, with the added capacity for simultaneous monitoring of ECG signals from multiple patients. The experimental setup comprises key components such as a Raspberry Pi, an e-Health Sensor Shield, and XBee DigiMesh modules. The platform is composed of multiple ECG acquisition devices, labeled Sensor Node 1 and Sensor Node 2, with a Raspberry Pi serving as the central hub (sink node). Two communication approaches are proposed: single-hop and multi-hop. In the single-hop approach, ECG signals are directly transmitted from a sensor node to the sink node through the XBee3 DigiMesh RF module, establishing peer-to-peer connections. This approach was tested in the first experiment to assess the feasibility of deploying wireless sensor networks (WSNs). In the multi-hop approach, two sensor nodes communicate with the server (sink node) in a star configuration. This setup was tested in the second experiment. The primary objective of this research is to evaluate the performance of both single-hop and multi-hop approaches in diverse scenarios, including open areas and obstructed environments. Experimental results indicate the DigiMesh network's effectiveness in single-hop mode, with reliable communication over distances of approximately 300 meters in open areas. In the multi-hop configuration, the network demonstrated robust performance across approximately three floors, even in the presence of obstacles, without the need for additional router devices. This study offers valuable insights into the capabilities of DigiMesh wireless technology for real-time ECG monitoring in healthcare applications, demonstrating its potential for use in diverse medical scenarios.
Keywords: DigiMesh protocol, ECG signal, real-time monitoring, medical platform
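A hedged sketch of the sensor-node side of the single-hop approach: ECG samples are read from a serial-attached acquisition board and written to a serial-attached XBee module, which handles DigiMesh routing to the sink. The port names, baud rates, sampling rate, and application-level framing below are all illustrative assumptions; the real platform's drivers and module configuration are not reproduced here.

```python
import time
import struct
import serial  # pyserial

# hypothetical port names; on the Raspberry Pi these would map to the
# e-Health shield interface and the XBee3 DigiMesh module
ecg_port = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)
xbee_port = serial.Serial("/dev/ttyUSB1", 9600, timeout=1)

NODE_ID = 1        # Sensor Node 1
SAMPLE_HZ = 250    # assumed ECG sampling rate

while True:
    raw = ecg_port.read(2)                 # one 16-bit ECG sample
    if len(raw) < 2:
        continue
    (sample,) = struct.unpack("<H", raw)
    # tiny application frame: node id, millisecond timestamp, sample
    frame = struct.pack("<BIH", NODE_ID,
                        int(time.time() * 1e3) & 0xFFFFFFFF, sample)
    xbee_port.write(frame)                 # DigiMesh routes this to the sink
    time.sleep(1.0 / SAMPLE_HZ)
```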
274 Numerical Prediction of Width Crack of Concrete Dapped-End Beams
Authors: Jatziri Y. Moreno-Martinez, Arturo Galvan, Xavier Chavez Cardenas, Hiram Arroyo
Abstract:
Several methods have been utilized to study the prediction of cracking of concrete structures under loading. Finite element analysis is an alternative that shows good results. The aim of this work was the numerical study of crack width in reinforced concrete beams with dapped ends, which are frequently found in bridge girders and precast concrete construction. Properly restricting cracking is an important aspect of the design of dapped ends; it has been observed that cracks exceeding the allowable widths are unacceptable in an environment that is aggressive toward reinforcing steel. For simulating the crack width, the discrete crack approach was considered by means of a Cohesive Zone Model (CZM) using a function to represent the crack opening. Two cases of dapped ends were constructed and tested in the Laboratory of Structures and Materials of the Engineering Institute of UNAM. The first case considers a reinforcement based on hangers as well as on vertical and horizontal rings; in the second case, 50% of the vertical stirrups in the dapped end were replaced by an equivalent (vertically projected) area of diagonal bars. The loading protocol consisted of applying symmetrical loading to reach the service load. The models were built using the software package ANSYS v. 16.2. The concrete structure was modeled using three-dimensional solid elements (SOLID65) capable of cracking in tension and crushing in compression. A Drucker-Prager yield surface was used to include the plastic deformations. The reinforcement was introduced with a smeared approach. Interface delamination was modeled by traditional fracture mechanics methods such as the nodal release technique, adopting softening relationships between tractions and separations, which in turn introduce a critical fracture energy that is also the energy required to break apart the interface surfaces. This technique is called CZM. The interface surfaces of the materials are represented by surface-to-surface contact elements (CONTA173) with bonded (initial) contact. The Mode I dominated bilinear CZM model assumes that the separation of the material interface is dominated by the displacement jump normal to the interface. Furthermore, the crack opening was taken into consideration according to the maximum normal contact stress, the contact gap at the completion of debonding, and the maximum equivalent tangential contact stress. The contact elements were placed at the crack re-entrant corner. To validate the proposed approach, the results obtained with the previous procedure were compared with experimental tests. A good correlation between the experimental and numerical load-displacement curves was obtained, and the numerical models also allowed the load-crack width curves to be obtained. In these two cases, the proposed model confirms the capability of predicting the maximum crack width, with an error of ± 30%. Finally, the orientation of the crack is fundamental for the prediction of crack width. The results regarding the crack width can be considered good from the practical point of view, and favorable results were obtained for the load-displacement curve of the test and the location of the crack.
Keywords: cohesive zone model, dapped-end beams, discrete crack approach, finite element analysis
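The Mode I bilinear traction-separation law behind the CZM can be written compactly, as in the sketch below; the numerical parameter values are placeholders, not the calibrated parameters of this study.

```python
import numpy as np

def bilinear_czm(delta, sigma_max, delta_0, delta_f):
    """Mode I bilinear cohesive law.
    delta_0: separation at damage onset; delta_f: complete debonding.
    Returns the normal traction and the damage variable d in [0, 1]."""
    delta = np.asarray(delta, float)
    k0 = sigma_max / delta_0                       # initial penalty stiffness
    d = np.clip((delta_f * (delta - delta_0)) /
                (delta * (delta_f - delta_0) + 1e-300), 0.0, 1.0)
    traction = (1.0 - d) * k0 * delta              # linear rise, linear decay
    traction[delta >= delta_f] = 0.0               # fully debonded
    return traction, d

# placeholder parameters: 3 MPa peak traction, onset at 0.01 mm,
# complete debonding at 0.15 mm separation
sep = np.linspace(0.0, 0.2, 9)                     # mm
t, d = bilinear_czm(sep, sigma_max=3.0, delta_0=0.01, delta_f=0.15)
print(np.round(t, 2))                              # rises to 3.0, decays to 0
```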
273 Seismic Impact and Design on Buried Pipelines
Authors: T. Schmitt, J. Rosin, C. Butenweg
Abstract:
Seismic design of buried pipeline systems for energy and water supply is not only important for plant and operational safety, but in particular for the maintenance of the supply infrastructure after an earthquake. Past earthquakes have shown the vulnerability of pipeline systems. After the Kobe earthquake in Japan in 1995, for instance, the water supply in some regions was interrupted for almost two months. The present paper discusses particular issues of seismic wave impacts on buried pipelines, describes calculation methods, proposes approaches and gives calculation examples. Buried pipelines are exposed to different effects of seismic impacts. This paper considers the effects of transient displacement differences and the resulting stresses within the pipeline due to the wave propagation of the earthquake. Other effects are permanent displacements due to fault rupture displacements at the surface, soil liquefaction, landslides and seismic soil compaction. The presented model can also be used to calculate fault-rupture-induced displacements. Based on a three-dimensional finite element model, parameter studies are performed to show the influence of several parameters such as the incoming wave angle, wave velocity, soil depth and selected displacement time histories. In the computer model, the interaction between the pipeline and the surrounding soil is modeled with non-linear soil springs. A propagating wave is simulated affecting the pipeline at discrete points, independently in time and space. The resulting stresses are mainly caused by displacement differences of neighboring pipeline segments and by soil-structure interaction. The calculation examples focus on pipeline bends as the most critical parts. Special attention is given to the calculation of long-distance heat pipeline systems. Here, expansion bends are arranged at regular distances to allow movements of the pipeline due to high temperature. Such expansion bends are usually designed with small bending radii, which in the event of an earthquake lead to high bending stresses at the cross-section of the pipeline. Therefore, Karman's elasticity factors, as well as the stress intensity factors for curved pipe sections, must be taken into account. The seismic verification of the pipeline for wave propagation in the soil can be achieved by observing normative strain criteria. Finally, an interpretation of the results and recommendations are given, taking into account the most critical parameters.
Keywords: buried pipeline, earthquake, seismic impact, transient displacement
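For the wave-propagation effect, a commonly used first estimate of the axial strain imposed on a straight buried pipeline is ε = v_max / (α c), with v_max the peak ground velocity, c the apparent wave propagation velocity, and α a wave-type coefficient (often taken as 2 for shear waves). The sketch below applies this relation and checks it against an assumed allowable strain; all numbers are illustrative, not from the paper's FE studies.

```python
def wave_axial_strain(v_max, c, alpha=2.0):
    """Ground strain transferred to a straight buried pipeline by a
    passing seismic wave (Newmark-type estimate): eps = v_max / (alpha*c).
    v_max: peak ground velocity [m/s]; c: apparent wave speed [m/s];
    alpha: wave-type coefficient (commonly 2.0 for shear waves)."""
    return v_max / (alpha * c)

# illustrative check against an assumed 0.2% allowable strain criterion
v_max = 0.7                          # m/s, strong shaking (assumed)
for c in (200.0, 500.0, 1000.0):     # slower apparent speeds -> more strain
    eps = wave_axial_strain(v_max, c)
    print(f"c = {c:6.0f} m/s -> eps = {eps:.2e}",
          "OK" if eps < 2.0e-3 else "exceeds criterion")
```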
272 Linkage Disequilibrium and Haplotype Blocks Study from Two High-Density Panels and a Combined Panel in Nelore Beef Cattle
Authors: Priscila A. Bernardes, Marcos E. Buzanskas, Luciana C. A. Regitano, Ricardo V. Ventura, Danisio P. Munari
Abstract:
Genotype imputation has been used to reduce genomic selection costs. In order to increase haplotype detection accuracy in methods that consider linkage disequilibrium, another approach could be used, such as combining genotype data from different panels. Therefore, this study aimed to evaluate the linkage disequilibrium and haplotype blocks in two high-density panels before and after imputation to a combined panel in Nelore beef cattle. A total of 814 animals were genotyped with the Illumina BovineHD BeadChip (IHD), wherein 93 animals (23 bulls and 70 progenies) were also genotyped with the Affymetrix Axion Genome-Wide BOS 1 Array Plate (AHD). After quality control, 809 IHD animals (509,107 SNPs) and 93 AHD animals (427,875 SNPs) remained for analysis. The combined genotype panel (CP) was constructed by merging both panels after quality control, resulting in 880,336 SNPs. Imputation analysis was conducted using the software FImpute v.2.2b. The reference (CP) and target (IHD) populations consisted of 23 bulls and 786 animals, respectively. The linkage disequilibrium and haplotype block studies were carried out for IHD, AHD, and the imputed CP. Two linkage disequilibrium measures were considered: the correlation coefficient between alleles at two loci (r²) and |D'|. Both measures were calculated using the software PLINK. The haplotype blocks were estimated using the software Haploview. The r² measure presented a different decay when compared to |D'|, whereas AHD and IHD had almost the same decay. For r², even with possible overestimation due to the sample size for AHD (93 animals), the IHD presented higher values than AHD for shorter distances, but with increasing distance, both panels presented similar values. The r² measure is influenced by the minor allele frequencies of the pair of SNPs, which can cause the observed difference between the r² decay and the |D'| decay. As a sum of the combinations between the Illumina and Affymetrix panels, the CP presented a decay equivalent to the mean of these combinations. The numbers of haplotype blocks detected for IHD, AHD, and CP were 84,529, 63,967, and 140,336, respectively. The IHD was composed of haplotype blocks with a mean of 137.70 ± 219.05 kb, the AHD with a mean of 102.10 ± 155.47 kb, and the CP with a mean of 107.10 ± 169.14 kb. The majority of the haplotype blocks of these three panels were composed of fewer than 10 SNPs, with only 3,882 (IHD), 193 (AHD) and 8,462 (CP) haplotype blocks composed of 10 SNPs or more. There was an increase in the number of chromosomes covered with long haplotypes when CP was used, as well as an increase in haplotype coverage for short chromosomes (23-29), which can contribute to studies that explore haplotype blocks. In general, using the CP could be an alternative to increase density and the number of haplotype blocks, increasing the probability of obtaining a marker close to a quantitative trait locus of interest.
Keywords: Bos taurus indicus, decay, genotype imputation, single nucleotide polymorphism
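The two LD measures used in this study can be computed directly from two-locus haplotype frequencies; a minimal sketch follows, with an invented haplotype count table rather than the Nelore data.

```python
import numpy as np

def ld_measures(p_ab, p_a, p_b):
    """r^2 and |D'| for two biallelic loci.
    p_ab: frequency of the A-B haplotype; p_a, p_b: allele frequencies."""
    D = p_ab - p_a * p_b
    r2 = D ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
    if D >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return r2, abs(D) / d_max

# invented example: counts of haplotypes AB, Ab, aB, ab in a sample
counts = np.array([420.0, 80.0, 60.0, 440.0])
freqs = counts / counts.sum()
p_ab, p_a, p_b = freqs[0], freqs[0] + freqs[1], freqs[0] + freqs[2]
r2, d_prime = ld_measures(p_ab, p_a, p_b)
print(f"r^2 = {r2:.3f}, |D'| = {d_prime:.3f}")
```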
271 A Geographical Spatial Analysis on the Benefits of Using Wind Energy in Kuwait
Authors: Obaid AlOtaibi, Salman Hussain
Abstract:
Wind energy is associated with many geographical factors, including wind speed, climate change, surface topography and environmental impacts, and several economic factors, most notably the advancement of wind technology and energy prices. It is the fastest-growing and economically least expensive method of generating electricity. Wind energy generation is directly related to the spatial characteristics of wind. Therefore, the feasibility study for a wind energy conversion system is based on the value of the energy obtained relative to the initial investment and the cost of operation and maintenance. In Kuwait, wind energy is an appropriate choice as a source of energy generation. It can be used for groundwater extraction in agricultural areas such as Al-Abdali in the north and Al-Wafra in the south, in fresh and brackish groundwater fields, or in remote and isolated locations such as border areas and projects away from conventional electricity services, to take advantage of alternative energy, reduce pollutants, and reduce energy production costs. The study covers the State of Kuwait with the exception of the metropolitan area. Climatic data were obtained from the readings of eight distributed monitoring stations affiliated with the Kuwait Institute for Scientific Research (KISR). The data were used to assess the daily, monthly, quarterly, and annual wind energy available for utilization. The researchers applied the Suitability Model to analyze the study area using the ArcGIS program. It is a model of spatial analysis that compares more than one location based on grading weights to choose the most suitable one. The study criteria are: average annual wind speed, land use, topography, distance from the main road networks, and urban areas. According to these criteria, four proposed locations to establish wind farm projects were selected based on the weights of the degree of suitability (excellent, good, average, and poor). The percentage of area that represents the most suitable locations with an excellent rank (4) is 8% of Kuwait's area, relatively distributed as follows: Al-Shqaya, Al-Dabdeba, Al-Salmi (5.22%), Al-Abdali (1.22%), Umm al-Hayman (0.70%), North Wafra and Al-Shaqeeq (0.86%). The study recommends that decision-makers consider the proposed location No. 1 (Al-Shqaya, Al-Dabdaba, and Al-Salmi) as the most suitable location for future development of wind farms in Kuwait; this location is economically feasible.
Keywords: Kuwait, renewable energy, spatial analysis, wind energy
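The weighted suitability overlay described above reduces to a few lines of raster algebra; in the sketch below the weights and the graded criterion rasters are random placeholders, not the study's calibrated values.

```python
import numpy as np

# graded criterion rasters on a common grid, each already scored 1-4
# (1 = poor ... 4 = excellent); values here are random placeholders
rng = np.random.default_rng(4)
shape = (100, 100)
criteria = {                     # criterion -> (graded raster, weight)
    "wind_speed":  (rng.integers(1, 5, shape), 0.35),
    "land_use":    (rng.integers(1, 5, shape), 0.20),
    "topography":  (rng.integers(1, 5, shape), 0.15),
    "road_access": (rng.integers(1, 5, shape), 0.15),
    "urban_dist":  (rng.integers(1, 5, shape), 0.15),
}

score = sum(w * r for r, w in criteria.values())       # weighted overlay
# classify the continuous score back into the four suitability ranks
rank = np.digitize(score, bins=[1.75, 2.5, 3.25]) + 1  # 1 (poor) .. 4
share_excellent = (rank == 4).mean() * 100
print(f"excellent (rank 4): {share_excellent:.1f}% of cells")
```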
270 A Semi-Automated GIS-Based Implementation of Slope Angle Design Reconciliation Process at Debswana Jwaneng Mine, Botswana
Authors: K. Mokatse, O. M. Barei, K. Gabanakgosi, P. Matlhabaphiri
Abstract:
The mining of pit slopes is often associated with some level of deviation from design recommendations, and this may translate into associated changes in the stability of the excavated pit slopes. Therefore, slope angle design reconciliations are essential for assessing and monitoring compliance of excavated pit slopes with accepted slope designs. These associated changes in slope stability may be reflected by changes in the calculated factors of safety and/or probabilities of failure. Reconciliations of as-mined and design slope profiles are conducted periodically to assess the implications of these deviations for pit slope stability. Currently, the slope design reconciliation process implemented at Jwaneng Mine involves the measurement of as-mined and design slope angles along vertical sections cut along the established geotechnical design section lines in the GEOVIA GEMS™ software. Bench retention is calculated as the percentage of available catchment area, less over-mined and under-mined areas, relative to the designed catchment area. This process has proven tedious, requiring a great deal of manual effort and time to execute. Consequently, a new semi-automated mine-to-design reconciliation approach that utilizes laser scanning and GIS-based tools is proposed at Jwaneng Mine. This method involves high-resolution scanning of targeted bench walls, subsequent creation of 3D surfaces from point cloud data, and derivation of slope toe lines and crest lines in the Maptek I-Site Studio software. The toe lines and crest lines are then exported to the ArcGIS software, where distance offsets between the design and actual bench toe lines and crest lines are calculated. Retained bench catchment capacity is measured as the distance between toe lines and crest lines at the same bench elevations. The assessment of the performance of the inter-ramp and overall slopes entails the measurement of excavated and design slope angles along vertical sections in the ArcGIS software. Excavated and design toe-to-toe or crest-to-crest slope angles are measured for inter-ramp stack slope reconciliations. Crest-to-toe slope angles are also measured for overall slope angle design reconciliations. The proposed approach allows for a more automated, accurate, quick and easy workflow for carrying out slope angle design reconciliations. This process has proved highly effective and timeous in the assessment of slope performance at Jwaneng Mine. This paper presents a newly proposed process for assessing compliance with slope angle designs at Jwaneng Mine.
Keywords: slope angle designs, slope design recommendations, slope performance, slope stability
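The distance-offset step can be sketched as a nearest-distance computation from each as-mined toe-line vertex to the design toe polyline; the coordinates below are invented, and the real workflow runs inside ArcGIS rather than plain Python.

```python
import numpy as np

def point_to_segment(p, a, b):
    """Shortest distance from point p to segment ab (2D)."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def line_offsets(actual, design):
    """Offset of each as-mined vertex from the design polyline [m]."""
    return np.array([min(point_to_segment(p, design[i], design[i + 1])
                         for i in range(len(design) - 1))
                     for p in actual])

design = np.array([[0.0, 0.0], [50.0, 2.0], [100.0, 0.0]])   # design toe
actual = np.array([[5.0, 1.5], [48.0, 4.0], [95.0, -1.2]])   # surveyed toe
off = line_offsets(actual, design)
print(np.round(off, 2), "max offset:", off.max().round(2), "m")
```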
269 A (Morpho) Phonological Typology of Demonstratives: A Case Study in Sound Symbolism
Authors: Seppo Kittilä, Sonja Dahlgren
Abstract:
In this paper, a (morpho)phonological typology of proximal and distal demonstratives is proposed. Only the most basic proximal ('this') and distal ('that') forms have been considered; potential more fine-grained distinctions based on proximity are not relevant to our discussion, nor are the other functions the discussed demonstratives may have. The sample comprises 82 languages that represent the linguistic diversity of the world's languages, although the study is not based on a systematic sample. Four major types are distinguished: (1) vowel type: front vs. back, high vs. low vowels; (2) consonant type: front vs. back consonants; (3) additional-element type; (4) varia. The proposed types can be further subdivided according to whether the attested difference concerns only, e.g., vowels, or whether there are also other changes. For example, the first type comprises both languages such as Betta Kurumba, where only the vowel changes (i 'this', a 'that'), and languages like Alyawarra (nhinha vs. nhaka), where there are also other changes. In the second type, demonstratives are distinguished based on whether the consonants are front or back; typically, front consonants (e.g., labial and dental) appear in proximal demonstratives and back consonants (such as velar or uvular) in distal demonstratives. An example is provided by Bunaq, where bari marks 'this' and baqi 'that'. In the third type, distal demonstratives typically have an additional element, making them longer in form than the proximal ones (e.g., Òko òne 'this', ònébé 'that'), but the type also comprises languages where the distal demonstrative is simply phonologically longer (e.g., Ngalakan nu-gaʔye vs. nu-gunʔbiri). Finally, the last type comprises cases that do not fit the three other types; a number of strategies are used by the languages of this group. The first two types can be explained by iconicity: front or high phonemes appear in proximal demonstratives, while back or low phonemes are related to distal demonstratives. This means that proximal demonstratives are pronounced in the front and/or high part of the oral cavity, while distal demonstratives are pronounced lower and further back, which reflects the proximal/distal nature of their referents in the physical world. The first type is clearly the most common in our data (40/82 languages), which suggests a clear association with iconicity. Our findings support earlier findings that proximal and distal demonstratives have an iconic phonemic manifestation. For example, it has been argued that /i/ is related to smallness (small distance). Consonants, however, have not been considered before, or no systematic correspondences have been discovered. The third type, in turn, can be explained by markedness: the distal element is more marked than the proximal demonstrative. Moreover, iconicity is also relevant here: some languages clearly use less linguistic substance for referring to entities close to the speaker, which is manifested in the longer (morpho)phonological form of the distal demonstratives. The fourth type contains different kinds of cases, and systematic generalizations are hard to make.
Keywords: demonstratives, iconicity, language typology, phonology
268 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets
Authors: Ece Cigdem Mutlu, Burak Alakent
Abstract:
Maintaining the quality of manufactured products at a desired level depends on the stability of the process dispersion and location parameters and on the detection of perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor the quality of products and control the process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of the process dispersion and location parameters, respectively, based on the assumption of independent and normally distributed datasets. On the other hand, there is no guarantee that real-world data will be normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of traditional estimators is significantly reduced, and the performance of Xbar charts is undesirably low; e.g., occasional outliers in the rational subgroups of the Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in the detection of inferior products in Phase II. For more efficient application of control charts, estimators that are robust against the contaminations which may exist in Phase I are required. In the current study, we present a simple approach to constructing robust Xbar control charts using the average distance to the median, the Qn estimator of scale and the M-estimator of scale with logistic psi-function for the estimation of the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator and the M-estimator of location with Huber and logistic psi-functions for the estimation of the process location parameter. The Phase I efficiency of the proposed estimators and the Phase II performance of Xbar charts constructed from these estimators are compared with the conventional mean and standard deviation statistics, both under normality and against diffuse-localized and symmetric-asymmetric contaminations, using 50,000 Monte Carlo simulations in MATLAB. Consequently, it is found that robust estimators yield parameter estimates with higher efficiency against all types of contaminations, and Xbar charts constructed using robust estimators have higher power in detecting disturbances, compared to conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups and employing different combinations of dispersion and location estimators on subgroups and individual observations are found to improve the performance of Xbar charts.
Keywords: average run length, M-estimators, quality control, robust estimators
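Two of the robust estimators named above are easy to state exactly: the Hodges-Lehmann location estimator (the median of all pairwise Walsh averages) and the average distance to the median as a scale estimate. The sketch below builds Xbar limits from them on contaminated Phase I subgroups; the 3-sigma limit factor and the normal-consistency constant are standard defaults, not the simulation-calibrated constants of the study.

```python
import numpy as np
from itertools import combinations_with_replacement

def hodges_lehmann(x):
    """Median of all pairwise Walsh averages (x_i + x_j)/2, i <= j."""
    return np.median([(a + b) / 2 for a, b in
                      combinations_with_replacement(x, 2)])

def adm_scale(x):
    """Average distance to the median, rescaled to be consistent for the
    normal standard deviation (factor sqrt(pi/2) ~ 1.2533)."""
    return 1.2533 * np.mean(np.abs(x - np.median(x)))

# Phase I: 25 subgroups of size 5 with two gross outliers injected
rng = np.random.default_rng(5)
phase1 = rng.normal(10.0, 1.0, size=(25, 5))
phase1[3, 0] += 8.0
phase1[17, 4] -= 9.0                     # contamination

mu_hat = hodges_lehmann(phase1.ravel())
sigma_hat = adm_scale(phase1.ravel())
n = phase1.shape[1]
ucl = mu_hat + 3 * sigma_hat / np.sqrt(n)
lcl = mu_hat - 3 * sigma_hat / np.sqrt(n)
print(f"center {mu_hat:.3f}, limits [{lcl:.3f}, {ucl:.3f}]")
```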
267 Envisioning The Future of Language Learning: Virtual Reality, Mobile Learning and Computer-Assisted Language Learning
Authors: Jasmin Cowin, Amany Alkhayat
Abstract:
This paper will concentrate on a comparative analysis of both the advantages and limitations of using digital learning resources (DLRs). The DLRs covered will be Virtual Reality (VR), Mobile Learning (M-learning) and Computer-Assisted Language Learning (CALL), together with its subset, Mobile-Assisted Language Learning (MALL), in language education. In addition, best practices for language teaching and the application of established language teaching methodologies such as Communicative Language Teaching (CLT), the audio-lingual method, or community language learning will be explored. Education has changed dramatically since the eruption of the pandemic. Traditional face-to-face education was disrupted on a global scale. The rise of distance learning brought new digital tools to the forefront, especially web conferencing tools, digital storytelling apps, test authoring tools, and VR platforms. Language educators raced to vet, learn, and implement multiple technology resources suited for language acquisition. Yet, questions remain on how to harness new technologies, digital tools, and their ubiquitous availability while using established methods and methodologies in language learning paired with best teaching practices. In M-learning, language learners employ portable computing devices such as smartphones or tablets. CALL is a language teaching approach that uses computers and other technologies to present, reinforce, and assess language materials to be learned, or to create environments where teachers and learners can meaningfully interact. In VR, a computer-generated simulation enables learner interaction with a 3D environment via screen, smartphone, or head-mounted display. Research supports that VR for language learning is effective in terms of exploration, communication, engagement, and motivation. Students are able to relate through role-play activities and to interact with 3D objects and activities such as field trips. VR lends itself to group language exercises in the classroom with target-language practice in an immersive, virtual environment. Students, teachers, schools, language institutes, and institutions benefit from specialized support to help them acquire second language proficiency and content knowledge that builds on their cultural and linguistic assets. Through the purposeful application of different language methodologies and teaching approaches, language learners can not only make cultural and linguistic connections in DLRs but also practice grammar drills, play memory games or flourish in authentic settings.
Keywords: language teaching methodologies, computer-assisted language learning, mobile learning, virtual reality
266 The Effects of Geographical and Functional Diversity of Collaborators on Quality of Knowledge Generated
Authors: Ajay Das, Sandip Basu
Abstract:
Introduction: There is increasing recognition that diverse streams of knowledge can often be recombined in novel ways to generate new knowledge. However, knowledge recombination theory has not been applied to examine the effects of collaborator diversity on the quality of knowledge such collaborators produce. This is surprising because one would expect that a collaborative team with certain aspects of diversity should be able to recombine process elements related to knowledge development, which are relatively tacit but also complementary because of the collaborators' varying backgrounds. Theory and Hypotheses: We propose to examine two aspects of diversity in the environments of collaborative teams to try and capture such potential recombinations of relatively tacit process knowledge. The first aspect of diversity in team members' environments is geographical. Collaborators with more geographical distance between them (perhaps working in different countries) often have more autonomy in the processes they adopt for knowledge development. In the absence of overt monitoring, such collaborators are likely to adopt differing approaches to knowledge development. The sharing of such varying approaches among collaborators is likely to result in greater quality of the common collaborative pursuit. The second aspect is diversity in the work backgrounds of team members. Such diversity can also increase the potential for knowledge recombination. For example, if one or more members are from a manufacturing center (versus all of them being from a purely R&D center), such members will provide unique perspectives on the implementation of innovative ideas. Again, knowledge that has been evaluated from these diverse perspectives is likely to be of a higher quality. In addition to the above aspects of environmental diversity among team members, we also plan to examine the extent to which individual collaborators are in different environments from the primary innovation center of their employing firms. Proposed Methods: We will test our model on a sample of firms in the semiconductor industry. Our level of analysis will be individual patents generated by these firms and the teams involved in their generation. Information on the manufacturing activities of our sample firms will be obtained from SEMI, a proprietary database of the semiconductor industry, as well as company 10-K reports. Conclusion: We believe that our results will represent a preliminary attempt to understand how various forms of diversity in collaborative teams impact the knowledge development process. Our dependent variable of knowledge quality is important to study since higher values of this variable can drive not only firm performance but also the broader development of regions and societies through spillover impacts on future innovation. The results of this study will, therefore, inform future research and practice in innovation, geographical location, and vertical integration.
Keywords: innovation, manufacturing strategy, knowledge, diversity
265 Transverse Behavior of Frictional Flat Belt Driven by Tapered Pulley -Change of Transverse Force Under Driving State–
Authors: Satoko Fujiwara, Kiyotaka Obunai, Kazuya Okubo
Abstract:
Skew is one of the important problems in designing conveyors and transmissions with frictional flat belts, in which the running belt deviates in the width direction due to the transverse force applied to the belt. The skew often not only degrades the stability of the belt path but also causes damage to the belt and auxiliary machines. However, the transverse behavior, such as skew, has not been discussed quantitatively in detail for frictional belts. The objective of this study is to clarify the transverse behavior of a frictional flat belt driven by a tapered pulley. A commercially available rubber flat belt reinforced with polyamide film was prepared as the test belt, with a thickness of 1.25 mm and a length of 630 mm. The test belt was driven between two pulleys made of aluminum alloy, with a diameter of 50 mm and an inter-axial length of 150 mm. Several tapered pulleys were applied, with taper angles of 0 deg (for comparison), 2 deg, 4 deg, and 6 deg. In order to investigate the transverse behavior indirectly, the transverse force applied to the belt was measured while the skew was constrained in the driving state. The transverse force was measured by a load cell with free rollers contacting the side surface of the belt when the displacement in the belt width direction was constrained. The observed in-plane bending stiffness of the belt was varied by preparing three types of belts (with widths of 20, 30, and 40 mm). The contributions of the in-plane bending stiffness of the belt and the initial inter-axial force to the transverse force were discussed in experiments. The inter-axial force was also changed by setting a distance (about 240 mm) between the two pulleys. The influence of the observed in-plane bending stiffness of the belt and the initial inter-axial force on the transverse force was investigated. The experimental results showed that the transverse force increased with an increase in the observed in-plane bending stiffness of the belt and the initial inter-axial force. The transverse force acting on the belt running on the tapered pulley was classified into multiple components: forces applied with the deflection of the inter-axial force according to the change of taper angle, the resultant force of the bending moment applied to the belt winding around the tapered pulley, and the reaction force applied due to shearing deformation. The calculated transverse force almost agreed with the experimental data when these components were formulated. It was also shown that the largest contribution came from the shearing deformation, regardless of the test conditions. This study found that the transverse behavior of a frictional flat belt driven by a tapered pulley can be explained by the summation of these force components.
Keywords: skew, frictional flat belt, transverse force, tapered pulley
264 Progressive Damage Analysis of Mechanically Connected Composites
Authors: Şeyma Saliha Fidan, Ozgur Serin, Ata Mugan
Abstract:
While performing verification analyses for the static and dynamic loads that composite structures used in aviation are exposed to, it is necessary to obtain the bearing strength limit value for mechanically connected composite structures. For this purpose, various tests are carried out in accordance with aviation standards. There are many companies in the world that perform these tests in accordance with aviation standards, but the test costs are very high. In addition, due to the necessity of producing coupons, the high cost of coupon materials, and the long test times, it is necessary to simulate these tests on the computer. For this purpose, various test coupons were produced using the reinforcement and alignment angles of the composite radomes that were integrated into the aircraft. Glass fiber reinforced and quartz prepregs were used in the production of the coupons. The tests performed according to the American Society for Testing and Materials (ASTM) D5961 Procedure C standard were simulated on the computer. The analysis model was created in three dimensions for the purpose of modeling the bolt-hole contact surface realistically and obtaining the exact bearing strength value. The finite element model was built with ANSYS. Since a physical break cannot be made in analysis studies carried out in a virtual environment, a hypothetical break is realized by reducing the material properties. The material property reduction coefficient was determined as 10%, which is stated in the literature to give the most realistic approach. This method, in which failure is propagated by successively degrading material properties, is called progressive failure analysis, and various failure theories exist within it. Because the Hashin theory did not match our experimental results, the Puck progressive damage method was used in all coupon analyses. When the experimental and numerical results are compared, the initial damage and the resulting force drop points, the maximum damage load values, and the bearing strength value are very close. Furthermore, low error rates and similar damage patterns were obtained in both test and simulation models. In addition, the effects of various parameters on the bearing strength of the composite structure were investigated, such as pre-stress, the use of bushings, the ratio of the distance between the bolt hole center and the plate edge to the hole diameter (E/D), the ratio of plate width to hole diameter (W/D), and hot-wet environmental conditions.
Keywords: puck, finite element, bolted joint, composite
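The ply-discount idea behind the 10% property-reduction rule can be shown with a toy progressive-failure loop on a parallel bar model: the applied strain is ramped stepwise, any element whose stress exceeds its strength has its modulus knocked down to 10%, and the stresses redistribute. This is a schematic of stiffness degradation only, not the Puck criterion or the 3D bolted-joint model of the study.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20
E = np.full(n, 50.0e3)                 # MPa, element moduli
strength = rng.normal(500.0, 60.0, n)  # MPa, scattered element strengths
failed = np.zeros(n, dtype=bool)

for step in range(1, 200):
    eps = step * 2.0e-4                # applied strain, ramped stepwise
    new_fail = (E * eps > strength) & ~failed
    E[new_fail] *= 0.10                # 10% residual-property rule
    failed |= new_fail
    load = (E * eps).mean()            # homogenized stress after damage
    if failed.all():
        break

print(f"strain {eps:.4f}: {failed.sum()}/{n} elements failed, "
      f"load {load:.0f} MPa")
```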
263 Digital Geological Map of the Loki Crystalline Massif (The Caucasus) and Its Multi-Informative Explanatory Note
Authors: Irakli Gamkrelidze, David Shengelia, Giorgi Chichinadze, Tamara Tsutsunava, Giorgi Beridze, Tamara Tsamalashvili, Ketevan Tedliashvili, Irakli Javakhishvili
Abstract:
The Caucasus is situated between the Eurasian and Africa-Arabian plates and represents a component of the Mediterranean (Alpine-Himalayan) collision belt. The Loki crystalline massif crops out within one of the terranes of the Caucasus, the Baiburt-Sevanian terrane. By the end of 2018, a digital geological map (1:50 000) of the Loki massif was compiled. The presented map is of great importance for the region, since until now there has been no large-scale geological map reflecting the present standard of geological study of the massif. The existing state geological map of the Loki massif is very outdated. The new map, drawn using GIS (Geographic Information System) technology, is loaded with multi-informative details that include: specified contours of geological units and separate tectonic scales, key mineral assemblages and facies of metamorphism, temperature conditions of metamorphism, ages of metamorphic events and of the massif rocks, and genetic-geodynamic types of magmatic rocks. The explanatory note attached to the map includes a large spectrum of scientific information. It contains a characterization of the geological setting, the composition, and the petrogenetic and geodynamic models of the massif's formation. To create the geological map of the Loki crystalline massif, appropriate methodologies were applied: sampling of rocks, GIS technology-based mapping of geological units, microscopic description of the material, composition analysis of rocks, microprobe analysis of minerals, and a new interpretation of the obtained data. To prepare the digital version of the map, the appropriate activities were carried out, including the creation of a common database. Finally, the map design was created, including the elaboration of the legend and the final visualization of the map. The results of the study presented in the explanatory note are given below. The autochthonous gneissose quartz diorites of normal alkalinity and the sub-alkaline gabbro-diorites included in them belong to different phases of magmatism. They represent "igneous" granites corresponding to mixed mantle-crustal type granites. Four tectonic plates of the allochthonous metamorphic complex (Lower Gorastskali, Sapharlo–Lok-Jandari, Moshevani, and Lower Gorastskali) differ from each other in structure and degree of metamorphism. The initial rocks of these plates formed in different geodynamic conditions; during the Early Bretonian orogeny, while overthrusting due to tectonic compression, they formed a thick tectonic sheet. The Lower Gorastskali overthrust sheet is a fragment of an ophiolitic association corresponding to the Paleotethys oceanic crust. The protolith of the ophiolitic complex basites corresponds to the tholeiitic series of basalts. The Sapharlo–Lok-Jandari overthrust sheet consists of metapelites metamorphosed under greenschist-facies conditions of regional metamorphism. The regional metamorphism of the Moshevani overthrust sheet crystalline schists and quartzites corresponds to a range from greenschist to hornfels facies. The "mélange" is built of rock fragments and blocks of the above-mentioned overthrust sheets. The sub-alkaline and normal-alkaline post-metamorphic granites of the Loki crystalline massif belong to the "igneous" and more rarely to the "sialic" and "anorogenic" types of granites.
Keywords: digital geological map, 1:50 000 scale, crystalline massif, the caucasus
Procedia PDF Downloads 173262 A Model of the Universe without Expansion of Space
Authors: Jia-Chao Wang
Abstract:
A model of the universe without invoking space expansion is proposed to explain the observed redshift-distance relation and the cosmic microwave background radiation (CMB). The main hypothesized feature of the model is that photons traveling in space interact with the CMB photon gas. This interaction causes the photons to gradually lose energy through dissipation and, therefore, experience redshift. The interaction also causes some of the photons to be scattered off their track toward an observer and, therefore, results in beam intensity attenuation. As observed, the CMB exists everywhere in space and its photon density is relatively high (about 410 per cm³). The small average energy of the CMB photons (about 6.3×10⁻⁴ eV) can reduce the energies of traveling photons gradually and will not alter their momenta drastically as in, for example, Compton scattering, which would totally blur the images of distant objects. An object moving through a thermalized photon gas, such as the CMB, experiences a drag: the object sees a blueshifted photon gas along the direction of motion and a redshifted one in the opposite direction. An example of this effect is the observed CMB dipole: the Earth travels at about 368 km/s relative to the CMB (the Local Group at about 600 km/s). In the all-sky map from the COBE satellite, radiation in the Earth's direction of motion appears about 3.35 mK hotter than the average temperature, 2.725 K, while radiation on the opposite side of the sky is about 3.35 mK colder. The pressure of a thermalized photon gas is given by Pγ = Eγ/3 = αT⁴/3, where Eγ is the energy density of the photon gas and α is the radiation constant (α = 4σ/c, with σ the Stefan-Boltzmann constant). The observed CMB dipole therefore implies a pressure difference between the two sides of the Earth and results in a CMB drag on the Earth. By plugging in suitable estimates of the quantities involved, such as the cross section of the Earth and the temperatures on the two sides, this drag can be estimated to be tiny. But for a photon traveling at the speed of light, 300,000 km/s, the drag can be significant. In the present model, for the dissipation part, it is assumed that a photon traveling from a distant object toward an observer has an effective interaction cross section pushing against the pressure of the CMB photon gas. For the attenuation part, the coefficient of the typical attenuation equation is used as a parameter. The values of these two parameters are determined by fitting the 748 distance modulus (µ) versus redshift (z) data points compiled from 643 supernova and 105 γ-ray burst observations with z values up to 8.1. The fit is as good as that obtained from the lambda cold dark matter (ΛCDM) model using online cosmological calculators and Planck 2015 results. The model can be used to interpret Hubble's constant, Olbers' paradox, the origin and blackbody nature of the CMB radiation, the broadening of supernova light curves, and the size of the observable universe.Keywords: CMB as the lowest energy state, model of the universe, origin of CMB in a static universe, photon-CMB photon gas interaction
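As a numeric sanity check of the dipole and drag figures above, here is a minimal sketch using standard SI constants (our calculation, not code from the paper; the cross-section drag estimate simply follows the abstract's reasoning):

```python
import math

# Standard SI constants (assumed, not from the paper)
sigma = 5.670374419e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
c = 2.99792458e8              # speed of light, m/s
a_rad = 4 * sigma / c         # radiation constant in E = a*T^4, J m^-3 K^-4

T0 = 2.725                    # mean CMB temperature, K
v = 368e3                     # Earth's speed relative to the CMB, m/s

# First-order dipole amplitude: T(theta) ~ T0 * (1 + (v/c) cos(theta))
dT = T0 * v / c
print(f"dipole amplitude: {dT * 1e3:.2f} mK")   # ~3.35 mK, as measured by COBE

# Photon-gas pressure P = a*T^4 / 3; difference between hot and cold sides
dP = a_rad * ((T0 + dT)**4 - (T0 - dT)**4) / 3
force = dP * math.pi * 6.371e6**2               # times Earth's cross-section, m^2
print(f"pressure difference: {dP:.2e} Pa, net drag force: {force:.2e} N")
```

The resulting net radiation force on the Earth's cross-section is a few hundredths of a newton, which illustrates why the drag on a massive body is negligible while, per the model's hypothesis, it could still matter for individual photons.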
Procedia PDF Downloads 135261 The Flooding Management Strategy in Urban Areas: Reusing Public Facilities Land as Flood-Detention Space for Multi-Purpose
Authors: Hsiao-Ting Huang, Chang Hsueh-Sheng
Abstract:
Taiwan is an island country deeply affected by the monsoon. Under climate change, extreme typhoon rainstorms have become increasingly frequent since 2000, and when they arrive they cause serious damage in Taiwan, especially in urban areas; the government treats flooding as an urgent issue. In the past, urban land-use planning did not take flood detention into consideration. As cities develop, impermeable surfaces increase and most people live in urban areas, which are therefore highly vulnerable yet unable to cope with surface runoff and flooding. However, solving the problem by building detention ponds in the conventional hydraulic-engineering way is not feasible in urban areas: land expropriation is the most expensive part of constructing a detention pond there, and the government cannot afford it. Therefore, the flooding management strategy in urban areas should use an existing resource, public facilities land. Flood-detention performance can be achieved by providing public facilities land with a detention function, and such multi-use public facilities land also demonstrates the combination of land use and water management. To this purpose, this research generalizes, through a literature review, the factors for multi-use of public facilities land as flood-detention space. The factors fall into two categories: environmental factors and conditions of the public facilities. There are three environmental factors: terrain elevation, inundation potential, and distance from the drainage system. The conditions of the public facilities comprise six factors, including area, building rate, and the maximum available ratio, among others. Each factor is weighted according to its characteristics for the land-use suitability analysis, as sketched in the example below. The combination rules are then selected by logical combination, and after this process the land can be classified into three suitability levels. Each suitability level is then input to a physiographic inundation model to simulate its flood-detention performance. This study responds to an urgent issue in urban areas and establishes a model of multi-use of public facilities land as flood detention through a systematic research process. The result of this study shows which combination of suitability levels is more efficacious. Besides, the model does not only stand on the side of urban planners but also incorporates the point of view of the water agency. These findings may serve as a basis for land-use indicators and as decision-making references for the government agencies concerned.Keywords: flooding management strategy, land use suitability analysis, multi-use for public facilities land, physiographic inundation model
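The weighted scoring and three-level classification could look like the following minimal sketch (factor names, weights, scores, and thresholds are invented for illustration and are not the study's values):

```python
import pandas as pd

# Illustrative parcels with normalized (0-1) factor scores; names and
# values are invented, not the study's data
parcels = pd.DataFrame({
    "elevation":  [0.2, 0.8, 0.5],   # lower terrain scores higher for detention
    "inundation": [0.9, 0.4, 0.6],   # inundation potential
    "drainage":   [0.7, 0.3, 0.8],   # proximity to the drainage system
    "area":       [0.6, 0.9, 0.2],   # parcel area
    "building":   [0.8, 0.5, 0.4],   # lower building coverage scores higher
})
# Hypothetical weights for the suitability analysis
weights = {"elevation": 0.25, "inundation": 0.25, "drainage": 0.15,
           "area": 0.20, "building": 0.15}

parcels["suitability"] = sum(parcels[f] * w for f, w in weights.items())
# Classify into three suitability levels (thresholds are illustrative)
parcels["level"] = pd.cut(parcels["suitability"], bins=[0.0, 0.5, 0.6, 1.0],
                          labels=["low", "medium", "high"])
print(parcels[["suitability", "level"]])
```

In the study itself, each of the three resulting suitability levels would then be fed to the physiographic inundation model to compare flood-detention performance.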
Procedia PDF Downloads 359260 Magnetic Navigation in Underwater Networks
Authors: Kumar Divyendra
Abstract:
Underwater Sensor Networks (UWSNs) have wide applications in areas such as water quality monitoring and marine wildlife management. A typical UWSN system consists of a set of sensors deployed randomly underwater which communicate with each other using acoustic links. RF communication does not work underwater, and GPS is likewise unavailable. Additionally, Autonomous Underwater Vehicles (AUVs) are deployed to collect data from special nodes called Cluster Heads (CHs). These CHs aggregate data from their neighboring nodes and forward them to the AUVs using optical links when an AUV is in range. This helps reduce the number of hops covered by data packets and helps conserve energy. We consider a three-dimensional model of the UWSN. Nodes are initially deployed randomly underwater. They attach themselves to the surface using a rod and can only move upwards or downwards using a pump-and-bladder mechanism. We use graph theory concepts to maximize the coverage volume while every node maintains connectivity with at least one surface node. We treat the surface nodes as landmarks, and each node finds its hop distance from every surface node. We treat these hop distances as coordinates and use them for AUV navigation: an AUV intending to move closer to a node with given coordinates moves hop by hop through nodes that are closest to it in terms of these coordinates. In the absence of GPS, multiple approaches such as the Inertial Navigation System (INS), the Doppler Velocity Log (DVL), and computer vision-based navigation have been proposed, but these systems have their own drawbacks: INS accumulates error with time, and vision techniques require prior information about the environment. We propose a method that makes use of the earth's magnetic field values for navigation and combines it with other methods that simultaneously increase the coverage volume of the UWSN. The AUVs are fitted with magnetometers that measure the magnetic intensity, inclination, and declination. The International Geomagnetic Reference Field (IGRF) is a mathematical model of the earth's magnetic field which provides the field values for geographical coordinates on earth. Researchers have developed an inverse deep learning model that takes the magnetic field values and predicts the location coordinates; we make use of this model within our work. We combine it with the hop-by-hop movement described earlier so that the AUVs move in a sequence that trains the deep learning predictor as quickly and precisely as possible. We run simulations in MATLAB to prove the effectiveness of our model with respect to other methods described in the literature.Keywords: clustering, deep learning, network backbone, parallel computing
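A minimal sketch of the hop-distance coordinates and greedy hop-by-hop movement described above (the random graph, choice of landmarks, and L1 metric are our illustrative assumptions; the paper's actual simulations are in MATLAB):

```python
import networkx as nx

# Random geometric graph standing in for the UWSN connectivity topology
G = nx.random_geometric_graph(40, 0.35, seed=1)
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

landmarks = list(G.nodes)[:3]  # surface nodes used as landmarks

# Each node's coordinate = tuple of hop distances to every landmark
hops = [nx.single_source_shortest_path_length(G, s) for s in landmarks]
coord = {n: tuple(h[n] for h in hops) for n in G.nodes}

def step_toward(current, target_coord):
    """Greedy hop-by-hop move: go to the neighbor whose hop-coordinate
    vector is closest (L1 distance) to the target's coordinates."""
    return min(G[current],
               key=lambda nb: sum(abs(a - b)
                                  for a, b in zip(coord[nb], target_coord)))

nodes = list(G.nodes)
auv, target = nodes[-1], nodes[5]
for _ in range(20):                       # bounded greedy walk
    if auv == target or coord[auv] == coord[target]:
        break
    auv = step_toward(auv, coord[target])
print("AUV stopped at node", auv, "with hop-coordinates", coord[auv])
```

Note that a purely greedy walk can stall at a local minimum of the hop metric; the bounded loop above simply stops in that case, which is one reason the paper supplements hop coordinates with magnetic-field-based positioning.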
Procedia PDF Downloads 99259 Building User Behavioral Models by Processing Web Logs and Clustering Mechanisms
Authors: Madhuka G. P. D. Udantha, Gihan V. Dias, Surangika Ranathunga
Abstract:
Today's websites contain very interesting applications, but there are only a few methodologies to analyze user navigation through a website and to determine whether the website is being put to correct use. Web logs are typically consulted only when a major attack or malfunction occurs, yet they record a wealth of information about users of the system. Analyzing web logs has become a challenge due to the huge log volume, and finding interesting patterns is not easy because of the size and distribution of the logs and the importance of minor details in each entry. Web logs thus contain very important data about users and the site which have not been put to good use: retrieving interesting information from logs gives an idea of what users need, allows grouping users according to their various needs, and helps improve the site to make it effective and efficient. The model we built is able to detect attacks or malfunctioning of the system and to perform anomaly detection. Logs become more complex as the volume of traffic and the size and complexity of the website grow. Unsupervised techniques are used in this solution, which is fully automated; expert knowledge is used only in validation. In our approach, we first clean and purify the logs to bring them to a common platform with a standard format and structure. After the cleaning module, the web session builder is executed. It outputs two files: a Web Sessions file and an Indexed URLs file. The Indexed URLs file contains the list of URLs accessed and their indices; the Web Sessions file lists the indices of each web session. Then the DBSCAN and EM algorithms are applied iteratively and recursively to get the best clustering of the web sessions. Using homogeneity, completeness, V-measure, intra- and inter-cluster distance, and the silhouette coefficient as evaluation measures, the algorithms self-evaluate to select better parameter values for the next run (see the sketch below). If a cluster is found to be too large, micro-clustering is used. Using the Cluster Signature Module, the clusters are annotated with a unique signature called a fingerprint. In this module, each cluster is fed to the Associative Rule Learning Module; if it outputs confidence and support of value 1 for an access sequence, that sequence is a potential signature for the cluster. The occurrences of the access sequence are then checked in the other clusters, and if it is found to be unique to the cluster considered, the cluster is annotated with the signature. These signatures are used in anomaly detection, preventing cyber attacks, real-time dashboards that visualize users accessing web pages, predicting user actions, and various other applications in finance, university websites, news and media websites, etc.Keywords: anomaly detection, clustering, pattern recognition, web sessions
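The DBSCAN self-evaluation loop could look like the following minimal sketch (synthetic two-feature session vectors, the eps grid, and the use of scikit-learn are our assumptions, not the paper's implementation):

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Stand-in for the Web Sessions file: 200 sessions as 2-D feature vectors
sessions, _ = make_blobs(n_samples=200, centers=3, n_features=2,
                         random_state=42)

best = None
for eps in (0.3, 0.5, 0.8, 1.2):            # self-evaluation loop over eps
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(sessions)
    core = labels != -1                      # drop noise points
    if core.sum() > 10 and len(set(labels[core])) > 1:
        score = silhouette_score(sessions[core], labels[core])
        if best is None or score > best[0]:
            best = (score, eps, labels)

if best:
    score, eps, labels = best
    print(f"best eps = {eps}, silhouette = {score:.3f}, "
          f"clusters = {len(set(labels)) - (1 if -1 in labels else 0)}")
```

In the paper's pipeline the same idea is applied recursively, with EM as a second clusterer and additional measures (homogeneity, completeness, V-measure) feeding the parameter selection.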
Procedia PDF Downloads 288258 Metagenomic Assessment of the Effects of Genetically Modified Crops on Microbial Ecology and Physicochemical Properties of Soil
Authors: Falana Yetunde Olaitan, Ijah U. J. J, Solebo Shakirat O.
Abstract:
Genetically modified crops are already phenomenally successful and are grown worldwide in more than eighteen countries on more than 67 million hectares. Nigeria, in October 2018, approved Bacillus thuringiensis (Bt) cotton and maize, hence the need to carry out environmental risk assessment studies. A total of 15 four-litre octagonal ceramic pots were filled with 4 kg of soil each and placed on the bench in three rows of 5 pots: the 1st-row pots were used to plant GM cotton seeds, the 2nd-row pots non-GM cotton seeds, and the 3rd row of 5 pots served as control, all in the screen house. Soil samples for metagenomic DNA extraction were collected at random and at monthly intervals after planting, at a distance of 2 mm from the plant's root and at a depth of 10 cm, using a sterile spatula. Soil samples for physicochemical analysis were collected before planting and after harvesting the GM and non-GM crops, as well as from the control soil. The DNA was extracted, quantified and sequenced; sample 1A (DNA from GM cotton soil at the 1st interval) gave the lowest sequence read count with 0.853M, while sample 2B (DNA from GM cotton soil at the 2nd interval) gave the highest with 5.785M; the others gave between 1.8M and 4.7M. The sample treatments were grouped into four: Group 1 (GM cotton soil from intervals 1 to 3) had between 800,000 and 5,700,000 strains of microbes (SOM), Group 2 (non-GM cotton soil from intervals 1 to 3) had between 1,400,600 and 4,200,000 SOM, Group 3 (control soil) had between 900,000 and 3,600,000 SOM, and Group 4 (initial soil) had between 3,700,000 and 4,000,000 SOM. The microbes observed were predominantly bacteria (including archaea) and fungi, with dark matter alongside protists and phages. The predominant bacterial groups were the Terrabacteria (Bacillus funiculus, Bacillus sp.), the Proteobacteria (Microvirga massiliensis, Sphingomonas sp.) and the Archaea (Nitrososphaera sp.), while the fungi were Aspergillus fischeri and Fusarium falciforme. The comparative analysis between groups was done using Jaccard PERMANOVA beta-diversity analysis at a P-value of not more than 0.76, and no significant pair was found. The pH for the initial, GM cotton, non-GM cotton and control soils was 6.28, 6.26, 7.25 and 8.26, and the percentage moisture was 0.63, 0.78, 0.89 and 0.82, respectively, while the percentage nitrogen was 17.79, 1.14, 1.10 and 0.56, respectively. Other parameters include varying concentrations of potassium (0.46, 1,284.47, 1,785.48, 1,252.83 mg/kg) and phosphorus (18.76, 17.76, 16.87, 15.23 mg/kg) recorded for the four treatments, respectively. The soil consisted mainly of silt (32.09 to 34.66%) and clay (58.89 to 60.23%), reflecting a silty-clay soil texture. The results were then tested with ANOVA at a 0.05 significance level, and again no pair was found to be significant. The results suggest that the GM crops have no significant effect on the microbial ecology and physicochemical properties of the soil and, in turn, no direct or indirect effects on human health.Keywords: genetically modified crop, microbial ecology, physicochemical properties, metagenomics, DNA, soil
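To make the beta-diversity comparison concrete, here is a minimal sketch of computing Jaccard distances between samples (the presence/absence matrix, group sizes, and SciPy-based computation are our illustrative assumptions; the study's PERMANOVA adds a permutation test on top of such a distance matrix):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Hypothetical presence/absence table: 12 samples x 50 taxa
rng = np.random.default_rng(7)
presence = rng.integers(0, 2, size=(12, 50)).astype(bool)
labels = ["GM"] * 3 + ["nonGM"] * 3 + ["control"] * 3 + ["initial"] * 3

# Pairwise Jaccard distances between samples
D = squareform(pdist(presence, metric="jaccard"))

# Mean between-group vs. within-group distance for one pair of groups
gm = [i for i, g in enumerate(labels) if g == "GM"]
ng = [i for i, g in enumerate(labels) if g == "nonGM"]
between = D[np.ix_(gm, ng)].mean()
# 12 off-diagonal entries across the two 3x3 within-group blocks
within = (D[np.ix_(gm, gm)].sum() + D[np.ix_(ng, ng)].sum()) / 12
print(f"between-group = {between:.3f}, within-group = {within:.3f}")
```

A PERMANOVA then asks whether the between-group distances exceed the within-group distances more than expected under random relabeling, which is the test that found no significant pair here.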
Procedia PDF Downloads 146257 Digitalization, Economic Growth and Financial Sector Development in Africa
Authors: Abdul Ganiyu Iddrisu
Abstract:
Digitization is the process of transforming analog material into digital form, especially for storage and use in a computer. Significant development of information and communication technology (ICT) over the past years has encouraged many researchers to investigate its contribution to promoting economic growth and reducing poverty. Yet compelling empirical evidence on the effects of digitization on economic growth remains weak, particularly in Africa, because extant studies that explicitly evaluate the digitization-economic growth nexus are mostly reports and desk reviews. This points to an empirical knowledge gap in the literature. Hypothetically, digitization influences financial sector development, which in turn influences economic growth. Digitization has changed the financial sector and its operating environment: obstacles to access to financing, for instance physical distance, minimum balance requirements, and low income flows, among others, can be circumvented. Savings have increased, micro-savers have opened bank accounts, and banks are now able to price short-term loans. This has the potential to develop the financial sector; however, empirical evidence on the digitization-financial development nexus is scarce. On the other hand, a number of studies have maintained that financial sector development greatly influences the growth of economies. We therefore argue that financial sector development is one of the transmission mechanisms through which digitization affects economic growth. Employing macro country-level data from African countries and using fixed effects, random effects and Hausman-Taylor estimation approaches, this paper contributes to the literature by analysing economic growth in Africa, focusing on the role of digitization and financial sector development. First, we assess how digitization influences financial sector development in Africa. From an economic policy perspective, it is important to identify the digitization determinants of financial sector development so that action can be taken to reduce the economic shocks associated with financial sector distortions; this nexus is rarely examined empirically in the literature. Secondly, we examine the effect of domestic credit to the private sector and stock market capitalization as a percentage of GDP, used to proxy financial sector development, on economic growth. Digitization is represented by the volume of digital/ICT equipment imported, and GDP growth is used to proxy economic growth. Finally, we examine the effect of digitization on economic growth in the light of financial sector development (a stylized version of the panel estimation is sketched below). The following key results were found. First, digitalization propels financial sector development in Africa. Second, financial sector development enhances economic growth. Finally, contrary to our expectation, the results also indicate that digitalization conditioned on financial sector development tends to reduce economic growth in Africa. However, the net effects suggest that digitalization, overall, improves economic growth in Africa. We therefore conclude that digitalization in Africa not only develops the financial sector but also unconditionally contributes to the growth of the continent's economies.Keywords: digitalization, economic growth, financial sector development, Africa
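A stylized fixed-effects (within) estimation on a synthetic panel, as a minimal sketch (variable names and data are invented; the paper additionally estimates random-effects and Hausman-Taylor models, which typically require specialized panel packages):

```python
import numpy as np
import pandas as pd

# Synthetic country-year panel; variable names are invented for illustration
rng = np.random.default_rng(0)
n_countries, n_years = 30, 15
df = pd.DataFrame({
    "country": np.repeat(np.arange(n_countries), n_years),
    "digital": rng.normal(size=n_countries * n_years),  # ICT equipment imports
    "credit":  rng.normal(size=n_countries * n_years),  # credit to private sector
})
df["growth"] = 0.4 * df["digital"] + 0.3 * df["credit"] \
               + rng.normal(size=len(df))

# Fixed-effects (within) estimator: demean within each country, then OLS
demeaned = df.groupby("country").transform(lambda s: s - s.mean())
X = demeaned[["digital", "credit"]].to_numpy()
y = demeaned["growth"].to_numpy()
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["digital", "credit"], beta.round(3))))  # ~0.4 and ~0.3
```

The within transformation absorbs time-invariant country characteristics, which is what makes the fixed-effects estimates robust to unobserved country heterogeneity in this kind of cross-country growth regression.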
Procedia PDF Downloads 103256 Architecture for Hearing Impaired: A Study on Conducive Learning Environments for Deaf Children with Reference to Sri Lanka
Authors: Champa Gunawardana, Anishka Hettiarachchi
Abstract:
Conducive architecture for learning environments is an area of interest for many scholars around the world. Loss of the sense of hearing leads to the assumption that deaf students are visual learners, and comprehending the favorable non-hearing attributes of architecture can lead to effective, rich and friendly learning environments for the hearing impaired. The objective of the current qualitative investigation is to explore the nature and parameters of the sense of place of deaf children so as to support optimal learning. The investigation was conducted with hearing-impaired children (age: 8-19; gender: 15 male, 15 female) of the Yashodhara deaf and blind school at Balangoda, Sri Lanka. A sensory ethnography study was adopted to identify the nature of perception and the parameters of the most preferred and least preferred spaces of the learning environment. The common perceptions behind the most preferred places in the learning environment were found to be calm and quiet, a sense of freedom, volumes characterized by openness and spaciousness, a sense of safety, wide spaces, privacy and belongingness, being less crowded and undisturbed, the availability of natural light and ventilation, a sense of comfort, and the view of green in the surroundings. On the other hand, the least preferred spaces were perceived as dark, gloomy, warm, crowded, lacking freedom, bad-smelling, unsafe, and affected by glare. Perception of space by the deaf, ranked by the hierarchy of sensory modalities involved, was identified as: light-color perception (34%), sight-visual perception (32%), touch-haptic perception (26%), smell-olfactory perception (7%), and sound-auditory perception (1%). A sense of freedom (32%) and a sense of comfort (23%) were the predominant psychological parameters leading to an optimal sense of place as perceived by the hearing impaired; privacy (16%), rhythm (14%), belonging (9%) and safety (6%) were secondary factors. Open, wide, flowing spaces without visual barriers; transparent doors and windows or open portholes to ease communication; comfortable volumes; naturally ventilated spaces; natural lighting or diffused artificial lighting without glare; sloping walkways; and wider stairways, walkways and corridors with ample distance for signing were identified as positive characteristics of the learning environment investigated.Keywords: deaf, visual learning environment, perception, sensory ethnography
Procedia PDF Downloads 231255 Risks and Values in Adult Safeguarding: An Examination of How Social Workers Screen Safeguarding Referrals from Residential Homes
Authors: Jeremy Dixon
Abstract:
Safeguarding adults forms a core part of social work practice. The Government in England and Wales has made efforts to standardise practices through The Care Act 2014. The Act states that local authorities have duties to make inquiries in cases where an adult with care or support needs is experiencing, or is at risk of, abuse and is unable to protect themselves from abuse or neglect. Despite the importance given to safeguarding adults within law, there remains little research about how social workers make such decisions on the ground. This presentation reports on findings from a pilot research study conducted within two social work teams in a Local Authority in England. The objective of the project was to find out how social workers interpreted safeguarding duties as laid out by The Care Act 2014, with a particular focus on how workers assessed and managed risk. Ethnographic research methods were used throughout the project. This paper focusses specifically on decisions made by workers in the assessment team and reports on qualitative observation of, and interviews with, five workers within this team. Drawing on governmentality theory, this paper analyses the techniques used by workers to manage risk from a distance. A high proportion of safeguarding referrals came from care workers or managers in residential care homes. Social workers conducting safeguarding assessments were aware that they had a duty to work in partnership with these agencies; however, their duty to safeguard adults also meant that they needed to view them as potential abusers. In making judgments about when it was proportionate to refer for a safeguarding assessment, workers drew on a number of common beliefs about residential care workers, which were then tested in conversations with them. Social workers held the belief that residential homes acted defensively, leading them to report any accident or danger. Social workers therefore encouraged residential workers to consider whether statutory criteria had been met and to use their own procedures to manage risk. In addition, social workers assessed the workers' motives, specifically whether they were using safeguarding procedures as a shortcut for avoiding other assessments or as a means of accessing extra resources. Where potential abuse was identified, social workers encouraged residential homes to use disciplinary policies as a means of isolating and managing risk. The study has implications for understanding risk within social work practice: it shows that while social workers use law to govern individuals, these laws are interpreted against cultural values, and that workers also draw on assumptions about the culture of others.Keywords: adult safeguarding, governmentality, risk, risk assessment
Procedia PDF Downloads 293254 Numerical Simulation of Von Karman Swirling Bioconvection Nanofluid Flow from a Deformable Rotating Disk
Authors: Ali Kadir, S. R. Mishra, M. Shamshuddin, O. Anwar Beg
Abstract:
Motivation - Rotating disk bio-reactors are fundamental to numerous medical/biochemical engineering processes including oxygen transfer, chromatography, purification and swirl-assisted pumping. The modern upsurge in biologically-enhanced engineering devices has embraced new phenomena including bioconvection of micro-organisms (phototactic, oxytactic, gyrotactic, etc.). The proven thermal performance superiority of nanofluids, i.e. base fluids doped with engineered nanoparticles, has also stimulated immense implementation in biomedical designs. Motivated by these emerging applications, we present a numerical thermofluid dynamic simulation of the transport phenomena in bioconvection nanofluid rotating disk bioreactor flow. Methodology - We study analytically and computationally the time-dependent three-dimensional viscous gyrotactic bioconvection in swirling nanofluid flow from a rotating disk configuration. The disk is also deformable, i.e. able to extend (stretch) in the radial direction. Stefan blowing is included. The Buongiorno dilute nanofluid model is adopted, wherein Brownian motion and thermophoresis are the dominant nanoscale effects. The primitive conservation equations for mass, radial, tangential and axial momentum, heat (energy), nanoparticle concentration and micro-organism density function are formulated in a cylindrical polar coordinate system with appropriate wall and free-stream boundary conditions. A mass convective condition is also incorporated at the disk surface. Forced convection is considered, i.e. buoyancy forces are neglected. This highly nonlinear, strongly coupled system of unsteady partial differential equations is normalized with the classical Von Karman and other transformations to reduce the boundary value problem (BVP) to an ordinary differential system, which is solved with the efficient Adomian decomposition method (ADM). Validation against earlier Runge-Kutta shooting computations in the literature is also conducted. Extensive computations are presented (with the aid of MATLAB symbolic software) for the radial and circumferential velocity components, temperature, nanoparticle concentration, micro-organism density number, and the gradients of these functions at the disk surface (radial local skin friction, local circumferential skin friction, local Nusselt number, local Sherwood number, and motile micro-organism mass transfer rate). Main Findings - Increasing the radial stretching parameter decreases the radial velocity and radial skin friction, reduces the azimuthal velocity and skin friction, and decreases the local Nusselt number and motile micro-organism wall mass flux, whereas it increases the nano-particle local Sherwood number. Disk deceleration accelerates the radial flow, damps the azimuthal flow, decreases temperatures and the thermal boundary layer thickness, depletes the nano-particle concentration magnitudes (and the associated nano-particle species boundary layer thickness), and furthermore decreases the micro-organism density number and the gyrotactic micro-organism species boundary layer thickness. Increasing Stefan blowing accelerates the radial and azimuthal (circumferential) flow, elevates the temperature of the nanofluid, and boosts the nano-particle concentration (volume fraction) and gyrotactic micro-organism density number magnitudes, whereas suction generates the reverse effects: increasing suction reduces the radial and azimuthal skin friction, the local Nusselt number, and the motile micro-organism wall mass flux, whereas it enhances the nano-particle local Sherwood number.
Conclusions - Important transport characteristics of relevance to real bioreactor nanotechnological systems, not discussed in previous works, are identified. ADM is shown to achieve very rapid convergence and highly accurate solutions and shows excellent promise in simulating swirling multi-physical nano-bioconvection fluid dynamics problems. Furthermore, it provides an excellent complement to more general commercial computational fluid dynamics simulations.Keywords: bio-nanofluids, rotating disk bioreactors, Von Karman swirling flow, numerical solutions
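For orientation, the classical Von Karman similarity reduction that underlies the model can be solved directly in its Newtonian swirling-flow limit. The following is a minimal sketch using SciPy's solve_bvp, not the paper's ADM solution of the full nano-bioconvection system (the far-field truncation at η = 12 and the initial guess are our assumptions):

```python
import numpy as np
from scipy.integrate import solve_bvp

# Classical Von Karman similarity equations for Newtonian swirling flow:
#   F'' = F^2 - G^2 + H F'   (radial momentum)
#   G'' = 2 F G + H G'       (azimuthal momentum)
#   H'  = -2 F               (continuity)
# with F(0)=H(0)=0, G(0)=1 and F, G -> 0 as eta -> infinity.
def rhs(eta, y):
    F, dF, G, dG, H = y
    return np.vstack([dF,
                      F**2 - G**2 + H * dF,
                      dG,
                      2 * F * G + H * dG,
                      -2 * F])

def bc(y0, yL):
    return np.array([y0[0], y0[2] - 1.0, y0[4], yL[0], yL[2]])

eta = np.linspace(0.0, 12.0, 200)        # far field truncated at eta = 12
y_guess = np.zeros((5, eta.size))
y_guess[2] = np.exp(-eta)                # crude initial guess for G
sol = solve_bvp(rhs, bc, eta, y_guess)

print(f"F'(0) = {sol.sol(0.0)[1]:.4f}  (classical value ~ 0.5102)")
print(f"G'(0) = {sol.sol(0.0)[3]:.4f}  (classical value ~ -0.6159)")
```

Recovering the classical surface gradients F'(0) ≈ 0.51 and G'(0) ≈ -0.62 is the same kind of benchmark check the paper performs when validating ADM against Runge-Kutta shooting results.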
Procedia PDF Downloads 157253 The Scientific Study of the Relationship Between Physicochemical and Microstructural Properties of Ultrafiltered Cheese: Protein Modification and Membrane Separation
Authors: Shahram Naghizadeh Raeisi, Ali Alghooneh
Abstract:
The loss of curd cohesiveness and syneresis are two common problems in the ultrafiltered cheese industry. In this study, using membrane technology and protein modification, a modified cheese was developed and its properties were compared with a control sample. In order to decrease the lactose content and adjust the protein, acidity, dry matter and milk minerals, a combination of ultrafiltration, nanofiltration and reverse osmosis technologies was employed. For protein modification, a two-stage chemical and enzymatic reaction was employed before and after ultrafiltration. The physicochemical and microstructural properties of the modified ultrafiltered cheese were compared with the control one. Results showed that the modified protein enhanced the functional properties of the final cheese significantly (p-value < 0.05), even though its protein content was 50% lower than that of the control. The modified cheese showed 21 ± 0.70, 18 ± 1.10 and 25 ± 1.65% higher hardness, cohesiveness and water-holding capacity values, respectively, than the control sample. This behavior could be explained by the developed microstructure of the gel network. Furthermore, chemical-enzymatic modification of the milk protein induced a significant change in the network parameters of the final cheese: the indices of network linkage strength, network linkage density, and time scale of junctions were 10.34 ± 0.52, 68.50 ± 2.10 and 82.21 ± 3.85% higher than in the control sample, whereas the distance between adjacent linkages was 16.77 ± 1.10% lower. These results were supported by the textural analysis. A non-linear viscoelastic study showed a triangular waveform stress response for the cheese containing the modified protein, while the control sample showed a rectangular waveform stress response, which suggests better sliceability of the modified cheese. Moreover, to study the shelf life of the products, the acidity as well as the mold and yeast populations were determined over 120 days. It is worth mentioning that the lactose content of the modified cheese was adjusted to 2.5% before fermentation, while that of the control was 4.5%. The control sample showed a shelf life of 8 weeks, while the shelf life of the modified cheese was 18 weeks in the refrigerator. Over 18 weeks, the acidity of the modified and control samples increased from 82 ± 1.50 to 94 ± 2.20 °D and from 88 ± 1.64 to 194 ± 5.10 °D, respectively. The mold and yeast populations followed a semicircular shape model with time (R² = 0.92, R²adj = 0.89, RMSE = 1.25). Furthermore, the mold and yeast counts and their growth rate in the modified cheese were lower than those of the control; this result could be explained by the shortage of energy sources for the microorganisms in the modified cheese, as the lactose content of the modified sample was less than 0.2 ± 0.05% at the end of fermentation, while that of the control sample was 3.7 ± 0.68%.Keywords: non-linear viscoelastic, protein modification, semicircular shape model, ultrafiltered cheese
Procedia PDF Downloads 75