Search results for: distance matrix API
457 Bread-Making Properties of Rice Flour Dough Using Fatty Acid Salt
Authors: T. Hamaishi, Y. Morinaga, H. Morita
Abstract:
Introduction: Rice consumption in Japan has decreased, and the Japanese government has recommended the use of rice flour in order to expand rice consumption. Wheat flour contains two major protein components, gliadin and glutenin. Gluten forms when water is added to flour and the mixture is kneaded; as mixing continues, glutenin interacts with gliadin to form the viscoelastic gluten matrix. Rice flour bread does not expand as much as wheat flour bread because rice flour contains no gluten and therefore cannot build a gluten network in the dough. In recent years, some food additives have been used as dough-improving agents in bread making; surfactants in particular improve dough extensibility. We therefore focused on fatty acid salts, a class of anionic surfactants. A fatty acid salt is a salt consisting of a fatty acid and an alkali and is the main component of soap. According to JECFA (the FAO/WHO Joint Expert Committee on Food Additives), salts of myristic (C14), palmitic (C16), and stearic (C18) acids may be used as food additives; they have been evaluated and the ADI was not specified. In this study, we investigated improving the bread-making properties of rice flour dough by adding a fatty acid salt. Materials and methods: The fatty acid salt sample was potassium myristate (C14K), prepared by dissolving myristic acid (C14) in KOH solution to a concentration of 350 mM at pH 10.5. The rice dough consisted of 100 g of flour (rice flour plus wheat gluten), 5 g of sugar, 1.7 g of salt, 1.7 g of dry yeast, 80 mL of water, and the fatty acid salt. The dough was mixed by hand for 500 strokes. The concentration of C14K in the dough was 10% relative to flour weight; the gluten content was 20% or 30% relative to flour weight. A dough expansion test was performed to measure the physical properties of the bread dough according to the Baker's Yeast methods of the Japan Yeast Industry Association: 150 g of dough was loaded from the bottom of a cylinder and fermented at 30 °C and 85% humidity for 120 min in an incubator. The height of the dough was measured to determine its expansion ability. Results and Conclusion: The expansion of rice dough with gluten contents of 20% and 30% reached 316 mL and 341 mL, respectively, after 120 min. When C14K was added to the rice dough, the expansion was 314 mL and 368 mL after 120 min; the difference was not significant. It is conventionally known that rice flour dough should contain about 20% gluten, and a considerable improvement of dough expansion has been achieved when C14K is added to wheat flour. The experimental results show that adding C14K to rice dough with a gluten content of 20% or more did not improve its bread-making properties. In conclusion, rice bread made with a gluten content of 20% or more appears to form a sufficient gluten network without C14K.
Keywords: expansion ability, fatty acid salt, gluten, rice flour dough
456 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets
Authors: Ece Cigdem Mutlu, Burak Alakent
Abstract:
Maintaining the quality of manufactured products at a desired level depends on the stability of the process dispersion and location parameters and on detecting perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring for monitoring product quality and controlling the process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of process dispersion and location, respectively, under the assumption of independent and normally distributed data. On the other hand, there is no guarantee that real-world data are normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of traditional estimators is significantly reduced and the performance of Xbar charts is undesirably low; e.g., occasional outliers in the rational subgroups of the Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in detecting inferior products in Phase II. For more efficient application of control charts, estimators robust to the contamination that may exist in Phase I are required. In the current study, we present a simple approach to constructing robust Xbar control charts using the average distance to the median, the Qn estimator of scale, and the M-estimator of scale with a logistic psi-function to estimate the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator, and M-estimators of location with Huber and logistic psi-functions to estimate the process location parameter. The Phase I efficiency of the proposed estimators and the Phase II performance of the Xbar charts constructed from them are compared with the conventional mean and standard deviation statistics, both under normality and against diffuse-localized and symmetric-asymmetric contaminations, using 50,000 Monte Carlo simulations in MATLAB. We find that the robust estimators yield parameter estimates with higher efficiency against all types of contamination, and that Xbar charts constructed from robust estimators have higher power in detecting disturbances than conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups, and employing different combinations of dispersion and location estimators on subgroups and individual observations, are found to improve the performance of Xbar charts.
Keywords: average run length, M-estimators, quality control, robust estimators
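The following Python sketch illustrates the idea of swapping robust estimators into Phase I chart construction. It uses the Hodges-Lehmann estimator for location and the consistency-scaled MAD for scale as stand-ins (the study also evaluates Qn, Harrell-Davis, and M-estimators, which are omitted here); all data and constants below are illustrative assumptions, not the paper's simulation design.

```python
import numpy as np

def hodges_lehmann(x):
    """Hodges-Lehmann location estimate: median of all pairwise (Walsh) averages."""
    i, j = np.triu_indices(len(x))            # includes i == j pairs
    return np.median((x[i] + x[j]) / 2.0)

def mad_scale(x):
    """Median absolute deviation, scaled to be consistent at the normal."""
    return 1.4826 * np.median(np.abs(x - np.median(x)))

rng = np.random.default_rng(1)
m, n = 50, 5                                   # 50 Phase I subgroups of size 5
phase1 = rng.normal(0.0, 1.0, (m, n))
phase1[7] += 4.0                               # one contaminated subgroup

# Pool subgroup-wise robust estimates into chart parameters
mu_hat = np.median([hodges_lehmann(g) for g in phase1])
sigma_hat = np.median([mad_scale(g) for g in phase1])

ucl = mu_hat + 3.0 * sigma_hat / np.sqrt(n)
lcl = mu_hat - 3.0 * sigma_hat / np.sqrt(n)
print(f"robust Xbar limits: [{lcl:.3f}, {ucl:.3f}]")
```

Because the medians across subgroups downweight the contaminated subgroup, the resulting limits stay close to the clean-process values, which is the behavior the abstract reports for robust charts.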
455 Envisioning the Future of Language Learning: Virtual Reality, Mobile Learning and Computer-Assisted Language Learning
Authors: Jasmin Cowin, Amany Alkhayat
Abstract:
This paper presents a comparative analysis of the advantages and limitations of using digital learning resources (DLRs). The DLRs covered are Virtual Reality (VR), Mobile Learning (M-learning), and Computer-Assisted Language Learning (CALL), together with its subset, Mobile-Assisted Language Learning (MALL), in language education. In addition, best practices for language teaching and the application of established language teaching methodologies such as Communicative Language Teaching (CLT), the audio-lingual method, and community language learning are explored. Education has changed dramatically since the eruption of the pandemic: traditional face-to-face education was disrupted on a global scale, and the rise of distance learning brought new digital tools to the forefront, especially web conferencing tools, digital storytelling apps, test authoring tools, and VR platforms. Language educators raced to vet, learn, and implement multiple technology resources suited to language acquisition. Yet questions remain on how to harness new technologies and digital tools, and their ubiquitous availability, while using established methods and methodologies in language learning paired with best teaching practices. In M-learning, language learners employ portable computing devices such as smartphones or tablets. CALL is a language teaching approach that uses computers and other technologies to present, reinforce, and assess language material, or to create environments where teachers and learners can meaningfully interact. In VR, a computer-generated simulation enables learner interaction with a 3D environment via a screen, smartphone, or head-mounted display. Research supports that VR for language learning is effective in terms of exploration, communication, engagement, and motivation: students can relate through role-play activities and interact with 3D objects and activities such as field trips, and VR lends itself to group language exercises in the classroom with target-language practice in an immersive virtual environment. Students, teachers, schools, language institutes, and institutions benefit from specialized support that helps them acquire second-language proficiency and content knowledge building on their cultural and linguistic assets. Through the purposeful application of different language methodologies and teaching approaches, language learners can not only make cultural and linguistic connections in DLRs but also practice grammar drills, play memory games, or flourish in authentic settings.
Keywords: language teaching methodologies, computer-assisted language learning, mobile learning, virtual reality
454 The Effects of Geographical and Functional Diversity of Collaborators on the Quality of Knowledge Generated
Authors: Ajay Das, Sandip Basu
Abstract:
Introduction: There is increasing recognition that diverse streams of knowledge can often be recombined in novel ways to generate new knowledge. However, knowledge recombination theory has not been applied to examine the effects of collaborator diversity on the quality of the knowledge such collaborators produce. This is surprising, because one would expect a collaborative team with certain aspects of diversity to be able to recombine process elements of knowledge development that are relatively tacit, yet complementary because of the collaborators' varying backgrounds. Theory and Hypotheses: We propose to examine two aspects of diversity in the environments of collaborative teams to capture such potential recombinations of relatively tacit process knowledge. The first aspect of diversity in team members' environments is geographical. Collaborators separated by greater geographical distance (perhaps working in different countries) often have more autonomy in the processes they adopt for knowledge development; in the absence of overt monitoring, such collaborators are likely to adopt differing approaches, and the sharing of these varying approaches among collaborators is likely to raise the quality of the common collaborative pursuit. The second aspect is diversity in the work backgrounds of team members. Such diversity can also increase the potential for knowledge recombination: for example, if one or more members are from a manufacturing center (versus all of them being from a purely R&D center), those members will provide unique perspectives on the implementation of innovative ideas. Again, knowledge that has been evaluated from these diverse perspectives is likely to be of higher quality. In addition to the above aspects of environmental diversity among team members, we also plan to examine the extent to which individual collaborators work in environments different from the primary innovation center of their employing firms. Proposed Methods: We will test our model on a sample of firms in the semiconductor industry. Our level of analysis will be the individual patents generated by these firms and the teams that generated them. Information on the manufacturing activities of our sample firms will be obtained from SEMI, a proprietary database of the semiconductor industry, as well as from company 10-K reports. Conclusion: We believe that our results will represent a preliminary attempt to understand how various forms of diversity in collaborative teams impact the knowledge development process. Our dependent variable, knowledge quality, is important to study since higher values of this variable can drive not only firm performance but also the broader development of regions and societies through spillover impacts on future innovation. The results of this study will therefore inform future research and practice in innovation, geographical location, and vertical integration.
Keywords: innovation, manufacturing strategy, knowledge, diversity
453 Transverse Behavior of a Frictional Flat Belt Driven by a Tapered Pulley: Change of Transverse Force under Driving State
Authors: Satoko Fujiwara, Kiyotaka Obunai, Kazuya Okubo
Abstract:
Skew is one of the important problems in designing conveyors and transmissions with frictional flat belts: the running belt deviates in the width direction due to a transverse force applied to the belt. Skew often not only degrades the stability of the belt path but also damages the belt and auxiliary machines. However, the transverse behavior of frictional belts, such as skew, has not been discussed quantitatively in detail. The objective of this study is to clarify the transverse behavior of a frictional flat belt driven by a tapered pulley. A commercially available rubber flat belt reinforced by polyamide film was prepared as the test belt; its thickness and length were 1.25 mm and 630 mm, respectively. The test belt was driven between two pulleys made of aluminum alloy, whose diameter and inter-axial distance were 50 mm and 150 mm, respectively. Several tapered pulleys were used, with taper angles of 0 deg (for comparison), 2 deg, 4 deg, and 6 deg. In order to investigate the transverse behavior, the transverse force applied to the belt was measured while the skew was constrained under the driving state: the force was measured by a load cell with free rollers contacting the side surface of the belt while displacement in the belt width direction was constrained. The observed in-plane bending stiffness of the belt was varied by preparing three types of belts, with widths of 20, 30, and 40 mm. The contributions of the in-plane bending stiffness of the belt and the initial inter-axial force to the transverse force were examined in experiments; the initial inter-axial force was also changed by adjusting the distance (about 240 mm) between the two pulleys. The experimental results showed that the transverse force increased with increasing in-plane bending stiffness and initial inter-axial force. The transverse force acting on the belt running on the tapered pulley was decomposed into several components: the force arising from the deflection of the inter-axial force according to the change of taper angle, the resultant force of the bending moment applied to the belt winding around the tapered pulley, and the reaction force due to shearing deformation. The calculated transverse force agreed well with the experimental data when these components were formulated. It was also shown that the largest contribution came from the shearing deformation, regardless of the test conditions. This study found that the transverse behavior of a frictional flat belt driven by a tapered pulley can be explained by the summation of these force components.
Keywords: skew, frictional flat belt, transverse force, tapered pulley
452 Progressive Damage Analysis of Mechanically Connected Composites
Authors: Şeyma Saliha Fidan, Ozgur Serin, Ata Mugan
Abstract:
While performing the verification analyses for the static and dynamic loads that composite structures used in aviation are exposed to, it is necessary to obtain the bearing strength limit value for mechanically connected composite structures. For this purpose, various tests are carried out in accordance with aviation standards. Many companies around the world perform these tests, but the test costs are very high; in addition, because coupons must be produced, coupon materials are expensive, and test times are long, it is necessary to simulate these tests on the computer. To this end, various test coupons were produced using the reinforcement and alignment angles of composite radomes integrated into aircraft; glass-fiber-reinforced and quartz prepregs were used in the production of the coupons. The tests performed according to the American Society for Testing and Materials (ASTM) D5961 Procedure C standard were simulated on the computer. The analysis model was created in three dimensions in order to model the bolt-hole contact surface realistically and obtain an accurate bearing strength value; the finite element analysis was carried out with ANSYS. Since a physical fracture cannot occur in analyses carried out in a virtual environment, a hypothetical failure is realized by reducing the material properties. The material property reduction coefficient was set to 10%, which is stated in the literature to give the most realistic approach. There are various theories for this method, which is called progressive failure analysis; because the Hashin theory did not match our experimental results, the Puck progressive damage method was used in all coupon analyses. When the experimental and numerical results are compared, the initial damage and the resulting force-drop points, the maximum damage load values, and the bearing strength value are very close; furthermore, low error rates and similar damage patterns were obtained in both the test and simulation models. In addition, the effects of various parameters on the bearing strength of the composite structure were investigated, such as pre-stress, the use of bushings, the ratio of the distance between the bolt-hole center and the plate edge to the hole diameter (E/D), the ratio of plate width to hole diameter (W/D), and hot-wet environmental conditions.
Keywords: Puck, finite element, bolted joint, composite
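A schematic Python sketch of the progressive-failure idea described above, where ply properties are knocked down once a failure criterion is met. A simple max-stress check stands in for the Puck criterion here, and all moduli, strengths, and load ratios are invented for illustration; the actual analyses were run in ANSYS on 3D bolted-joint models.

```python
import numpy as np

def failure_index(stress, strength):
    # Placeholder max-stress criterion; the study itself applies the Puck criterion.
    return np.max(np.abs(stress) / strength)

E = np.array([48e9, 11e9, 5.5e9])         # assumed ply moduli E1, E2, G12 (Pa)
strength = np.array([1.0e9, 40e6, 70e6])  # assumed ply strengths (Pa)
retain = 0.10                             # knock properties down to 10% on failure

for load in np.linspace(10e6, 900e6, 90):        # increasing bearing stress (Pa)
    stress = load * np.array([1.0, 0.05, 0.08])  # assumed stress ratios per ply
    if failure_index(stress, strength) >= 1.0:
        E = E * retain                    # the "hypothetical break" of the abstract
        print(f"first ply failure at {load/1e6:.0f} MPa; "
              f"moduli reduced to {np.round(E/1e9, 2)} GPa")
        break
```

In a full progressive-failure run, this check-and-degrade step is repeated element by element and load step by load step until the joint can no longer carry additional load, which defines the bearing strength.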
451 Simulation of a Technological, Energy and GHG Comparison between a Conventional Diesel Bus and an E-Bus: Feasibility of Promoting the Change to E-Buses in Highland Cities
Authors: Riofrio Jonathan, Fernandez Guillermo
Abstract:
Renewable energy represented around 80% of Ecuador's power generation matrix in 2020, so current public policy focuses on taking advantage of the high share of renewable sources to carry out several electrification projects. These projects are part of the portfolio sent to the United Nations Framework Convention on Climate Change (UNFCCC) as a commitment to reduce greenhouse gas (GHG) emissions in the established nationally determined contribution (NDC). In this sense, the Ecuadorian Organic Energy Efficiency Law (LOEE), published in 2019, promotes e-mobility as one of its main milestones; in fact, it states that new vehicles for urban and interurban use must be e-buses from 2025. As a result, and for a successful implementation of this technological change in the national context, it is important to carry out studies focused on the relevant technical and geographical conditions so as to maintain the quality of service in both the electricity and transport sectors. This research therefore presents a technological and energy comparison between a conventional diesel bus and its equivalent e-bus. Both vehicles fulfill all the technical requirements to operate in the case-study city, Ambato, in the province of Tungurahua, Ecuador. The analysis includes the development of a model for estimating the energy consumption of both technologies in a highland city such as Ambato: the altimetry of the most important bus routes in the city varies from 2557 to 3200 m a.s.l. at the lowest and highest points, respectively. These operating conditions lend a degree of novelty to this paper. The technical specifications of the diesel buses follow the common features of buses registered in Ambato, while the specifications for the e-buses come from the most common units introduced in Latin America, because there is not yet enough evidence from similar cities. The results will provide good input data for decision-makers, since electricity demand forecasts, energy savings, costs, and greenhouse gas emissions are computed. GHG accounting is important because it supports reporting under the transparency framework of the Paris Agreement. Finally, the presented results correspond to stage I of the project "Analysis and Prospective of Electromobility in Ecuador and Energy Mix towards 2030", supported by the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ).
Keywords: high altitude cities, energy planning, NDC, e-buses, e-mobility
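A minimal Python sketch of a longitudinal energy model of the kind the abstract describes, where road grade from the route altimetry enters the traction force directly. Every parameter value below (mass, drag, rolling resistance, efficiency, speed profile) is an assumption for illustration, not data from the study.

```python
import numpy as np

# Assumed parameters for a 12 m urban bus; none are from the paper.
m_bus, g = 14500.0, 9.81        # laden mass (kg), gravity (m/s^2)
rho = 0.95                      # approx. air density at ~2600 m a.s.l. (kg/m^3)
Cd, A, Crr = 0.6, 7.0, 0.008    # drag coeff., frontal area (m^2), rolling resist.
eta = 0.85                      # assumed drivetrain efficiency

def traction_power(v, a, grade):
    """Power demand (W) at speed v, acceleration a, slope angle grade (rad)."""
    F = (m_bus * a + m_bus * g * np.sin(grade)
         + m_bus * g * Crr * np.cos(grade) + 0.5 * rho * Cd * A * v**2)
    return F * v / eta

# Toy segment: 30 s at a steady 10 m/s climbing a 4% grade
t = np.linspace(0.0, 30.0, 301)
P = traction_power(np.full_like(t, 10.0), 0.0, np.arctan(0.04))
E_kwh = np.sum(0.5 * (P[1:] + P[:-1]) * np.diff(t)) / 3.6e6  # trapezoid rule
print(f"energy over segment: {E_kwh:.2f} kWh")
```

Summing such segments over a full route profile (2557 to 3200 m a.s.l.) yields the daily energy demand per bus, which can then be scaled to a fleet-level electricity demand forecast.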
450 Environmental Interactions in Riparian Vegetation Cover in an Urban Stream Corridor: A Case Study of Duzce Asar Suyu
Authors: Engin Eroğlu, Oktay Yıldız, Necmi Aksoy, Akif Keten, Mehmet Kıvanç Ak, Şeref Keskin, Elif Atmaca, Sertaç Kaya
Abstract:
Nowadays, green spaces in urban areas are under threat, and their share of urban land is decreasing because of increasing population, urbanization, migration, and qualitative cultural changes. Water, an important element of the natural landscape, and water-related natural ecosystems are exposed to degradation as a result of these pressures. A landscape comprises many different types of elements or units, and the extent to which these are perceptible, whether favorably or unfavorably and in different directions and variables, reveals the unique structure and character of the landscape. While landscapes are commonly divided into two main groups, urban and rural, according to their location, the areas where the two intersect, termed semi-urban or semi-rural, present especially varied landscape features. The main components of the landscape are defined as patch, matrix, and corridor. Corridors include quite varied vegetation types, such as riparian and wetland vegetation. In urban areas, natural water corridors are an important element of the diversity of riparian vegetation cover. In particular, water corridors attract attention with their natural diversity and their lack of fragmentation, degradation, and artificiality. Thanks to these features, water corridors are undoubtedly an important component of all cities in the world: these corridors not only divide a city into two separate sides but also ensure ecological connectivity between the two sides. The main objective of this study is to determine the vegetation and habitat features of an urban stream corridor in terms of environmental interactions. Within this context, the study focuses on "Asar Suyu", an important component of the city of Düzce. Moreover, since the riparian zone touches the contiguous-area borders of the city and overlays its urban development limits, the characteristics of the corridor are determined through floristic and habitat analyses. Consequently, the vegetation structure and habitat features that play an important role in the interaction between riparian vegetation cover and the environment will be determined. This study presents the first results of the Scientific and Technological Research Council of Turkey project (TUBITAK-116O596; "Determining the Landscape Character of Urban Water Corridors Visually and Ecologically: A Case Study of Asar Suyu in Duzce").
Keywords: corridor, Duzce, landscape ecology, riparian vegetation
449 Identifying Large-Scale Photovoltaic and Concentrated Solar Power Hot Spots: A Multi-Criteria Decision-Making Framework
Authors: Ayat-Allah Bouramdane
Abstract:
Solar photovoltaic (PV) and concentrated solar power (CSP) plants do not burn fossil fuels and release no greenhouse gases into the atmosphere as they generate electricity; they could therefore meet the world's needs for low-carbon power generation. The power output of a solar PV module or CSP collector is proportional to the temperature and the amount of solar radiation received by its surface; hence, determining the most suitable locations for PV and CSP systems is crucial to maximizing their output power. This study aims to provide a hands-on, plausible approach to the multi-criteria evaluation of the site suitability of PV and CSP plants using a combination of Geographic Referenced Information (GRI) and the Analytic Hierarchy Process (AHP). Applying the GRI-based AHP approach means specifying the criteria and sub-criteria; identifying the unsuitable and the low-, moderately, highly, and very highly suitable areas in each GRI layer; performing the pairwise comparisons at each level of the hierarchy based on expert knowledge; and calculating the weights with AHP to create the final suitability map for solar PV and CSP plants in Morocco, with a particular focus on the city of Dakhla. The results recognize solar irradiation as the main decision factor for integrating these technologies into Morocco's energy policy goals, but they explicitly account for other factors that can not only limit the potential of certain locations but even exclude them, as with Dakhla, which is classified as an unsuitable area. We discuss the sensitivity of PV and CSP site suitability to different aspects, such as the methodology, the climate conditions, and the technology used for each source, and we provide final recommendations for the Moroccan energy strategy by analyzing whether Morocco's actual PV and CSP installations are located within areas deemed suitable and by discussing several cases that could provide mutual benefits across the food-energy-water nexus. The adopted methodology and the resulting suitability map could be used by researchers or engineers to provide helpful information to decision-makers for the selection, design, and planning of future solar plants, especially in areas suffering from energy shortages such as Dakhla, which is now one of Africa's most promising investment hubs and is especially attractive to investors looking to root their operations in Africa and export to European markets.
Keywords: analytic hierarchy process, concentrated solar power, Dakhla, geographic referenced information, Morocco, multi-criteria decision-making, photovoltaic, site suitability
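A short Python sketch of the AHP step: deriving criterion weights from a pairwise comparison matrix via the principal eigenvector and checking consistency. The example matrix and criterion names are assumptions; the study's actual criteria, sub-criteria, and expert judgments are not reproduced here.

```python
import numpy as np

# Illustrative pairwise comparison matrix for four criteria, e.g.
# [solar irradiation, slope, distance to grid, land use]; values are assumed.
A = np.array([[1.0, 5.0, 3.0, 7.0],
              [1/5, 1.0, 1/2, 2.0],
              [1/3, 2.0, 1.0, 3.0],
              [1/7, 1/2, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                            # principal eigenvector -> weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)    # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]     # Saaty's random index
print("weights:", np.round(w, 3), "  CR:", round(ci / ri, 3))  # accept CR < 0.1
```

The weights are then applied as multipliers on the reclassified GRI layers, and a consistency ratio below 0.1 is the usual threshold for accepting the expert judgments.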
448 Modeling of IN 738 LC Alloy Mechanical Properties Based on Microstructural Evolution Simulations for Different Heat Treatment Conditions
Authors: M. Tarik Boyraz, M. Bilge Imer
Abstract:
Conventionally cast nickel-based superalloys, such as the commercial alloy IN 738 LC, are widely used in manufacturing industrial gas turbine blades. With a carefully designed microstructure and appropriate alloying elements, the blades show improved mechanical properties at high operating temperatures and in corrosive environments. The aim of this work is to model and estimate the mechanical properties of IN 738 LC solely from simulations for projected heat treatment or service conditions. The microstructure of IN 738 LC (the size, fraction, and frequency of the gamma prime (γ′) and carbide phases in the gamma (γ) matrix, and the grain size) must be optimized through heat treatment to improve the high-temperature mechanical properties; this process can be performed at different soaking temperatures, times, and cooling rates. In this work, microstructural evolution was first studied experimentally under various heat treatment conditions, and these findings were used as input for subsequent simulation studies. The operation time, soaking temperature, and cooling rate from the experimental heat treatment procedures served as inputs to the microstructural simulation. The simulation results were compared with the size, fraction, and frequency of the γ′ and carbide phases and the grain size measured by SEM (EDS module and mapping), EPMA (WDS module), and optical microscopy before and after heat treatment. After iterative comparison of the experimental findings and simulations, an offset was determined to fit the measured and theoretical results, making it possible to estimate the final microstructure without carrying out the heat treatment experiment. The simulated, heat-treatment-dependent microstructure was then used as input to estimate the yield stress and creep properties. Yield stress was calculated mainly as a function of the precipitation, solid-solution, and grain-boundary strengthening contributions of the microstructure. The creep rate was calculated as a function of stress, temperature, and microstructural factors such as dislocation density, precipitate size, and the inter-particle spacing of precipitates. The estimated yield stress values were compared with the corresponding experimental hardness and tensile test values. The ability to determine the heat treatment conditions that best achieve the desired microstructural and mechanical properties was thus developed for IN 738 LC entirely from simulations.
Keywords: heat treatment, IN738LC, simulations, super-alloys
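A hedged Python sketch of the additive yield-stress model outlined in the abstract (lattice friction plus solid-solution, Hall-Petch grain-boundary, and γ′ precipitation terms). All coefficients and microstructural inputs below are placeholders, not fitted IN 738 LC values.

```python
import numpy as np

# All coefficients below are placeholders, not fitted IN 738 LC values.
sigma_0 = 180.0      # lattice friction stress (MPa)
delta_ss = 220.0     # solid-solution contribution (MPa)
k_hp = 750.0         # Hall-Petch coefficient (MPa * um^0.5)
d_grain = 120.0      # simulated grain size (um)

# gamma' precipitation term, assumed to scale as sqrt(fraction * radius)
f_gp, r_gp = 0.42, 0.25     # gamma' volume fraction and mean radius (um)
c_ppt = 900.0               # lumped precipitation constant (MPa)

sigma_y = (sigma_0 + delta_ss + k_hp / np.sqrt(d_grain)
           + c_ppt * np.sqrt(f_gp * r_gp))
print(f"estimated yield stress: {sigma_y:.0f} MPa")
```

In the workflow described above, d_grain, f_gp, and r_gp would come from the offset-corrected microstructure simulation rather than being set by hand.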
447 A Model of the Universe without Expansion of Space
Authors: Jia-Chao Wang
Abstract:
A model of the universe without invoking space expansion is proposed to explain the observed redshift-distance relation and the cosmic microwave background radiation (CMB). The main hypothesized feature of the model is that photons traveling in space interact with the CMB photon gas. This interaction causes the photons to gradually lose energy through dissipation and, therefore, to experience redshift; it also causes some photons to be scattered off their track toward an observer, resulting in beam intensity attenuation. As observed, the CMB exists everywhere in space, and its photon density is relatively high (about 410 per cm³). The small average energy of the CMB photons (about 6.3×10⁻⁴ eV) can reduce the energies of traveling photons gradually and will not alter their momenta as drastically as, for example, Compton scattering, which would totally blur the images of distant objects. An object moving through a thermalized photon gas, such as the CMB, experiences a drag: the object sees a blueshifted photon gas along the direction of motion and a redshifted one in the opposite direction. An example of this effect is the observed CMB dipole: the Earth travels at about 368 km/s (600 km/s) relative to the CMB, and in the all-sky map from the COBE satellite, radiation in the Earth's direction of motion appears 0.35 mK hotter than the average temperature, 2.725 K, while radiation on the opposite side of the sky is 0.35 mK colder. The pressure of a thermalized photon gas is given by Pγ = Eγ/3 = αT⁴/3, where Eγ is the energy density of the photon gas and α is the radiation constant (α = 4σ/c, with σ the Stefan-Boltzmann constant). The observed CMB dipole therefore implies a pressure difference between the two sides of the Earth and results in a CMB drag on the Earth. Plugging in suitable estimates of the quantities involved, such as the cross-section of the Earth and the temperatures on the two sides, this drag is estimated to be tiny; but for a photon traveling at the speed of light, 300,000 km/s, the drag can be significant. In the present model, for the dissipation part, it is assumed that a photon traveling from a distant object toward an observer has an effective interaction cross-section pushing against the pressure of the CMB photon gas; for the attenuation part, the coefficient of the typical attenuation equation is used as a parameter. The values of these two parameters are determined by fitting the 748 μ-versus-z data points compiled from 643 supernova and 105 γ-ray burst observations, with z values up to 8.1. The fit is as good as that obtained from the lambda cold dark matter (ΛCDM) model using online cosmological calculators and Planck 2015 results. The model can be used to interpret Hubble's constant, Olbers' paradox, the origin and blackbody nature of the CMB radiation, the broadening of supernova light curves, and the size of the observable universe.
Keywords: CMB as the lowest energy state, model of the universe, origin of CMB in a static universe, photon-CMB photon gas interaction
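A small Python computation reproducing the back-of-envelope dipole-drag argument: the radiation pressure P = αT⁴/3 evaluated at the mean CMB temperature and at ±0.35 mK. Only the standard radiation-constant value is used; the model's fitted dissipation and attenuation parameters are not reproduced here.

```python
# Radiation pressure of the CMB photon gas, P = alpha * T^4 / 3, evaluated
# at the dipole extremes; alpha is the standard radiation constant 4*sigma/c.
alpha = 7.5657e-16          # radiation constant (J m^-3 K^-4)
T, dT = 2.725, 0.35e-3      # mean CMB temperature and dipole amplitude (K)

P_mean = alpha * T**4 / 3
P_hot = alpha * (T + dT)**4 / 3
P_cold = alpha * (T - dT)**4 / 3
print(f"mean CMB pressure   : {P_mean:.3e} Pa")
print(f"dipole pressure gap : {P_hot - P_cold:.3e} Pa")
```

The gap (of order 10⁻¹⁷ Pa) multiplied by an object's cross-section gives the drag force, which is why the effect is negligible for the Earth but, in the model's hypothesis, cumulative for a photon crossing cosmological distances.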
446 Policies to Reduce the Demand and Supply of Illicit Drugs in Latin America: 2004 to 2016
Authors: Ana Caroline Ibrahim Lino, Denise Bomtempo Birche de Carvalho
Abstract:
The background of this research is the international process of control and monitoring of illicit psychoactive substances that commenced in the early 20th century. This process was intensified by the UN Single Convention on Narcotic Drugs of 1961 and culminated in the 1970s with the "War on Drugs", a doctrine undertaken by the United States of America. Since then, the phenomenon of drug prohibition has driven debates around alternative public policies to confront its consequences at the global level and in the specific context of Latin America. Previous research has answered the following key questions: a) With what characteristics and models was the international illicit drug control system consolidated in Latin America with the creation of the Organization of American States (OAS) and the Inter-American Drug Abuse Control Commission (CICAD)? b) What drug policies and programs were determined as guidelines for the member states by the OAS and CICAD? The present paper mainly addresses the analysis of the drug strategies developed by OAS/CICAD for the Americas from 2004 to 2016. The primary sources were extracted from OAS/CICAD documents and reports listed on the websites of these organizations. Secondary sources refer to bibliographic research on the subject with the following descriptors: illicit drugs, public policies, international organizations, OAS, CICAD, and reducing the demand and supply of illicit drugs. The content analysis technique was used to organize the collected material and to choose the axes of analysis. The results show that the policies, strategies, and action plans for Latin America focused on anti-drug actions from the creation of the Commission until 2010. The discourses and policies to reduce drug demand and supply were presented as central to solving the problem; however, the real focus was on eliminating the substances by controlling the production, marketing, and distribution of illicit drugs, and little attention was given to users and their families. The research is of great relevance to social work: the guidelines and parameters of the social worker's profession are in line with the need for social, ethical, and political strengthening of any dimension that guarantees the rights of users of psychoactive substances. In addition, it contributes to understanding the political, economic, social, and cultural factors that structure prohibitionism, whose matrix anchors the deprivation of rights and violence.
Keywords: illicit drug policies, international organizations, Latin America, prohibitionism, reduce the demand and supply of illicit drugs
445 The Flooding Management Strategy in Urban Areas: Reusing Public Facilities Land as Flood-Detention Space for Multiple Purposes
Authors: Hsiao-Ting Huang, Chang Hsueh-Sheng
Abstract:
Taiwan is an island country deeply affected by the monsoon. Under climate change, the frequency of extreme rainstorms brought by typhoons has increased since 2000. When an extreme rainstorm comes, it causes serious damage in Taiwan, especially in urban areas, and the government treats flooding as an urgent issue. In the past, urban land use planning did not take flood detention into consideration. With the development of cities, impermeable surfaces have increased, and most people live in urban areas; this means urban areas are highly vulnerable yet cannot cope with the surface runoff and flooding. However, building detention ponds as a hydraulic engineering solution is not feasible in urban areas: land expropriation makes detention ponds prohibitively expensive there, and the government cannot afford it. Therefore, a flood management strategy for urban areas should use an existing resource, public facilities land. Providing public facilities land with a detention function can achieve flood-detention performance, and as multi-use land it also demonstrates the integration of land use planning and water agencies. To this end, this research generalizes the factors governing the multi-use of public facilities land as flood-detention space through a literature review. The factors fall into two categories: environmental factors and conditions of the public facilities. The environmental factors are three: terrain elevation, inundation potential, and distance from the drainage system. The conditions of the public facilities comprise six factors, including area, building rate, and the maximum available ratio. Each factor is weighted according to its characteristics for the land use suitability analysis. This research selects the rules of combination from logical combination; after this process, sites are classified into three suitability levels, which are then input into a physiographic inundation model to evaluate their flood-detention performance. This study responds to an urgent issue in urban areas and establishes a model of multi-use public facilities land for flood detention through a systematic research process. The results indicate which combination of suitability levels is most efficacious. Moreover, the model not only takes the perspective of urban planners but also incorporates the point of view of water agencies. These findings may serve as a basis for land use indicators and as decision-making references for the government agencies concerned.
Keywords: flooding management strategy, land use suitability analysis, multi-use for public facilities land, physiographic inundation model
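A toy Python sketch of the weighted-overlay suitability scoring described above. The factor list follows the abstract, but the weights, site data, and level thresholds are invented for illustration.

```python
import numpy as np

# Toy weighted-overlay suitability score for candidate public-facility sites.
# Factor names follow the abstract; weights and data are assumptions.
factors = ["elevation", "inundation_potential", "drainage_distance",
           "area", "building_rate", "available_ratio"]
weights = np.array([0.25, 0.25, 0.10, 0.20, 0.10, 0.10])   # sums to 1

# Rows = sites, columns = factors, each rescaled to 0-1 (1 = favourable).
sites = np.array([[0.8, 0.9, 0.6, 0.7, 0.5, 0.9],
                  [0.4, 0.3, 0.8, 0.9, 0.7, 0.6]])

scores = sites @ weights
levels = np.digitize(scores, [0.45, 0.65]) + 1   # 1 = low, 3 = high suitability
for s, lv in zip(scores, levels):
    print(f"score {s:.2f} -> suitability level {lv}")
```

In the study, the resulting suitability levels are not an end in themselves: each level is fed into the physiographic inundation model to simulate the detention performance actually achieved.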
444 Geochemical Characterization for Identification of Hydrocarbon Generation: Implications for Unconventional Gas Resources
Authors: Yousif M. Makeen
Abstract:
This research addresses the processes of geochemical characterization and hydrocarbon generation occurring within hydrocarbon source and/or reservoir rocks. The geochemical characterization covers the organic-inorganic associations that influence the storage capacity of unconventional hydrocarbon resources (e.g., shale gas) and the migration of oil and gas through petroleum source/reservoir rocks. Kerogen, the precursor of petroleum, occurs in various forms and types and may be oil-prone, gas-prone, or both. China has a number of petroleum-bearing sedimentary basins commonly associated with shale gas, oil sands, and oil shale. In the Sichuan basin, the basin selected for this study, notable shale gas discoveries have been recorded, especially in the marine shale reservoirs of the area; notable discoveries of lacustrine shale in the northeastern Fuling area, however, indicate the accumulation of shale gas within non-marine source rocks. The objective of this study is to evaluate the hydrocarbon storage capacity and the generation and retention processes in the rock matrix of hydrocarbon source/reservoir rocks within the Sichuan basin, using advanced X-ray tomography 3D imaging (commonly referred to as micro-CT), SEM (scanning electron microscopy), optical microscopy, and organic geochemical facilities (e.g., vitrinite reflectance and UV light). The preliminary results show that the lacustrine shales under investigation act as both source and reservoir rocks and are characterized by very fine grains and very low permeability and porosity. Three pore types have also been characterized in the lacustrine shales using X-ray computed tomography (CT): organic matter pores, interparticle pores, and intraparticle pores. The benefits of this study would be more successful oil and gas exploration and a higher recovery factor, with a direct economic impact on China and the surrounding region. Methodologies: The SRA TOC/TPH or Rock-Eval technique was used to determine source rock richness (S1 and S2) and Tmax. TOC analysis was carried out using a multi N/C 3100 analyzer. The SRA and TOC results were used to calculate other parameters such as the hydrogen index (HI) and the production index (PI). This analysis indicates the quantity of organic matter; the minimum TOC limits generally accepted as essential for a source rock are 0.5% for shales and 0.2% for carbonates. Contributions: This research could resolve issues related to oil potential, provide targets, and serve as a pathfinder for future exploration activity in the Sichuan basin.
Keywords: shale gas, unconventional resources, organic chemistry, Sichuan basin
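The hydrogen index and production index mentioned above follow the standard Rock-Eval definitions, HI = S2/TOC × 100 and PI = S1/(S1 + S2). The short Python sketch below computes them for a made-up sample; the S1, S2, and TOC values are not from the study.

```python
def hydrogen_index(s2_mg_g, toc_pct):
    """HI = S2 / TOC x 100, in mg HC per g TOC."""
    return s2_mg_g * 100.0 / toc_pct

def production_index(s1_mg_g, s2_mg_g):
    """PI = S1 / (S1 + S2), dimensionless."""
    return s1_mg_g / (s1_mg_g + s2_mg_g)

# Illustrative pyrolysis readings for one shale sample (values assumed)
s1, s2, toc = 0.6, 4.8, 2.1      # mg HC/g rock, mg HC/g rock, wt% TOC
print(f"HI = {hydrogen_index(s2, toc):.0f} mg HC/g TOC")
print(f"PI = {production_index(s1, s2):.2f}")
```

HI is read alongside Tmax to classify kerogen type and maturity, while rising PI with depth is a common indicator that generation and expulsion have begun.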
443 Magnetic Navigation in Underwater Networks
Authors: Kumar Divyendra
Abstract:
Underwater Sensor Networks (UWSNs) have wide applications in areas such as water quality monitoring and marine wildlife management. A typical UWSN system consists of a set of sensors deployed randomly underwater that communicate with each other using acoustic links; RF communication does not work underwater, and GPS is not available underwater either. Additionally, Autonomous Underwater Vehicles (AUVs) are deployed to collect data from special nodes called Cluster Heads (CHs). These CHs aggregate data from their neighboring nodes and forward it to the AUVs over optical links when an AUV is in range; this reduces the number of hops covered by data packets and helps conserve energy. We consider a three-dimensional model of the UWSN. Nodes are initially deployed randomly underwater; they attach themselves to the surface using a rod and can only move upwards or downwards using a pump-and-bladder mechanism. We use graph theory concepts to maximize the coverage volume while every node maintains connectivity with at least one surface node. We treat the surface nodes as landmarks, and each node finds its hop distance from every surface node. We treat these hop distances as coordinates and use them for AUV navigation: an AUV intending to move closer to a node with given coordinates moves hop by hop through the nodes that are closest to it in terms of these coordinates. In the absence of GPS, many different approaches have been proposed, such as Inertial Navigation Systems (INS), Doppler Velocity Logs (DVL), and computer-vision-based navigation; these systems have their own drawbacks, as INS accumulates error over time and vision techniques require prior information about the environment. We propose a method that uses the Earth's magnetic field values for navigation and combines it with other methods that simultaneously increase the coverage volume of the UWSN. The AUVs are fitted with magnetometers that measure the magnetic intensity (I), horizontal inclination (H), and declination (D). The International Geomagnetic Reference Field (IGRF) is a mathematical model of the Earth's magnetic field that provides the field values for geographical coordinates on Earth. Researchers have developed an inverse deep learning model that takes the magnetic field values and predicts the location coordinates; we make use of this model in our work and combine it with the hop-by-hop movement described earlier, so that the AUVs move in a sequence that trains the deep learning predictor as quickly and precisely as possible. We run simulations in MATLAB to demonstrate the effectiveness of our model with respect to other methods described in the literature.
Keywords: clustering, deep learning, network backbone, parallel computing
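A minimal Python sketch of the hop-distance coordinate scheme: a BFS from each surface-node landmark assigns every node a vector of hop counts, which the AUV can then navigate by. The toy topology is an assumption for illustration.

```python
from collections import deque

def hop_coordinates(adj, landmarks):
    """BFS hop counts from each surface-node landmark; a node's coordinate
    vector is its tuple of hop distances to the landmarks."""
    coords = {v: [] for v in adj}
    for lm in landmarks:
        dist = {lm: 0}
        q = deque([lm])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        for v in adj:
            coords[v].append(dist.get(v, float("inf")))
    return coords

# Toy 6-node UWSN; nodes 0 and 1 are surface nodes (landmarks). Assumed topology.
adj = {0: [2], 1: [3], 2: [0, 3, 4], 3: [1, 2, 5], 4: [2, 5], 5: [3, 4]}
print(hop_coordinates(adj, landmarks=[0, 1]))
```

An AUV heading for a target coordinate vector simply hands off to whichever neighbor's vector is closest to the target, which is the greedy hop-by-hop motion described in the abstract.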
442 Building User Behavioral Models by Processing Web Logs and Clustering Mechanisms
Authors: Madhuka G. P. D. Udantha, Gihan V. Dias, Surangika Ranathunga
Abstract:
Today's websites contain very interesting applications, but there are only a few methodologies for analyzing user navigation through a website and determining whether the site is being put to correct use. Web logs are usually consulted only when a major attack or malfunction occurs, yet they contain a great deal of interesting information about users of the system. Analyzing web logs has become a challenge due to the huge log volume, and finding interesting patterns is not easy because of the size and distribution of the logs and the importance of minor details in each entry. Web logs contain very important data on users and the site that are not being put to good use: retrieving interesting information from logs gives an idea of what users need, groups users according to their various needs, and helps improve the site to make it effective and efficient. The model we built is able to detect attacks or malfunctioning of the system and to perform anomaly detection. Logs become more complex as the volume of traffic and the size and complexity of the website grow. Unsupervised techniques are used in this fully automated solution; expert knowledge is used only in validation. In our approach, the logs are first cleaned and purified to bring them to a common platform with a standard format and structure. After the cleaning module, the web session builder is executed; it outputs two files, a Web Sessions file and an Indexed URLs file. The Indexed URLs file contains the list of URLs accessed and their indices, and the Web Sessions file lists the indices of each web session. Then the DBSCAN and EM algorithms are applied iteratively and recursively to get the best clustering of the web sessions. Using homogeneity, completeness, V-measure, intra- and inter-cluster distance, and the silhouette coefficient as parameters, these algorithms self-evaluate to feed better parameter values into subsequent runs; if a cluster is found to be too large, micro-clustering is used. In the Cluster Signature Module, the clusters are annotated with a unique signature called a fingerprint: each cluster is fed to an Associative Rule Learning Module, and if an access sequence has confidence and support equal to 1, it is a potential signature for the cluster. The occurrences of that access sequence are then checked in the other clusters, and if it is found to be unique to the cluster considered, the cluster is annotated with the signature. These signatures are used in anomaly detection, preventing cyber attacks, real-time dashboards that visualize users accessing web pages, predicting user actions, and various other applications in finance, university websites, news and media websites, etc.
Keywords: anomaly detection, clustering, pattern recognition, web sessions
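A compact Python sketch of the session-clustering step using scikit-learn's DBSCAN, with DBSCAN's noise label doubling as a crude anomaly flag and the silhouette coefficient standing in for the self-evaluation metrics listed above. The session feature vectors are synthetic stand-ins for the Web Sessions/Indexed URLs output.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score

# Toy feature matrix: one row per web session, e.g. bag-of-URL-index counts.
# Real sessions come from the session-builder step; these vectors are made up.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (40, 5)) + v
               for v in ([0, 0, 2, 0, 1], [2, 1, 0, 0, 0], [0, 3, 0, 2, 0])])

db = DBSCAN(eps=0.9, min_samples=5).fit(X)
labels = db.labels_                      # -1 marks noise / potential anomalies
mask = labels != -1
print("clusters:", set(labels[mask]), "noise points:", int((~mask).sum()))
if len(set(labels[mask])) > 1:
    print("silhouette:", round(silhouette_score(X[mask], labels[mask]), 3))
```

In the full pipeline the silhouette and the other metrics are fed back to re-run DBSCAN and EM with better eps/min_samples or component counts, which is the iterative self-evaluation the abstract describes.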
441 Metagenomic Assessment of the Effects of Genetically Modified Crops on Microbial Ecology and Physicochemical Properties of Soil
Authors: Falana Yetunde Olaitan, Ijah U. J. J, Solebo Shakirat O.
Abstract:
Genetically modified (GM) crops are already phenomenally successful and are grown worldwide in more than eighteen countries on more than 67 million hectares. In October 2018, Nigeria approved Bacillus thuringiensis (Bt) cotton and maize, hence the need to carry out environmental risk assessment studies. A total of 15 four-liter octagonal ceramic pots were each filled with 4 kg of soil and placed on the bench in the screen house in three rows of five pots: the first row was used to plant GM cotton seeds, the second row was used for non-GM cotton seeds, and the third row served as control. Soil samples for metagenomic DNA extraction were collected at random at monthly intervals after planting, at a distance of 2 mm from the plant roots and at a depth of 10 cm, using a sterile spatula. Soil samples for physicochemical analysis were collected before planting and after harvesting the GM and non-GM crops, as well as from the control soil. The DNA was extracted, quantified, and sequenced. Sample 1A (DNA from GM cotton soil at the 1st interval) gave the lowest sequence read with 0.853 M, while sample 2B (DNA from GM cotton soil at the 2nd interval) gave the highest with 5.785 M; the others gave between 1.8 M and 4.7 M. The sample treatments were grouped into four: Group 1 (GM cotton soil from intervals 1 to 3) had between 800,000 and 5,700,000 strains of microbes (SOM); Group 2 (non-GM cotton soil from intervals 1 to 3) had between 1,400,600 and 4,200,000 SOM; Group 3 (control soil) had between 900,000 and 3,600,000 SOM; and Group 4 (initial soil) had between 3,700,000 and 4,000,000 SOM. The microbes observed were predominantly bacteria (including archaea), fungi, and dark matter, alongside protists and phages. The predominant bacterial groups were the Terrabacteria (Bacillus funiculus, Bacillus sp.), the Proteobacteria (Microvirga massiliensis, Sphingomonas sp.), and the Archaea (Nitrososphaera sp.), while the fungi were Aspergillus fischeri and Fusarium falciforme. The comparative analysis between groups was done using Jaccard PERMANOVA beta-diversity analysis at a P-value of not more than 0.76, and no significant pair was found. The pH of the initial, GM cotton, non-GM cotton, and control soils was 6.28, 6.26, 7.25, and 8.26, the percentage moisture was 0.63, 0.78, 0.89, and 0.82, and the percentage nitrogen was 17.79, 1.14, 1.10, and 0.56, respectively. Other parameters recorded for the four treatments include varying concentrations of potassium (0.46, 1,284.47, 1,785.48, and 1,252.83 mg/kg) and phosphorus (18.76, 17.76, 16.87, and 15.23 mg/kg). The soil consisted mainly of silt (32.09 to 34.66%) and clay (58.89 to 60.23%), reflecting a silty-clay soil texture. The results were then tested with ANOVA at a P-value of less than 0.05, and no pair was found to be significant either. The results suggest that the GM crops have no significant effect on the microbial ecology and physicochemical properties of the soil and, in turn, no direct or indirect effects on human health.
Keywords: genetically modified crop, microbial ecology, physicochemical properties, metagenomics, DNA, soil
440 Digitalization, Economic Growth and Financial Sector Development in Africa
Authors: Abdul Ganiyu Iddrisu
Abstract:
Digitization is the process of transforming analog material into digital form, especially for storage and use in a computer. The significant development of information and communication technology (ICT) over the past years has encouraged many researchers to investigate its contribution to promoting economic growth and reducing poverty. Yet compelling empirical evidence on the effects of digitization on economic growth remains weak, particularly in Africa, because the extant studies that explicitly evaluate the digitization-growth nexus are mostly reports and desk reviews. This points to an empirical knowledge gap in the literature. Hypothetically, digitization influences financial sector development, which in turn influences economic growth. Digitization has changed the financial sector and its operating environment: obstacles to access to financing, for instance, physical distance, minimum balance requirements, and low income flows, among others, can be circumvented. Savings have increased, micro-savers have opened bank accounts, and banks are now able to price short-term loans. This has the potential to develop the financial sector; however, empirical evidence on the digitization-financial development nexus is scarce. On the other hand, a number of studies maintain that financial sector development greatly influences the growth of economies. We therefore argue that financial sector development is one of the transmission mechanisms through which digitization affects economic growth. Employing macro country-level data from African countries and using fixed effects, random effects, and Hausman-Taylor estimation approaches, this paper contributes to the literature by analysing economic growth in Africa with a focus on the roles of digitization and financial sector development. First, we assess how digitization influences financial sector development in Africa. From an economic policy perspective, it is important to identify the digitization determinants of financial sector development so that action can be taken to reduce the economic shocks associated with financial sector distortions; this nexus is rarely examined empirically in the literature. Second, we examine the effect of financial sector development, proxied by domestic credit to the private sector and stock market capitalization as a percentage of GDP, on economic growth. Digitization is represented by the volume of digital/ICT equipment imported, and GDP growth is used to proxy economic growth. Finally, we examine the effect of digitization on economic growth in the light of financial sector development. The following key results were found. First, digitalization propels financial sector development in Africa. Second, financial sector development enhances economic growth. Finally, contrary to our expectation, the results also indicate that digitalization conditioned on financial sector development tends to reduce economic growth in Africa. However, the net-effect results suggest that digitalization, overall, improves economic growth in Africa. We therefore conclude that digitalization in Africa not only develops the financial sector but also unconditionally contributes to the growth of the continent's economies.
Keywords: digitalization, economic growth, financial sector development, Africa
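A bare-bones Python sketch of the fixed-effects ("within") estimator used as one of the panel approaches above: country means are removed before OLS. The panel here is simulated, and the variable names only mirror the paper's setup.

```python
import numpy as np

def within_transform(x, groups):
    """Demean by group (country): the fixed-effects 'within' transformation."""
    out = x.astype(float).copy()
    for g in np.unique(groups):
        out[groups == g] -= out[groups == g].mean(axis=0)
    return out

# Toy panel: N countries x T years; columns = [digitalization, fin. development]
rng = np.random.default_rng(3)
N, T = 20, 10
country = np.repeat(np.arange(N), T)
X = rng.normal(size=(N * T, 2))
y = (0.5 * X[:, 0] + 0.8 * X[:, 1]            # true slopes to recover
     + np.repeat(rng.normal(size=N), T)        # country fixed effects
     + rng.normal(scale=0.5, size=N * T))      # idiosyncratic noise

Xw, yw = within_transform(X, country), within_transform(y, country)
beta = np.linalg.lstsq(Xw, yw, rcond=None)[0]
print("within-estimator coefficients:", np.round(beta, 3))
```

The demeaning step sweeps out time-invariant country heterogeneity, which is exactly what distinguishes the fixed-effects results from the random-effects and Hausman-Taylor estimates the paper also reports.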
439 Architecture for the Hearing Impaired: A Study on Conducive Learning Environments for Deaf Children with Reference to Sri Lanka
Authors: Champa Gunawardana, Anishka Hettiarachchi
Abstract:
Conducive architecture for learning environments is an area of interest for many scholars around the world. Loss of the sense of hearing leads to the assumption that deaf students are visual learners. Comprehending the favorable non-auditory attributes of architecture can lead to effective, rich, and friendly learning environments for the hearing impaired. The objective of the current qualitative investigation is to explore the nature and parameters of the sense of place of deaf children in support of optimal learning. The investigation was conducted with hearing-impaired children (age: 8 to 19; gender: 15 male and 15 female) of the Yashodhara deaf and blind school at Balangoda, Sri Lanka. A sensory ethnography study was adopted to identify the nature of perception and the parameters of the most and least preferred spaces in the learning environment. The common perceptions behind the most preferred places were calm and quiet, a sense of freedom, volumes characterized by openness and spaciousness, a sense of safety, wide spaces, privacy and belongingness, being less crowded and undisturbed, the availability of natural light and ventilation, a sense of comfort, and views of green in the surroundings. On the other hand, the least preferred spaces were perceived as dark, gloomy, warm, and crowded, lacking freedom, having bad smells, feeling unsafe, and having glare. The perception of space by the deaf, ranked by the sensory modalities involved, was: light and color perception (34%), sight and visual perception (32%), touch and haptic perception (26%), smell and olfactory perception (7%), and sound and auditory perception (1%). A sense of freedom (32%) and a sense of comfort (23%) were the predominant psychological parameters of an optimal sense of place as perceived by the hearing impaired; privacy (16%), rhythm (14%), belonging (9%), and safety (6%) were secondary factors. Open and wide flowing spaces without visual barriers; transparent doors and windows or open portholes to ease communication; comfortable volumes; naturally ventilated spaces; natural lighting or diffused artificial lighting without glare; sloping walkways; wide stairways; and walkways and corridors with ample distance for signing were identified as positive characteristics of the learning environment investigated.
Keywords: deaf, visual learning environment, perception, sensory ethnography
438 Risks and Values in Adult Safeguarding: An Examination of How Social Workers Screen Safeguarding Referrals from Residential Homes
Authors: Jeremy Dixon
Abstract:
Safeguarding adults forms a core part of social work practice. The government in England and Wales has made efforts to standardise practice through the Care Act 2014. The Act states that local authorities have a duty to make inquiries in cases where an adult with care or support needs is experiencing, or is at risk of, abuse or neglect and is unable to protect themselves. Despite the importance given to safeguarding adults within the law, there remains little research on how social workers make such decisions on the ground. This presentation reports findings from a pilot research study conducted within two social work teams in a local authority in England. The objective of the project was to find out how social workers interpret the safeguarding duties laid out by the Care Act 2014, with a particular focus on how workers assess and manage risk. Ethnographic research methods were used throughout the project. This paper focusses specifically on decisions made by workers in the assessment team, reporting on qualitative observation of, and interviews with, five workers within the team. Drawing on governmentality theory, the paper analyses the techniques used by workers to manage risk from a distance. A high proportion of safeguarding referrals came from care workers or managers in residential care homes. Social workers conducting safeguarding assessments were aware of their duty to work in partnership with these agencies; however, their duty to safeguard adults also meant that they needed to view them as potential abusers. In judging when it was proportionate to refer a case for a safeguarding assessment, workers drew on a number of common beliefs about residential care workers, which were then tested in conversations with them. Social workers believed that residential homes act defensively, leading them to report any accident or danger; they therefore encouraged residential workers to consider whether the statutory criteria had been met and to use their own procedures to manage risk. In addition, social workers assessed the workers' motives, specifically whether they were using safeguarding procedures as a shortcut to avoid other assessments or as a means of accessing extra resources. Where potential abuse was identified, social workers encouraged residential homes to use disciplinary policies as a means of isolating and managing risk. The study has implications for understanding risk within social work practice: it shows that while social workers use law to govern individuals, these laws are interpreted against cultural values, and that workers also draw on assumptions about the culture of others.
Keywords: adult safeguarding, governmentality, risk, risk assessment
Procedia PDF Downloads 288
437 Ultra-Tightly Coupled GNSS/INS Based on High Degree Cubature Kalman Filtering
Authors: Hamza Benzerrouk, Alexander Nebylov
Abstract:
In classical GNSS/INS integration designs, the loosely coupled approach uses the GNSS-derived position and velocity as the measurement vector. This design is suboptimal from the standpoint of coping with GNSS outliers and outages. The tightly coupled GPS/INS navigation filter mixes the GNSS pseudo-range and inertial measurements and obtains the vehicle navigation state as the final navigation solution. The ultra-tightly coupled GNSS/INS design combines the I (in-phase) and Q (quadrature) accumulator outputs of the GNSS receiver signal tracking loops and the INS navigation filter function into a single Kalman filter variant (EKF, UKF, SPKF, CKF or HCKF). The EKF and UKF are the most used nonlinear filters in the literature and are well adapted to inertial navigation state estimation when integrated with GNSS signal outputs. In this paper, it is proposed to move a step forward with more accurate filters and modern approaches, namely cubature and high-degree cubature Kalman filtering methods. On the basis of previous results for state estimation based on INS/GNSS integration, the Cubature Kalman Filter (CKF) and the High-Degree Cubature Kalman Filter (HCKF) are the references for the recently developed generalized cubature rule based Kalman Filter (GCKF). High-degree cubature rules are the kernel of the new solution, providing more accurate estimation with less computational complexity than Gauss-Hermite quadrature; the Gauss-Hermite Kalman Filter (GHKF) is not selected in this work because of its limited real-time applicability in high-dimensional state spaces. In the ultra-tightly (deeply) coupled GNSS/INS system, an EKF with transition matrix factorization is used together with GNSS block processing, which is described in the paper; the intermediate frequency (IF) signal is assumed available, with correlator samples taken at a rate of 500 Hz in the presented approach. GNSS (GPS+GLONASS) measurements are assumed available, and the modern SPKF and Cubature Kalman Filter (CKF) are compared with new high-order CKF versions based on spherical-radial cubature rules developed at the fifth degree in this work. The estimation accuracy of the high-degree CKF is expected to be comparable to that of the GHKF; the resulting state estimates are observed and discussed for different initialization parameters. Results show more accurate navigation state estimation and a more robust GNSS receiver when the ultra-tightly coupled approach based on the high-degree cubature Kalman filter is applied.
Keywords: GNSS, INS, Kalman filtering, ultra tight integration
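As an illustration of the cubature approach discussed in this abstract, the prediction step of the third-degree spherical-radial CKF can be sketched in a few lines; the fifth-degree (high-degree) rule adds further points and unequal weights. This is a minimal sketch with generic toy dynamics, not the authors' implementation:

```python
import numpy as np

def cubature_points(mean, cov):
    """Third-degree spherical-radial rule: 2n points with equal weights 1/(2n)."""
    n = mean.size
    S = np.linalg.cholesky(cov)                           # matrix square root
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # unit directions
    return mean[:, None] + S @ xi                         # shape (n, 2n)

def ckf_predict(mean, cov, f, Q):
    """Time update: propagate the cubature points through the dynamics f."""
    pts = cubature_points(mean, cov)
    prop = np.column_stack([f(pts[:, i]) for i in range(pts.shape[1])])
    m_pred = prop.mean(axis=1)                  # equal-weight sample mean
    diff = prop - m_pred[:, None]
    P_pred = diff @ diff.T / pts.shape[1] + Q   # sample covariance + process noise
    return m_pred, P_pred

# Toy example: a mildly nonlinear 2-state model (illustrative only).
f = lambda x: np.array([x[0] + 0.1 * x[1], x[1] + 0.01 * np.sin(x[0])])
m, P = ckf_predict(np.zeros(2), np.eye(2), f, 0.01 * np.eye(2))
```

The measurement update follows the same pattern, propagating the points through the observation model instead of the dynamics.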
Procedia PDF Downloads 280
436 Estimation of Rock Strength from Diamond Drilling
Authors: Hing Hao Chan, Thomas Richard, Masood Mostofi
Abstract:
The mining industry relies on an estimate of rock strength at several stages of a mine's life cycle: mining (excavating, blasting, tunnelling) and processing (crushing and grinding), both very energy-intensive activities. An effective comminution design that can yield significant dividends often requires a reliable estimate of the material's rock strength. Common laboratory tests, such as the rod mill, ball mill, and uniaxial compressive strength tests, share shortcomings in terms of time, sample preparation, bias in plug selection, cost, repeatability, and the amount of sample needed to ensure reliable estimates. In this paper, the authors present a methodology to derive an estimate of the rock strength from drilling data recorded while coring with a diamond core head. The work presented in this paper builds on a phenomenological model of the bit-rock interface proposed by Franca et al. (2015) and is inspired by the now well-established use of the scratch test with a PDC (polycrystalline diamond compact) cutter to derive the rock uniaxial compressive strength. The first part of the paper introduces the phenomenological model of the bit-rock interface for a diamond core head, which relates the forces acting on the drill bit (torque, axial thrust) to the bit kinematic variables (rate of penetration and angular velocity), and introduces the intrinsic specific energy, i.e. the energy required to drill a unit volume of rock with an ideally sharp drilling tool (meaning ideally sharp diamonds and no contact between the bit matrix and rock debris), which is found to correlate well with the rock uniaxial compressive strength for PDC and roller cone bits. The second part describes the laboratory drill rig, the experimental procedure tailored to minimize the effect of diamond polishing over the duration of the experiments, and the step-by-step methodology to derive the intrinsic specific energy from the recorded data. The third section presents the results and shows that the intrinsic specific energy correlates well with the uniaxial compressive strength for the 11 tested rock materials (7 sedimentary and 4 igneous rocks). The last section discusses best drilling practices and a method to estimate the rock strength from field drilling data, considering the compliance of the drill string and frictional losses along the borehole. The approach is illustrated with a case study from drilling data recorded while drilling an exploration well in Australia.
Keywords: bit-rock interaction, drilling experiment, impregnated diamond drilling, uniaxial compressive strength
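A minimal sketch of the kind of reduction this abstract describes, under common definitions for drilling-response analysis: per log record, the depth of cut per revolution is d = 2*pi*v/omega, the torque-based specific energy is E = 2*pi*T/(A*d), and the thrust-based drilling strength is S = W/(A*d), with A the cutting face area of the core head; the intercept of the friction line in the E-S diagram then estimates the intrinsic specific energy. The synthetic E-S pairs and the simple linear fit below are illustrative assumptions, not the authors' data or exact procedure:

```python
import numpy as np

# Synthetic E-S pairs lying on a friction line E ~ E0 + mu*gamma*S, plus noise.
rng = np.random.default_rng(1)
S = rng.uniform(100e6, 500e6, 200)                # drilling strength (Pa)
E = 120e6 + 0.7 * S + rng.normal(0, 10e6, 200)    # specific energy (Pa)

slope, E0_hat = np.polyfit(S, E, 1)               # fit the friction line
print(f"estimated intrinsic specific energy ~ {E0_hat / 1e6:.0f} MPa")  # ~120 MPa
```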
Procedia PDF Downloads 137
435 Mechanical and Material Characterization on the High Nitrogen Supersaturated Tool Steels for Die-Technology
Authors: Tatsuhiko Aizawa, Hiroshi Morita
Abstract:
Tool steels such as SKD11 and SKH51 have been utilized as punch and die substrates for cold stamping, forging, and fine blanking processes. Heat-treated SKD11 punches with a hardness of 700 HV worked well in the stamping of SPCC normal steel plates and non-ferrous alloys such as brass sheet. However, they suffered severe damage in the fine blanking of holes smaller than 1.5 mm in diameter. Under a high aspect ratio of punch length to diameter, elastoplastic buckling of slender punches occurred on the production line, and the heat-treated punches ran the risk of chipping at their edges. To be free from such damage, the blanking punch must have sufficient rigidity and strength at the same time. In the present paper, a small-hole blanking punch with a dual-toughness structure is proposed to provide a solution to this engineering issue in production. A low-temperature plasma nitriding process was utilized to form a thick nitrogen-supersaturated layer on the original SKD11 punch. Through plasma nitriding at 673 K for 14.4 ks, a nitrogen-supersaturated layer 50 μm thick and free of nitride precipitates was formed as a high nitrogen steel (HNS) layer surrounding the original SKD11 punch. In this two-zone structured SKD11 punch, the surface hardness increased from 700 HV for the heat-treated SKD11 to 1400 HV. The outer high-nitrogen SKD11 (HN-SKD11) layer had a homogeneous nitrogen solute depth profile, with a nitrogen solute content plateau of 4 mass% up to the border between the outer HN-SKD11 layer and the original SKD11 matrix. When stamping 1 mm thick brass sheet with this dually toughened SKD11 punch, the punch life was extended from 500k shots to 10,000k shots, attaining a much more stable production line for brass American snaps. Furthermore, with the aid of a masking technique, the 50 μm thick punch side surface layer was modified by the same high nitrogen supersaturation process into a stripe structure in which un-nitrided SKD11 and HN-SKD11 layers alternated from the punch head to the punch bottom. This flexible structuring promoted the overall rigidity and toughness of the punch despite its extremely small diameter.
Keywords: high nitrogen supersaturation, semi-dry cold stamping, solid solution hardening, tool steel dies, low temperature nitriding, dual toughness structure, extremely small diameter punch
Procedia PDF Downloads 88
434 The Scientific Study of the Relationship Between Physicochemical and Microstructural Properties of Ultrafiltered Cheese: Protein Modification and Membrane Separation
Authors: Shahram Naghizadeh Raeisi, Ali Alghooneh
Abstract:
The loss of curd cohesiveness and syneresis are two common problems in the ultrafiltered cheese industry. In this study, using membrane technology and protein modification, a modified cheese was developed and its properties were compared with those of a control sample. In order to decrease the lactose content and adjust the protein, acidity, dry matter and milk minerals, a combination of ultrafiltration, nanofiltration and reverse osmosis technologies was employed. For protein modification, a two-stage chemical and enzymatic reaction was employed before and after ultrafiltration. The physicochemical and microstructural properties of the modified ultrafiltered cheese were compared with those of the control. Results showed that the modified protein significantly enhanced the functional properties of the final cheese (p-value < 0.05), even though the protein content was 50% lower than that of the control. The modified cheese showed 21 ± 0.70, 18 ± 1.10 and 25 ± 1.65% higher hardness, cohesiveness and water-holding capacity values, respectively, than the control sample. This behavior could be explained by the developed microstructure of the gel network. Furthermore, chemical-enzymatic modification of the milk protein induced a significant change in the network parameters of the final cheese: the indices of network linkage strength, network linkage density, and time scale of junctions were 10.34 ± 0.52, 68.50 ± 2.10 and 82.21 ± 3.85% higher than in the control sample, whereas the distance between adjacent linkages was 16.77 ± 1.10% lower. These results were supported by the textural analysis. A non-linear viscoelastic study showed a triangular stress waveform for the cheese containing the modified protein, while the control sample showed a rectangular stress waveform, suggesting better sliceability of the modified cheese. Moreover, to study the shelf life of the products, the acidity as well as the mold and yeast populations were determined over 120 days. It is worth mentioning that the lactose content of the modified cheese was adjusted to 2.5% before fermentation, while that of the control was 4.5%. The control sample showed a shelf life of 8 weeks, while the shelf life of the modified cheese was 18 weeks in the refrigerator. Over 18 weeks, the acidity of the modified and control samples increased from 82 ± 1.50 to 94 ± 2.20 °D and from 88 ± 1.64 to 194 ± 5.10 °D, respectively. The mold and yeast populations over time followed the semicircular shape model (R² = 0.92, R²adj = 0.89, RMSE = 1.25). Furthermore, the mold and yeast counts and their growth rate in the modified cheese were lower than those of the control; this result could be explained by the shortage of an energy source for the microorganisms in the modified cheese. The lactose content of the modified sample was less than 0.2 ± 0.05% at the end of fermentation, while it was 3.7 ± 0.68% in the control sample.
Keywords: non-linear viscoelastic, protein modification, semicircular shape model, ultrafiltered cheese
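The abstract does not give the equation of the semicircular shape model, so the functional form below is purely an illustrative assumption: a semicircle-like curve peaking mid-shelf-life, fitted by non-linear least squares to synthetic weekly counts:

```python
import numpy as np
from scipy.optimize import curve_fit

def semicircular(t, n_max, t_peak, radius):
    # Assumed semicircle-like curve: peaks at t_peak, zero beyond t_peak +/- radius.
    arg = 1.0 - ((t - t_peak) / radius) ** 2
    return n_max * np.sqrt(np.clip(arg, 0.0, None))

# Synthetic weekly counts over an 18-week shelf life (log10 CFU/g, illustrative).
rng = np.random.default_rng(0)
weeks = np.arange(1, 19, dtype=float)
counts = semicircular(weeks, 5.0, 9.5, 10.0) + rng.normal(0, 0.1, weeks.size)

popt, _ = curve_fit(semicircular, weeks, counts, p0=[5.0, 9.0, 10.0])
rmse = np.sqrt(np.mean((counts - semicircular(weeks, *popt)) ** 2))
print(popt, rmse)
```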
Procedia PDF Downloads 74
433 Impact of Autoclave Sterilization of Gelatin on Endotoxin Level and Physical Properties Compared to Surfactant Purified Gelatins
Authors: Jos Olijve
Abstract:
Introduction and Purpose: Endotoxins are found in the outer membrane of gram-negative bacteria and elicit profound in vitro and in vivo responses. They can trigger strong immune responses and negatively affect various cellular activities, particularly in cells expressing toll-like receptors. They are therefore unwanted contaminants of biomaterials sourced from natural raw materials, and their activity must be as low as possible. Collagen and gelatin are natural extracellular matrix components and, owing to their low allergenic potential, suitable biological properties, and tunable physical characteristics, have high potential in biomedical applications. The purpose of this study was to determine the influence of autoclave sterilization of gelatin on physical properties and endotoxin level compared to surfactant-purified gelatin. Methods: Type A gelatin from Sigma-Aldrich (G1890) with an endotoxin level of 35,000 endotoxin units (EU) per gram of gelatin and type A gelatins from Rousselot Gent with an endotoxin activity of 30,000 EU per gram were used. A 10 w/w% G1890 gelatin solution was autoclave sterilized for 30 minutes at 121 °C and 1 bar overpressure. The physical properties and the endotoxin level of the sterilized G1890 gelatin were compared to those of a type A gelatin from Rousselot purified with Triton X100 surfactant. The Triton X100 was added to a concentration of 0.5 w/w%, which is above the critical micelle concentration. The gelatin-surfactant mixtures were kept for 30-45 minutes under constant stirring at 55-60 °C, and the Triton X100 was then removed by active carbon filtration. The endotoxin levels of the gelatins were measured using the Endozyme recombinant factor C method from Hyglos GmbH (Germany). Results and Discussion: Autoclave sterilization significantly affected the physical properties of the gelatin: the molecular weight of G1890 decreased from 140 to 50 kDa, and the gel strength decreased from 300 to 40 g. The endotoxin level of the gelatin was reduced after sterilization from 35,000 EU/g to 400-500 EU/g. These levels are, however, still far above the upper limit of 0.05 EU/mL, corresponding to 5 EU/g gelatin for a 1% gelatin solution, required to avoid altered cell proliferation. The molecular weight and gel strength of the Rousselot gelatin were not altered by the Triton X100 purification and remained 150 kDa and 300 g, respectively, while its endotoxin level was < 5 EU/g gelatin. Conclusion: Autoclave sterilization of gelatin, in comparison to Triton X100 purification, is not efficient at inactivating endotoxins in gelatin to levels below the upper limit for avoiding altered cell proliferation. It also caused a significant decrease in molecular weight and gel strength, which makes autoclave-sterilized gelatin, in contrast to Triton X100 purified gelatin, unsuitable for 3D printing.
Keywords: endotoxin, gelatin, molecular weight, sterilization, Triton X100
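As a quick check of the threshold conversion quoted above (a 1% gelatin solution contains 0.01 g of gelatin per mL):

```python
limit_eu_per_ml = 0.05    # upper endotoxin limit for the test solution (EU/mL)
gelatin_g_per_ml = 0.01   # a 1 % gelatin solution: 0.01 g gelatin per mL
print(limit_eu_per_ml / gelatin_g_per_ml)  # 5.0 EU per gram of gelatin
```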
Procedia PDF Downloads 233
432 Influence of Controlled Retting on the Quality of the Hemp Fibres Harvested at the Seed Maturity by Using a Designed Lab-Scale Pilot Unit
Authors: Brahim Mazian, Anne Bergeret, Jean-Charles Benezet, Sandrine Bayle, Luc Malhautier
Abstract:
Hemp fibers are increasingly used as reinforcements in polymer matrix composites due to their competitive performance (low density, good mechanical properties and biodegradability) compared to conventional fibres such as glass fibers. However, the huge variation in their biochemical, physical and mechanical properties limits the use of these natural fibres in structural applications where high consistency and homogeneity are required. In the hemp industry, a traditional process termed field retting is commonly used to facilitate the extraction and separation of the stem fibers. This retting treatment consists of spreading the stems out on the ground for a duration ranging from a few days to several weeks. Microorganisms (fungi and bacteria) grow on the stem surface and produce enzymes that degrade the pectinolytic substances in the middle lamellae surrounding the fibers. This operation depends on the weather conditions and is currently carried out very empirically in the fields, so that a large variability in hemp fiber quality (mechanical properties, color, morphology, chemical composition, etc.) results. Nonetheless, if controlled, retting might be favorable to good fiber properties and hence to those of hemp fiber reinforced composites. Therefore, the present study aims to investigate the influence of controlled retting within a designed environmental chamber (lab-scale pilot unit) on the quality of hemp fibres harvested at the seed maturity growth stage. Various assessments were applied directly to the fibers: color observations, morphological (optical microscopy), surface (ESEM) and biochemical (gravimetry) analyses, spectrocolorimetric measurements (pectin content), thermogravimetric analysis (TGA) and tensile testing. The results reveal that controlled retting leads to a rapid change of color from yellow to dark grey due to the development of microbial communities (fungi and bacteria) on the stem surface. An increase in the thermal stability of the fibres, due to the removal of non-cellulosic components along retting, is also observed. A separation of bast fibers into elementary fibers occurred, with an evolution of chemical composition (degradation of pectins) and a rapid decrease in tensile properties (380 MPa to 170 MPa after 3 weeks) due to the accelerated retting process. The influence of controlled retting on the properties of the biocomposite material (PP/hemp fibers) is under investigation.
Keywords: controlled retting, hemp fibre, mechanical properties, thermal stability
Procedia PDF Downloads 155
431 A 1T1R Nonvolatile Memory with Al/TiO₂/Au and Sol-Gel Processed Barium Zirconate Nickelate Gate in Pentacene Thin Film Transistor
Authors: Ke-Jing Lee, Cheng-Jung Lee, Yu-Chi Chang, Li-Wen Wang, Yeong-Her Wang
Abstract:
To avoid the cross-talk issue of a resistive random access memory (RRAM)-only cell, a one-transistor one-resistor (1T1R) architecture with a TiO₂-based RRAM cell connected to a solution-processed barium zirconate nickelate (BZN) organic thin film transistor (OTFT) device is successfully demonstrated. The OTFT was fabricated on a glass substrate. Aluminum (Al) as the gate electrode was deposited via a radio-frequency (RF) magnetron sputtering system. The BZN precursor solution was synthesized from barium acetate, zirconium n-propoxide, and nickel(II) acetylacetonate using the sol-gel method. After the BZN solution was completely prepared, it was spin-coated onto the Al/glass substrate as the gate dielectric. The BZN layer was baked at 100 °C for 10 minutes under ambient air conditions. The pentacene thin film was thermally evaporated onto the BZN layer at a deposition rate of 0.08 to 0.15 nm/s. Finally, a gold (Au) electrode was deposited using the RF magnetron sputtering system and defined through shadow masks as both the source and drain. The channel length and width of the transistors were 150 and 1500 μm, respectively. As for the fabrication of the 1T1R configuration, the RRAM device was built directly on the drain electrode of the TFT device. A simple metal/insulator/metal structure consisting of an Al/TiO₂/Au stack was fabricated: first, Au was deposited as the bottom electrode of the RRAM device by the RF magnetron sputtering system; then, the TiO₂ layer was deposited on the Au electrode by sputtering; finally, Al was deposited as the top electrode. The electrical performance of the BZN OTFT was studied, showing superior transfer characteristics with a low threshold voltage of −1.1 V, a good saturation mobility of 5 cm²/V·s, and a low subthreshold swing of 400 mV/decade. The integration of the BZN OTFT and TiO₂ RRAM devices was finally completed to form the 1T1R configuration, with a low power consumption of 1.3 μW, a low operating current of 0.5 μA, and reliable data retention. Based on the I-V characteristics, the different polarities of bipolar switching are found to be determined by the compliance current through the different distributions of internal oxygen vacancies in the RRAM and 1T1R devices; this phenomenon is well explained by the proposed mechanism model. These results make the 1T1R promising for practical applications in low-power active-matrix flat-panel displays.
Keywords: one transistor and one resistor (1T1R), organic thin-film transistor (OTFT), resistive random access memory (RRAM), sol-gel
Procedia PDF Downloads 354
430 Computational Analysis of Thermal Degradation in Wind Turbine Spars' Equipotential Bonding Subjected to Lightning Strikes
Authors: Antonio A. M. Laudani, Igor O. Golosnoy, Ole T. Thomsen
Abstract:
Rotor blades of large, modern wind turbines are highly susceptible to downward lightning strikes, as well as to triggering upward lightning; consequently, it is necessary to equip them with an effective lightning protection system (LPS) in order to avoid any damage. The performance of existing LPSs is affected by carbon fibre reinforced polymer (CFRP) structures, which lead to lightning-induced damage in the blades, e.g. via electrical sparks. A solution to prevent internal arcing would be to electrically bond the LPS and the composite structures so that they are at the same electric potential. Nevertheless, elevated temperatures arise at the joint interfaces because of high contact resistance, which melts and vaporises some of the epoxy resin matrix around the bonding. The high-pressure gases produced open up the bonding and can ignite thermal sparks. The objective of this paper is to predict the current density distribution and the temperature field in the adhesive joint cross-section, in order to check whether the resin pyrolysis temperature is reached and any damage is to be expected. The finite element method has been employed to solve both the current and heat transfer problems, which are considered weakly coupled. The mathematical model for the electric current includes the Maxwell-Ampère equation for the induced electric field, solved together with current conservation, while the thermal field is found from the heat diffusion equation. In this way, the current sub-model calculates the Joule heat release for a chosen bonding configuration, whereas the thermal analysis determines threshold values of voltage and current density that must not be exceeded if the temperature across the joint is to remain below the pyrolysis temperature, thereby preventing the occurrence of outgassing. In addition, it provides an indication of the minimal number of bonding points. It is worth mentioning that the numerical procedures presented in this study can be tailored and applied to joints other than adhesive ones for wind turbine blades; for instance, they can be applied to the lightning protection of aerospace bolted joints, and they can even be customized to predict the electromagnetic response of other wind turbine systems, such as nacelle and hub components, under lightning strikes.
Keywords: carbon fibre reinforced polymer, equipotential bonding, finite element method, FEM, lightning protection system, LPS, wind turbine blades
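A minimal 1D sketch of the weak electro-thermal coupling described above: a uniform current density crossing a resistive joint layer releases Joule heat q = J²/σ, which feeds a steady-state heat diffusion equation solved by finite differences. Geometry, material values and the uniform current density are illustrative assumptions, not the paper's FEM setup:

```python
import numpy as np

L = 2e-3                      # joint thickness (m), illustrative
n = 201
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]

sigma = 5e2                   # effective joint conductivity (S/m), assumed
k = 1.0                       # thermal conductivity (W/m/K), assumed
J = 2e5                       # current density (A/m^2), assumed
q = J**2 / sigma              # volumetric Joule heating (W/m^3)

# Assemble -k T'' = q with Dirichlet boundaries at ambient temperature.
A = np.zeros((n, n))
b = np.full(n, q * dx**2 / k)
A[0, 0] = A[-1, -1] = 1.0
b[0] = b[-1] = 20.0           # ambient temperature (degC)
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = -1.0, 2.0, -1.0

T = np.linalg.solve(A, b)
print(f"peak joint temperature ~ {T.max():.0f} degC")
# Comparing T.max() against the resin pyrolysis temperature gives the kind of
# threshold on J (or voltage) that the paper's thermal analysis determines.
```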
Procedia PDF Downloads 164
429 Digitization and Economic Growth in Africa: The Role of Financial Sector Development
Authors: Abdul Ganiyu Iddrisu, Bei Chen
Abstract:
Digitization is the process of transforming analog material into digital form, especially for storage and use in a computer. The significant development of information and communication technology (ICT) over the past years has encouraged many researchers to investigate its contribution to promoting economic growth and reducing poverty. Yet compelling empirical evidence on the effects of digitization on economic growth remains weak, particularly in Africa, because extant studies that explicitly evaluate the digitization-economic growth nexus are mostly reports and desk reviews. This points to an empirical knowledge gap in the literature. Hypothetically, digitization influences financial sector development, which in turn influences economic growth. Digitization has changed the financial sector and its operating environment: obstacles to accessing finance, for instance physical distance, minimum balance requirements, and low income flows, can be circumvented; savings have increased; micro-savers have opened bank accounts; and banks are now able to price short-term loans. This has the potential to develop the financial sector. However, empirical evidence on the digitization-financial development nexus is scarce. On the other hand, a number of studies maintain that financial sector development greatly influences the growth of economies. We therefore argue that financial sector development is one of the transmission mechanisms through which digitization affects economic growth. Employing macro country-level data from African countries and using fixed effects, random effects and Hausman-Taylor estimation approaches, this paper contributes to the literature by analysing economic growth in Africa, focusing on the role of digitization and financial sector development. First, we assess how digitization influences financial sector development in Africa. From an economic policy perspective, it is important to identify the digitization determinants of financial sector development so that action can be taken to reduce the economic shocks associated with financial sector distortions; this nexus is rarely examined empirically in the literature. Secondly, we examine the effect on economic growth of financial sector development, proxied by domestic credit to the private sector and by stock market capitalization as a percentage of GDP. Digitization is represented by the volume of digital/ICT equipment imported, and GDP growth is used to proxy economic growth. Finally, we examine the effect of digitization on economic growth in the light of financial sector development. The following key results were found. First, digitalization propels financial sector development in Africa. Second, financial sector development enhances economic growth. Finally, contrary to our expectation, the results also indicate that digitalization conditioned on financial sector development tends to reduce economic growth in Africa; however, the net effects suggest that digitalization, overall, improves economic growth. We therefore conclude that digitalization in Africa not only develops the financial sector but also unconditionally contributes to the growth of the continent's economies.
Keywords: digitalization, financial sector development, Africa, economic growth
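A minimal sketch of the fixed- and random-effects panel estimation mentioned above, run on simulated data; the variable names mirror the abstract's proxies but the dataset, coefficients and model specification are illustrative assumptions, and the Hausman-Taylor step is not sketched here:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from linearmodels.panel import PanelOLS, RandomEffects

# Simulated country-year panel; not the authors' dataset.
rng = np.random.default_rng(42)
countries = [f"country_{i}" for i in range(30)]
years = range(2000, 2020)
idx = pd.MultiIndex.from_product([countries, years], names=["country", "year"])
df = pd.DataFrame({
    "ict_imports": rng.normal(size=len(idx)),     # digitization proxy
    "private_credit": rng.normal(size=len(idx)),  # financial development proxy
}, index=idx)
df["gdp_growth"] = (0.3 * df["ict_imports"] + 0.5 * df["private_credit"]
                    + rng.normal(scale=0.5, size=len(idx)))

# Fixed effects (within estimator) and random effects estimates.
fe = PanelOLS(df["gdp_growth"], df[["ict_imports", "private_credit"]],
              entity_effects=True).fit()
re = RandomEffects(df["gdp_growth"],
                   sm.add_constant(df[["ict_imports", "private_credit"]])).fit()
print(fe.params, re.params, sep="\n")
```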
Procedia PDF Downloads 140
428 The Environmental Impact Assessment of Land Use Planning (Case Study: Tannery Industry in Al-Garma District)
Authors: Husam Abdulmuttaleb Hashim
Abstract:
Environmental pollution problems represent a great challenge to the world, threatening to destroy all the progress mankind has achieved; the organizations and associations that care about the environment are trying to warn the world of the forthcoming danger resulting from the excessive use of natural resources and their consumption without regard for the damage this unfair use causes. Most urban centers suffer from environmental pollution problems and the health, economic and social dangers resulting from this pollution. Since land use planning is responsible for distributing the different uses in urban centers and controlling the interactions between these uses, so as to reach a homogeneous and well-balanced state for the different activities in cities, the occurrence of environmental problems under the existing land use planning process points to a disorder or insufficiency in that process. This disorder lies in the insufficient weight given to environmental considerations during land use planning and the preparation of the master plan. The research therefore set out to study this problem and find solutions for it, on the assumption that using accurate, scientific methods in the early stages of the land use planning process will prevent environmental pollution problems from occurring in the future. The research aims to study and demonstrate the importance of the environmental impact assessment (EIA) method as a planning tool for investigating and predicting the pollution ranges of polluting land uses within the land use planning process. The research covers the concept of environmental assessment and its kinds, and clarifies environmental impact assessment and its contents. It also deals with the concepts of urban planning and land use planning, and with the current situation of the case study (Al-Garma district) and its land use planning, identifying the use most polluting to the environment, namely the industrial land use represented by the tannery industries; the current situation of this land use, its contents and the environmental impacts resulting from it are then described. We analyzed the water and soil tests applied by the researcher and performed an environmental evaluation by applying an environmental impact assessment matrix using the direct method, to reveal the pollution ranges imposed on the environment surrounding the industrial land use. We also applied environmental and site limits and standards, using GIS and AutoCAD, to select the site of the best alternative for the industrial region in Al-Garma district, after the research established the unsuitability of its current location with respect to the environmental and site limitations. The research closes with conclusions and recommendations that clarify the established facts and set out the proper solutions.
Keywords: EIA, pollution, tannery industry, land use planning
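The abstract applies an "environmental impact assessment matrix using the direct method"; a Leopold-style matrix is one common reading of this, sketched below with purely illustrative activities, components and scores (not the study's data): each cell pairs a signed magnitude with an importance weight, and row/column totals flag the worst activities and the most affected components.

```python
import numpy as np

activities = ["tanning effluent", "solid waste", "transport"]
components = ["groundwater", "soil", "air", "public health"]

# Magnitude (-10..10, negative = adverse) and importance (1..10) per cell.
magnitude = np.array([[-8, -6, -3, -7],
                      [-4, -7, -2, -5],
                      [-1, -2, -5, -3]])
importance = np.array([[9, 7, 4, 8],
                       [5, 8, 3, 6],
                       [2, 3, 6, 4]])

impact = magnitude * importance          # signed cell scores
for name, row in zip(activities, impact):
    print(f"{name:>16}: total impact {row.sum()}")
for name, col in zip(components, impact.T):
    print(f"{name:>16}: total impact {col.sum()}")
```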
Procedia PDF Downloads 449