Search results for: General Linear Model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22296

7446 Morphological and Chemical Characterization of the Surface of Orthopedic Implant Materials

Authors: Bertalan Jillek, Péter Szabó, Judit Kopniczky, István Szabó, Balázs Patczai, Kinga Turzó

Abstract:

Hip and knee prostheses are among the most frequently used medical implants and can significantly improve patients’ quality of life. Long-term success and biointegration of these prostheses depend on several factors, such as the bulk and surface characteristics, construction, and biocompatibility of the material. The applied surgical technique and the general health condition and quality of life of the patient are also determinant factors. Medical devices used in orthopedic surgeries have different surfaces depending on their function inside the human body. The surface roughness of these implants determines the interaction with the surrounding tissues. Numerous modifications have been applied in recent decades to improve specific properties of an implant. Our goal was to compare the surface characteristics of typical implant materials used in orthopedic surgery and traumatology. The morphological and chemical structures of discs (Sanatmetal Ltd, Hungary) of Vortex plate anodized titanium, cemented THR (total hip replacement) stem high-nitrogen REX steel (SS), uncemented THR stem and cup titanium (Ti) alloy with titanium plasma spray coating (TPS), cemented cup and uncemented acetabular liner HXL and UHMWPE, and TKR (total knee replacement) femoral component CoCrMo alloy were examined. Visualization and elemental analysis were performed by scanning electron microscopy (SEM) and energy dispersive spectroscopy (EDS). Surface roughness was determined by atomic force microscopy (AFM) and profilometry. SEM and AFM revealed the morphological and roughness features of the examined materials. TPS Ti presented the highest Ra value (25 ± 2 μm), followed by CoCrMo alloy (535 ± 19 nm), Ti (227 ± 15 nm) and stainless steel (170 ± 11 nm). The roughness of the HXL and UHMWPE surfaces was in the same range, 147 ± 13 nm and 144 ± 15 nm, respectively.
EDS confirmed the typical elements on the investigated prosthesis materials: Vortex plate Ti (Ti, O, P); TPS Ti (Ti, O, Al); SS (Fe, Cr, Ni, C); CoCrMo (Co, Cr, Mo); HXL (C, Al, Ni); and UHMWPE (C, Al). The results indicate that the surfaces of these prosthesis materials have significantly different features and that the applied investigation methods are suitable for their characterization. Contact angle measurements and in vitro cell culture testing are planned next to test their surface energy characteristics and biocompatibility.
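The Ra parameter reported above can be illustrated with a short sketch. This is a hypothetical example, not the authors' code: Ra is the arithmetic mean of the absolute deviations of the profile heights from their mean line, and the profile values below are illustrative, not measured data.

```python
# Hedged sketch: arithmetic average roughness Ra from a sampled height profile.
def average_roughness(profile_nm):
    """Ra = mean absolute deviation of profile heights from the mean line."""
    mean = sum(profile_nm) / len(profile_nm)
    return sum(abs(z - mean) for z in profile_nm) / len(profile_nm)

# Illustrative profile (nm); a real AFM/profilometry trace has thousands of points.
ra = average_roughness([120, -80, 150, -110, 90, -170])
```

For a real measurement the same computation is applied per scan line and averaged over the evaluation length.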

Keywords: morphology, PE, roughness, titanium

Procedia PDF Downloads 112
7445 Total Quality Management in Algerian Manufacturing

Authors: Nadia Fatima Zahra Malki

Abstract:

The aim of the study is to show the role of Total Quality Management in firm performance; the research relied on the views of a sample of managers working in the Marinel pharmaceutical company. The research aims to achieve several objectives, including increasing awareness of the effect of Total Quality Management on firm performance, especially in manufacturing firms, providing a future vision of the possibility of success, and the actual application of the principles of Total Quality Management in the manufacturing company. The research adopted a model built after a review and analysis of the literature, organized around one main hypothesis from which a group of sub-hypotheses was derived. The research presented a set of conclusions, the most important of which was that there is a relationship between the principles of TQM and firm performance.

Keywords: total quality management, competitive advantage, companies, objectives

Procedia PDF Downloads 36
7444 Prediction of Pile-Raft Responses Induced by Adjacent Braced Excavation in Layered Soil

Authors: Linlong Mu, Maosong Huang

Abstract:

In excavations in urban areas, the soil deformation induced by the excavation usually causes damage to the surrounding structures. Displacement control therefore becomes a critical indicator in foundation design in order to protect the surrounding structures. Evaluating the damage potential of the surrounding structures induced by excavations usually depends on the finite element method (FEM) because of the complexity of the excavation and the variety of the surrounding structures. Moreover, evaluating the influence of an excavation on surrounding structures is a three-dimensional problem, and it is now well recognized that the small-strain behaviour of the soil influences the responses of the excavation significantly. Three-dimensional FEM considering the small-strain behaviour of the soil is a very complex method, which is hard for engineers to use. Thus, it is important to obtain a simplified method for engineers to predict the influence of excavations on the surrounding structures. Based on large-scale finite element calculations with a small-strain-based soil model coupled with inverse analysis, an empirical method is proposed to calculate the three-dimensional soil movement induced by braced excavation. The empirical method is able to capture the small-strain behaviour of the soil and is suitable for use in layered soil. The free-field soil movement is then applied to the pile to calculate the responses of the pile in both the vertical and horizontal directions. The asymmetric solutions for problems in a layered elastic half-space are employed to solve the interactions between soil points. Both vertical and horizontal pile responses are solved through a finite difference method based on elastic theory. Interactions among the nodes along a single pile, pile-pile interactions, pile-soil-pile interactions and soil-soil interactions are counted to improve the calculation accuracy of the method.
For passive piles, the shadow effects are also calculated in the method. Finally, the restrictions of the raft on the piles and the soils are summarized as: (1) the summation of the internal forces between the elements of the raft and the elements of the foundation, including piles and soil surface elements, is equal to 0; (2) the deformations of the pile heads or of the soil surface elements are the same as the deformations of the corresponding elements of the raft. Validations are carried out by comparing the results from the proposed method with results from model tests, FEM and the existing literature. From the comparisons, it can be seen that the results from the proposed method agree well with the results from the other methods. The method proposed herein is suitable for predicting the responses of a pile-raft foundation induced by braced excavation in layered soil in both the vertical and horizontal directions when the deformation is small. However, more data are needed to verify the method before it can be used in practice.

Keywords: excavation, pile-raft foundation, passive piles, deformation control, soil movement

Procedia PDF Downloads 216
7443 Handy EKG: Low-Cost ECG For Primary Care Screening In Developing Countries

Authors: Jhiamluka Zservando Solano Velasquez, Raul Palma, Alejandro Calderon, Servio Paguada, Erick Marin, Kellyn Funes, Hana Sandoval, Oscar Hernandez

Abstract:

Background: Screening for cardiac conditions in primary care in developing countries can be challenging, and Honduras is no exception. One of the main limitations is the underfunding of the healthcare system in general, causing conventional ECG acquisition to become a secondary priority. Objective: Development of a low-cost ECG to improve screening for arrhythmias in primary care and communication with specialists in secondary and tertiary care. Methods: A portable, pocket-size, low-cost 3-lead ECG (Handy EKG) was designed. The device is autonomous and has Wi-Fi/Bluetooth connectivity options. A mobile app was designed that can access online servers with machine learning, a subset of artificial intelligence, to learn from the data and aid clinicians in their interpretation of readings. Additionally, the device would use the online servers to transfer patients’ data and readings to a specialist in secondary and tertiary care. 50 randomized patients volunteered to participate to test the device. The patients had no previous cardiac-related conditions, and readings were taken: one reading with the conventional ECG and 3 readings with the Handy EKG using different lead positions. This project was possible thanks to funding provided by the National Autonomous University of Honduras. Results: Preliminary results show that the Handy EKG performs readings of cardiac activity similar to those of a conventional electrocardiograph in leads I, II, and III, depending on the position of the leads, at a lower cost. The wave and segment duration, amplitude, and morphology of the readings were similar to those of the conventional ECG, and interpretation made it possible to conclude whether there was an arrhythmia or not. Two cases of prolonged PR segment were found in the readings of both ECG devices.
Conclusion: A frugal innovation approach can allow lower-income countries to develop innovative medical devices such as the Handy EKG to fulfill unmet needs at lower prices without compromising effectiveness, safety, and quality. The Handy EKG provides a solution for primary care screening at a much lower cost and allows for convenient storage of the readings on online servers, where the clinical data of patients can then be accessed remotely by cardiology specialists.
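The kind of automated interpretation aid described above can be illustrated with a minimal sketch. This is a hypothetical example, not the Handy EKG firmware: a naive threshold-and-local-maximum R-peak detector run on a synthetic ECG-like signal (narrow Gaussians standing in for R waves); the sampling rate, threshold and refractory period are assumptions for the sketch.

```python
import numpy as np

# Hedged sketch: naive R-peak detection by thresholded local maxima
# with a refractory period, from which heart rate can be derived.
def detect_r_peaks(signal, fs, threshold=0.5, refractory=0.2):
    """Return sample indices of peaks above threshold, >= refractory s apart."""
    peaks, min_gap = [], int(refractory * fs)
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return peaks

fs = 250                                     # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
ecg = np.zeros_like(t)
for beat in np.arange(0.5, 10, 0.8):         # synthetic beats, RR = 0.8 s (75 bpm)
    ecg += np.exp(-((t - beat) / 0.01) ** 2) # narrow Gaussian as an R wave
peaks = detect_r_peaks(ecg, fs)
bpm = 60 * fs / np.mean(np.diff(peaks))
```

A production detector would use a Pan-Tompkins-style filter chain; the sketch only conveys the idea of turning raw samples into beat timing.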

Keywords: low-cost hardware, portable electrocardiograph, prototype, remote healthcare

Procedia PDF Downloads 166
7442 A Study of Parameters That Have an Influence on Fabric Prints in Judging the Attractiveness of a Female Body Shape

Authors: Man N. M. Cheung

Abstract:

In judging the attractiveness of the female body shape, visual sense is one of the important means. The ratio and proportion of the body shape influence the perception of female physical attractiveness. This study aims to examine the visual perception of digital textile prints on a virtual 3D model in judging the attractiveness of the body shape, and to investigate the influence of different shape parameters and their relationships. Participants were asked to complete a set of questionnaires with images to rank the attractiveness of the female body shape. Results showed that morphing the fabric prints with a certain ratio and combination of the shape parameters waist and hip can enhance the attractiveness of the female body shape.
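The waist and hip parameters mentioned above are commonly combined into a single ratio in body-shape research. A minimal sketch, with illustrative measurements that are not from this study:

```python
# Hedged sketch: waist-to-hip ratio (WHR), a standard shape parameter
# in physical-attractiveness studies. Values below are illustrative.
def waist_hip_ratio(waist_cm, hip_cm):
    return waist_cm / hip_cm

whr = waist_hip_ratio(63, 90)   # hypothetical measurements in cm
```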

Keywords: digital printing, 3D body modeling, fashion print design, body shape attractiveness

Procedia PDF Downloads 163
7441 The Safety-Related Functions of the Engineered Barriers of the IAEA Borehole Disposal System: The Ghana Pilot Project

Authors: Paul Essel, Eric T. Glover, Gustav Gbeddy, Yaw Adjei-Kyereme, Abdallah M. A. Dawood, Evans M. Ameho, Emmanuel A. Aberikae

Abstract:

Radioactive materials, mainly in the form of sealed radioactive sources, are being used in various sectors (medicine, agriculture, industry, research, and teaching) for the socio-economic development of Ghana. The use of these beneficial radioactive materials has resulted in an inventory of Disused Sealed Radioactive Sources (DSRS) in storage. Most of the DSRS are legacy/historic sources which cannot be returned to their manufacturer or country of origin. Though small in volume, DSRS can be intensely radioactive and create a significant safety and security liability. They need to be managed in a safe and secure manner in accordance with the fundamental safety objective. The Radioactive Waste Management Center (RWMC) of the Ghana Atomic Energy Commission (GAEC) is currently storing a significant volume of DSRS. The initial activities of the DSRS range from 7.4E+5 Bq to 6.85E+14 Bq. If not managed properly, such DSRS can represent a potential hazard to human health and the environment. Storage is an important interim step, especially for DSRS containing very short-lived radionuclides, which can decay to exemption levels within a few years. Long-term storage, however, is considered an unsustainable option for DSRS with long half-lives; hence the need for a disposal facility. The GAEC intends to use the International Atomic Energy Agency’s (IAEA’s) Borehole Disposal System (BDS) to provide a safe, secure, and cost-effective option for disposing of its DSRS in storage. The proposed site for implementation of the BDS is on the GAEC premises at Kwabenya. The site has been characterized to gain a general understanding of its regional setting, its past evolution and its likely future natural evolution over the assessment time frame.
Due to the long half-lives of some of the radionuclides to be disposed of (e.g., Ra-226, with a half-life of 1600 years), the engineered barriers of the system must be robust enough to contain these radionuclides for this long period before they decay to harmless levels. There is therefore a need to assess the safety-related functions of the engineered barriers of this disposal system.
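The containment timescale implied above follows directly from the decay law. A hedged sketch: the initial activity is the highest inventory value quoted in the abstract, but the target "exemption-like" level of 1e4 Bq is an assumed illustrative figure, not a regulatory value from the project.

```python
import math

# Hedged sketch: years for an activity to decay from a0 to a_target,
# t = T_half * log2(a0 / a_target), from N(t) = N0 * 2**(-t / T_half).
def time_to_decay(a0_bq, a_target_bq, half_life_y):
    return half_life_y * math.log2(a0_bq / a_target_bq)

# Highest inventory activity from the abstract vs. an assumed 1e4 Bq target.
t_years = time_to_decay(6.85e14, 1e4, 1600)
```

The result is on the order of tens of thousands of years, which is why the engineered barriers, and ultimately the geology, must perform over such long assessment time frames.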

Keywords: radionuclides, disposal, radioactive waste, engineered barrier

Procedia PDF Downloads 51
7440 Avoidance and Selectivity in the Acquisition of Arabic as a Second/Foreign Language

Authors: Abeer Heider

Abstract:

This paper explores and classifies the different kinds of avoidance that students commonly exhibit in the acquisition of Arabic as a second/foreign language, and suggests specific strategies to help students lessen their avoidance tendencies in hopes of streamlining the learning process. Students most commonly use avoidance strategies in grammar and word choice. These different types of strategies have different implications and naturally require different approaches. Thus the question remains as to the most effective way to help students improve their Arabic, and how teachers can efficiently utilize these techniques. It is hoped that this research will contribute to understanding the role of avoidance in the field of second language acquisition in general, and as a type of input. Some researchers also note that similarity between L1 and L2 may be problematic as well, since the learner may doubt that such similarity indeed exists and consequently avoid the identical constructions or elements (Jordens, 1977; Kellermann, 1977, 1978, 1986). In an effort to resolve this issue, a case study is being conducted. The present case study attempts to provide a broader analysis of what is acquired than is usually the case, analyzing the learners’ accomplishments in terms of the three-part framework of the components of communicative competence suggested by Michael Canale: grammatical competence, sociolinguistic competence and discourse competence. The subjects of this study are 15 students, 22 years old, who came to study Arabic at Qatar University of Cairo. The 15 students are at the advanced level; they were at the intermediate level in Arabic when they arrived in Qatar for the first time. The study used a discourse analytic method to examine how the first language affects students’ production and output in the second language, and how and when students use avoidance methods in their learning.
The study will be conducted through Fall 2015 by analyzing audio recordings made throughout the entire semester; the recordings will comprise around 30 clips. The students are using supplementary listening and speaking materials. The group will be tested at the end of the term to assess any measurable difference between the techniques. Questionnaires will be administered to teachers and students before and after the semester to assess any change in attitude toward avoidance and selectivity methods. Responses to these questionnaires are analyzed and discussed to assess the relative merits of the aforementioned avoidance and selectivity strategies. Implications and recommendations for teacher training are proposed.

Keywords: the second language acquisition, learning languages, selectivity, avoidance

Procedia PDF Downloads 268
7439 Use of Cassava Waste and Its Energy Potential

Authors: I. Inuaeyen, L. Phil, O. Eni

Abstract:

Fossil fuels have been the main source of global energy for many decades, accounting for about 80% of global energy needs. This is beginning to change, however, with increasing concern about greenhouse gas emissions, which come mostly from fossil fuel combustion. Greenhouse gases such as carbon dioxide are responsible for stimulating climate change. As a result, there has been a shift towards cleaner and renewable sources of energy as a strategy for stemming greenhouse gas emissions into the atmosphere. The production of bio-products such as bio-fuel, bio-electricity, bio-chemicals, and bio-heat using biomass materials in accordance with the bio-refinery concept holds great potential for reducing the high dependence on fossil fuels and their resources. The bio-refinery concept promotes the efficient utilisation of biomass material for the simultaneous production of a variety of products in order to minimize or eliminate waste materials; this will ultimately reduce greenhouse gas emissions into the environment. In Nigeria, cassava solid waste from cassava processing facilities has been identified as a vital feedstock for the bio-refinery process. Cassava is a staple food in Nigeria and one of the foodstuffs most widely cultivated by farmers across the country; as a result, there is an abundant supply of cassava waste. In this study, the aim is to explore opportunities for converting cassava waste to a range of bio-products such as butanol, ethanol, electricity, heat, methanol and furfural using a combination of biochemical, thermochemical and chemical conversion routes. The best process scenario will be identified through the evaluation of economic analysis, energy efficiency, life cycle analysis and social impact. The study will be carried out by developing a model representing different process options for cassava waste conversion to useful products. The model will be developed using the Aspen Plus process simulation software.
Process economic analysis will be done using the Aspen Icarus software. So far, a comprehensive survey of the literature has been conducted. This includes studies on the conversion of cassava solid waste to a variety of bio-products using different conversion techniques, cassava waste production in Nigeria, and the modelling and simulation of waste conversion to useful products, among others. Also, the statistical distribution of cassava solid waste production in Nigeria has been established, and key literature with useful parameters for developing the different cassava waste conversion processes has been identified. In future work, detailed modelling of the different process scenarios will be carried out and the models validated using data from the literature and demonstration plants. A techno-economic comparison of the various process scenarios will be carried out to identify the best scenario using process economics, life cycle analysis, energy efficiency and social impact as the performance indexes.
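The techno-economic comparison step can be sketched with a standard discounted-cash-flow criterion. This is a generic illustration, not the study's model: the cash flows, lifetimes and discount rate below are invented for the sketch, and a real comparison would also weigh energy efficiency, life cycle and social indexes as the abstract states.

```python
# Hedged sketch: comparing two conversion scenarios by net present value (NPV).
def npv(rate, cashflows):
    """NPV of yearly cash flows; cashflows[0] is the year-0 investment."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Illustrative scenarios: up-front capital cost, then constant yearly revenue.
scenario_a = npv(0.10, [-1000, 300, 300, 300, 300, 300])
scenario_b = npv(0.10, [-1500, 450, 450, 450, 450, 450])
```

Under these made-up numbers the larger plant (scenario_b) is preferred on NPV alone; the point of the sketch is only the mechanics of discounting each scenario's cash flows.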

Keywords: bio-refinery, cassava waste, energy, process modelling

Procedia PDF Downloads 354
7438 The Influence of Operational Changes on Efficiency and Sustainability of Manufacturing Firms

Authors: Dimitrios Kafetzopoulos

Abstract:

Nowadays, companies are increasingly concerned with adopting their own strategies for increased efficiency and sustainability. Dynamic environments are fertile fields for developing operational changes. For this purpose, organizations need to implement an advanced management philosophy that boosts changes to companies’ operations. Changes refer to new applications of knowledge, ideas, methods, and skills that can generate unique capabilities and leverage an organization’s competitiveness. So, in order to survive and compete in global and niche markets, companies should incorporate the adoption of operational changes into their strategy with regard to both their products and their processes. Creating the appropriate culture for changes in terms of products and processes helps companies to gain a sustainable competitive advantage in the market. Thus, the purpose of this study is to investigate the role of both incremental and radical changes in the operations of a company, taking into consideration not only product changes but also process changes, and to measure the impact of these two types of changes on the business efficiency and sustainability of Greek manufacturing companies. The above discussion leads to the following hypotheses: H1: Radical operational changes have a positive impact on firm efficiency. H2: Incremental operational changes have a positive impact on firm efficiency. H3: Radical operational changes have a positive impact on firm sustainability. H4: Incremental operational changes have a positive impact on firm sustainability. In order to achieve the objectives of the present study, a research study was carried out in Greek manufacturing firms. A total of 380 valid questionnaires were received, and a seven-point Likert scale was used to measure all the questionnaire items of the constructs (radical changes, incremental changes, efficiency and sustainability).
The constructs of radical and incremental operational changes, each treated as one variable, have been subdivided into product and process changes. Non-response bias, common method variance, multicollinearity, multivariate normality and outliers have been checked. Moreover, the unidimensionality, reliability and validity of the latent factors were assessed. Exploratory Factor Analysis and Confirmatory Factor Analysis were applied to check the factorial structure of the constructs and the factor loadings of the items. In order to test the research hypotheses, the SEM technique was applied (maximum likelihood method). The goodness of fit of the basic structural model indicates an acceptable fit of the proposed model. According to the present study's findings, radical operational changes and incremental operational changes significantly influence both the efficiency and the sustainability of Greek manufacturing firms. However, it is in the dimension of radical operational changes, meaning those in process and product, that the most significant contributors to firm efficiency are to be found, while their influence on sustainability is low albeit statistically significant. On the contrary, incremental operational changes influence sustainability more than firms’ efficiency. From the above, it is apparent that embodying the concept of changes into a firm's product and process operational practices has direct and positive consequences for what it achieves from an efficiency and sustainability perspective.
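The reliability check mentioned above is typically operationalized with Cronbach's alpha on the Likert items of each construct. A hedged sketch, not the authors' code, with an illustrative toy data matrix:

```python
import numpy as np

# Hedged sketch: Cronbach's alpha for a construct measured by several
# Likert-scale items; rows are respondents, columns are items.
def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the scale total
    return k / (k - 1) * (1 - item_vars / total_var)

# Perfectly consistent toy responses give alpha = 1.
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3]])
```

In practice an alpha above roughly 0.7 is the conventional threshold before proceeding to EFA/CFA and SEM.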

Keywords: incremental operational changes, radical operational changes, efficiency, sustainability

Procedia PDF Downloads 123
7437 Choosing between the Regression Correlation, the Rank Correlation, and the Correlation Curve

Authors: Roger L. Goodwin

Abstract:

This paper presents a rank correlation curve. The traditional correlation coefficient is valid both for continuous variables and, using rank statistics, for integer variables. Since the correlation coefficient has already been established in rank statistics by Spearman, such a calculation can be extended to the correlation curve. This paper presents two survey questions. The survey collected non-continuous variables. We show weak to moderate correlation; evidently, one question has a negative effect on the other, and a review of the qualitative literature can answer which question and why. The rank correlation curve shows which collection of responses has a positive slope and which collection of responses has a negative slope. Such information is unavailable from the flat, "first-glance" correlation statistics.
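The flat "first-glance" statistic that the correlation curve refines is Spearman's rank correlation. A minimal sketch, assuming no tied ranks (the tie-corrected form is slightly different):

```python
# Hedged sketch: Spearman's rho via the classic formula
# rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), no-ties case.
def spearman_rho(x, y):
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

The curve discussed in the paper goes further by showing where in the response range the association is positive or negative, which this single number cannot.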

Keywords: Bayesian estimation, regression model, rank statistics, correlation, correlation curve

Procedia PDF Downloads 452
7436 Genetic Analysis of Iron, Phosphorus, Potassium and Zinc Concentration in Peanut

Authors: Ajay B. C., Meena H. N., Dagla M. C., Narendra Kumar, Makwana A. D., Bera S. K., Kalariya K. A., Singh A. L.

Abstract:

Their high energy value, protein content and minerals make peanuts a rich source of nutrition at comparatively low cost. Basic information on the genetics and inheritance of these mineral elements is very scarce. Hence, in the present study, the inheritance (using an additive-dominance model) and association of mineral elements were studied in two peanut crosses. Dominance variance (H) played an important role in the inheritance of P, K, Fe and Zn in peanut pods. The average degree of dominance for most of the traits was greater than unity, indicating overdominance for these traits. Significant associations were also observed among mineral elements in both the F2 and F3 generations, but pod yield had no association with mineral elements (with few exceptions). Diallel/bi-parental mating could be followed to identify high-yielding and mineral-dense segregants.
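The "average degree of dominance" statistic used above is conventionally computed from the additive (D) and dominance (H) variance components of the additive-dominance model. A hedged sketch with illustrative variance values, not the study's estimates:

```python
import math

# Hedged sketch: average degree of dominance = (H / D) ** 0.5.
# > 1 suggests overdominance, = 1 complete dominance, < 1 partial dominance.
def degree_of_dominance(D, H):
    return math.sqrt(H / D)

dod = degree_of_dominance(4.0, 16.0)   # illustrative components
```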

Keywords: correlation, dominance variance, mineral elements, peanut

Procedia PDF Downloads 401
7435 Development and Validation of a Rapid Turbidimetric Assay to Determine the Potency of Cefepime Hydrochloride in Powder Injectable Solution

Authors: Danilo F. Rodrigues, Hérida Regina N. Salgado

Abstract:

Introduction: The emergence of microorganisms resistant to a large number of clinically approved antimicrobials has been increasing, which restricts the options for the treatment of bacterial infections. As a strategy, drugs with high antimicrobial activity are in evidence. One class of antimicrobials stands out, the cephalosporins, whose fourth generation includes cefepime (CEF), a semi-synthetic product with activity against various aerobic Gram-positive (e.g. oxacillin-resistant Staphylococcus aureus) and Gram-negative (e.g. Pseudomonas aeruginosa) bacteria. There are few studies in the literature regarding the development of microbiological methodologies for the analysis of this antimicrobial, so research in this area is highly relevant to optimize the analysis of this drug in industry and to ensure the quality of the marketed product. The development of microbiological methods for the analysis of antimicrobials has gained strength in recent years and has been highlighted in relation to physicochemical methods, especially because such methods make it possible to determine the bioactivity of the drug against a microorganism. In this context, the aim of this work was the development and validation of a microbiological method for the quantitative analysis of CEF in lyophilized powder for injectable solution by turbidimetric assay. Method: Staphylococcus aureus ATCC 6538 IAL 2082 was used as the test microorganism, and the culture medium chosen was Casoy broth. The test was performed using temperature control (35.0 °C ± 2.0 °C) and incubated for 4 hours in a shaker. The readings of the results were made at a wavelength of 530 nm using a spectrophotometer. The turbidimetric microbiological method was validated by determining the following parameters: linearity, precision (repeatability and intermediate precision), accuracy and robustness, according to ICH guidelines.
Results and discussion: Among the parameters evaluated for method validation, the linearity showed suitable results in the statistical analyses, with correlation coefficients (r) of 0.9990 for the CEF reference standard and 0.9997 for the CEF sample. The precision presented the following values: 1.86% (intraday), 0.84% (interday) and 0.71% (between analysts). The accuracy of the method was proven through the recovery test, where the mean value obtained was 99.92%. The robustness was verified by changing the volume of culture medium, the brand of culture medium, the incubation time in the shaker and the wavelength. The potency of CEF present in the samples of lyophilized powder for injectable solution was 102.46%. Conclusion: The turbidimetric microbiological method proposed for the quantification of CEF in lyophilized powder for injectable solution proved to be fast, linear, precise, accurate and robust, in accordance with all the requirements, and can be used in routine quality control analysis in the pharmaceutical industry as an option for microbiological analysis.
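The precision and accuracy figures quoted above are conventionally computed as relative standard deviation (RSD%) and mean percent recovery. A hedged sketch with illustrative replicate data, not the study's measurements:

```python
import statistics

# Hedged sketch: the two validation statistics reported in the abstract.
def rsd_percent(values):
    """Relative standard deviation (precision), in percent."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

def mean_recovery_percent(found, added):
    """Mean percent recovery (accuracy) over spiked samples."""
    return 100 * statistics.mean(f / a for f, a in zip(found, added))

rsd = rsd_percent([99.0, 100.0, 101.0])          # illustrative replicate potencies
recovery = mean_recovery_percent([9.9, 10.1], [10.0, 10.0])  # illustrative spikes
```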

Keywords: cefepime hydrochloride, quality control, turbidimetric assay, validation

Procedia PDF Downloads 346
7434 Effect of Particle Size Variations on the Tribological Properties of Porcelain Waste Added Epoxy Composites

Authors: B. Yaman, G. Acikbas, N. Calis Acikbas

Abstract:

Epoxy-based materials have advantages in tribological applications due to their unique properties, such as light weight, self-lubrication capacity and wear resistance. On the other hand, their usage is often limited by their low load-bearing capacity and low thermal conductivity. In this study, the aim is to improve the tribological and also the mechanical properties of epoxy by reinforcing it with ceramic-based porcelain waste. It is well known that the reuse or recycling of waste materials leads to reductions in production costs, ease of manufacturing, energy savings, etc. From this perspective, epoxy and epoxy matrix composites containing 60 wt% porcelain waste with different particle sizes, in the ranges below 90 µm and 150-250 µm, were fabricated, and the effect of filler particle size on the mechanical and tribological properties was investigated. The microstructural characterization was carried out by scanning electron microscopy (SEM), and phase analysis was determined by X-ray diffraction (XRD). The Archimedes principle was used to measure the density and porosity of the samples. The hardness values were measured using Shore-D hardness, and bending tests were performed. Microstructural investigations indicated that the porcelain particles were homogeneously distributed, and no agglomerations were encountered in the epoxy resin. Mechanical test results showed that the hardness and bending strength increased with increasing particle size, related to the low porosity content and good embedding in the matrix. The tribological behavior of these composites was evaluated in terms of friction, wear rates and wear mechanisms by ball-on-disk contact under dry rotational sliding at room temperature against a WC ball with a diameter of 3 mm. Wear tests were carried out at room temperature (23–25 °C) with a humidity of 40 ± 5% under dry-sliding conditions. The contact radius was set to 5 mm at a linear speed of 30 cm/s for the geometry used in this study.
In all the experiments, a constant test load of 3 N was applied at a frequency of 8 Hz over a wear distance of 400 m. The friction coefficient of the samples was recorded online from the variation in the tangential force. The steady-state CoFs ranged between 0.29 and 0.32. The dimensions of the wear tracks (depth and width) were measured as two-dimensional profiles by a stylus profilometer. The wear volumes were calculated by integrating these 2D surface areas over the diameter. Specific wear rates were computed by dividing the wear volume by the applied load and sliding distance. According to the experimental results, the use of porcelain waste in the fabrication of epoxy resin composites can be suggested as a route to potential materials, as it allows improved mechanical and tribological properties while also providing a reduction in production cost.
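The specific wear rate calculation described above is the standard Archard-type normalization. A hedged sketch: the load and distance match the test conditions in the abstract, but the wear volume is an illustrative figure, not a measured result.

```python
# Hedged sketch: specific wear rate k = V / (F * s),
# wear volume over (applied load * sliding distance), in mm^3 / (N*m).
def specific_wear_rate(wear_volume_mm3, load_n, distance_m):
    return wear_volume_mm3 / (load_n * distance_m)

# 3 N load and 400 m distance as in the tests; wear volume is illustrative.
k = specific_wear_rate(0.006, 3.0, 400.0)
```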

Keywords: epoxy composites, mechanical properties, porcelain waste, tribological properties

Procedia PDF Downloads 184
7433 Data Envelopment Analysis of Allocative Efficiency among Small-Scale Tuber Crop Farmers in North-Central, Nigeria

Authors: Akindele Ojo, Olanike Ojo, Agatha Oseghale

Abstract:

The empirical study examined the allocative efficiency of smallholder tuber crop farmers in North-Central Nigeria. Data for the study were obtained from primary sources using a multi-stage sampling technique, with structured questionnaires administered to 300 randomly selected tuber crop farmers in the study area. Descriptive statistics, data envelopment analysis (DEA) and a Tobit regression model were used to analyze the data. The DEA classification of farmers into efficient and inefficient groups showed that 17.67% of the sampled tuber crop farmers were operating at the frontier and optimum level of production, with a mean allocative efficiency of 1.00. This shows that 82.33% of the farmers in the study area can still improve their level of efficiency through better utilization of available resources, given the current state of technology. The results of the Tobit model for factors influencing allocative inefficiency showed that as years of farming experience, level of education, cooperative society membership, extension contacts, credit access and farm size increased, the allocative inefficiency of the farmers decreased. The results on the effects of the significant determinants of allocative inefficiency at various distribution levels revealed that allocative efficiency increased from 22% to 34% as farmers acquired more farming experience. The allocative efficiency index of farmers who belonged to a cooperative society was 0.23, while their counterparts without cooperative membership had an index value of 0.21. The results also showed that the allocative efficiency index was 0.43 for farmers with formal education but only 0.16 for farmers with non-formal education. 
The efficiency of resource allocation also increased with greater contact with extension services: the allocative efficiency index rose from 0.16 to 0.31 as the frequency of extension contact increased from zero to a maximum of twenty contacts per annum. These results confirm that increases in years of farming experience, level of education, cooperative society membership, extension contacts, credit access and farm size lead to increased efficiency. The results further show that the age of the farmers contributed 32% to efficiency, but this contribution fell to an average of 15% as the farmers grew older. It is therefore recommended that enhanced research, extension delivery and farm advisory services be put in place so that farmers who have not attained the optimum frontier level can learn how to reach the remaining 74.39% of allocative efficiency through better production practices from the robustly efficient farms. This would go a long way toward increasing the efficiency level of the farmers in the study area.
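The DEA frontier idea above can be illustrated in its simplest form. The farms, input and output values below are hypothetical; the study's multi-input allocative-efficiency DEA requires solving one linear program per farm, whereas the single-input, single-output constant-returns case shown here reduces to scoring each unit against the best observed output/input ratio and needs no LP solver.

```python
def dea_crs_scores(inputs, outputs):
    """Technical efficiency under constant returns to scale for the
    single-input, single-output case: each unit is scored against the
    best observed output/input ratio. A score of 1.0 means the unit
    lies on the efficiency frontier."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [round(r / best, 3) for r in ratios]

# Three hypothetical farms: aggregate input use and tuber output
# (illustrative units, not the study's data).
input_index = [2.0, 4.0, 3.0]
output_index = [2.0, 2.0, 3.0]
print(dea_crs_scores(input_index, output_index))  # [1.0, 0.5, 1.0]
```

Farms 1 and 3 are on the frontier; farm 2 could produce its output with half its inputs, mirroring the efficient/inefficient split reported in the abstract.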

Keywords: allocative efficiency, DEA, Tobit regression, tuber crop

Procedia PDF Downloads 273
7432 The Efficacy of Albendazole against Soil-Transmitted Helminths and the Impact of Mass Drug Administration of Albendazole and Ivermectin on Health Status

Authors: Mike Yaw Osei-Atweneboana, John Asiedu Larbi, Edward Jenner Tettevi

Abstract:

Background: The lymphatic filariasis (LF) control programme has been ongoing in Ghana since 2000. This community-wide approach involves the use of ivermectin (IVM) and albendazole (ALB). Soil-transmitted helminth (STH) infection control is augmented within this programme; however, in areas where LF is not prevalent, albendazole alone is administered to school children. The purpose of this study was, therefore, to determine the efficacy of albendazole against soil-transmitted helminths and the impact of mass drug administration of albendazole and ivermectin on the health status of school-aged children and pregnant women. Material/Methods: This was a twelve-month longitudinal study. A total of 412 subjects, including school children (between the ages of 2 and 17 years) and pregnant women, were randomly selected from four endemic communities in the Kpandai district of the Northern region. Coprological assessment for parasites was based on the Kato-Katz technique in both dry and rainy seasons at baseline, 21 days and 3 months post-treatment. Single-dose albendazole treatment was administered to all patients at baseline. Preserved samples are currently under molecular study to identify possible single nucleotide polymorphisms (SNPs) within the beta-tubulin gene, which are associated with benzimidazole resistance. Results: Of all the parasites found (hookworm, Trichuris trichiura, Hymenolepis nana, and Taenia sp.), hookworm was the most prevalent. In the dry season, the overall STH prevalence at pre-treatment was 29%, while prevalences of 9% and 13% were recorded at 21 days and three months after treatment, respectively. In the rainy season, the overall STH prevalence was 8%, with 4% and 12% recorded at 21 days and three months after ALB treatment, respectively. In general, ALB treatment resulted in an overall hookworm egg count reduction rate of 89% in the dry season and 93% in the rainy season, while the T. trichiura egg count reduction rate was 100% in both seasons. Conclusions: STH infections remain a significant public health burden in Ghana. Hookworm infection seems to respond poorly or sub-optimally to ALB, raising concerns about the possible emergence of resistance, which could be a major setback for the control and elimination of STH infections, especially hookworm infections.
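The egg count reduction rate reported above is a simple before/after percentage. The counts below are illustrative, chosen only to reproduce the reported 89% dry-season hookworm figure; the study's raw eggs-per-gram values are not given in the abstract.

```python
def egg_reduction_rate(pre_mean_epg, post_mean_epg):
    """Percentage reduction in mean eggs-per-gram after treatment:
    ERR = 100 * (pre - post) / pre."""
    return 100.0 * (pre_mean_epg - post_mean_epg) / pre_mean_epg

# Illustrative counts only (not measured data).
print(round(egg_reduction_rate(1000.0, 110.0)))  # 89
```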

Keywords: hookworm, sub-optimal response, albendazole, trichuriasis, soil-transmitted helminths

Procedia PDF Downloads 278
7431 Using Corpora in Semantic Studies of English Adjectives

Authors: Oxana Lukoshus

Abstract:

The methods of corpus linguistics, a well-established field of research, are being increasingly applied in cognitive linguistics. Corpus data are especially useful for quantitative studies of grammatical and other aspects of language. The main objective of this paper is to demonstrate how present-day corpora can be applied in semantic studies in general and in semantic studies of adjectives in particular. Polysemantic adjectives have been the subject of numerous studies, but most of them have been carried out on dictionary data. Undoubtedly, dictionaries are one of the basic data sources, but only at the initial steps of a research project: the author usually starts with an analysis of the lexicographic data, after which s/he comes up with a hypothesis. In the research conducted, three polysemantic synonyms, true, loyal and faithful, were analyzed in terms of differences and similarities in their semantic structure. A corpus-based approach to the study of these adjectives involves the following. After the analysis of the dictionary data, two corpora were consulted to study the distributional patterns of the words under study: the British National Corpus (BNC) and the Corpus of Contemporary American English (COCA). These corpora are continually updated and contain thousands of examples of the words under research, which makes them a useful and convenient data source. For the purpose of this study, there were no special requirements regarding genre, mode or time of the texts included in the corpora. Of the range of possibilities offered by corpus-analysis software (e.g. word lists, statistics of word frequencies, etc.), the most useful tool for the semantic analysis was extracting a list of co-occurrences for the given search words. Searching by lemmas (e.g. true, true to) and grouping the results by lemmas proved to be the most efficient corpus features for the adjectives under study. 
Following the search process, the corpora provided a list of co-occurrences, which were then analyzed and classified. Not every co-occurrence was relevant to the analysis. For example, phrases like An enormous sense of responsibility to protect the minds and hearts of the faithful from incursions by the state was perceived to be the basic duty of the church leaders or ‘True,’ said Phoebe, ‘but I'd probably get to be a Union Official immediately were left out: in the first example the faithful is a substantivized adjective, and in the second true is used alone, with no other parts of speech. The subsequent analysis of the corpus data provided the grounds for the distribution groups of the adjectives under study, which were then investigated with the help of a semantic experiment. To sum up, the corpus-based approach has proved to be a powerful, reliable and convenient tool for obtaining data for further semantic study.
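The co-occurrence extraction step described above can be sketched with a simple window-based collocate counter. The toy sentence is invented for illustration; real queries would run against BNC or COCA through their own interfaces, and lemmatization is omitted here.

```python
from collections import Counter

def cooccurrences(tokens, target, window=2):
    """Count words appearing within `window` tokens of each occurrence
    of `target` (a minimal collocate-extraction sketch)."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            counts.update(t for j, t in enumerate(tokens[lo:hi], lo) if j != i)
    return counts

# Toy corpus, not BNC/COCA data.
text = "a true friend stays true to a true friend in need".split()
print(cooccurrences(text, "true").most_common(3))
```

The resulting counts would then be filtered by hand, as in the abstract, to discard irrelevant co-occurrences before classification.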

Keywords: corpora, corpus-based approach, polysemantic adjectives, semantic studies

Procedia PDF Downloads 303
7430 Empirical Acceleration Functions and Fuzzy Information

Authors: Muhammad Shafiq

Abstract:

In accelerated life testing, lifetime data are obtained under conditions considered more severe than usual. Classical techniques are based on precise measurements and model only the variation among observations. In fact, there are two types of uncertainty in data: variation among the observations and fuzziness. Analysis techniques that ignore fuzziness and are based only on precise lifetime observations lead to pseudo-results. This study examined the behavior of empirical acceleration functions using fuzzy lifetime data. The results showed increased fuzziness in the transformed lifetimes compared with the input data.

Keywords: acceleration function, accelerated life testing, fuzzy number, non-precise data

Procedia PDF Downloads 282
7429 Manufacturing Process and Cost Estimation through Process Detection by Applying Image Processing Technique

Authors: Chalakorn Chitsaart, Suchada Rianmora, Noppawat Vongpiyasatit

Abstract:

In order to reduce the transportation time and cost of direct interaction between customer and manufacturer, an image processing technique has been introduced in this research, with which designing a part and defining its manufacturing process can be performed quickly. A 3D virtual model is generated directly from a series of multi-view images of an object; it can then be modified and analyzed, and its structure or function improved, for further implementations such as computer-aided manufacturing (CAM). To estimate and quote the production cost, a user-friendly platform has been developed in this research in which the appropriate manufacturing parameters and process detections are identified and planned by CAM simulation.

Keywords: image processing technique, feature detection, surface registration, capturing multi-view images, production costs, manufacturing processes

Procedia PDF Downloads 236
7428 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other; this is referred to as between-class imbalance. Traditional classifiers fail to classify minority-class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, within-class imbalance, where classes are composed of different numbers of sub-clusters containing different numbers of examples, also deteriorates classifier performance. Many methods have previously been proposed for handling the imbalanced-dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic methods, cost-based methods and ensembles of classifiers. Data preprocessing techniques have shown great potential, as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority-class examples or by decreasing the majority-class examples. Decreasing the majority-class examples leads to loss of information; moreover, when the minority class has an absolute rarity, removing majority-class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class and within-class imbalance simultaneously. In this paper, we propose a method that handles both simultaneously for the binary classification problem. Removing between-class and within-class imbalance simultaneously eliminates the classifier's bias towards bigger sub-clusters by minimizing the domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find sub-clusters or sub-concepts in the dataset. The number of examples oversampled among the sub-clusters is determined by the complexity of the sub-clusters. 
The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid to increase classifier accuracy. In this study, a neural network is used as the classifier, since it minimizes the total error, and removing between-class and within-class imbalance simultaneously helps it give equal weight to all sub-clusters irrespective of class. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority-class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods on various accuracy measures. The proposed method can thus serve as a good alternative for handling problem domains such as credit scoring, customer churn prediction and financial distress that typically involve imbalanced data sets.
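The idea of oversampling within sub-clusters can be sketched as below. This is a minimal SMOTE-like interpolation step on pre-given clusters; the paper's model-based clustering, complexity-based allocation and Lowner-John ellipsoid adaptation are not reproduced here, and the two sub-clusters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def oversample_clusters(clusters, target_size):
    """Bring each minority-class sub-cluster up to `target_size` by
    interpolating between random pairs of its own points, so small
    sub-clusters are not drowned out by large ones."""
    out = []
    for pts in clusters:
        pts = np.asarray(pts, dtype=float)
        synth = []
        while len(pts) + len(synth) < target_size:
            i, j = rng.integers(0, len(pts), size=2)
            # New point on the segment between two cluster members.
            synth.append(pts[i] + rng.random() * (pts[j] - pts[i]))
        if synth:
            pts = np.vstack([pts, np.asarray(synth)])
        out.append(pts)
    return out

# Two hypothetical sub-clusters of the minority class, sizes 3 and 6.
small = [[0, 0], [0, 1], [1, 0]]
large = [[5, 5], [5, 6], [6, 5], [6, 6], [5, 4], [4, 5]]
balanced = oversample_clusters([small, large], target_size=6)
print([len(c) for c in balanced])  # [6, 6]
```

After this step both sub-clusters contribute equally many examples, which is the mechanism the abstract credits for removing within-class bias.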

Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 402
7427 Management of Meskit (Prosopis juliflora) Tree in Oman: The Case of Using Meskit (Prosopis juliflora) Pods for Feeding Omani Sheep

Authors: S. Al-Khalasi, O. Mahgoub, H. Yaakub

Abstract:

This study evaluated the use of raw or processed Prosopis juliflora (Meskit) pods as a major ingredient in a formulated ration, providing an alternative non-conventional concentrate for livestock feeding in Oman. Dry Meskit pods were chopped to lengths of 0.5–1.0 cm to ensure thorough mixing into three diets. The pods were subjected to two types of treatment: roasting and soaking. They were roasted at 150 °C for 30 minutes using a locally made roasting device (a 40 kg barrel container rotated by an electric motor and heated by a flame gas cooker). Chopped pods were soaked in tap water for 24 hours and dried for 2 days under the sun with frequent turning. The Meskit-pod-based diets (MPBD) were formulated and pelleted from 500 g/kg ground Meskit pods, 240 g/kg wheat bran, 200 g/kg barley grain, 50 g/kg local dried sardines and 10 g/kg salt. Twenty-four 10-month-old intact Omani male lambs with an average body weight of 27.3 kg (± 0.5 kg) were used in an 84-day feeding trial. They were divided on a body-weight basis into four diet groups: Rhodes grass hay (RGH) plus a general ruminant concentrate (GRC); RGH plus a raw Meskit pod (RMP) based concentrate; RGH plus a roasted Meskit pod (ROMP) based concentrate; and RGH plus a soaked Meskit pod (SMP) based concentrate. Daily feed intakes and bi-weekly body weights were recorded. MPBD had higher contents of crude protein (CP), acid detergent fibre (ADF) and neutral detergent fibre (NDF) than the GRC. Animals fed the various MPBDs showed no signs of ill health. Feeding ROMP had a significant positive effect on the performance of Omani sheep compared to RMP and SMP: ROMP-fed animals performed similarly to those fed the GRC in terms of feed intake, body weight gain and feed conversion ratio (FCR). This study indicated that a roasted Meskit pod based diet may be used instead of the commercial concentrate for feeding Omani sheep without adverse effects on performance. 
It offers a cheap alternative source of protein and energy for feeding Omani sheep. It might also help control the spread of Meskit trees, maintain the ecosystem and preserve local tree species.

Keywords: growth, Meskit, Omani sheep, Prosopis juliflora

Procedia PDF Downloads 459
7426 Cavitating Flow through a Venturi Using Computational Fluid Dynamics

Authors: Imane Benghalia, Mohammed Zamoum, Rachid Boucetta

Abstract:

Hydrodynamic cavitation is a complex physical phenomenon that appears in hydraulic systems (pumps, turbines, valves, Venturi tubes, etc.) when the fluid pressure decreases below the saturated vapor pressure. The work carried out in this study aimed at a better understanding of cavitating flow phenomena. To this end, we numerically studied a cavitating bubbly flow through a Venturi nozzle. A cavitation model was selected and solved using a commercial computational fluid dynamics (CFD) code. The results show the effect of the Venturi inlet pressure (10, 7, 5 and 2 bar) on the pressure, flow velocity and vapor fraction. We found that the inlet pressure strongly affects the evolution of pressure, velocity and vapor fraction formation in the cavitating flow.
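The inception condition described above (local pressure falling below the saturated vapor pressure) is commonly summarized by the cavitation number; the abstract does not report one, so the sketch below uses standard water properties and an illustrative throat velocity, with only the four inlet pressures taken from the study.

```python
def cavitation_number(p_ref_pa, p_vapor_pa, rho, velocity):
    """Cavitation number sigma = (p - p_v) / (0.5 * rho * v^2);
    lower values mean cavitation is more likely at the throat."""
    return (p_ref_pa - p_vapor_pa) / (0.5 * rho * velocity ** 2)

# Water at 20 C (p_v ~ 2.34 kPa), illustrative throat velocity 20 m/s,
# evaluated at the paper's four inlet pressures.
for p_bar in (10, 7, 5, 2):
    sigma = cavitation_number(p_bar * 1e5, 2340.0, 998.0, 20.0)
    print(p_bar, "bar ->", round(sigma, 2))
```

The monotone drop in sigma with inlet pressure is consistent with the abstract's finding that inlet pressure governs vapor fraction formation.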

Keywords: cavitating flow, CFD, phase change, venturi

Procedia PDF Downloads 71
7425 An ICF Framework for Game-Based Experiences in Geriatric Care

Authors: Marlene Rosa, Susana Lopes

Abstract:

Board games have been used for different purposes in geriatric care, demonstrating good results for health in general. However, there is no conceptual framework that can help professionals and researchers in this area design intervention programs or plan future studies. The aim of this study was to provide a pilot collection of the serious purposes of board games in geriatric care, using a WHO framework for health and disability. Case studies were developed in seven geriatric residential institutions in the central region of Portugal that are included in the AGILAB program. The AGILAB program is a serious-game-based method for training in, and spreading, the use of board games in geriatric care. Each institution provided 2 hours/week of experience using the TATI Hand Game for serious purposes and then answered questions about a case study (player characteristics; changes in players' health attributable to this game experience). Two independent researchers read the information and classified it according to International Classification of Functioning, Disability and Health (ICF) categories. Any discrepancy was resolved in a consensus meeting. The results indicate considerable variability in body functions and structures: specific mental functions (e.g., b140 Attention functions, b144 Memory functions), b156 Perceptual functions, b2 sensory functions and pain (e.g., b230 Hearing functions; b265 Touch function; b280 Sensation of pain), and b7 neuromusculoskeletal and movement-related functions (e.g., b730 Muscle power functions; b760 Control of voluntary movement functions; b710 Mobility of joint functions). Less variability was found in activities and participation domains, such as purposeful sensory experiences (d110-d129) (e.g., d115 Listening), communication (d3), d710 basic interpersonal interactions, and d920 recreation and leisure (d9200 Play; d9205 Socializing). 
In conclusion, this framework, designed from a brief game-based experience, includes mental, perceptual, sensory, neuromusculoskeletal and movement-related functions, and participation in sensory, communication and leisure domains. More studies, covering different experiences and larger numbers of users, should be conducted to provide a more comprehensive ICF framework for game-based experiences in geriatric care.

Keywords: board game, aging, framework, experience

Procedia PDF Downloads 115
7424 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many areas, such as energy, telecommunications, civil construction, aviation, textiles and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal and mechanical characteristics that are of interest to multiple market areas. Thus, several research and development centers are studying different manufacturing methods and applications of graphene, efforts that are often hampered by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To address this, the present study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net was trained to segment SEM images of graphene oxide. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object-delimitation algorithm. Next, the position, area, perimeter and lateral measures of each detected crystal are extracted from the images. This information populates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of crystal area and perimeter. This methodological process resulted in a high capacity for segmenting graphene oxide crystals, with accuracy and F-score of 95% and 94%, respectively, on the test set. 
Such performance demonstrates the method's high generalization capacity in crystal segmentation, since it holds under significant changes in image extraction quality. Measurement of non-overlapping crystals showed an average error of 6% across the different measurement metrics, suggesting that the model provides high-accuracy measurement for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome it, the samples to be analyzed must be properly prepared; this minimizes crystal overlap during SEM image acquisition and guarantees lower measurement error without extra effort in data handling. All in all, the method is a substantial time saver with high measurement value: it can measure hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
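The measurement step that follows segmentation (extracting per-crystal dimensions from a labelled mask) can be sketched as below. The tiny mask is synthetic; real masks would come from the trained U-net, and the paper's perimeter and position measures are reduced here to pixel area and bounding-box sides.

```python
import numpy as np

def crystal_stats(labels):
    """Per-crystal area (pixel count) and bounding-box width/height
    from a labelled segmentation mask (0 = background)."""
    stats = {}
    for lab in np.unique(labels):
        if lab == 0:
            continue
        ys, xs = np.nonzero(labels == lab)
        stats[int(lab)] = {
            "area": len(ys),
            "width": int(xs.max() - xs.min() + 1),
            "height": int(ys.max() - ys.min() + 1),
        }
    return stats

# Tiny synthetic mask with two "crystals" (labels 1 and 2).
mask = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 2],
                 [0, 0, 0, 2]])
print(crystal_stats(mask))
```

Accumulating these records over all images yields the dimension database from which the frequency distributions by area and perimeter are plotted.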

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 150
7423 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids

Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje

Abstract:

The Progressing Cavity Pump (PCP) is a type of positive displacement pump that is gaining importance as artificial lift equipment in heavy oil fields. The most commonly used PCP is the single-lobe pump, which consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed experimentally and with a Computational Fluid Dynamics (CFD) approach, using the DCAB031 model in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate, with a flow control valve installed at the outlet of the pump. The flow rate was measured with a FLOMEC-OM025 oval gear flowmeter. For each flow rate considered, the pump's rotational speed and power input were controlled using an Invertek Optidrive E3 frequency driver. Once steady-state operation was attained, pressure rise measurements were taken with a Sper Scientific wide-range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model was implemented in Star-CCM+ using an overset mesh that captures the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations provide detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimes. The simulations agree well with the experimental data, with a Mean Squared Error (MSE) under 21%, and the Grid Convergence Index (GCI) calculated to validate the mesh was 2.5%. Three rotational speeds were evaluated (200, 300 and 400 rpm), showing a directly proportional relationship between the rotational speed of the rotor and the calculated flow rate. 
The maximum production rates at the three speeds were 3.8, 4.3 and 6.1 GPM for water and 1.8, 2.5 and 3.8 GPM for the oils tested, respectively. Likewise, an inversely proportional relationship between fluid viscosity and pump performance was observed: the viscous oils showed the lowest pressure rise and the lowest volumetric flow pumped, with a degradation of around 30% in pressure rise between performance curves. Finally, the Productivity Index (PI) remained approximately constant across the speeds evaluated, although it diminished from fluid to fluid as viscosity increased.

Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise

Procedia PDF Downloads 115
7422 Recognising Patients’ Perspective on Health Behaviour Problems Through Laughter: Implications for Patient-Centered Care Practice in Behaviour Change Consultations in General Practice

Authors: Binh Thanh Ta, Elizabeth Sturgiss

Abstract:

Central to patient-centered care is the idea of treating the patient as a person and understanding their perspectives on their health conditions and care preferences. Surprisingly, little is known about how GPs can understand their patients' perspectives. This paper addresses the challenge of understanding patient perspectives in behavior change consultations by adopting Conversation Analysis (CA), an empirical research approach that allows both researchers and the audience to examine patients' perspectives as displayed in GP-patient interaction. To understand people's perspectives, CA researchers do not rely on what people say but on how they demonstrate their endogenous orientations to social norms when interacting with each other. Underlying CA is the notion that social interaction is orderly at every level. (It is important to note that social orders should not be treated as exogenous sets of rules that predetermine human behavior; rather, social orders are constructed and oriented to by social members through their interactional practices. Note also that these interactional practices are resources shared by all social members.) As CA offers tools to uncover the orderliness of interactional practices, it not only allows us to understand the perspective of a particular patient in a particular medical encounter but, more importantly, enables us to recognise the shared interactional practices for signifying a particular perspective. Drawing on 10 video-recorded consultations on behavior change in primary care, we discovered an orderliness to patient laughter when reporting health behaviors, which signifies the patients' orientation to the problematic nature of the reported behaviors. Of the 24 cases in which patients reported their health behaviors, they laughed while speaking in 19. In the five cases where patients did not laugh, they explicitly framed their behavior as unproblematic. 
This finding echoes the CA body of research on laughter, which suggests that laughter produced by first speakers (as opposed to laughter in response to what has been said earlier) normally indicates some problem oriented to the self (e.g., self-teasing, self-deprecation). It points to the significance of understanding when and why patients laugh; such understanding would help GPs recognise whether patients treat their behavior as problematic, and thereby produce responses sensitive to patient perspectives.

Keywords: patient centered care, laughter, conversation analysis, primary care, behaviour change consultations

Procedia PDF Downloads 91
7421 Treating Voxels as Words: Word-to-Vector Methods for fMRI Meta-Analyses

Authors: Matthew Baucum

Abstract:

With the increasing popularity of fMRI as an experimental method, psychology and neuroscience can greatly benefit from advanced techniques for summarizing and synthesizing large amounts of data from brain imaging studies. One promising avenue is automated meta-analysis, in which natural language processing methods are used to identify the brain regions consistently associated with certain semantic concepts (e.g., “social”, “reward”) across large corpora of studies. This study builds on that approach by demonstrating how, in fMRI meta-analyses, individual voxels can be treated as vectors in a semantic space and evaluated for their “proximity” to terms of interest. In this technique, a low-dimensional semantic space is built from the texts of brain imaging studies, allowing words in each text to be represented as vectors (words that frequently appear together lie near each other in the semantic space). Consequently, each voxel in a brain mask can be represented as the normalized vector sum of all the words in the studies that showed activation in that voxel. The entire brain mask can then be visualized in terms of each voxel's proximity to a given term of interest (e.g., “vision”, “decision making”) or collection of terms (e.g., “theory of mind”, “social”, “agent”), as measured by the cosine similarity between the voxel's vector and the term vector (or the average of multiple term vectors). The analysis can also proceed in the opposite direction, allowing word-cloud visualizations of the nearest semantic neighbors of a given brain region. This approach provides continuous, fine-grained metrics of voxel-term association and relies on state-of-the-art “open vocabulary” methods that go beyond mere word counts. 
An analysis of over 11,000 neuroimaging studies from an existing meta-analytic fMRI database demonstrates that this technique can recover the known neural bases of multiple psychological functions, suggesting its utility for efficient, high-level meta-analyses of localized brain function. While automated text-analytic methods are no replacement for deliberate, manual meta-analyses, they show promise for the efficient aggregation of large bodies of scientific knowledge, at least at a relatively general level.
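The voxel-term scoring described above (a voxel as the normalized sum of word vectors, compared to a term vector by cosine similarity) can be sketched as follows. The 3-dimensional embeddings and the word list for the voxel are invented for illustration; a real semantic space would be trained on study texts.

```python
import numpy as np

def voxel_term_similarity(word_vectors, voxel_words, term):
    """Cosine similarity between a voxel's vector (normalised sum of
    the vectors of words from studies activating that voxel) and a
    term's vector."""
    v = sum(word_vectors[w] for w in voxel_words)
    v = v / np.linalg.norm(v)
    t = word_vectors[term] / np.linalg.norm(word_vectors[term])
    return float(v @ t)

# Hypothetical low-dimensional embeddings (not trained on real studies).
emb = {
    "vision": np.array([1.0, 0.1, 0.0]),
    "visual": np.array([0.9, 0.2, 0.0]),
    "reward": np.array([0.0, 0.1, 1.0]),
    "social": np.array([0.1, 1.0, 0.2]),
}
# Words drawn from (hypothetical) studies activating one voxel.
voxel_words = ["vision", "visual", "visual"]
print(round(voxel_term_similarity(emb, voxel_words, "vision"), 3))
print(round(voxel_term_similarity(emb, voxel_words, "reward"), 3))
```

Ranking every voxel by such scores for a chosen term produces the whole-brain proximity maps the abstract describes.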

Keywords: fMRI, machine learning, meta-analysis, text analysis

Procedia PDF Downloads 434
7420 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties

Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier

Abstract:

The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA that uses catalogues to develop area or smoothed-seismicity sources is limited by the data available to constrain future earthquake activity rates. Integrating faults in PSHA can at least partially address long-term deformation. However, careful treatment of fault sources is required, particularly in low-strain-rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied; in low-strain-rate regions where such data are scarce, this is especially challenging. Using faults in PSHA requires converting the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, background earthquakes are handled with a truncated model in which earthquakes with magnitudes lower than or equal to a threshold magnitude (Mw) occur in the background zone, at a rate defined by the earthquake catalogue, while earthquakes with magnitudes above the threshold are located on the fault, at a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes stronger than the selected threshold may also occur in the background and not only on the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, can rupture during a single fault-to-fault rupture. 
It is therefore essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur aleatorically in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology to calculate the earthquake rates in a fault system in which the slip-rate budget of each fault is converted into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model, to analyse the impact on the seismic hazard and, through sensitivity studies, better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected, located in an area of moderate to high seismicity (the southeast of France) where the fault is assumed to have a low strain rate.
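The slip-rate-to-activity-rate conversion described above is commonly done by balancing the fault's long-term seismic moment accumulation against the moment released by earthquakes. The sketch below illustrates that moment-balance step for a single characteristic magnitude; it is a simplified illustration with hypothetical fault parameters, not the SHERIFS implementation.

```python
# Minimal sketch: convert a fault slip rate into an annual earthquake rate
# by moment balance. All numeric values below are hypothetical examples.
MU = 3.0e10  # assumed shear modulus of crustal rock, Pa

def moment_magnitude_to_moment(mw):
    """Seismic moment (N*m) from moment magnitude Mw."""
    return 10.0 ** (1.5 * mw + 9.05)

def characteristic_rate(fault_area_m2, slip_rate_m_per_yr, mw_char):
    """Annual rate of characteristic earthquakes that balances the
    long-term moment accumulation on the fault."""
    moment_rate = MU * fault_area_m2 * slip_rate_m_per_yr  # N*m per year
    return moment_rate / moment_magnitude_to_moment(mw_char)

# Example: a 30 km x 15 km fault slipping at 0.5 mm/yr, releasing Mw 6.8 events
rate = characteristic_rate(30e3 * 15e3, 0.5e-3, 6.8)
period = 1.0 / rate  # mean return period in years
```

In a full fault-system model such as SHERIFS, this budget is instead distributed across all possible single-fault and fault-to-fault ruptures rather than a single characteristic magnitude.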

Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA

Procedia PDF Downloads 43
7419 Impact of Different Rearing Diets on the Performance of Adult Mealworms Tenebrio molitor

Authors: Caroline Provost, Francois Dumont

Abstract:

Production of insects for human and animal consumption is an increasingly important activity in Canada. Protein production is more efficient and less harmful to the environment using insect rearing than traditional livestock, poultry and fish farms. Insects are rich in essential amino acids, essential fatty acids and trace elements. Thus, insect-based products could be used as a food supplement for livestock and domestic animals and may even find their way into the diets of high-performing athletes or fine dining. Nevertheless, several parameters remain to be determined to ensure efficient and profitable production that meets the potential of these sectors. This project proposes to improve the production processes, rearing diets and processing methods for three species with valuable gastronomic and nutritional potential: the common mealworm (Tenebrio molitor), the small mealworm (Alphitobius diaperinus), and the giant mealworm (Zophobas morio). The general objective of the project is to acquire specific knowledge for mass rearing of insects dedicated to animal and human consumption in order to respond to current market opportunities and meet a growing demand for these products. Mass rearings of the three mealworm species were established to provide the individuals needed for the experiments. Mealworms eat flour from different cereals (e.g. wheat, barley, buckwheat). These cereals vary in their composition (protein, carbohydrates, fiber, vitamins, antioxidants, etc.), but also in their purchase cost. Seven different diets, each composed of a single cereal flour or a mixture of flours, were compared to optimize the yield of the rearing. Female fecundity, larval mortality and growth curves were observed. Some flour diets had positive effects on female fecundity and larval performance, and each mealworm species was found to have specific diet requirements. Trade-offs between mealworm performance and costs need to be considered. Experiments on the effect of flour composition on several parameters related to performance and to nutritional and gastronomic value led to the identification of a more appropriate diet for each mealworm species.

Keywords: mass rearing, mealworm, human consumption, diet

Procedia PDF Downloads 134
7418 Optimization of Maintenance of PV Module Arrays Based on Asset Management Strategies: Case of Study

Authors: L. Alejandro Cárdenas, Fernando Herrera, David Nova, Juan Ballesteros

Abstract:

This paper presents a methodology to optimize the maintenance of grid-connected photovoltaic systems, considering the cleaning and module replacement periods based on an asset management strategy. The methodology is based on the analysis of the energy production of the PV plant, the energy feed-in tariff, and the cost of cleaning and replacing the PV modules, with the overall revenue received being the optimization variable. The methodology is evaluated in a case study of a 5.6 kWp solar PV plant located on the Bogotá campus of the Universidad Nacional de Colombia. The asset management strategy implemented consists of assessing the PV modules through visual inspection, energy performance analysis, pollution, and degradation. In the visual inspection of the plant, the general condition of the modules and the structure is assessed, identifying dust deposition, visible fractures, and water accumulation on the bottom. The energy performance analysis is performed with the energy production reported by the monitoring systems, compared with the values estimated in simulation. The pollution analysis is performed using the soiling rate due to dust accumulation, which can be modelled as a black box with an exponential function of historical soiling values. The soiling rate is calculated with data collected over two years of energy generation at a photovoltaic plant on the campus of the Universidad Nacional de Colombia. Additionally, the alternative of assessing the temperature degradation of the PV modules is evaluated by estimating the cell temperature from parameters such as ambient temperature and wind speed. The medium-term energy decline of the PV modules is assessed within the asset management strategy by calculating a health index to determine the replacement period of the modules due to degradation. This study proposes a tool for decision-making related to the maintenance of photovoltaic systems. This is relevant given the projected increase in the installation of solar photovoltaic systems in power grids, driven by the commitments made under the Paris Agreement for the reduction of CO2 emissions. In the Colombian context, it is estimated that by 2030, 12% of the installed power capacity will be solar PV.
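The revenue-based optimization of the cleaning period can be sketched as follows: model soiling as an exponential decay of daily output since the last cleaning, then search for the cleaning interval that maximizes net revenue over a year. The decay constant, clean-plant output, tariff, and cleaning cost below are assumed illustrative figures, not values from the study.

```python
import math

def soiled_output(p_clean_kwh, days, k=0.002):
    """Daily energy output under an assumed exponential soiling-loss model.
    k is a hypothetical soiling decay constant (per day)."""
    return p_clean_kwh * math.exp(-k * days)

def net_revenue(interval_days, p_clean_kwh, tariff, cleaning_cost,
                horizon_days=365):
    """Revenue over a horizon when the plant is cleaned every interval_days:
    energy sold at the feed-in tariff minus the cost of each cleaning."""
    n_cycles = horizon_days // interval_days
    energy_per_cycle = sum(soiled_output(p_clean_kwh, d)
                           for d in range(interval_days))
    return n_cycles * (energy_per_cycle * tariff - cleaning_cost)

# Assumed figures for a small plant: ~22 kWh/day clean output,
# 0.15 $/kWh feed-in tariff, 40 $ per cleaning.
best = max(range(7, 181, 7),
           key=lambda t: net_revenue(t, 22.0, 0.15, 40.0))
```

With these assumed numbers, cleaning too often is dominated by the cleaning cost and cleaning too rarely by soiling losses, so an interior interval maximizes revenue; the same structure accommodates a module-replacement term when a health index is added.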

Keywords: asset management, PV module, optimization, maintenance

Procedia PDF Downloads 22
7417 GIS Technology for Environmentally Polluted Sites with an Innovative Process to Improve the Quality of and Assess the Environmental Impact Assessment (EIA)

Authors: Hamad Almebayedh, Chuxia Lin, Yu Wang

Abstract:

The environmental impact assessment (EIA) must be improved, assessed, and quality-checked for human and environmental health and safety. Soil contamination is expanding, and site and soil remediation activities are proceeding around the world; in short, quality soil characterization leads to a quality EIA that illuminates the contamination level and extent and reveals the unknowns, showing the way forward to remediate, quantify, contain, minimize, and eliminate the environmental damage. Spatial interpolation methods play a significant role in decision-making, the planning of remediation strategies, environmental management, and risk assessment, as they provide essential elements of site characterization, which need to be fed into the EIA. The innovative 3D soil mapping and soil characterization technology presented in this research paper reveals unknown information and the extent of the contaminated soil in particular, and enhances soil characterization information in general, which is reflected in improving the information provided when developing the EIA for specific sites. The foremost aims of this research paper are to present a novel 3D mapping technology to characterize and estimate the distribution of key soil characteristics in contaminated sites with high quality and cost-effectiveness, and to develop an innovative process/procedure of "assessment measures" for EIA quality and assessment. The contaminated site and field investigation was conducted with the innovative 3D mapping technology to characterize the composition of petroleum-hydrocarbon-contaminated soils in a decommissioned oilfield waste pit in Kuwait. The results show the depth and extent of the contamination, which has been entered into a developed assessment process and procedure for the EIA quality review checklist to enhance the EIA and drive remediation and risk assessment strategies. We have concluded that, to minimize the possible adverse environmental impacts on the investigated site in Kuwait, the soil-capping approach may be sufficient and may represent a cost-effective management option, as the environmental risk from the contaminated soils is considered to be relatively low. This research paper adopts a multi-method approach involving a review of the existing literature related to the research area, case studies, and computer simulation.
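Spatial interpolation of borehole measurements, which the abstract names as a key input to site characterization, can be as simple as inverse-distance weighting (IDW). The sketch below is a minimal illustration using hypothetical total petroleum hydrocarbon (TPH) concentrations at four borehole locations; it stands in for, and is not, the interpolation used in the study.

```python
import math

def idw(points, query, power=2.0):
    """Inverse-distance-weighted estimate of a soil property at `query`.
    `points` is a list of ((x, y), value) borehole measurements."""
    num = den = 0.0
    for (x, y), value in points:
        d = math.hypot(x - query[0], y - query[1])
        if d == 0.0:
            return value  # query coincides with a sample point
        w = d ** -power  # closer samples get larger weight
        num += w * value
        den += w
    return num / den

# Hypothetical TPH concentrations (mg/kg) at four borehole locations
samples = [((0, 0), 1200.0), ((10, 0), 800.0),
           ((0, 10), 400.0), ((10, 10), 300.0)]
estimate = idw(samples, (5, 5))
```

A 3D extension simply adds a depth coordinate to the distance, which is how point measurements are turned into the volumetric extent-of-contamination maps fed into the EIA.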

Keywords: quality EIA, spatial interpolation, soil characterization, contaminated site

Procedia PDF Downloads 72