Search results for: material processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9963

1593 Detection of Abnormal Process Behavior in Copper Solvent Extraction by Principal Component Analysis

Authors: Kirill Filianin, Satu-Pia Reinikainen, Tuomo Sainio

Abstract:

Frequent measurements of product stream quality create a data overload that becomes more and more difficult to handle. In the current study, plant history data with multiple variables were successfully treated by principal component analysis to detect abnormal process behavior, particularly in copper solvent extraction. The multivariate model is based on the concentration levels of the main process metals recorded by the industrial on-stream x-ray fluorescence analyzer. After mean-centering and normalization of the concentration data set, a two-dimensional multivariate model was constructed using the principal component analysis algorithm. Normal operating conditions were defined through control limits assigned to squared score values on the x-axis and to residual values on the y-axis. 80 percent of the data set was taken as the training set, and the multivariate model was tested with the remaining 20 percent of the data. Model testing showed successful application of the control limits to detect abnormal behavior of the copper solvent extraction process as early warnings. Compared to the conventional technique of analyzing one variable at a time, the proposed model allows on-line detection of a process failure using information from all process variables simultaneously. Complex industrial equipment combined with advanced mathematical tools may be used for on-line monitoring of both the composition of process streams and final product quality. Defining the normal operating conditions of the process supports reliable decision making in a process control room. Thus, industrial x-ray fluorescence analyzers equipped with an integrated data processing toolbox allow more flexibility in copper plant operation. It is recommended that the additional multivariate process control and monitoring procedures be applied separately to the major components and to the impurities. Principal component analysis may be utilized not only to control the content of major elements in process streams but also for continuous monitoring of the plant feed. The proposed approach has potential in on-line instrumentation, providing a fast, robust, and cheap application with automation capabilities.
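The monitoring scheme described above follows a standard PCA-based statistical process control pattern. The following minimal sketch illustrates the mean-centering, two-component PCA, control limits on squared scores and residuals, and the 80/20 train/test split; the data, the percentile-based limits, and the use of scikit-learn are assumptions of this sketch, not the authors' implementation:

```python
# Illustrative sketch (not the authors' code): PCA-based monitoring of a
# multivariate concentration data set, assuming the data are in a NumPy array
# X with one row per on-stream XRF measurement and one column per metal.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))            # placeholder for plant history data

# 80/20 split into training and test sets, as in the study
n_train = int(0.8 * len(X))
X_train, X_test = X[:n_train], X[n_train:]

# Mean-centering and normalization using training-set statistics
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
Z_train = (X_train - mu) / sigma
Z_test = (X_test - mu) / sigma

# Two-component PCA model
pca = PCA(n_components=2).fit(Z_train)

def t2_and_q(Z):
    """Squared-score (Hotelling T^2) and residual (Q / SPE) statistics."""
    scores = pca.transform(Z)
    t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)
    residual = Z - pca.inverse_transform(scores)
    q = np.sum(residual**2, axis=1)
    return t2, q

# Control limits taken here as empirical 99th percentiles of the training set
# (the abstract assigns limits to squared scores and residuals; the exact
# rule is an assumption in this sketch)
t2_tr, q_tr = t2_and_q(Z_train)
t2_lim, q_lim = np.percentile(t2_tr, 99), np.percentile(q_tr, 99)

# Flag abnormal process behavior in the test data as an early warning
t2_te, q_te = t2_and_q(Z_test)
abnormal = (t2_te > t2_lim) | (q_te > q_lim)
print(f"{abnormal.sum()} of {len(Z_test)} test samples flagged as abnormal")
```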

Keywords: abnormal process behavior, failure detection, principal component analysis, solvent extraction

Procedia PDF Downloads 309
1592 Difficulties and Mistakes in Diagnosis During Brucellosis in Children

Authors: Taghi-Zada T. G., Hajiyeva U. K.

Abstract:

In recent years, due to the development of tourism, migration, and globalization, brucellosis has spread to non-endemic regions of Azerbaijan, and this disease has become one of the main priority areas of medicine. In our daily practice, we encounter patients with specific symptoms of brucellosis, as well as patients infected with the disease who have been misdiagnosed. It should also be noted that the symptoms and signs of brucellosis are very diverse, and since none of these signs is specific enough to confirm the diagnosis, timely detection and diagnosis are difficult. Aim: The main goal of this work is to investigate cases of delay in making the correct diagnosis in children with brucellosis and the mistakes made in this regard. Material and method: 50 children with brucellosis between the ages of 6 months and 17 years were examined. The medical history and anamnesis of these children were collected, and clinical-instrumental examination and serological tests for brucellosis were performed. Patients were divided into 2 groups, taking into account the specificity of symptoms and the timeliness of diagnosis. Results: Group I included 15 (40%) children aged 3-17 years. The main specific symptoms of brucellosis observed in these patients were persistent or long-term fever, night sweats, and arthralgia. In addition to the specific symptoms, the anamnesis and a specific serological test confirmed the diagnosis of brucellosis. The 30 (60%) patients included in group II were initially misdiagnosed: 3 patients (up to 1 year of age) were diagnosed with sepsis, 6 with acute rheumatic fever, 10 with systemic diseases, 2 with tuberculosis, 5 with Covid-19, and 4 with unspecified fever. However, serological tests and detailed examination revealed the presence of brucellosis in them. As can be seen, the children in group II (60%) outnumbered those in group I (40%): nowadays, brucellosis manifests itself with its own characteristics, that is, by imitating a number of other diseases, which leads to wrong diagnoses. Conclusion: Thus, the lack of specificity of clinical symptoms of brucellosis in children makes diagnosis difficult and causes mistakes and non-recognition of the disease. With this in mind, physicians in predominantly endemic and even sub-endemic areas should remain vigilant about this disease and consider brucellosis in the differential diagnosis of almost every unexplained medical problem until proven otherwise.

Keywords: brucellosis, pediatrics, diagnostics, serological tests

Procedia PDF Downloads 12
1591 Multi-Stakeholder Involvement in Construction and Challenges of Building Information Modeling Implementation

Authors: Zeynep Yazicioglu

Abstract:

Project development is a complex process where many stakeholders work together. Employers and main contractors are the base stakeholders, whereas designers, engineers, sub-contractors, suppliers, supervisors, and consultants are other stakeholders. The combination of the complexity of the building process with a large number of stakeholders often leads to time and cost overruns and irregular resource utilization. Failure to comply with the work schedule and inefficient use of resources in construction processes indicate that it is necessary to accelerate production and increase productivity. The development of the computer software called Building Information Modeling, abbreviated as BIM, is a major technological breakthrough in this area. The use of BIM enables architectural, structural, mechanical, and electrical projects to be drawn in coordination. BIM is a tool that should be considered by every stakeholder for the opportunities it offers, such as minimizing construction errors, reducing construction time, forecasting, and determining the final construction cost. Its adoption is a process spread over years, eventually enabling all stakeholders associated with the project and its construction to use it. The main goal of this paper is to explore the problems associated with the adoption of BIM in multi-stakeholder projects. The paper is a conceptual study summarizing the author's practical experience with design offices and construction firms working with BIM. Three challenges of the transition period to BIM are examined in this paper: 1. the compatibility of supplier companies with BIM, 2. the need for two-dimensional drawings, and 3. contractual issues related to BIM. The paper reviews the literature on BIM usage and the challenges in the transition stage to BIM. Even on an international scale, suppliers that can work in harmony with BIM are not very common, which means that the transition to BIM is still ongoing. In parallel, employers, local approval authorities, and material suppliers still need 2-D drawings. In the BIM environment, different stakeholders can work on the same project simultaneously, giving rise to design ownership issues. Practical applications and problems encountered are also discussed, and a number of suggestions are provided for the future.

Keywords: BIM opportunities, collaboration, contract issues about BIM, stakeholders of project

Procedia PDF Downloads 102
1590 Spectral Response Measurements and Materials Analysis of Ageing Solar Photovoltaic Modules

Authors: T. H. Huang, C. Y. Gao, C. H. Lin, J. L. Kwo, Y. K. Tseng

Abstract:

The design and reliability of solar photovoltaic modules are crucial to the development of solar energy, and efforts are still being made to extend the life of photovoltaic modules and improve their efficiency. Because natural aging is time-consuming and does not provide manufacturers and investors with timely information, accelerated aging is currently the best way to estimate the life of photovoltaic modules. In this study, accelerated aging under different light sources was combined with spectral response measurements to understand the effect of the light source on aging tests. Two types of experimental samples were used, packaged and unpackaged, which were irradiated with full-spectrum and UVC light sources for accelerated aging, together with a control group that was not aged. Full-spectrum aging was performed by irradiating the solar cells for two weeks with a xenon lamp whose spectrum resembles the solar spectrum, while UVC accelerated aging was performed by irradiating the solar cells with a UVC lamp for two weeks. The samples were first visually observed and infrared thermal images were taken; the electrical (IV) and spectral responsivity (SR) data were then obtained by measuring the spectral response of the samples, followed by scanning electron microscopy (SEM), Raman spectroscopy, and X-ray diffraction (XRD) analysis. The electrical (IV), spectral responsivity (SR), and material analysis results were used to compare the differences between packaged and unpackaged solar cells after full-spectrum aging, accelerated UVC aging, and no aging. The main objective of this study is to compare the difference in the aging of packaged and unpackaged solar cells under different light sources. Infrared thermal imaging showed that both full-spectrum aging and UVC accelerated aging increase the defects of solar cells, and IV measurements demonstrated that the conversion efficiency of solar cells decreases after full-spectrum aging and UVC accelerated aging. SEM revealed scorch marks on both unpackaged UVC accelerated-aging solar cells and unpackaged full-spectrum-aging solar cells. Raman spectroscopy was used to examine the Si peak intensity of the solar cells, and XRD confirmed the crystallinity of the solar cells from the intensity of the Si and Ag peaks.
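Since the aged and unaged cells are compared through IV measurements, the conversion efficiency and fill factor can be extracted directly from an IV sweep. A minimal sketch is shown below; the IV data, cell area, and irradiance are placeholder assumptions, not measurements from this study:

```python
# Illustrative sketch (not the authors' measurement code): extracting the
# conversion efficiency and fill factor from an I-V curve, the quantities the
# study uses to compare aged and unaged cells.
import numpy as np

voltage = np.linspace(0.0, 0.6, 61)   # V, hypothetical sweep
current = 9.0 * (1 - (np.exp(voltage / 0.05) - 1) / (np.exp(0.6 / 0.05) - 1))  # A, toy diode-like curve

power = voltage * current
p_max = power.max()                                  # maximum power point (W)
v_oc = voltage[np.argmin(np.abs(current))]           # open-circuit voltage (V)
i_sc = current[0]                                    # short-circuit current (A)

fill_factor = p_max / (v_oc * i_sc)

cell_area = 0.0243          # m^2 (assumed 156 mm x 156 mm cell)
irradiance = 1000.0         # W/m^2 (standard test condition assumption)
efficiency = p_max / (irradiance * cell_area)

print(f"FF = {fill_factor:.2f}, efficiency = {efficiency:.1%}")
```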

Keywords: solar cell, aging, spectral response measurement

Procedia PDF Downloads 102
1589 Measuring Fluctuating Asymmetry in Human Faces Using High-Density 3D Surface Scans

Authors: O. Ekrami, P. Claes, S. Van Dongen

Abstract:

Fluctuating asymmetry (FA) has been studied for many years as an indicator of developmental stability or ‘genetic quality’, based on the assumption that perfect symmetry is ideally the expected outcome for a bilateral organism. Further studies have also investigated the possible link between FA and attractiveness or levels of masculinity or femininity. These hypotheses have mostly been examined using 2D images, and the structure of interest is usually represented by a limited number of landmarks. Such methods have the downside of simplifying and reducing the dimensionality of the structure, which in turn increases the error of the analysis. In an attempt to reach more conclusive and accurate results, in this study we have used high-resolution 3D scans of human faces and have developed an algorithm to measure and localize FA, taking a spatially dense approach. A symmetric, spatially dense anthropometric mask with paired vertices is non-rigidly mapped onto target faces using an Iterative Closest Point (ICP) registration algorithm. A set of 19 manually indicated landmarks was used to examine the precision of our mapping step. The protocol’s accuracy in measuring and localizing FA is assessed using simulated faces with known amounts of asymmetry added to them. The validation results show that the algorithm is fully capable of locating and measuring FA in 3D simulated faces. With the use of such an algorithm, the additional information captured on asymmetry can be used to improve studies of FA as an indicator of fitness or attractiveness. This algorithm can be of particular benefit in studies with a high number of subjects due to its automated and time-efficient nature. Additionally, taking a spatially dense approach provides us with information about the locality of FA, which is impossible to obtain using conventional methods. It also enables us to analyze the asymmetry of morphological structures in a multivariate manner; this can be achieved by using methods such as Principal Component Analysis (PCA) or Factor Analysis, which can be a step towards understanding the underlying processes of asymmetry. This method can also be used in combination with genome-wide association studies to help unravel the genetic bases of FA. To conclude, we introduce an algorithm to study and analyze asymmetry in human faces, with the possibility of extending the application to other morphological structures, in an automated, accurate, and multivariate framework.
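One common way to quantify asymmetry on a mapped, spatially dense mask is to compare the configuration with its relabelled mirror image after superimposition. The sketch below illustrates that idea under stated assumptions (random placeholder vertices, an identity left-right pairing, reflection about x = 0, and ordinary Procrustes alignment); it is not the authors' exact spatially dense ICP pipeline:

```python
# Minimal sketch: per-vertex asymmetry map from a symmetric mask whose
# vertices come in left/right pairs (pairing and data here are hypothetical).
import numpy as np

def asymmetry_map(vertices, pair_index):
    """vertices: (n, 3) mapped mask; pair_index[i] = index of the mirror vertex of i."""
    # Reflect across the x = 0 plane (assumed mid-sagittal after mapping)
    mirrored = vertices.copy()
    mirrored[:, 0] *= -1.0
    # Relabel so vertex i is compared with its anatomical counterpart
    mirrored = mirrored[pair_index]

    # Ordinary Procrustes alignment of the mirrored copy onto the original
    a = vertices - vertices.mean(axis=0)
    b = mirrored - mirrored.mean(axis=0)
    u, _, vt = np.linalg.svd(b.T @ a)
    r = u @ vt
    if np.linalg.det(r) < 0:              # avoid an improper rotation
        u[:, -1] *= -1.0
        r = u @ vt
    b_aligned = b @ r

    # Per-vertex distances localize asymmetry; their mean summarizes it
    local = np.linalg.norm(a - b_aligned, axis=1)
    return local, local.mean()

# Hypothetical usage with random data standing in for a mapped scan
rng = np.random.default_rng(1)
verts = rng.normal(size=(100, 3))
pairs = np.arange(100)                    # identity pairing, for illustration only
local_fa, overall_fa = asymmetry_map(verts, pairs)
```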

Keywords: developmental stability, fluctuating asymmetry, morphometrics, 3D image processing

Procedia PDF Downloads 140
1588 Gene Expressions in Left Ventricle Heart Tissue of Rat after 150 Mev Proton Irradiation

Authors: R. Fardid, R. Coppes

Abstract:

Introduction: In mediastinal radiotherapy, and to a lesser extent also in total-body irradiation (TBI), radiation exposure may lead to the development of cardiac diseases. Radiation-induced heart disease is dose-dependent and is characterized by a loss of cardiac function associated with progressive degeneration of heart cells. We aimed to determine the in-vivo radiation effects on fibronectin, ColaA1, ColaA2, galectin, and TGFb1 gene expression levels in the left ventricle heart tissue of rats after irradiation. Material and method: Four untreated adult Wistar rats were selected as the control group (group A). In group B, 4 adult Wistar rats were irradiated locally in the heart only with a single 20 Gy dose of a 150 MeV proton beam. In the heart plus lung irradiation group (group C), 4 adult rats received the heart irradiation described for the previous group plus lateral irradiation of 50% of the lung. At 8 weeks after irradiation, the animals were sacrificed and the left ventricle was dropped into liquid nitrogen for RNA extraction with the Absolutely RNA® Miniprep Kit (Stratagene, Cat. no. 400800). cDNA was synthesized using M-MLV reverse transcriptase (Life Technologies, Cat. no. 28025-013). qPCR was performed on a Bio-Rad iQ5 Real-Time PCR system using the relative standard curve method. Results: We found that fibronectin gene expression in group C significantly increased compared to the control group, but it did not show a significant change in group B compared to group A. The mRNA expression levels of ColaA1 and ColaA2 did not show any significant changes between the normal and irradiated groups. Galectin expression significantly increased only in group C compared to group A. TGFb1 expression showed significant enhancement compared to group A, more so in group C than in group B. Conclusion: In summary, 20 Gy of proton exposure of heart tissue may lead to detectable damage in heart cells and may disturb their function as a component of the heart tissue structure at the molecular level.
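For context, the relative standard curve method referred to above quantifies each target from a dilution-series calibration curve and normalizes it to a reference gene. The sketch below shows the arithmetic only; the Ct values, dilution series, and the reference gene are hypothetical assumptions, not the study's data or analysis script:

```python
# Illustrative sketch of the qPCR relative standard curve method.
import numpy as np

# Dilution series (relative input amounts) and hypothetical Ct values for the
# target gene (e.g. fibronectin) and an assumed reference gene
log_q = np.log10([1.0, 0.1, 0.01, 0.001, 0.0001])
ct_std_target = np.array([18.1, 21.5, 24.9, 28.2, 31.6])
ct_std_ref = np.array([17.0, 20.4, 23.8, 27.1, 30.5])

def fit_standard_curve(log_quantity, ct):
    """Fit Ct = slope * log10(quantity) + intercept."""
    slope, intercept = np.polyfit(log_quantity, ct, 1)
    return slope, intercept

def relative_quantity(ct, slope, intercept):
    """Invert the standard curve to get a relative input quantity."""
    return 10 ** ((ct - intercept) / slope)

s_t, i_t = fit_standard_curve(log_q, ct_std_target)
s_r, i_r = fit_standard_curve(log_q, ct_std_ref)

# Hypothetical sample Ct values: (target Ct, reference Ct)
samples = {"group A (control)": (25.3, 23.9), "group C (heart + lung)": (23.2, 23.8)}

norm_expr = {name: relative_quantity(ct_t, s_t, i_t) / relative_quantity(ct_r, s_r, i_r)
             for name, (ct_t, ct_r) in samples.items()}
fold_change = norm_expr["group C (heart + lung)"] / norm_expr["group A (control)"]
print(norm_expr, f"fold change C vs A = {fold_change:.1f}")
```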

Keywords: gene expression, heart damage, proton irradiation, radiotherapy

Procedia PDF Downloads 489
1587 Modelling Distress Sale in Agriculture: Evidence from Maharashtra, India

Authors: Disha Bhanot, Vinish Kathuria

Abstract:

This study focuses on the issue of distress sale in the horticulture sector in India, which faces unique challenges given the perishable nature of horticultural crops, seasonal production, and the paucity of post-harvest produce management links. Distress sale, from a farmer’s perspective, may be defined as the urgent sale of normal or distressed goods at deeply discounted prices (well below the cost of production), and it is usually characterized by unfavorable conditions for the seller (farmer). Small and marginal farmers, often engaged in subsistence farming, stand to lose substantially if they receive prices lower than expected (typically framed in relation to the cost of production). Distress sale maximizes the price uncertainty of produce, leading to substantial income loss; and with increasing input costs of farming, the high variability in harvest prices severely affects farmers’ profit margins, thereby affecting their survival. The objective of this study is to model the occurrence of distress sale by tomato cultivators in the Indian state of Maharashtra, against the background of differential access to a set of factors such as capital, irrigation facilities, warehousing, storage and processing facilities, and institutional arrangements for procurement. Data are being collected through a primary survey of over 200 farmers in key tomato-growing areas of Maharashtra, asking about the above factors in addition to the cost of cultivation, selling price, time gap between harvesting and selling, the role of middlemen in selling, and other socio-economic variables. Farmers selling their produce far below the cost of production would indicate an occurrence of distress sale. The occurrence of distress sale would then be modelled as a function of farm, household, and institutional characteristics. A Heckman two-stage model would be applied to find the probability of a farmer falling into distress sale as well as to ascertain how the extent of distress sale varies in the presence or absence of various factors. The findings of the study would recommend suitable interventions and promote strategies that help farmers better manage price uncertainties, avoid distress sale, and increase profit margins, with direct implications for poverty.
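The Heckman two-stage estimator mentioned above first models whether distress sale occurs and then corrects the outcome regression for that selection. A conceptual sketch follows, using synthetic data and hypothetical covariate names (storage access, irrigation, landholding) rather than the actual survey variables:

```python
# Conceptual sketch of a Heckman two-stage estimation: a first-stage probit
# for the occurrence of distress sale, then a second-stage OLS on its extent
# including the inverse Mills ratio to correct for selection.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "storage_access": rng.integers(0, 2, n),   # assumed farm/institutional covariates
    "irrigation": rng.integers(0, 2, n),
    "landholding": rng.gamma(2.0, 1.5, n),
})
# Synthetic selection (distress sale occurs) and outcome (price gap below cost)
latent = 0.8 - 1.0 * df["storage_access"] - 0.5 * df["irrigation"] + rng.normal(size=n)
df["distress"] = (latent > 0).astype(int)
df["price_gap"] = np.where(df["distress"] == 1,
                           5 + 2 * df["landholding"] + rng.normal(size=n), np.nan)

# Stage 1: probit for the probability of a farmer falling into distress sale
X1 = sm.add_constant(df[["storage_access", "irrigation", "landholding"]])
probit = sm.Probit(df["distress"], X1).fit(disp=False)
xb = X1.dot(probit.params)
df["imr"] = norm.pdf(xb) / norm.cdf(xb)           # inverse Mills ratio

# Stage 2: OLS on the selected (distress) sample, including the inverse Mills ratio
sel = df[df["distress"] == 1]
X2 = sm.add_constant(sel[["landholding", "imr"]])
ols = sm.OLS(sel["price_gap"], X2).fit()
print(ols.summary().tables[1])
```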

Keywords: distress sale, horticulture, income loss, India, price uncertainty

Procedia PDF Downloads 243
1586 Intensifying Approach for Separation of Bio-Butanol Using Ionic Liquid as Green Solvent: Moving Towards Sustainable Biorefinery

Authors: Kailas L. Wasewar

Abstract:

Biobutanol has been considered a potential alternative biofuel relative to the more popular biodiesel and bioethanol. End-product toxicity is the major problem in the commercialization of fermentation-based processes; it can be reduced to some extent by removing biobutanol simultaneously. Several techniques have been investigated for removing butanol from fermentation broth, such as stripping, adsorption, liquid–liquid extraction, pervaporation, and membrane solvent extraction. Liquid–liquid extraction can be performed with high selectivity, and it is possible to carry it out inside the fermenter. Conventional solvents have several drawbacks, including toxicity, loss of solvent, and high cost; hence alternative solvents must be explored. Room-temperature ionic liquids (RTILs), composed entirely of ions, are liquid at room temperature and have negligible vapor pressure, non-flammability, and physicochemical properties that can be tuned for a particular application, which is why they are termed “designer solvents”. Ionic liquids (ILs) have recently gained much attention as alternatives to organic solvents in many processes. In particular, ILs have been used as alternative solvents for liquid–liquid extraction. Their negligible vapor pressure allows the extracted products to be separated from the ILs by conventional low-pressure distillation, with the potential for saving energy. Morpholinium-, imidazolium-, ammonium-, and phosphonium-based ionic liquids, among others, have been employed for the separation of biobutanol. In the present chapter, the basic concepts of ionic liquids and their application in separation are presented. Further, different types of ionic liquids, including conventional, functionalized, polymeric, supported-membrane, and other ionic liquids, are explored. The effect of various performance parameters on the separation of biobutanol by ionic liquids is also discussed and compared for different cation- and anion-based ionic liquids. A typical investigation methodology was adopted: equal amounts of biobutanol solution and ionic liquid were contacted for a specific time, say 30 minutes, to reach equilibrium. The biobutanol phase was then analyzed using GC to determine the concentration of biobutanol, and a material balance was used to find the concentration in the ionic liquid.
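The material-balance step at the end of the methodology amounts to finding the butanol taken up by the ionic-liquid phase by difference, from which a distribution coefficient and extraction efficiency follow. A simple sketch with hypothetical volumes and concentrations (not measured values from this work):

```python
# Simple sketch of the material balance after equilibrating equal amounts of
# aqueous biobutanol solution and ionic liquid; only the aqueous phase is
# analyzed by GC, and the butanol in the IL phase is found by difference.
v_aq = 10.0            # mL of aqueous phase
v_il = 10.0            # mL of ionic liquid (equal amounts, as in the study)
c_aq_initial = 20.0    # g/L butanol before extraction (assumed)
c_aq_final = 6.0       # g/L butanol measured by GC after equilibrium (assumed)

# Material balance: butanol lost from the aqueous phase ends up in the IL phase
mass_transferred = (c_aq_initial - c_aq_final) * v_aq / 1000.0   # g
c_il = mass_transferred / (v_il / 1000.0)                         # g/L in the IL

distribution_coefficient = c_il / c_aq_final
extraction_efficiency = mass_transferred / (c_aq_initial * v_aq / 1000.0)

print(f"C_IL = {c_il:.1f} g/L, D = {distribution_coefficient:.2f}, "
      f"E = {extraction_efficiency:.0%}")
```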

Keywords: biobutanol, separation, ionic liquids, sustainability, biorefinery, waste biomass

Procedia PDF Downloads 91
1585 Investigation of the Prevalence, Phenotypes, and Risk Factors Associated with Demodex Infestation and Its Relationship with Acne

Authors: Sina Alimohammadi, Mahnaz Banihashemi, Maryam Poursharif

Abstract:

Demodex is an obligate parasite of the pilosebaceous unit. D. folliculorum lives singly or in groups in hair follicles, and D. brevis lives as a single parasite in sebaceous glands. Transmission of Demodex from one person to another requires direct skin contact; it also has a greater density on the forehead, cheeks, nose, and nasolabial folds. Demodex can cause clinical manifestations such as pityriasis folliculorum, rosacea-like demodicosis, pustular folliculitis, papules, seborrheic dermatitis, blepharitis, dermatitis around the lips, and hyperpigmented spots. In this study, the prevalence of Demodex species was investigated in patients referred to the dermatology department of Sayad Shirazi Hospital, Gorgan, Iran, in the years 2019-2020. Material and Methods: The study population consisted of 242 samples taken by adhesive tape from people referred to the dermatology department of Sayad Shirazi Hospital during the years 2019-2020. All of the participants completed questionnaires. The samples were examined microscopically for the presence of Demodex. Results: Out of 242 participants, 67 (27.68%) were infected with Demodex. Most cases of infection were observed in the 21 to 30 years age group (28 people; 11.57%), followed by the 31 to 40 years age group (21 people; 8.67%). In the groups under 10 years and over 60 years, no positive cases (0%) of Demodex were observed in the microscopic examinations. Out of 11 variables, a statistically significant association was found for three: age (P = 0.000003), use of cleansing solutions (P = 0.002), and the presence of acne (P = 0.0013). Conclusion: According to the results of this study, the incidence of Demodex was higher in acne patients than in others, which emphasizes the possible role of Demodex in the pathogenesis of acne. In this study, there was an inverse relationship between the incidence of Demodex and the use of skin cleansing solutions. Also, the prevalence of Demodex was highest in the 20-30 years group, and its prevalence does not increase with age. Due to the possibility of drug resistance in the future, regular studies on genotyping and drug resistance are recommended.

Keywords: acne, demodex, mite, prevalence

Procedia PDF Downloads 89
1584 The Role of Zakat on Sustainable Economic Development by Rumah Zakat

Authors: Selamat Muliadi

Abstract:

This study aimed to conceptually explain the role of Zakat in sustainable economic development through Rumah Zakat. Rumah Zakat is a philanthropic institution that manages zakat and other social funds through community empowerment programs. Within these programs, economic empowerment and socio-health services are designed for the recipients. Rumah Zakat's work is connected with the Sustainable Development Goals (SDGs), aiming to help impoverished recipients economically and socially. This is an important agenda that the government has incorporated into national, and even regional, development. The primary goal of Zakat in sustainable economic development is not limited to economic variables; based on Islamic principles, it has comprehensive characteristics. These characteristics include moral, material, spiritual, and social aspects. In other words, sustainable economic development is closely related to improving people’s living standards (Mustahiq). The findings provide empirical evidence regarding the positive contribution and effectiveness of zakat targeting in reducing poverty and improving the welfare of the people concerned with the management of zakat. The purpose of this study was to identify the role of Zakat in sustainable economic development as applied by Rumah Zakat. This study used a descriptive method and qualitative analysis. The data source was secondary data collected from documents and texts related to the research topic, be they books, articles, newspapers, journals, or others. The results showed that the role of zakat in sustainable economic development through Rumah Zakat has been quite good and in accordance with the principles of Islamic economics. Rumah Zakat programs are adapted to support the intended development. The implementation of the productive programs has been aligned with four of the Sustainable Development Goals, i.e., Senyum Juara (Quality Education), Senyum Lestari (Clean Water and Sanitation), Senyum Mandiri (Entrepreneur Program), and Senyum Sehat (Free Maternity Clinic). The performance of zakat in sustainable economic empowerment of the community at Rumah Zakat takes into account dimensions such as input, process, output, and outcome.

Keywords: Zakat, social welfare, sustainable economic development, charity

Procedia PDF Downloads 136
1583 Novel Hole-Bar Standard Design and Inter-Comparison for Geometric Errors Identification on Machine-Tool

Authors: F. Viprey, H. Nouira, S. Lavernhe, C. Tournier

Abstract:

Manufacturing of freeform parts may be achieved on 5-axis machine tools, currently considered a common means of production. In particular, the geometrical quality of the freeform parts depends on the accuracy of the multi-axis structural loop, which is composed of several component assemblies maintaining the relative positioning between the tool and the workpiece. Therefore, to reach high quality of the geometries of the freeform parts, the geometric errors of the 5-axis machine should be evaluated and compensated, which leads one to master the deviations between the tool and the workpiece (volumetric accuracy). In this study, a novel hole-bar design was developed and used for the characterization of the geometric errors of an RRTTT 5-axis machine tool. The hole-bar standard is made of Invar, selected since it is less sensitive to thermal drift. The proposed design allows one to extract 3 intrinsic parameters: one linear positioning error and two straightnesses. These parameters can be obtained by measuring the cylindricity of 12 holes (bores) and 11 cylinders located on a perpendicular plane. By mathematical analysis, twelve 3D point coordinates can be identified; they correspond to the intersection of each hole axis with the least-squares plane passing through the axes of two perpendicular neighbouring cylinders. The hole-bar was calibrated using a precision CMM at LNE, traceable to the SI meter definition. The reversal technique was applied in order to separate the form errors of the hole-bar from the motion errors of the mechanical guiding systems. An inter-comparison was additionally conducted between four NMIs (National Metrology Institutes) within the EMRP IND62: JRP-TIM project. Afterwards, the hole-bar was integrated into the RRTTT 5-axis machine tool to identify its volumetric errors. Measurements were carried out in real time and combined raw data acquired by the Renishaw RMP600 touch probe with the linear and rotary encoders. The geometric errors of the 5-axis machine were also evaluated by an accurate laser tracer interferometer system. The results were compared to those obtained with the hole-bar.
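The identification of each reference point described above reduces to two geometric operations: fitting a least-squares plane and intersecting a hole axis with it. A short sketch with synthetic points (not the calibrated artefact data) illustrates both steps:

```python
# Geometric sketch: least-squares plane fit and line-plane intersection, the
# operations used to obtain the twelve 3D reference point coordinates.
import numpy as np

def fit_plane(points):
    """Least-squares plane through (n, 3) points: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                       # direction of smallest variance
    return centroid, normal / np.linalg.norm(normal)

def line_plane_intersection(p0, d, centroid, normal):
    """Intersection of the line p0 + t*d with the plane (centroid, normal)."""
    t = np.dot(centroid - p0, normal) / np.dot(d, normal)
    return p0 + t * d

# Hypothetical probed points lying roughly on the perpendicular plane
rng = np.random.default_rng(3)
pts = np.column_stack([rng.uniform(0, 100, 30),
                       rng.uniform(0, 100, 30),
                       rng.normal(0, 0.002, 30)])   # ~2 um form deviation (assumed)

c, n = fit_plane(pts)
hole_axis_point = np.array([10.0, 20.0, 5.0])       # point on a hole axis (assumed)
hole_axis_dir = np.array([0.0, 0.0, -1.0])          # nominal axis direction (assumed)
ref_point = line_plane_intersection(hole_axis_point, hole_axis_dir, c, n)
print(ref_point)
```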

Keywords: volumetric errors, CMM, 3D hole-bar, inter-comparison

Procedia PDF Downloads 384
1582 Recommendations for Teaching Word Formation for Students of Linguistics Using Computer Terminology as an Example

Authors: Svetlana Kostrubina, Anastasia Prokopeva

Abstract:

This research presents a comprehensive study of the word-formation processes in computer terminology in English and Russian and provides learners with a system of exercises for training these skills. The originality of the study lies in its comparative approach, which shows both general patterns and specific features of English and Russian computer term formation. The key point is the development of a system of exercises for training computer terminology based on Bloom’s taxonomy. The data contain 486 units (228 English terms from the Glossary of Computer Terms and 258 Russian terms from the Terminological Dictionary-Reference Book). The objective is to identify the main affixation models in English and Russian computer term formation and to develop exercises. To achieve this goal, the authors employed Bloom’s taxonomy as a methodological framework to create a systematic exercise program aimed at enhancing students’ cognitive skills in analyzing, applying, and evaluating computer terms. The exercises are appropriate for various levels of learning, from basic recall of definitions to higher-order thinking skills, such as synthesizing new terms and critically assessing their usage in different contexts. The methodology also includes: a method of scientific and theoretical analysis for systematization of linguistic concepts and clarification of the conceptual and terminological apparatus; a method of nominative and derivative analysis for identifying word-formation types; a method of word-formation analysis for organizing linguistic units; a classification method for determining structural types of abbreviations applicable to the field of computer communication; a quantitative analysis technique for determining the productivity of methods for forming abbreviations of computer vocabulary based on the English and Russian computer terms; a technique of tabular data processing for a visual presentation of the results obtained; and a technique of interlingual comparison for identifying common and different features of abbreviations of computer terms in Russian and English. The research shows that affixation retains its productivity in English and Russian computer term formation. Bloom’s taxonomy allows us to plan a training program and predict the effectiveness of the compiled program based on an assessment of the teaching methods used.

Keywords: word formation, affixation, computer terms, Bloom's taxonomy

Procedia PDF Downloads 13
1581 Milling Simulations with a 3-DOF Flexible Planar Robot

Authors: Hoai Nam Huynh, Edouard Rivière-Lorphèvre, Olivier Verlinden

Abstract:

Manufacturing technologies have become continuously more diversified over the years. The increasing use of robots for various applications such as assembly, painting, and welding has also affected the field of machining. Machining robots can deal with larger workspaces than conventional machine tools at a lower cost and thus represent a very promising alternative for machining applications. Furthermore, their inherent structure gives them great flexibility of motion to reach any location on the workpiece with the desired orientation. Nevertheless, machining robots suffer from a lack of stiffness at their joints, restricting their use to applications involving low cutting forces, especially finishing operations. Vibratory instabilities may also occur while machining and deteriorate the precision, leading to scrap parts. Some researchers are therefore concerned with the identification of optimal parameters in robotic machining. This paper continues the development of a virtual robotic machining simulator in order to find optimized cutting parameters in terms of, for example, depth of cut or feed per tooth. The simulation environment combines an in-house milling routine (DyStaMill), which computes the cutting forces and material removal, with an in-house multibody library (EasyDyn), which is used to build a dynamic model of a 3-DOF planar robot with flexible links. The position of the robot end-effector subjected to milling forces is controlled through an inverse kinematics scheme while the positions of its joints are controlled separately. Each joint is actuated by a servomotor for which the transfer function has been computed in order to tune the corresponding controller. The output results feature the evolution of the cutting forces when the robot structure is deformable or not and the tracking errors of the end-effector. Illustrations of the resulting machined surfaces are also presented. The consideration of link flexibility has highlighted an increase in the cutting force magnitude. This proof of concept aims to enrich the database of results in robotic machining for potential improvements in production.
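For a 3-DOF planar arm, the inverse kinematics scheme used to command the end-effector admits a standard closed-form solution. The sketch below is a generic illustration of that scheme; the link lengths and elbow configuration are assumptions rather than the parameters of the robot modelled in the paper:

```python
# Minimal sketch of a closed-form inverse kinematics scheme for a 3-DOF
# planar robot (assumed link lengths, not the paper's robot data).
import numpy as np

L1, L2, L3 = 0.5, 0.4, 0.1   # assumed link lengths (m)

def ik_planar_3dof(x, y, phi, elbow_up=True):
    """Joint angles (rad) placing the tool tip at (x, y) with orientation phi."""
    # Wrist position: step back from the tool tip along the tool orientation
    xw = x - L3 * np.cos(phi)
    yw = y - L3 * np.sin(phi)

    # Two-link IK for the first two joints
    c2 = (xw**2 + yw**2 - L1**2 - L2**2) / (2 * L1 * L2)
    if abs(c2) > 1:
        raise ValueError("target outside the workspace")
    s2 = np.sqrt(1 - c2**2) * (1 if elbow_up else -1)
    theta2 = np.arctan2(s2, c2)
    theta1 = np.arctan2(yw, xw) - np.arctan2(L2 * s2, L1 + L2 * c2)
    theta3 = phi - theta1 - theta2
    return theta1, theta2, theta3

# Example: one point of a milling path with the tool held at 30 degrees
angles = ik_planar_3dof(0.6, 0.3, np.deg2rad(30.0))
print([f"{np.rad2deg(a):.1f} deg" for a in angles])
```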

Keywords: control, milling, multibody, robotic, simulation

Procedia PDF Downloads 249
1580 Development of Three-Dimensional Groundwater Model for Al-Corridor Well Field, Amman–Zarqa Basin

Authors: Moayyad Shawaqfah, Ibtehal Alqdah, Amjad Adaileh

Abstract:

The Corridor area (400 km²) lies 60 km to the north-east of Amman, between 285-305 E longitude and 165-185 N latitude (according to the Palestine Grid). It has been subjected to groundwater exploitation from eleven new wells since 1999, with a total discharge of 11 MCM in addition to the previous discharge rate of the well field of 14.7 MCM. Consequently, the aquifer balance is disturbed and there has been a major decline in the water level. Therefore, suitable groundwater resources management is required to overcome the problems of over-pumping and its effect on groundwater quality. The three-dimensional groundwater flow model Processing Modflow for Windows Pro (PMWIN PRO, 2003) has been used in order to calculate the groundwater budget and aquifer characteristics, and to predict the aquifer response under different stresses for the next 20 years (until 2035). The model was calibrated for steady-state conditions by trial-and-error calibration. The calibration was performed by matching observed and calculated initial heads for the year 2001. Drawdown data for the period 2001-2010 were used to calibrate the transient model by matching calculated with observed drawdowns; after that, the transient model was validated using the drawdown data for the period 2011-2014. The hydraulic conductivities of the Basalt-A7/B2 aquifer system range between 1.0 and 8.0 m/day. Low conductivity values were found in the north-western and south-western parts of the study area, a high conductivity value was found at the north-western corner of the study area, and the average storage coefficient is about 0.025. The water balance for the Basalt and B2/A7 formations at steady-state conditions closes with a discrepancy of 0.003%. The major inflows come from Jebal Al Arab through the basalt and the limestone (B2/A7) aquifer, about 12.28 MCM per year, and from excess rainfall, about 0.68 MCM per year. The major outflows from the Basalt-B2/A7 aquifer system are toward the Azraq basin, about 5.03 MCM per year, and leakage to the A1/6 aquitard, about 7.89 MCM per year. Four scenarios have been performed to predict the aquifer system responses under different conditions. Scenario no. 2 was found to be the best one; it reduces the abstraction rate by 50%, from the current withdrawal rate (25.08 MCM per year) to 12.54 MCM per year. The maximum drawdowns then decrease to about 7.67 and 8.38 m in the years 2025 and 2035, respectively.
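The trial-and-error calibration mentioned above is judged by how closely the calculated heads reproduce the 2001 observations. A small sketch of that comparison, using synthetic head values rather than the study's data:

```python
# Small sketch (synthetic values) of the head-matching step used in the
# trial-and-error steady-state calibration.
import numpy as np

observed = np.array([512.3, 498.7, 505.1, 489.9, 501.4])     # m, hypothetical observed heads
calculated = np.array([511.8, 499.5, 504.2, 491.0, 500.9])   # m, hypothetical model output

residuals = calculated - observed
mean_error = residuals.mean()
mean_abs_error = np.abs(residuals).mean()
rmse = np.sqrt((residuals**2).mean())

print(f"ME = {mean_error:.2f} m, MAE = {mean_abs_error:.2f} m, RMSE = {rmse:.2f} m")
```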

Keywords: Amman/Zarqa Basin, Jordan, groundwater management, groundwater modeling, modflow

Procedia PDF Downloads 216
1579 Utilizing Topic Modelling for Assessing Mhealth App’s Risks to Users’ Health before and during the COVID-19 Pandemic

Authors: Pedro Augusto Da Silva E Souza Miranda, Niloofar Jalali, Shweta Mistry

Abstract:

BACKGROUND: Software developers use automated solutions to scrape users’ reviews and extract meaningful knowledge to identify problems (e.g., bugs, compatibility issues) and possible enhancements (e.g., users’ requests) to their solutions. However, most of these solutions do not consider the health risks to users. Recent works have shed light on the importance of including health risk considerations in the development cycle of mHealth apps to prevent harm to their users. PROBLEM: The COVID-19 pandemic in Canada (and the world) is currently forcing physical distancing upon the general population. This new lifestyle has made the usage of mHealth applications more essential than ever, with a projected market forecast of 332 billion dollars by 2025. However, this surge in mHealth usage comes with possible risks to users’ health due to mHealth app problems (e.g., a wrong insulin dosage indication due to a UI error). OBJECTIVE: This work aims to raise awareness among mHealth developers of the importance of considering risks to users’ health within their development lifecycle. Moreover, it also aims to help mHealth developers with a proof-of-concept (POC) solution to understand, process, and identify possible health risks to users of mHealth apps based on users’ reviews. METHODS: We conducted a mixed-method study design. We developed a crawler to mine the negative reviews of two sample mHealth apps (my fitness, medisafe) from Google Play store users. For each mHealth app, we performed the following steps: the reviews were divided into two groups, before COVID-19 (reviews submitted before 15 Feb 2019) and during COVID-19 (reviews submitted from 16 Feb 2019 to Dec 2020); for each period, a Latent Dirichlet Allocation (LDA) topic model was used to identify clusters of reviews based on similar topics; the topics before and during COVID-19 were then compared, and significant differences in the frequency and severity of similar topics were identified. RESULTS: We successfully scraped, filtered, processed, and identified health-related topics in both qualitative and quantitative approaches. The results demonstrated the similarity between topics before and during COVID-19.
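A proof-of-concept of the review-splitting and LDA clustering steps can be put together with scikit-learn. The sketch below uses a handful of placeholder reviews and dates (not the mined Google Play data) and an assumed two-topic model:

```python
# Proof-of-concept sketch of the review split (before vs during COVID-19)
# followed by LDA topic modeling of each period.
from datetime import date
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    ("app crashes when I log my insulin dose", date(2018, 11, 2)),
    ("reminders stopped working after the update", date(2019, 1, 20)),
    ("cannot sync with my doctor during lockdown", date(2020, 4, 5)),
    ("wrong medication schedule shown, this is dangerous", date(2020, 7, 13)),
]
cutoff = date(2019, 2, 16)   # split used in the study: before vs during COVID-19
periods = {
    "before": [t for t, d in reviews if d < cutoff],
    "during": [t for t, d in reviews if d >= cutoff],
}

def top_words_per_topic(texts, n_topics=2, n_words=5):
    vec = CountVectorizer(stop_words="english")
    dtm = vec.fit_transform(texts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(dtm)
    vocab = vec.get_feature_names_out()
    return [[vocab[i] for i in comp.argsort()[-n_words:][::-1]]
            for comp in lda.components_]

for period, texts in periods.items():
    print(period, top_words_per_topic(texts))
```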

Keywords: natural language processing (NLP), topic modeling, mHealth, COVID-19, software engineering, telemedicine, health risks

Procedia PDF Downloads 130
1578 Effect of Different Factors on Temperature Profile and Performance of an Air Bubbling Fluidized Bed Gasifier for Rice Husk Gasification

Authors: Dharminder Singh, Sanjeev Yadav, Pravakar Mohanty

Abstract:

In this work, a study of the temperature profile in a pilot-scale air bubbling fluidized bed (ABFB) gasifier for rice husk gasification was carried out. The effects of different factors, such as multiple cyclones, the gas cooling system, the ventilate gas pipe length, and the catalyst, on the temperature profile were examined. The ABFB gasifier used in this study had two sections: a bed section and a freeboard section. River sand was used as the bed material, air as the gasification agent, and conventional charcoal as the start-up heating medium. Temperatures at different points in both sections of the ABFB gasifier were recorded at different ER (equivalence ratio) values; the ER value was changed by changing the feed rate of biomass (rice husk) while keeping the air flow rate constant during long-duration gasifier operation. The ABFB gasifier with a double cyclone, a gas cooling system, and a short ventilate gas pipe was found to be the optimal design, giving the temperature profile required for high gasification performance in long-duration operation. This optimal design was tested at different ER values, and an ER of 0.33 was found to be most favourable for long-duration operation (8 h of continuous operation), giving the highest carbon conversion efficiency. At the optimal ER of 0.33, the bed temperature was stable at 700 °C, the above-bed temperature was 628.63 °C, the bottom-of-freeboard temperature was 600 °C, the top-of-freeboard temperature was 517.5 °C, the gas temperature was 195 °C, and the flame temperature was 676 °C. Temperatures at all points showed fluctuations of 10-20 °C. The effect of a catalyst, i.e., dolomite (20% with the sand bed), on the temperature profile was also examined; at the optimal ER of 0.33, the bed temperature increased to 795 °C, the above-bed temperature decreased to 523 °C, the bottom-of-freeboard temperature decreased to 548 °C, the top-of-freeboard temperature decreased to 475 °C, the gas temperature was 220 °C, and the flame temperature increased to 703 °C. The increase in bed temperature leads to a higher flame temperature due to the presence of more hydrocarbons generated from greater tar cracking at higher temperature. It was also found that the use of dolomite with the sand bed eliminated agglomeration in the reactor at such a high bed temperature (795 °C).
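Since the ER was varied by changing the biomass feed rate at a constant air flow, the equivalence ratio is simply the actual air-to-fuel ratio divided by the stoichiometric one. A back-of-the-envelope sketch follows; the stoichiometric air requirement and flow values are assumptions, not the pilot plant's settings:

```python
# Back-of-the-envelope sketch of how the equivalence ratio (ER) changes when
# the rice husk feed rate is varied at a constant air flow rate.
stoich_air = 4.6          # kg air per kg rice husk for complete combustion (assumed)
air_flow = 60.0           # kg/h, held constant (assumed)

for feed_rate in (30.0, 39.5, 50.0):                  # kg/h biomass feed (assumed)
    er = (air_flow / feed_rate) / stoich_air          # actual / stoichiometric air
    print(f"feed = {feed_rate:5.1f} kg/h  ->  ER = {er:.2f}")
```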

Keywords: air bubbling fluidized bed gasifier, bed temperature, charcoal heating, dolomite, flame temperature, rice husk

Procedia PDF Downloads 278
1577 Amrita Bose-Einstein Condensate Solution Formed by Gold Nanoparticles Laser Fusion and Atmospheric Water Generation

Authors: Montree Bunruanses, Preecha Yupapin

Abstract:

In this work, the quantum material called Amrita (elixir) is made top-down from gold: 99% pure gold is fused into nanometer-sized particles with a laser and mixed with drinking water produced by an atmospheric water generation (AWG) system, which makes water from air. The high laser energy destroys the four natural force bindings (the gravitational, weak, electromagnetic, and strong coupling forces), finally yielding purified Bose-Einstein condensate (BEC) states. With this method, gold atoms in the form of spherical single crystals with a diameter of 30-50 nanometers are obtained and used. They are modulated (activated) with a frequency generator into various matrix structures mixed with AWG water to be used in the upstream conversion (quantum reversible) process, which can be applied to humans both internally and externally, by drinking or by applying it to the treated surfaces. Working on both space (body) and time (mind), one goes back to the origin and starts again from the coupling of space-time on both sides of time, with fusion (strong coupling force) and pushing out (Big Bang) at the equilibrium point (singularity), which occurs as strings and DNA with neutrinos as the coupling energy. There is no distortion (purification), which is the point where time and space have not yet been determined and there is infinite energy. Therefore, the upstream conversion is performed, reforming DNA to purify it. The use of Amrita is a method for people who cannot meditate (quantum meditation). It was applied in various cases, and the results show that Amrita can make the body and the mind return to their pure origins and begin the downstream process with the Big Bang movement, quantum communication in all dimensions, DNA reformation, frequency filtering, crystal body forming, broadband quantum communication networks, black hole forming, quantum consciousness, body and mind healing, etc.

Keywords: quantum materials, quantum meditation, quantum reversible, Bose-Einstein condensate

Procedia PDF Downloads 76
1576 Embodied Spirituality in Gestalt Therapy

Authors: Silvia Alaimo

Abstract:

This lecture brings to our attention the theme of spirituality within Gestalt therapy’s theoretical and clinical perspectives, a theme closely connected to the experiences of fertile emptiness and creative indifference. First of all, a necessary premise is the overcoming of traditional Western culture’s philosophical and religious misunderstandings, such as the dichotomy between spirituality and practical/material daily life, as well as the widespread secular perspective of classic psychology. Even fullness and emptiness have traditionally been associated with the concepts of being and not being. "There is only one way through which we can contact the deepest layers of our existence, rejuvenate our thinking and reach intuition (the harmony of thought and being): inner silence" (Perls). Therefore, the "fertile void" does not mean empty in itself, but rather a useful condition for every creative and responsible act, making room for a deeper dimension close to spirituality. Spirituality concerns questions about the meaning of existence, which lies beyond the concrete and literal dimension, looking for the essence of things and at the value of personal experience. Looking at the fundamentals of Gestalt epistemology, phenomenology, aesthetics, and the relationship, we can reach the heart of a therapeutic work that takes on spiritual contours and is based on an embodied (incarnate) dimension, through relational aesthetic knowledge (Spagnuolo Lobb), deep contact with each other, and the role of compassion and responsibility as criteria for recognizing the patient (Orange, 2013), rooted in the body. The aesthetic dimension, like the spiritual dimension with which it is often associated, is a subtle dimension: it is the dimension of the essence of things, of their "soul." In clinical practice, it implies that the relationship between therapist and patient is "in the absence of judgment," also called the "zero point of creative indifference," expressed by the ‘therapeutic mentality’. It consists in following with interest and authentic curiosity where the patient wants to go and supporting him in his intentionality of contact. It is a condition of pure and simple awareness, of full acceptance of "what is," a moment of detachment from one's own life in which one does not take oneself too seriously, a starting point for finding a center of balance and integration that leads to the creative act, to growth, and, as Perls would say, to the excitement and adventure of living.

Keywords: spirituality, bodily, embodied aesthetics, phenomenology, relationship

Procedia PDF Downloads 137
1575 Nanotechnology in Construction as a Building Security

Authors: Hanan Fayez Hussein

Abstract:

Due to increasing environmental challenges and security problems in the world, such as global warming, storms, and terrorism, humans have developed new technologies and new materials in order to manage daily life. Since providing physical and psychological security is one of the primary functions of architecture, a building must, in order to provide security, prevent unauthorized entry and harm to occupants and reduce the threat of attack by becoming a less attractive target, using new technologies such as nanotechnology, which has emerged as a major science and technology focus of the 21st century and will drive the next industrial revolution. Nanotechnology is the control of the properties of matter; it deals with structures 100 nanometers or smaller in at least one dimension and has wide application in various fields. The construction and architecture sectors were among the first to be identified as a promising application area for nanotechnology. The advantages of using nanomaterials in construction are enormous; they promise to heighten building security by utilizing the strength of building materials to make our buildings more secure and to create smart homes. Access barriers such as walls and windows could incorporate stronger materials benefiting from nano-reinforcement, utilizing nanotubes and nanocomposites to act as a protective cover. Carbon nanotubes, as one nanotechnology application, can be designed to be up to 250 times stronger than steel. Nano-enabled devices and materials offer both enhanced and, in some cases, completely new defence systems. In addition, adding a small amount of carbon nanoparticles to construction materials such as cement, concrete, wood, glass, gypsum, and steel can make these materials act as defence elements. This paper highlights the fact that nanotechnology can impact future global security and that a building's envelope can act as a defensive cover, resisting any threats that may attack it. It then focuses on the effect of nanotechnology on construction materials: with nanoadditives, concrete can obtain excellent mechanical, chemical, and physical properties using less material, acting as a precautionary shield for the building.

Keywords: nanomaterial, global warming, building security, smart homes

Procedia PDF Downloads 82
1574 The Political Economy of Green Trade in the Context of US-China Trade War: A Case Study of US Biofuels and Soybeans

Authors: Tonghua Li

Abstract:

Under the neoliberal corporate food regime, biofuels are a double-edged sword that exacerbates tensions between national food security and trade in green agricultural products. Biofuels have the potential to help achieve green sustainable development goals, but they threaten food security by exacerbating competition for land and changing global food trade patterns. The U.S.-China trade war complicates this debate. Under the influence of the different political and corporate coordination mechanisms in China and the US, trade disputes can have different impacts on sustainable agricultural practices. This paper develops an actor-centred ‘network governance framework’ focusing on trade in soybean and corn-based biofuels to explain how trade wars can change the actions of governmental and non-governmental actors in the context of oligopolistic competition and market concentration in agricultural trade. There is evidence that the US-China trade decoupling exacerbates the conflict between national security, free trade in agriculture, and the realities and needs of green and sustainable energy development. The US government's trade policies reflect concerns about China's relative gains, leading to a loss of trade profits and making it impossible for the parties involved to find a balance between the three objectives, consequently pushing the biofuels and soybean industries into a dilemma. In a setting that prioritizes national security and strategic interests, the government has replaced the dominant position of large agribusiness in the neoliberal food system, and the goal of environmental sustainability has been marginalized by high politics. In contrast, China faces tensions in the trade war between its food security self-sufficiency policy and liberal sustainable trade, but its state-capitalist model ensures policy coordination and coherence in trade diversion and supply chain adjustment. Despite ongoing raw material shortages and technological challenges, China remains committed to playing a role in global environmental governance and promoting green trade objectives.

Keywords: food security, green trade, biofuels, soybeans, US-China trade war

Procedia PDF Downloads 7
1573 The Mapping of Pastoral Area as an Ecological Basis for Beef Cattle in Pinrang Regency, South Sulawesi, Indonesia

Authors: Jasmal A. Syamsu, Muhammad Yusuf, Hikmah M. Ali, Mawardi A. Asja, Zulkharnaim

Abstract:

This study aimed to identify and map pasture as an ecological base for beef cattle. A survey was carried out from April to June 2016 in Suppa, Mattirobulu, Pinrang district, South Sulawesi province. The grazing area mapping was conducted in several stages: inputting and tracking data points in Google Earth Pro (version 7.1.4.1529); confirming the tracking lines visualized by satellite against the records at each point; importing the point and tracking data into ArcMap (ArcGIS version 10.1); processing DEM/SRTM data (S04E119) for the location of the grazing areas; creating a contour map (5 m interval); mapping the land slope; and producing a land cover map. Analysis of land cover, particularly the state of the vegetation, was done through the NDVI (Normalized Difference Vegetation Index) procedure using Landsat-8 imagery. The results showed that the topography of the grazing areas consists of hills and some sloping and flat surfaces, with elevations varying from 74 to 145 m above sea level (asl), while superior grass and legume require altitudes of up to 143-159 m asl. Slopes varied between 0 and >40% and were dominated by slopes of 0-15%, in line with the maximum slope of 15% suitable for pasture. The range of NDVI values from the pasture image analysis was between 0.1 and 0.27. The vegetation cover of the pasture land fell into the low vegetation density category; 70% of the land was land for cattle grazing, while the remaining approximately 30% consisted of groves and forest, including water plants, providing shelter for the cattle during heat and a drinking water supply. Seven graminae species and 5 legume species were dominant in the region. Proportionally, the graminae class dominated at 75.6%, legumes at 22.1%, and the remaining 2.3% were other plants and trees growing in the region. The dominant weed species in the region were Chromolaena odorata and Lantana camara; in addition, there were 6 types of ground-cover plants not included as forage.
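The NDVI used for the land-cover analysis is the normalized difference of the near-infrared and red reflectances, which for Landsat-8 are bands 5 and 4. A small sketch with synthetic reflectance values (not the actual scene), thresholded to the 0.1-0.27 NDVI range reported for the pasture:

```python
# Illustrative sketch of the NDVI computation on synthetic reflectance arrays.
import numpy as np

red = np.array([[0.08, 0.10], [0.12, 0.09]])    # band 4 surface reflectance (placeholder)
nir = np.array([[0.11, 0.13], [0.14, 0.12]])    # band 5 surface reflectance (placeholder)

ndvi = (nir - red) / (nir + red + 1e-12)         # small epsilon avoids division by zero
pasture_mask = (ndvi >= 0.1) & (ndvi <= 0.27)    # NDVI range reported for pasture
print(ndvi.round(2))
print(f"pasture pixels: {pasture_mask.sum()} of {ndvi.size}")
```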

Keywords: pastoral, ecology, mapping, beef cattle

Procedia PDF Downloads 353
1572 Critical Assessment of Herbal Medicine Usage and Efficacy by Pharmacy Students

Authors: Anton V. Dolzhenko, Tahir Mehmood Khan

Abstract:

The ability to make evidence-based decisions is a critically important skill for practicing pharmacists, and its development is incorporated into the pharmacy curriculum. In our study, we aimed to estimate the perception of pharmacy students regarding herbal medicines and their ability to assess information on herbal medicines professionally. The current Monash University curriculum in Pharmacy does not provide comprehensive study material on herbal medicines, and students must find information on their own, assess its quality, and make a professional decision. In the Pharmacy course, students are trained to apply this process to conventional medicines. In our survey of 93 undergraduate students from years 1-4 of the Pharmacy course at Monash University Malaysia, we found that students’ views on herbal medicines are sometimes associated with common beliefs, which affect students’ ability to draw evidence-based conclusions regarding the therapeutic potential of herbal medicines. The use of herbal medicines is widespread, and 95.7% of the participating students had prior experience of using them. On a scale of 1 to 10, students rated the importance of acquiring herbal medicine knowledge as 8.1±1.6. More than half (54.9%) agreed that herbal medicines have the same clinical significance as conventional medicines in treating diseases. Even more students agreed that healthcare settings should give equal importance to both conventional and herbal medicine use (80.6%) and that herbal medicines should comply with strict quality control procedures like conventional medicines (84.9%). The latter statement also indicates that students take the safety issues associated with the use of herbal medicines seriously. This was further confirmed by 94.6% of students saying that safety and toxicity information on herbs and spices is important to pharmacists and 95.7% of students admitting that drug-herb interactions may affect the therapeutic outcome. Only 36.5% of students consider herbal medicines a safer alternative to conventional medicines. The students obtain information on herbal medicines from various sources and media. Most of the students (81.7%) obtain information on herbal medicines from the Internet, and only 20.4% mentioned lectures, workshops, or seminars as a source of such information. Therefore, we can conclude that students who have attained skills in the critical assessment of the therapeutic properties of conventional medicines have the potential to use these skills for evidence-based decisions regarding herbal medicines.

Keywords: evidence-based decision, pharmacy education, student perception, traditional medicines

Procedia PDF Downloads 282
1571 Computer Modeling and Plant-Wide Dynamic Simulation for Industrial Flare Minimization

Authors: Sujing Wang, Song Wang, Jian Zhang, Qiang Xu

Abstract:

Flaring emissions during abnormal operating conditions such as plant start-ups, shut-downs, and upsets in the chemical process industries (CPI) are usually significant. Flare minimization can help CPI plants save raw material and energy and improve local environmental sustainability. In this paper, a systematic methodology based on plant-wide dynamic simulation is presented for CPI plant flare minimization under abnormal operating conditions. Since off-specification emission sources are inevitable during abnormal operating conditions, to significantly reduce flaring emissions in a CPI plant they must be either recycled to the upstream process for online reuse or stored somewhere temporarily for future reprocessing, once the CPI plant manufacturing returns to stable operation. Thus, the off-spec products can be reused instead of being flared. This can be achieved through the identification of viable design and operational strategies during normal and abnormal operations through plant-wide dynamic scheduling, simulation, and optimization. The proposed study includes three stages of simulation work: (i) developing and validating a steady-state model of a CPI plant; (ii) transitioning the obtained steady-state plant model to the dynamic modeling environment, and refining and validating the plant dynamic model; and (iii) developing flare minimization strategies for abnormal operating conditions of a CPI plant via the validated plant-wide dynamic model. This cost-effective methodology has two main merits: (i) it employs large-scale dynamic modeling and simulation for industrial flare minimization, involving various unit models for modeling hundreds of CPI plant facilities; (ii) it deals with critical abnormal operating conditions of CPI plants such as plant start-up and shut-down. Two virtual case studies on flare minimization for the start-up operation (over 50% emission savings) and shut-down operation (over 70% emission savings) of an ethylene plant have been employed to demonstrate the efficacy of the proposed study.

Keywords: flare minimization, large-scale modeling and simulation, plant shut-down, plant start-up

Procedia PDF Downloads 320
1570 Environmental Performance Measurement for Network-Level Pavement Management

Authors: Jessica Achebe, Susan Tighe

Abstract:

The recent Canadian infrastructure report card reveals the unhealthy state of municipal infrastructure and the intensified challenge faced by municipalities to maintain adequate infrastructure performance thresholds and meet users' required service levels. For a road agency, the large funding gap is compounded by growing concern over the environmental repercussions of road construction, operation and maintenance activities. Reducing material consumption and greenhouse gas emissions when maintaining and rehabilitating road networks can deliver added benefits, including improved life-cycle performance of pavements, reduced climate change impacts and human health effects due to less air pollution, improved productivity due to optimal allocation of resources, and reduced road user costs. Incorporating environmental sustainability measures into pavement management is a widely cited and studied solution. However, measuring the environmental performance of a road network is still far from common practice in road network management, and explicit agency-wide environmental sustainability or sustainable maintenance specifications are missing. To address this challenge, the present research focuses on the environmental sustainability performance of network-level pavement management. The ultimate goal is to develop a framework to incorporate environmental sustainability into pavement management systems for network-level maintenance programming. To achieve this goal, this study reviewed previous studies that employed environmental performance measures, as well as the suitability of environmental performance indicators for evaluating the sustainability of network-level pavement maintenance strategies. Through an industry practice survey, this paper also provides a brief overview of pavement managers' motivations and barriers to making more sustainable decisions, and of the data needed to support network-level environmental sustainability. The trends in network-level sustainable pavement management are also presented, existing gaps are highlighted, and ideas are proposed for sustainable network-level pavement management.

Keywords: pavement management, sustainability, network-level evaluation, environmental measures

Procedia PDF Downloads 211
1569 The Effect of Implant Design on the Height of Inter-Implant Bone Crest: A 10-Year Retrospective Study of the Astra Tech Implant and Branemark Implant

Authors: Daeung Jung

Abstract:

Background: For patients with missing teeth, multiple-implant restoration is widely used and often unavoidable. To increase its survival rate, it is important to understand the influence of different implant designs on inter-implant crestal bone resorption. Several implant systems are designed to minimize loss of crestal bone; the Astra Tech and Brånemark implants are two of them. Aim/Hypothesis: The aim of this 10-year study was to compare the height of the inter-implant bone crest in two implant systems: the Astra Tech and the Brånemark implant system. Material and Methods: In this retrospective study, 40 consecutively treated patients were included: 23 patients with 30 sites for the Astra Tech system and 17 patients with 20 sites for the Brånemark system. The implant restorations consisted of splinted crowns in partially edentulous patients. Radiographs were taken immediately after first-stage surgery, at impression making, at prosthesis delivery, and annually after loading. The lateral distance from implant to bone crest and the inter-implant distance were measured, and crestal bone height was measured from the implant shoulder to the first bone contact. Calibrations were performed in ImageJ using the known thread-pitch distance for vertical measurements and the known diameter of the abutment or fixture for horizontal measurements. Results: After 10 years, patients treated with the Astra Tech implant system demonstrated less inter-implant crestal bone resorption when implants were 3 mm or less apart. For implants more than 3 mm apart, however, there was no statistically significant difference in crestal bone loss between the two systems. Conclusion and clinical implications: For partially edentulous patients planned to receive more than two implants, the inter-implant distance is one of the most important factors to consider. If sufficient inter-implant distance cannot be ensured, implants with a smaller micro-gap at the fixture-abutment junction, a less traumatic second-stage surgical approach, and an adequate surface topography would be appropriate options to minimize inter-implant crestal bone resorption.
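
For illustration only, the sketch below reproduces the calibration arithmetic described above (it is not the authors' analysis script, and the thread-pitch and pixel values are hypothetical): a feature of known physical length on the fixture provides a millimetres-per-pixel scale, which then converts the shoulder-to-bone pixel distance into crestal bone height.

```python
# Minimal sketch of the radiographic calibration described above (illustrative
# only; not the authors' measurement script). A feature of known length, e.g.
# the thread-pitch distance, gives a mm-per-pixel scale that converts pixel
# distances measured in ImageJ into millimetres.

def mm_per_pixel(known_length_mm: float, measured_pixels: float) -> float:
    """Scale factor derived from a feature of known physical length."""
    return known_length_mm / measured_pixels

def crestal_bone_height_mm(shoulder_to_bone_px: float, scale_mm_per_px: float) -> float:
    """Vertical distance from the implant shoulder to the first bone contact."""
    return shoulder_to_bone_px * scale_mm_per_px

# Hypothetical example: a 0.6 mm thread pitch spanning 10 pixels on the image,
# and an 18-pixel distance from the implant shoulder to the first bone contact.
scale = mm_per_pixel(known_length_mm=0.6, measured_pixels=10.0)
print(f"{crestal_bone_height_mm(18.0, scale):.2f} mm")  # 1.08 mm
```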

Keywords: implant design, crestal bone loss, inter-implant distance, 10-year retrospective study

Procedia PDF Downloads 166
1568 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink

Authors: Sanjay Rathee, Arti Kashyap

Abstract:

Extraction of useful information from large datasets is one of the most important research problems, and association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. Many algorithms exist to find frequent patterns, but the Apriori algorithm remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine Apriori variants exist, but the massive amount of data available these days exceeds the capacity of a single machine. Therefore, to meet the demands of this ever-growing volume of data, there is a need for a multi-machine Apriori algorithm. For such distributed applications, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source software frameworks with a MapReduce approach for distributed storage and distributed processing of huge datasets using clusters built from commodity hardware. However, the heavy disk I/O at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years. Among them, two platforms, namely Spark and Flink, have attracted a lot of attention because of their built-in support for distributed computations. Earlier, we proposed a Reduced-Apriori algorithm on the Spark platform which outperforms parallel Apriori, first because of the use of Spark and second because of the improvement we proposed to standard Apriori. This work is therefore a natural sequel and targets implementing, testing and benchmarking Apriori, Reduced-Apriori and our new algorithm, ReducedAll-Apriori, on Apache Flink, comparing them with the Spark implementation. Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks of MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelining-based structure allows the next iteration to start as soon as partial results of the earlier iteration are available, so there is no need to wait for all reducer results before starting the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency and scalability of the Apriori and RA-Apriori algorithms on Flink.
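
To make the iterative structure concrete, the minimal single-machine Apriori sketch below (illustrative only; not the authors' RA-Apriori or their Flink/Spark implementations) shows the candidate-generation and support-counting loop that distributed versions parallelize with map and reduce steps at each iteration.

```python
# Minimal single-machine Apriori sketch (illustrative only; not the authors'
# RA-Apriori or the Flink/Spark implementations discussed above). Each pass
# generates candidate k-itemsets from the frequent (k-1)-itemsets and counts
# their support; these are the steps that MapReduce-style platforms distribute.
from collections import Counter

def apriori(transactions, min_support):
    transactions = [frozenset(t) for t in transactions]
    # Pass 1: frequent single items
    counts = Counter(item for t in transactions for item in t)
    frequent = {frozenset([i]) for i, c in counts.items() if c >= min_support}
    all_frequent, k = set(frequent), 2
    while frequent:
        # Candidate generation: join pairs of frequent (k-1)-itemsets
        candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
        # Support counting (the step mapped over data partitions in MapReduce)
        counts = Counter(c for c in candidates for t in transactions if c <= t)
        frequent = {c for c, n in counts.items() if n >= min_support}
        all_frequent |= frequent
        k += 1
    return all_frequent

print(apriori([{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}], min_support=2))
```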

Keywords: Apriori, Apache Flink, MapReduce, Spark, Hadoop, R-Apriori, frequent itemset mining

Procedia PDF Downloads 294
1567 Depictions of Human Cannibalism and the Challenge They Pose to the Understanding of Animal Rights

Authors: Desmond F. Bellamy

Abstract:

Discourses about animal rights usually assume an ontological abyss between human and animal. This supposition of non-animality allows us to utilise and exploit non-humans, particularly those with commercial value, with little regard for their rights or interests. We can and do confine them, inflict painful treatments such as castration and branding, and slaughter them at an age determined only by financial considerations. This paper explores the way images and texts depicting human cannibalism reflect this deprivation of rights back onto our species and examines how this offers new perspectives on our granting or withholding of rights to farmed animals. The animals we eat – sheep, pigs, cows, chickens and a small handful of other species – are during processing de-animalised, turned into commodities, and made unrecognisable as formerly living beings. To do the same to a human requires the cannibal to enact another step – humans must first be considered as animals before they can be commodified or de-animalised. Different iterations of cannibalism in a selection of fiction and non-fiction texts will be considered: survivalism (necessitated by catastrophe or dystopian social collapse), the primitive savage of colonial discourses, and the inhuman psychopath. Each type of cannibalism shows alternative ways humans can be animalised and thereby dispossessed of both their human and animal rights. Human rights, summarised in the UN Universal Declaration of Human Rights as ‘life, liberty, and security of person’ are stubbornly denied to many humans, and are refused to virtually all farmed non-humans. How might this paradigm be transformed by seeing the animal victim replaced by an animalised human? People are fascinated as well as repulsed by cannibalism, as demonstrated by the upsurge of films on the subject in the last few decades. Cannibalism is, at its most basic, about envisaging and treating humans as objects: meat. It is on the dinner plate that the abyss between human and ‘animal’ is most challenged. We grasp at a conscious level that we are a species of animal and may become, if in the wrong place (e.g., shark-infested water), ‘just food’. Culturally, however, strong traditions insist that humans are much more than ‘just meat’ and deserve a better fate than torment and death. The billions of animals on death row awaiting human consumption would ask the same if they could. Depictions of cannibalism demonstrate in graphic ways that humans are animals, made of meat and that we can also be butchered and eaten. These depictions of us as having the same fleshiness as non-human animals reminds us that they have the same capacities for pain and pleasure as we do. Depictions of cannibalism, therefore, unconsciously aid in deconstructing the human/animal binary and give a unique glimpse into the often unnoticed repudiation of animal rights.

Keywords: animal rights, cannibalism, human/animal binary, objectification

Procedia PDF Downloads 138
1566 Sizing of Drying Processes to Optimize the Conservation of Nuclear Power Plants during Shutdown

Authors: Assabo Mohamed, Bile Mohamed, Ali Farah, Isman Souleiman, Olga Alos Ramos, Marie Cadet

Abstract:

The life of a nuclear power plant is regularly punctuated by short or long outages to carry out maintenance operations and/or nuclear fuel reloading. During these outage periods, it is essential to preserve all secondary-circuit equipment to avoid the onset of corrosion, as this circuit is one of the main components of a nuclear reactor. Indeed, proper conservation of materials during the shutdown of a nuclear unit improves circuit performance and considerably reduces maintenance costs. This study is part of the optimization of the dry preservation of equipment in the water station of the nuclear reactor. The main objective is to provide tools to guide the Electricity Production Nuclear Centre (EPNC) towards the criteria required by the chemical specifications for conservation of materials. A theoretical model of the drying of the water-station exchangers is developed with the Engineering Equation Solver (EES) software. It is used to size the air-flow requirements and the air quality needed for dry conservation of the equipment. The model is based on the heat and mass transfer governing the drying operation. A parametric study is conducted to determine the influence of the aerothermal factors involved in the drying operation. The results show that the success of dry conservation of the secondary-circuit equipment of a nuclear reactor depends strongly on the draining, the quality of the drying air, and the flow of air injected into the secondary circuit. Finally, a theoretical case study performed in EES highlights the importance of mastering the entire system in order to balance the air distribution and provide each exchanger with the optimum flow for its characteristics. From these results, recommendations can be formulated for nuclear power plants to optimize drying practices and achieve good performance in the dry conservation of water-station equipment during outages.
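
As a simple illustration of the sizing reasoning above, the sketch below applies a basic drying mass balance (an assumed simplification, not the authors' EES model, with hypothetical numbers): the moisture removal rate equals the dry-air mass flow multiplied by the rise in humidity ratio across the circuit, and the drying time follows from the residual water left after draining.

```python
# Simple drying mass-balance sketch (an assumed simplification, not the
# authors' EES model). Dry air injected into the circuit picks up moisture;
# the removal rate is the dry-air mass flow times the rise in humidity ratio
# between inlet and outlet, and the drying time follows from the residual
# water left in the equipment after draining.

def water_removal_rate(air_flow_kg_s: float, w_in: float, w_out: float) -> float:
    """Moisture removal rate [kg water/s] for a given dry-air mass flow [kg/s]
    and inlet/outlet humidity ratios [kg water / kg dry air]."""
    return air_flow_kg_s * (w_out - w_in)

def drying_time_hours(residual_water_kg: float, removal_rate_kg_s: float) -> float:
    """Time needed to remove the residual water at a constant removal rate."""
    return residual_water_kg / removal_rate_kg_s / 3600.0

# Hypothetical numbers: 0.5 kg/s of dry air entering at w = 0.002 and leaving
# near saturation at w = 0.010 kg/kg, with 50 kg of residual water to remove.
rate = water_removal_rate(0.5, 0.002, 0.010)
print(f"{drying_time_hours(50.0, rate):.1f} h")  # about 3.5 h
```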

Keywords: dry conservation, optimization, sizing, water station

Procedia PDF Downloads 262
1565 A Review of Critical Framework Assessment Matrices for Data Analysis of the Impact of Overheating in Buildings

Authors: Martin Adlington, Boris Ceranic, Sally Shazhad

Abstract:

In an effort to reduce carbon emissions, changes in UK regulations, such as Part L, Conservation of fuel and power, dictate improved thermal insulation and enhanced air tightness. These changes are a direct response to the UK Government's commitment to achieving its carbon targets under the Climate Change Act 2008, with the goal of reducing emissions by at least 80% by 2050. Factors such as climate change are likely to exacerbate the problem of overheating, as this phenomenon is expected to increase the frequency of extreme heat events exemplified by stagnant air masses and successive high minimum overnight temperatures. However, climate change is not the only concern relevant to overheating: as research indicates, location, design, occupancy, construction type and layout can also play a part. Because of this growing problem, research shows that health effects on building occupants could become an issue. Increases in temperature can have a direct impact on the human body's ability to maintain thermoregulation, and therefore heat-related illnesses such as heat stroke, heat exhaustion and heat syncope, and even death, can result. This review paper presents a comprehensive evaluation of the current literature on the causes and health effects of overheating in buildings and examines the differing assessment approaches applied to measure the concept. An overview of the topic is presented first, followed by an examination of overheating research from the last decade. These papers form the body of the article and are grouped into a framework matrix summarizing the source material and identifying the differing methods of analysing overheating. Cross-case evaluation has identified systematic relationships between different variables within the matrix. Key areas of focus include building type and country, occupant behaviour, health effects, simulation tools, and computational methods.

Keywords: overheating, climate change, thermal comfort, health

Procedia PDF Downloads 351
1564 Application of Neuroscience in Aligning Instructional Design to Student Learning Style

Authors: Jayati Bhattacharjee

Abstract:

Teaching is a very dynamic profession. Teaching science is as challenging as learning the subject, if not more so; consider, for instance, the teaching of chemistry: moving from the introductory concepts of subatomic particles, to atoms of elements and their symbols, to presenting chemical equations and so forth, is a challenge on both sides of the teaching-learning equation. This paper combines the neuroscience of learning and memory with knowledge of learning styles (VAK) and presents an effective tool for the teacher to authenticate learning. The working memory model, whose visuo-spatial sketchpad, central executive and phonological loop transform short-term memory into long-term memory, actually supports the psychological theory of learning styles, i.e., Visual-Auditory-Kinesthetic. A closer examination of David Kolb's learning model suggests that learning requires abilities that are polar opposites, and that the learner must continually choose which set of learning abilities he or she will use in a specific learning situation. In grasping experience, some of us perceive new information through experiencing the concrete, tangible, felt qualities of the world, relying on our senses and immersing ourselves in concrete reality. Others tend to perceive, grasp, or take hold of new information through symbolic representation or abstract conceptualization: thinking about, analyzing, or systematically planning rather than using sensation as a guide. Similarly, in transforming or processing experience, some of us tend to carefully watch others who are involved in the experience and reflect on what happens, while others choose to jump right in and start doing things. The watchers favor reflective observation, while the doers favor active experimentation. Any lesson plan can be based on the model of prescriptive design: C + O = M (C: instructional conditions; O: instructional outcomes; M: instructional method). The desired outcomes and conditions are independent variables, whereas the instructional method is dependent and can therefore be planned and tailored to maximize the learning outcome. Assessment for learning, rather than of learning, can encourage learners, build confidence and hope, and go a long way towards replacing, with a human touch, the anxiety and hopelessness that students experience while learning science. This model has been applied in teaching chemistry to high school students as well as in workshops with teachers, and the responses received have demonstrated the desired results.

Keywords: working memory model, learning style, prescriptive design, assessment for learning

Procedia PDF Downloads 351