Search results for: Ni–base superalloy ŽS6K
1714 Microstructure Analysis of Ti-6Al-4V Friction Stir Welded Joints
Authors: P. Leo, E. Cerri, L. Fratini, G. Buffa
Abstract:
The friction stir welding process uses a non-consumable rotating tool and a force on the tool normal to the plane of the sheets to generate frictional heat. The heat and the stirring action of the tool create a bond between the two sheets without melting the base metal. Because it is a solid-state process, friction stir welding limits the occurrence of defects caused by gas trapped in a molten pool and avoids the adverse metallurgical transformations associated with a change of phase. The industrial importance of the Ti-6Al-4V alloy is well known: it provides an exceptionally good balance of strength, ductility, fatigue and fracture properties, together with good corrosion resistance and metallurgical stability. In this paper, the authors analyze the microstructure of friction stir welded joints of Ti-6Al-4V processed at the same travel speed (35 mm/min) but at different rotation speeds (300-500 rpm). The microstructure of the base material (BM), as revealed by both optical and scanning electron microscopy, is not homogeneous. It is characterized by a distorted α/β lamellar microstructure together with smashed zones of fragmented β layers and retained β grain-boundary phase. The BM was welded in the as-received state, without any prior heat treatment. The microstructure of the transverse and longitudinal sections of the joints is likewise inhomogeneous. Close to the top of the weld cross-sections, a much finer microstructure than the initial condition was observed, while in the center of the joints the microstructure is less refined. Along the longitudinal sections, the microstructure is characterized by equiaxed grains and lamellae; both the length and the area fraction of the lamellae increase with distance from the longitudinal axis. The hardness of the joints is higher than that of the BM.
As the process temperature increases, the average microhardness slightly decreases. Keywords: friction stir welding, microhardness, microstructure, Ti-6Al-4V
Procedia PDF Downloads 381
1713 Power Energy Management for a Grid-Connected PV System Using Rule-Based Fuzzy Logic
Authors: Nousheen Hashmi, Shoab Ahmad Khan
Abstract:
Active collaboration among green energy sources and the load demand leads to serious issues of power quality and stability. The growing number of green energy resources and distributed generators requires new operating strategies to maintain power stability between the green energy resources and the micro-grid/utility grid. This paper presents a novel technique for power management in a grid-connected photovoltaic system with energy storage, under a set of constraints including weather conditions, load-shedding hours and peak-pricing hours, using a rule-based fuzzy smart-grid controller to schedule the power coming from multiple sources (photovoltaic, grid, battery). The technique fuzzifies all the inputs and establishes the fuzzy rule set before defuzzification of the outputs. Simulations are run over a 24-hour period and a rule-based power scheduler is developed. The proposed fuzzy control strategy is able to sense continuous fluctuations in photovoltaic generation, load demand, grid availability (load-shedding patterns) and battery state of charge in order to make correct and quick decisions. The suggested fuzzy rule-based scheduler operates well with vague inputs, so it does not require an exact numerical model and can handle nonlinearity. The technique provides a framework that can be extended to handle multiple special cases for optimized operation of the system. Keywords: photovoltaic, power, fuzzy logic, distributed generators, state of charge, load shedding, membership functions
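As a rough sketch of how such a rule-based fuzzy scheduler can pick a source, the following minimal example fuzzifies PV output and battery state of charge and fires three competing rules; the membership breakpoints and the rule base are illustrative assumptions, not the authors' actual controller:

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def schedule(pv_kw, soc_pct, load_shedding):
    """Serve the load from the source whose rule fires strongest.
    Illustrative rule base: use PV when its output is high; use the
    battery when PV is weak but the state of charge is high; fall back
    to the grid when it is available (no load shedding)."""
    pv_high = tri(pv_kw, 1.0, 5.0, 9.0)         # "PV output is high"
    soc_high = tri(soc_pct, 40.0, 80.0, 120.0)  # "battery charge is high"
    firing = {
        "pv": pv_high,
        "battery": min(1.0 - pv_high, soc_high),
        "grid": 0.0 if load_shedding else min(1.0 - pv_high, 1.0 - soc_high),
    }
    return max(firing, key=firing.get)

print(schedule(5.0, 50.0, False))  # strong sun -> "pv"
```

A real controller would output power fractions per source rather than a single winner, but the min/max rule-firing pattern is the same.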
Procedia PDF Downloads 480
1712 Design of a Fuzzy Expert System for the Impact of Diabetes Mellitus on Cardiac and Renal Impediments
Authors: E. Rama Devi Jothilingam
Abstract:
Diabetes mellitus is now one of the most common non-communicable diseases globally. India leads the world with the largest number of diabetic subjects, earning the title of "diabetes capital of the world". In order to reduce the mortality rate, a fuzzy expert system is designed to predict the severity of the cardiac and renal problems of diabetic patients using fuzzy logic. Since uncertainty is inherent in medicine, fuzzy logic is used in this work to handle the inherent fuzziness of linguistic concepts and the uncertain status of diabetes mellitus, which is a prime cause of cardiac arrest and renal failure. The controllable risk factors (blood sugar, insulin, ketones, lipids, obesity, blood pressure and protein/creatinine ratio) are taken as input parameters, and the stages of cardiac disease (SOC) and the stages of renal disease (SORD) as output parameters. Triangular membership functions are used to model the input and output parameters. The rule base for the proposed expert system is constructed from the knowledge of medical experts. A Mamdani inference engine infers information from the rule base to make the major diagnostic decisions. Mean-of-maximum defuzzification is used to obtain a crisp control action that best represents the possibility distribution of the inferred fuzzy control action. The proposed system also classifies patients into high-risk and low-risk groups using fuzzy c-means clustering, so that high-risk patients can be treated immediately. The system is validated in MATLAB and serves as a tracking system with good accuracy and robustness. Keywords: diabetes mellitus, fuzzy expert system, Mamdani, MATLAB
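The inference chain the abstract describes (triangular memberships, Mamdani min/max inference, mean-of-maximum defuzzification) can be sketched for a single input and a single output. The membership breakpoints, the two-rule base and the 0-10 severity scale below are illustrative assumptions, not the paper's actual rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def cardiac_severity(sugar_mg_dl):
    """Mamdani inference with two rules and MOM defuzzification.
    Rule 1: IF blood sugar is high   THEN severity is high.
    Rule 2: IF blood sugar is normal THEN severity is low."""
    w_high = tri(sugar_mg_dl, 120.0, 200.0, 280.0)  # "sugar is high"
    w_norm = tri(sugar_mg_dl, 60.0, 100.0, 140.0)   # "sugar is normal"
    xs = [i / 10.0 for i in range(101)]             # severity universe 0..10
    # Mamdani: clip each output set by its rule strength, aggregate with max
    agg = [max(min(w_high, tri(x, 5.0, 10.0, 15.0)),
               min(w_norm, tri(x, -5.0, 0.0, 5.0))) for x in xs]
    peak = max(agg)
    maxima = [x for x, u in zip(xs, agg) if u == peak]
    return sum(maxima) / len(maxima)                # mean of maximum
```

With seven inputs, as in the paper, each rule simply takes the min over all its antecedent memberships before clipping the output set.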
Procedia PDF Downloads 294
1711 Quantification of Factors Contributing to Wave-In-Deck on Fixed Jacket Platforms
Authors: C. Y. Ng, A. M. Johan, A. E. Kajuputra
Abstract:
The wave-in-deck phenomenon on fixed jacket platforms in shallow water has been reported as a notable risk to the workability and reliability of the platform. Reduction in reservoir pressure, due to the extraction of hydrocarbon over an extended period of time, causes seabed subsidence. A platform experiencing subsidence loses air gap, which eventually allows the waves to strike the bottom deck. The wave-in-deck impact generates additional loads on the structure and therefore increases the moment arms. Larger moment arms promote overturning instability, which in turn decreases the reserve strength ratio (RSR) of the structure. The mechanics of wave-in-deck, however, are still not well understood and have not been fully incorporated into the design codes and standards; hence, it is necessary to revisit the current codes and standards for platform design optimization. The aim of this study is to evaluate the effect of wave-in-deck on the RSR of four-legged jacket platforms in Malaysia. Base shear values, with calibration and modification of the wave characteristics, were obtained using SESAM GeniE; correspondingly, pushover analysis was conducted in USFOS to retrieve the RSR. The effects of the contributing factors, i.e. wave height, wave period and water depth, on the RSR and base shear values were analyzed and discussed. This research is important for optimizing the design life of existing and aging offshore structures. Its outcomes are expected to provide a proper evaluation of wave-in-deck mechanics and, in turn, contribute to current mitigation strategies for managing the issue. Keywords: wave-in-deck loads, wave effects, water depth, fixed jacket platforms
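The RSR relationship the abstract relies on can be shown with a toy calculation: RSR is the ultimate base shear sustained in the pushover analysis divided by the design-level base shear, so an added wave-in-deck load (a larger design base shear on the same structure) directly lowers the RSR. The pushover curve and load values below are made-up numbers for illustration only:

```python
def reserve_strength_ratio(pushover_base_shear_mn, design_base_shear_mn):
    """RSR = ultimate (collapse) base shear from the pushover curve
    divided by the design-level (e.g. 100-year storm) base shear."""
    return max(pushover_base_shear_mn) / design_base_shear_mn

# Hypothetical pushover curve: base shear (MN) at successive load steps
curve = [0.0, 12.5, 21.0, 26.3, 24.1]
rsr_no_wid = reserve_strength_ratio(curve, 11.0)    # deck stays dry
rsr_with_wid = reserve_strength_ratio(curve, 14.6)  # same crest + wave-in-deck load
```

Here the deck impact raises the design base shear from 11.0 to 14.6 MN and the RSR drops accordingly, which is the instability trend the study quantifies.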
Procedia PDF Downloads 427
1710 Potential Impacts of Warming Climate on Contributions of Runoff Components from Two Catchments of Upper Indus Basin, Karakoram, Pakistan
Authors: Syed Hammad Ali, Rijan Bhakta Kayastha, Ahuti Shrestha, Iram Bano
Abstract:
The hydrology of the Upper Indus basin is not well understood, owing to the intricacies of its climate and geography and the scarcity of data above 5000 meters above sea level, where most of the precipitation falls as snow. The main objective of this study is to quantify the contributions of the different components of runoff in the Upper Indus basin. To achieve this goal, a modified positive degree-day model (MPDDM) was used to simulate runoff and investigate its components in two catchments of the Upper Indus basin, the Hunza and Gilgit River basins. These two catchments were selected because of their different glacier coverage, contrasting area distribution at high altitudes and significant impact on Upper Indus River flow. The runoff components, snow-ice melt and rainfall-base flow, were identified by the model. The simulation results show that the MPDDM achieves good agreement between observed and modeled runoff for both catchments, and that the snow-ice contribution depends mainly on the catchment characteristics and the glacierized area. For the Gilgit River basin, the largest contributor to runoff is rainfall-base flow, whereas a large contribution of snow-ice melt is observed in the Hunza River basin due to its large glacierized fraction. This research will not only contribute to a better understanding of the impacts of climate change on the hydrological response of the Upper Indus, but will also provide guidance for the development of hydropower potential and water resources management, and offer a possible evaluation of future water quantity and availability in these catchments. Keywords: future discharge projection, positive degree day, regional climate model, water resource management
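The core of any positive degree-day scheme is the relation melt = DDF * max(T, 0), with the degree-day factor (DDF) calibrated separately for snow and ice. The sketch below shows that separation of runoff into a melt component and a rainfall component; the DDF value is illustrative, not the one calibrated in this study:

```python
def pdd_melt_mm(daily_mean_t_c, ddf_mm_per_deg_day=7.0):
    """Positive degree-day melt (mm w.e. per day): melt = DDF * max(T, 0).
    The DDF here is an illustrative placeholder; real values are
    calibrated per surface (snow vs. ice) and per catchment."""
    return ddf_mm_per_deg_day * max(daily_mean_t_c, 0.0)

def runoff_components_mm(daily_t_c, daily_rain_mm, ddf=7.0):
    """Split total runoff depth into a melt part and a rainfall part."""
    melt = sum(pdd_melt_mm(t, ddf) for t in daily_t_c)
    rain = sum(daily_rain_mm)
    return melt, rain
```

In a glacierized basin like the Hunza, the first term dominates; in the less glacierized Gilgit, the second does, which is the contrast the study reports.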
Procedia PDF Downloads 349
1709 Optimization of Assembly and Welding of Complex 3D Structures on the Base of Modeling with Use of Finite Elements Method
Authors: M. N. Zelenin, V. S. Mikhailov, R. P. Zhivotovsky
Abstract:
It is known that residual welding deformations degrade the processability and operational quality of welded structures, complicating their assembly and reducing their strength. Therefore, selecting an optimal technology that ensures minimum welding deformations is one of the main goals in developing a manufacturing technology for welded structures. Over the years, JSC SSTC has been developing a theory for the estimation of welding deformations, along with practical measures for reducing and compensating such deformations during the welding process. For a long time, a methodology based on analytic dependences was used; it allowed defining the volumetric changes of the metal due to welding heating and subsequent cooling. However, the dependences for determining the structural deformations arising from these volumetric changes in the weld area allowed calculations only for simple structures, such as units, flat sections and sections with small curvature. For complex 3D structures, estimates based on analytic dependences gave significant errors. To eliminate this shortcoming, it was suggested to use the finite element method to solve the deformation problem. One first calculates the longitudinal and transverse shortenings of the welded joints using the method of analytic dependences and then, from the obtained shortenings, calculates forces whose action is equivalent to that of the active welding stresses. A finite element model of the structure is then developed, and the equivalent forces are applied to it. From the results of these calculations, an optimal sequence of assembly and welding is selected, and special measures to reduce and compensate welding deformations are developed and taken. Keywords: residual welding deformations, longitudinal and transverse shortenings of welding joints, method of analytic dependences, finite elements method
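The conversion step from a computed shortening to an equivalent nodal force can be sketched with linear elasticity: a member of length L and section A that must shorten by dL is loaded by an axial force F = E * A * (dL / L). This is a simplified stand-in for the paper's procedure, and all numbers below are illustrative:

```python
def equivalent_force_n(e_pa, area_m2, shortening_m, length_m):
    """Axial force whose elastic effect on a member of length L and
    cross-section A matches a computed weld shortening dL:
    F = E * A * (dL / L)."""
    return e_pa * area_m2 * (shortening_m / length_m)

# Hypothetical stiffener: steel, A = 10 cm^2, 0.5 mm shortening over 1 m
force = equivalent_force_n(2.1e11, 1.0e-3, 0.5e-3, 1.0)  # about 105 kN
```

In the FE workflow described above, such forces are applied at the weld lines of the model so that the computed displacement field reproduces the predicted shrinkage.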
Procedia PDF Downloads 410
1708 Development and Characterization of Cobalt Metal Loaded ZSM-5 and H-ZSM-5 Catalyst for Fischer-Tropsch Synthesis
Authors: Shashank Bahri, Divyanshu Arya, Rajni Jain, Sreedevi Upadhyayula
Abstract:
Petroleum products can be obtained from the catalytic conversion of syngas via the Fischer-Tropsch reaction. The liquid fuels obtained from Fischer-Tropsch synthesis (FTS) are sulphur- and nitrogen-free and can thus easily meet the increasingly stringent environmental regulations. In the present work, we synthesized a mesoporous ZSM-5 supported catalyst. Mesostructures were created in H-ZSM-5 crystallites by demetalation via sequential base and acid treatment. Desilication through base treatment provides H-ZSM-5 with pore sizes and volumes similar to amorphous SiO2 (a conventional carrier). Modifying the zeolite texture and surface chemistry by desilication and acid washing alters its accessibility and its interactions with the metal phase, and consequently the CO adsorption behavior and the hydrocarbon product distribution. Increasing the mesoporosity via desilication provides the microporous zeolite with the surface area needed to support optimally sized metal crystallites; this improves the metal dispersion and hence the activity of the catalyst. The transition metal (Co) was loaded using the wet impregnation method. The synthesized catalysts were characterized by infrared spectroscopy, powder X-ray diffraction, scanning electron microscopy (SEM) and the BET method. The acidity of the catalyst, which plays an important role in the FTS reaction, was measured by pyridine adsorption in a DRIFT setup instead of NH3 temperature-programmed desorption (TPD). The major difference is that pyridine adsorption can distinguish between Lewis and Brønsted acidity, giving their relative strengths in the catalyst sample, whereas TPD gives only the total acidity, Lewis and Brønsted combined. Keywords: mesoporous, Fischer-Tropsch reaction, pyridine adsorption, DRIFT study
Procedia PDF Downloads 301
1707 Blunt Abdominal Trauma Management in Adult Patients: An Investigation on the Safety of Discharging Patients with Normal Initial Findings
Authors: Rahimi-Movaghar Vafa, Mansouri Pejman, Chardoli Mojtaba, Rezvani Samina
Abstract:
Introduction: Blunt abdominal trauma (BAT) is one of the leading causes of morbidity and mortality in all age groups, but the diagnosis of serious intra-abdominal pathology is difficult, and most injuries are occult on initial investigation. There is still controversy about which patients should undergo abdomen/pelvis CT, which patients need further observation, and which patients can be discharged safely. The aim of this study was to determine whether it is safe to discharge patients with blunt abdominal trauma who have normal initial findings. Methods: This non-randomized cross-sectional study was conducted from September 2013 to September 2014 at two level I trauma centers, Sina hospital and Rasoul-e-Akram hospital (Tehran, Iran). The inclusion criteria were all patients admitted for suspected BAT; the exclusion criteria were patients with serious head and neck, chest, spine or limb injuries requiring surgical intervention, those with unstable vital signs, pregnant women with a gestational age over 3 months, and patients who were homeless or without an exact home address. 390 patients with blunt abdominal trauma were examined, and the necessary data, including demographic data, abdominal examination, FAST result, and laboratory tests (hematocrit, base deficit, urine analysis) on admission and at 6 and 12 hours after admission, were recorded. Patients with normal physical examination, laboratory tests and FAST were discharged from the ED within 12 hours, with an explanation of the alarm signs, and were followed up after 24 hours and 1 week by telephone call. Patients with abnormal findings on physical examination, laboratory tests or FAST underwent abdomino-pelvic CT scan. Results: The study included 390 patients with blunt abdominal trauma between 12 and 80 years of age (mean age, 37.0 ± 13.7 years), and the mean duration of hospitalization was 7.4 ± 4.1 hours. 88.6% of the patients were discharged from hospital before 12 hours.
The odds ratio (OR) of having any symptoms for discharge after 6 hours was 0.160 and after 12 hours was 0.117, both statistically significant. Among the variables age, systolic and diastolic blood pressure, heart rate, respiratory rate, hematocrit and base deficit at admission and at 6 and 12 hours after admission, none showed a statistically significant relationship with discharge time. Of the 390 patients, 190 had normal initial physical examination, laboratory data and FAST findings; these patients showed no signs or symptoms on subsequent assessment or on telephone follow-up. Conclusion: Patients with no symptoms at admission (completely normal physical examination and ultrasound, normal hematocrit, normal base deficit and absence of microscopic hematuria) and a good family and social situation can be safely discharged from the emergency department. Keywords: blunt abdominal trauma, patient discharge, emergency department, FAST
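The odds ratios reported above come from 2x2 contingency tables; as a reminder of the arithmetic behind such a figure, the sketch below computes an OR from a table with made-up counts (not the study's data):

```python
def odds_ratio(a, b, c, d):
    """OR from a 2x2 table:
                 event   no event
    exposed        a        b
    unexposed      c        d
    OR = (a/b) / (c/d) = (a*d) / (b*c)."""
    return (a * d) / (b * c)

# Illustrative counts only: 10/90 symptomatic among early discharges,
# 50/50 among late discharges
example_or = odds_ratio(10, 90, 50, 50)  # well below 1: symptoms are rarer
```

An OR below 1, such as the study's 0.160 and 0.117, means the "event" (any symptom) is less likely in the group considered, which supports early discharge of the symptom-free patients.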
Procedia PDF Downloads 366
1706 Category-Base Theory of the Optimum Signal Approximation Clarifying the Importance of Parallel Worlds in the Recognition of Human and Application to Secure Signal Communication with Feedback
Authors: Takuro Kida, Yuichi Kida
Abstract:
We present, mathematically, the basis of a new class of algorithms that treats a historical cause of continuing discrimination in the world, as well as its solution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. Given a matrix operator filter bank consisting of an analysis filter bank H and a sampling filter bank S, we first introduce a detailed algorithm to derive the optimum synthesis filter bank Z that simultaneously minimizes all the worst-case measures of the error signals E(ω) = F(ω) − Y(ω) between the input signals F(ω) and the output signals Y(ω) of the filter bank. Further, feedback is introduced into this approximation theory, and it is shown that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge for signal prediction. Secondly, the mathematical concept of a category is applied to the above optimum signal approximation, and it is shown that the category-based approximation theory applies to a set-theoretic consideration of human recognition. Based on this discussion, it is shown why the narrow perception that tends to create isolation shows an apparent advantage in the short term, and why such narrow thinking often becomes intimate with discriminatory action in a human group. Throughout these considerations, it is argued that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception where we share the set of invisible error signals, including the words and the consciousness of both worlds. Keywords: signal prediction, pseudo inverse matrix, artificial intelligence, conditional optimization
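As the keywords suggest, the least-squares core of such a synthesis-bank derivation is the Moore-Penrose pseudo-inverse. The sketch below is a heavily simplified, real-valued stand-in for the paper's frequency-dependent worst-case optimization: it assumes a constant-matrix analysis bank H, a trivial sampling bank S, and plain l2 error:

```python
import numpy as np

def optimum_synthesis(h, s):
    """Least-squares synthesis operator for the cascade S.H: the
    pseudo-inverse Z = (S H)^+ minimizes the l2 reconstruction error
    ||F - Z S H F|| over all inputs F (a simplified analogue of the
    worst-case criterion in the text)."""
    return np.linalg.pinv(s @ h)

# Usage: a tall analysis bank (3 measurements of a 2-sample signal)
H = np.array([[1.0, 0.5],
              [0.0, 1.0],
              [1.0, 1.0]])
S = np.eye(3)                 # trivial sampling bank
Z = optimum_synthesis(H, S)
F = np.array([2.0, -1.0])
Y = Z @ (S @ H @ F)           # S H has full column rank, so Y recovers F
```

When S H loses rank (information is discarded by sampling), Z still gives the minimum-norm least-squares reconstruction, which is where the residual error set E(ω) of the abstract lives.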
Procedia PDF Downloads 158
1705 A Public Health Perspective on Deradicalisation: Re-Conceptualising Deradicalisation Approaches
Authors: Erin Lawlor
Abstract:
In 2008, Time magazine named terrorist rehabilitation one of the best ideas of the year, and the term deradicalisation has since become synonymous with rehabilitation within security discourse. The allure of a "quick fix" for managing terrorist populations (particularly within prisons) has led to a focus on prescriptive programmes, with a distinct lack of exploration into what drives a person to disengage or deradicalise from violence. It has been argued that, in tackling a snowballing issue, interventions have moved too quickly for both theory development and methodological structure. This overly quick acceptance of a term that lacks rigorous testing, measurement and monitoring means there is a distinct lack of evidence that deradicalisation is a genuine process or phenomenon, leaving academics retrospectively attempting to design frameworks and interventions around a concept that is not truly understood. The UK Home Office has openly acknowledged the lack of empirical data on this subject, and this lack of evidence has a direct impact on policy and intervention development. Extremism and deradicalisation affect public health outcomes on a global scale, to the point that terrorism has now been added to the list of causes of trauma, both directly, for victims of an attack, and indirectly, for witnesses, children and ordinary citizens who live in daily fear. This study critiques current deradicalisation discourses to establish whether public health approaches offer opportunities for development. The research begins by exploring the theoretical constructs of both deradicalisation and public health, asking: What does deradicalisation involve? Is there an evidential base on which deradicalisation theory has established itself? From what theory are public health interventions devised? What does success look like in both fields?
From this base, current deradicalisation practice is then explored through examples of work already being carried out. The critique falls into three discussion points: language, the difficulties of conducting empirical studies, and the issues around outcome measurement that deradicalisation interventions face. This study argues that a public health approach to deradicalisation offers the opportunity to bring clarity to the definitions of radicalisation, to identify what could be modified through intervention, and to offer insights into the evaluation of interventions. Rather than focusing on one element of deradicalisation and analysing it in isolation, a public health approach allows for what the literature has pointed out is missing: a comprehensive analysis of current interventions and information on creating efficacy monitoring systems. Interventions, policies, guidance and practices in both the UK and Australia are compared and contrasted, owing to the joint nature of this research between Sheffield Hallam University and La Trobe University, Melbourne. Keywords: radicalisation, deradicalisation, violent extremism, public health
Procedia PDF Downloads 67
1704 Rare Differential Diagnostic Dilemma
Authors: Angelis P. Barlampas
Abstract:
Theoretical background: Disorders of fixation and rotation of the large intestine result in parts of it lying in ectopic anatomical positions. When symptoms occur, the clinical picture is complicated by the possible involvement of the neighboring anatomical structures, and a differential diagnostic problem arises. Aim: The purpose of this work is to demonstrate the difficulty of revealing the real cause of abdominal pain in cases of anatomical variants, and the decisive contribution of imaging, especially computed tomography. Methods: A patient came to the emergency room because of acute pain in the right hypochondrium. Clinical examination revealed tenderness in the gallbladder area and a positive Murphy's sign. An ultrasound exam showed a normal gallbladder, and the patient was referred for a CT scan. Results: A mobile, unfixed ascending colon and cecum, located in the anatomical region of the right mesentery. Opacities of the surrounding peritoneal fat and a small linear fluid collection were seen. The appendix had a normal anteroposterior diameter, with air in its lumen and no clear signs of inflammation, although there was an impression of possible inflammatory swelling at its base (differential diagnosis: partial volume phenomenon, etc.). Linear opacities of the peritoneal fat were noted in the region of the second part of the duodenum, along with multiple diverticula throughout the colon. Differential diagnosis: inflammation of the base of the appendix; diverticulitis of the cecum/ascending colon; a rare case of ulcer of the second part of the duodenum; tuberculosis; terminal ileitis; pancreatitis; torsion of an unfixed cecum/ascending colon; embolism or thrombosis of an intestinal vascular branch. Final diagnosis: an unfixed cecum/ascending colon exhibiting diverticulitis. Keywords: unfixed cecum-ascending colon, abdominal pain, malrotation, abdominal CT, congenital anomalies
Procedia PDF Downloads 57
1703 Surface Plasmon Resonance Imaging-Based Epigenetic Assay for Blood DNA Post-Traumatic Stress Disorder Biomarkers
Authors: Judy M. Obliosca, Olivia Vest, Sandra Poulos, Kelsi Smith, Tammy Ferguson, Abigail Powers Lott, Alicia K. Smith, Yang Xu, Christopher K. Tison
Abstract:
Post-traumatic stress disorder (PTSD) is a mental health problem that people may develop after experiencing traumatic events such as combat, natural disasters, and major emotional challenges. Tragically, the number of military personnel with PTSD correlates directly with the number of veterans who attempt suicide, with the highest rate in the Army. Research has shown epigenetic risks in those who are prone to several psychiatric dysfunctions, particularly PTSD. Once initiated in response to trauma, epigenetic alterations, in particular DNA methylation in the form of 5-methylcytosine (5mC), alter chromatin structure and repress gene expression. Current methods to detect DNA methylation, such as bisulfite-based genomic sequencing techniques, are laborious and involve a massive analysis workflow while still having high error rates. A faster and simpler detection method of high sensitivity and precision would be useful in a clinical setting to confirm potential PTSD etiologies, prevent other psychiatric disorders, and improve military health. A nano-enhanced surface plasmon resonance imaging (SPRi)-based assay is being developed that simultaneously detects site-specific 5mC bases (termed PTSD bases) in methylated genes related to PTSD. Arrays on a sensing chip were first constructed for parallel detection of PTSD bases using synthetic and genomic DNA (gDNA) samples. For the gDNA sample extracted from the whole blood of a PTSD patient, the sample was first digested using specific restriction enzymes, and the fragments were denatured to obtain single-stranded methylated target genes (ssDNA). The resulting mixture of ssDNA was then injected into the assay platform, where the targets were captured by specific DNA aptamer probes previously immobilized on the surface of the sensing chip. The PTSD bases in the targets were detected by an anti-5-methylcytosine antibody (anti-5mC), and the resulting signals were then enhanced by a universal nanoenhancer.
Preliminary results showed successful detection of a PTSD base in a gDNA sample: brighter spot images and higher delta values (control-subtracted reflectivity signal) relative to those of the control were observed. We also implemented an in-house surface activation system for detection and developed disposable SPRi chips. Multiplexed PTSD-base detection of target methylated genes was conducted in blood DNA from PTSD patients of differing severity (asymptomatic and severe). The diagnostic capability being developed is a platform technology; upon successful implementation for PTSD, it could be reconfigured for the study of a wide variety of neurological disorders, such as traumatic brain injury, Alzheimer's disease, schizophrenia, and Huntington's disease, and can be extended to the analysis of other sample matrices such as urine and saliva. Keywords: epigenetic assay, DNA methylation, PTSD, whole blood, multiplexing
Procedia PDF Downloads 128
1702 Strategic Development of Urban Environmental Management Based on Good Governance: A Case Study of Waste Management in Tehran
Authors: A. Farhad Sadri, B. Ali Farhadi, C. Nasim Shalamzari
Abstract:
Waste management is a principle of urban and environmental governance, and waste management in the Tehran metropolis requires good strategies for better governance. Using good urban governance principles, together with their eight main indices, provides an appropriate basis for this aim. One reasonable tool in this field is the SWOT method, which makes it possible to compare opportunities, threats, weaknesses and strengths using IFE and EFE matrices. The results of these matrices, 2.533 and 2.403 respectively, show that the waste management system of the Tehran metropolis has performed weakly with regard to internal factors and has not performed well in using the opportunities and dealing with the threats. In this research, the 24 waste management strategies for the Tehran metropolis were prioritized and their real values described with respect to good governance, derived from quantitative strategic planning management (QSPM). A Kolmogorov-Smirnov statistic of 1.549 with a significance level of 0.073 was used to check the normality of the final values and strategy utilities, and an analysis of variance (ANOVA) was calculated for all SWOT strategies. Duncan's test results for the four strategy groups WT, ST, WO and SO show no significant difference. In addition to mean comparison by Duncan's method, the least significant difference (LSD) test was used at the 5% probability level, and finally 7 strategies and the final model of the Tehran metropolitan waste management strategy were defined.
Increasing public confidence through budget transparency, developing and improving the legal structure (rule-oriented and law-based governance), greater accountability for the requirements of private sectors, increasing recycling rates, and real, effective participation of people and NGOs to improve waste management are the main strategies derived from good urban governance principles. Keywords: waste, strategy, environmental management, urban good governance, SWOT
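The IFE/EFE totals quoted above (2.533 and 2.403) are weighted sums: each strategic factor gets a weight (summing to 1 across the matrix) and a rating from 1 to 4, and a total below the 2.5 midpoint signals a weak position. A minimal sketch with made-up factors, not the study's actual matrix:

```python
def matrix_total(factors):
    """IFE/EFE total score: sum of weight * rating over the listed
    (weight, rating) factor pairs. Weights must sum to 1; ratings run
    1-4, and a total below 2.5 indicates a weak position."""
    assert abs(sum(w for w, _ in factors) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * r for w, r in factors)

# Illustrative internal factors only: (weight, rating)
ife = matrix_total([(0.3, 2), (0.3, 2), (0.4, 3)])
position = "weak" if ife < 2.5 else "strong"
```

By this rule the study's EFE of 2.403 falls below the midpoint, matching its conclusion about poorly exploited opportunities, while the IFE of 2.533 sits marginally above it.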
Procedia PDF Downloads 323
1701 Potentiometric Determination of Moxifloxacin in Some Pharmaceutical Formulations Using PVC Membrane Sensors
Authors: M. M. Hefnawy, A. M. A. Homoda, M. A. Abounassif, A. M. Alanazia, A. Al-Majed, Gamal A. E. Mostafa
Abstract:
PVC membrane sensors using different approach e.g. ion-pair, ionophore, and Schiff-base has been used as testing membrane sensor. Analytical applications of membrane sensors for direct measurement of variety of different ions in complex biological and environmental sample are reported. The most important step of such PVC membrane sensor is the sensing active material. The potentiometric sensors have some outstanding advantages including simple design, operation, wide linear dynamic range, relative fast response time, and rotational selectivity. The analytical applications of these techniques to pharmaceutical compounds in dosage forms are also discussed. The construction and electrochemical response characteristics of Poly (vinyl chloride) membrane sensors for moxifloxacin HCl (MOX) are described. The sensing membranes incorporate ion association complexes of moxifloxacin cation and sodium tetraphenyl borate (NaTPB) (sensor 1), phosphomolybdic acid (PMA) (sensor 2) or phosphotungstic acid (PTA) (sensor 3) as electroactive materials. The sensors display a fast, stable and near-Nernstian response over a relative wide moxifloxacin concentration range (1 ×10-2-4.0×10-6, 1 × 10-2-5.0×10-6, 1 × 10-2-5.0×10-6 M), with detection limits of 3×10-6, 4×10-6 and 4.0×10-6 M for sensor 1, 2 and 3, respectively over a pH range of 6.0-9.0. The sensors show good discrimination of moxifloxacin from several inorganic and organic compounds. The direct determination of 400 µg/ml of moxifloxacin show an average recovery of 98.5, 99.1 and 98.6 % and a mean relative standard deviation of 1.8, 1.6 and 1.8% for sensors 1, 2, and 3 respectively. The proposed sensors have been applied for direct determination of moxifloxacin in some pharmaceutical preparations. The results obtained by determination of moxifloxacin in tablets using the proposed sensors are comparable favorably with those obtained using the US Pharmacopeia method. 
The sensors have been used as indicator electrodes for the potentiometric titration of moxifloxacin.
Keywords: potentiometry, PVC, membrane sensors, ion-pair, ionophore, Schiff-base, moxifloxacin HCl, sodium tetraphenyl borate, phosphomolybdic acid, phosphotungstic acid
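The "near-Nernstian response" claimed above can be checked against the theoretical Nernst slope of about 59.2 mV per decade for a monovalent cation at 25 °C. The sketch below is pure Python; the calibration readings are hypothetical, not data from the paper:

```python
import math

def nernstian_slope_mV(z=1, T=298.15):
    """Theoretical Nernst slope 2.303*R*T/(z*F), in mV per decade of activity."""
    R, F = 8.314462, 96485.332   # J/(mol K), C/mol
    return 2.303 * R * T / (z * F) * 1000.0

def calibration_slope_mV(conc_M, emf_mV):
    """Least-squares slope of EMF versus log10(concentration)."""
    x = [math.log10(c) for c in conc_M]
    mx = sum(x) / len(x)
    my = sum(emf_mV) / len(emf_mV)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, emf_mV))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Hypothetical calibration points spanning the reported linear range (1e-2 to 4e-6 M)
conc = [1e-2, 1e-3, 1e-4, 1e-5, 4e-6]
emf = [250.0, 192.0, 134.0, 76.0, 53.0]   # assumed EMF readings, mV
print(round(nernstian_slope_mV(), 1))      # theoretical slope at 25 degrees C
print(round(calibration_slope_mV(conc, emf), 1))
```

A measured calibration slope close to the theoretical value is what qualifies a sensor's response as near-Nernstian.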
Procedia PDF Downloads 441
1700 Maximization of Lifetime for Wireless Sensor Networks Based on Energy Efficient Clustering Algorithm
Authors: Frodouard Minani
Abstract:
Over the last decade, wireless sensor networks (WSNs) have been used in many areas such as health care, agriculture, defense, the military and disaster-hit areas. A wireless sensor network consists of a base station (BS) and a number of wireless sensors that monitor temperature, pressure and motion under different environmental conditions. The key parameter in designing a protocol for wireless sensor networks is energy efficiency, since energy is the scarcest resource of sensor nodes and determines their lifetime. Maximizing sensor node lifetime is therefore an important issue in the design of applications and protocols for wireless sensor networks, and clustering sensor nodes is an effective topology control approach for achieving this goal. In this paper, the researcher presents a protocol to prolong the network lifetime based on an energy-efficient clustering algorithm. The Low Energy Adaptive Clustering Hierarchy (LEACH) is a cluster-based routing protocol used to lower energy consumption and improve the lifetime of wireless sensor networks. The proposed system maximizes the lifetime of the network by choosing the farthest cluster head (CH) instead of the closest CH and by forming clusters using parameter metrics such as node density, residual energy and the distance between clusters (inter-cluster distance). Comparisons between the proposed protocol and comparative protocols in different scenarios have been made, and the simulation results showed that the proposed protocol performs well over the comparative protocols in various scenarios.
Keywords: base station, clustering algorithm, energy efficient, sensors, wireless sensor networks
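The LEACH protocol cited above elects cluster heads stochastically in each round using the well-known threshold T(n) = p / (1 − p·(r mod 1/p)), applied to nodes that have not yet served as cluster head in the current epoch. A minimal sketch follows; the election helper and the parameter values are illustrative and do not implement the proposed density/energy/distance-aware extension:

```python
import random

def leach_threshold(p, r):
    """LEACH cluster-head election threshold T(n) = p / (1 - p * (r mod 1/p)),
    for desired cluster-head fraction p and round number r."""
    epoch = max(1, round(1.0 / p))
    return p / (1.0 - p * (r % epoch))

def elect_cluster_heads(node_ids, p, r, already_ch, rng):
    """Each eligible node becomes cluster head if a uniform draw falls below T(n).
    Nodes that already served as cluster head this epoch are excluded."""
    t = leach_threshold(p, r)
    return [n for n in node_ids if n not in already_ch and rng.random() < t]

heads = elect_cluster_heads(list(range(100)), p=0.05, r=0,
                            already_ch=set(), rng=random.Random(42))
print(len(heads))  # on average about p * 100 = 5 heads in round 0
```

As the round number advances within an epoch, T(n) rises toward 1, so every node is eventually forced to take a turn as cluster head.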
Procedia PDF Downloads 146
1699 Gender and Political Participation in Africa
Authors: Ibrahim Baba
Abstract:
The work examines the nature and causes of differential politics in Africa, with particular reference to the sub-Saharan region of the continent. Among other objectives, it offers alternative remedies for gender discrimination in African politics and suggests how to promote the political inclusion of all citizens irrespective of gender in Africa. The work is conducted using library-based documentation analysis.
Keywords: gender, political, participation, differential politics, sub-Saharan Africa
Procedia PDF Downloads 426
1698 Use of FWD in Determination of Bonding Condition of Semi-Rigid Asphalt Pavement
Authors: Nonde Lushinga, Jiang Xin, Danstan Chiponde, Lawrence P. Mutale
Abstract:
In this paper, the falling weight deflectometer (FWD) was used to determine the bonding condition of a newly constructed semi-rigid base pavement. Using the Evercalc back-calculation computer programme, it was possible to quickly and accurately determine the structural condition of the pavement system from FWD test data. The bonding condition of the pavement layers was determined from shear stresses and strains (relative horizontal displacements) at the interface of the pavement layers, calculated with the BISAR 3.0 pavement computer programme. Thus, by using non-linear layered elastic theory, a pavement structure is analysed in the same way as other civil engineering structures. From non-destructive FWD testing, the bonding condition of the pavement layers was quantified on the soundly based principles of Goodman's constitutive model shown in equation 2, yielding the shear reaction modulus (Ks), which gives an indication of the bonding state of the pavement layers. Furthermore, the tack coat failure ratio (TFR), which has long been used in pavement evaluation in the USA, was also applied in the study to give validity to the results. According to research [39], the interface condition between two asphalt layers is characterised by the TFR, the ratio of the stiffness of the top asphalt layer over the stiffness of the second asphalt layer (E1/E2) in a slipped pavement. The TFR gives an indication of the strength of the tack coat, which is the main determinant of interlayer slipping. The criterion is that a TFR greater than or equal to 1 indicates full bond, while a TFR of 0 means full slip. The calculations gave a TFR value of 1.81, which re-affirmed that the pavement under study was in a state of full bond, because the value was greater than 1. 
It was concluded that the FWD can be used to determine the bonding condition of existing and newly constructed pavements.
Keywords: falling weight deflectometer (FWD), back-calculation, semi-rigid base pavement, shear reaction modulus
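The TFR criterion described above reduces to a simple ratio of back-calculated layer moduli. A minimal sketch, with illustrative modulus values chosen only to reproduce the reported TFR of 1.81:

```python
def tack_coat_failure_ratio(e_top_MPa, e_second_MPa):
    """TFR = stiffness of the top asphalt layer over that of the layer below (E1/E2)."""
    return e_top_MPa / e_second_MPa

def bond_state(tfr):
    """Interpret the TFR against the criterion above: >= 1 full bond, 0 full slip."""
    if tfr >= 1.0:
        return "full bond"
    if tfr == 0.0:
        return "full slip"
    return "partial bond"

# Hypothetical back-calculated moduli giving the reported TFR of 1.81
tfr = tack_coat_failure_ratio(2500.0, 1381.0)
print(round(tfr, 2), bond_state(tfr))
```

In practice E1 and E2 would come from the Evercalc back-calculation of the FWD deflection basin, not from assumed values as here.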
Procedia PDF Downloads 518
1697 Effect of Deer Antler Extract on Osteogenic Gene Expression and Longitudinal Bone Growth of Adolescent Male Rats
Authors: Kang-Hyun Leem, Myung-Gyou Kim, Hye Kyung Kim
Abstract:
Deer antler, traditionally used as a tonic and valuable drug in oriental medicine, has been considered to possess bone-strengthening activity. The upper section, mid-section, and base of the antler are known to exhibit different biological properties. The present study was performed to examine the effects of extracts of the different parts of deer antler (DH) on osteogenic gene expression in MG-63 cells and on longitudinal bone growth in adolescent male rats. The expression of the osteogenic genes collagen, alkaline phosphatase, osteocalcin, and osteopontin was measured by quantitative real-time PCR. Longitudinal bone growth was measured in 3-week-old male Sprague-Dawley rats using fluorescence microscopy. To examine the effects on growth plate metabolism, the total height of the growth plate and bone morphogenetic protein-2 (BMP-2) were measured. Collagen and osteocalcin mRNA expression was increased by all three parts of the DH treatment, while osteopontin gene expression was not affected by any of them. Alkaline phosphatase gene expression was increased by the upper and mid parts of DH, whereas the base part failed to affect it. The upper and mid parts of the DH treatment enhanced longitudinal bone growth and the total height of the growth plate. The induction of BMP-2 protein expression in the growth plate, assessed by immunostaining, was also promoted by the upper and mid parts of the DH treatment. These results suggest that DH, especially its upper and mid parts, stimulates osteogenic gene expression, promotes bone growth in adolescent rats, and might be used for growth-delayed adolescents and patients with inherent growth failure.
Keywords: bone morphogenetic protein-2, deer antler, longitudinal bone growth, osteogenic genes
Procedia PDF Downloads 379
1696 Applying Cognitive Psychology to Education: Translational Educational Science
Authors: Hammache Nadir
Abstract:
The scientific study of human learning and memory is now more than 125 years old. Psychologists have conducted thousands of experiments, correlational analyses, and field studies during this time, in addition to other research conducted by those from neighboring fields. A huge knowledge base has been carefully built up over the decades. Given this backdrop, we may ask ourselves: What great changes in education have resulted from this huge research base? How has the scientific study of learning and memory changed practices in education from those of, say, a century ago? Have we succeeded in building a translational educational science to rival medical science (in which biological knowledge is translated into medical practice) or types of engineering (in which, e.g., basic knowledge in chemistry is translated into products through chemical engineering)? The answer, I am afraid, is rather mixed. Psychologists and psychological research have influenced educational practice, but in fits and starts. After all, some of the great founders of American psychology—William James, Edward L. Thorndike, John Dewey, and others—are also revered as important figures in the history of education. And some psychological research and ideas have made their way into education—for instance, computer-based cognitive tutors for some specific topics have been developed in recent years—and in years past, such practices as teaching machines, programmed learning, and, in higher education, the Keller Plan were all important. These older practices have not been sustained. Was that because they failed or because of a lack of systematic research showing they were effective? 
At any rate, in 2012, we cannot point to a well-developed translational educational science in which research about learning and memory, thinking and reasoning, and related topics is moved from the lab into controlled field trials (like clinical trials in medicine) and the tested techniques, if they succeed, are introduced into broad educational practice. We are just not there yet, and one question that arises is how we could achieve a translational educational science.
Keywords: affective, education, cognition, psychology
Procedia PDF Downloads 346
1695 Occupational Attainment of Second Generation of Ethnic Minority Immigrants in the UK
Authors: Rukhsana Kausar, Issam Malki
Abstract:
The integration and assimilation of ethnic minority immigrants (EMIs) and their subsequent generations remains a serious unsettled issue in most host countries. This study conducts a gender analysis of the labour market to investigate whether the second generation of ethnic minority immigrants in the UK is gaining access to professional and managerial employment and advantaged occupational positions on par with their native counterparts. The data used to examine the labour market achievements of EMIs are taken from the Labour Force Survey (LFS) for the period 2014-2018. We apply a multivalued treatment under ignorability as proposed by Cattaneo (2010), which refers to treatment effects under the assumptions of (i) selection on observables and (ii) common support. We report estimates of the Average Treatment Effect (ATE), the Average Treatment Effect on the Treated (ATET), and Potential Outcomes Means (POM) using three estimators: Regression Adjustment (RA), Augmented Inverse Probability Weighting (AIPW) and Inverse Probability Weighting-Regression Adjustment (IPWRA). We consider two cases: the first with four categories, where first-generation natives are the base category; the second combines all natives as the base group. Our findings suggest the following. Under Case 1, the estimated probabilities and differences across groups are consistently similar and highly significant. As expected, first-generation natives have the highest probability of higher career attainment among both men and women. The findings also suggest that first-generation immigrants perform better than the remaining two groups, the second-generation natives and immigrants. Furthermore, second-generation immigrants have a higher probability of attaining a higher professional career, while the probability is lower for a managerial career. Similar conclusions are reached under Case 2. 
That is to say, both first-generation and second-generation immigrants have a lower probability of higher career and managerial attainment, and first-generation immigrants are found to perform better than second-generation immigrants.
Keywords: immigrants, second generation, occupational attainment, ethnicity
Procedia PDF Downloads 108
1694 A Reflective Investigation on the Course Design and Coaching Strategy for Creating a Trans-Disciplinary Leaning Environment
Authors: Min-Feng Hsieh
Abstract:
Nowadays, we face a highly competitive environment in which the conditions for survival have become more critical than ever before. The challenges we are confronted with can no longer be dealt with by a single system of knowledge. The abilities we urgently need to acquire are those that lead us across the boundaries between different disciplines, to a neutral ground that gathers and integrates the powers and intelligence that surround us. This paper discusses how a trans-disciplinary design course organized by the College of Design at Chaoyang University responds to this modern challenge. By orchestrating an experimental course format and developing a series of coaching strategies, a trans-disciplinary learning environment has been created and practiced, in which students selected from five different departments, including Architecture, Interior Design, Visual Design, Industrial Design, and Landscape and Urban Design, are encouraged to think outside their familiar knowledge pool and to learn with and from each other. In the course of implementing this program, parallel research has been conducted by adopting the theory and principles of action research, a methodology that provides the course organizer with emergent, responsive, action-oriented, participative and critically reflective insights for immediate changes and amendments to improve the teaching and learning experience. The conclusion points out how the learning and teaching experience of this trans-disciplinary design studio offers observations that help us reflect upon the constraints and divisions caused by the subject-based curriculum. A series of concepts for course design and teaching strategies developed and implemented in this trans-disciplinary course are introduced as a way to promote learners' self-motivated, collaborative, cross-disciplinary and student-centered learning skills. 
The outcome of this experimental course exemplifies an alternative approach that could be adopted in pursuit of a remedy for the problematic issues of current educational practice.
Keywords: course design, coaching strategy, subject-based curriculum, trans-disciplinary
Procedia PDF Downloads 205
1693 Optimal Perturbation in an Impulsively Blocked Channel Flow
Authors: Avinash Nayak, Debopam Das
Abstract:
The current work implements the variational principle to find the optimal initial perturbation that provides maximum growth in an impulsively blocked channel flow. The conventional method for studying temporal stability has been modal analysis. In most transient flows, modal analysis is still followed under the quasi-steady assumption, i.e. the change in the base flow is much slower than the perturbation growth rate. In other studies, transient analysis of time-dependent flows is done by formulating the growth of the perturbation as an initial value problem, but the perturbation growth is sensitive to the initial condition. This study intends to find the initial perturbation that provides the maximum growth at a later time. Here, the expression for the base flow of the blocked channel is derived, and the formulation is based on a two-dimensional perturbation with a stream function representing the perturbation quantity; the governing equation is therefore the Orr-Sommerfeld equation. In the current context, the cost functional is defined as the ratio of the disturbance energy at a terminal time T to the initial energy, i.e. G(T) = ||q(T)||²/||q(0)||², where q is the perturbation and ||·|| is the chosen norm. This cost functional is maximized over the initial perturbation distribution, under the constraint that the perturbation satisfies the governing Orr-Sommerfeld equation. The corresponding adjoint equation is derived and solved along with the governing equation in an iterative manner to provide the initial spatial shape of the perturbation that achieves the maximum growth G(T). The growth rate is plotted against time, showing the development of the perturbation, which approaches an asymptotic shape. The effects of various parameters, e.g. the Reynolds number, are studied in the process. 
Thus, the study emphasizes the use of optimal perturbations and their growth to understand the stability characteristics of time-dependent flows. The quasi-steady assumption can be verified against these results for transient flows such as the impulsively blocked channel flow.
Keywords: blocked channel flow, calculus of variation, hydrodynamic stability, optimal perturbation
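For a linear, discretized system, the gain G(T) = ||q(T)||²/||q(0)||² over all initial conditions is maximized by the leading right singular vector of the propagator matrix, and the maximum gain is the largest singular value squared. The toy sketch below illustrates this on a small non-normal matrix; it is a stand-in for, and far simpler than, the iterative direct-adjoint Orr-Sommerfeld solution described in the abstract:

```python
import numpy as np

def optimal_perturbation(propagator):
    """For q(T) = Phi @ q(0), the gain G(T) = ||q(T)||^2 / ||q(0)||^2 is maximised
    by the leading right singular vector of Phi; the maximum gain is sigma_max^2."""
    _, s, vt = np.linalg.svd(propagator)
    return s[0] ** 2, vt[0]

# Toy non-normal propagator: non-normality is what permits transient growth G(T) > 1
Phi = np.array([[0.9, 5.0],
                [0.0, 0.8]])
G_opt, q0_opt = optimal_perturbation(Phi)

rng = np.random.default_rng(0)
q0_rand = rng.standard_normal(2)
G_rand = np.linalg.norm(Phi @ q0_rand) ** 2 / np.linalg.norm(q0_rand) ** 2
print(G_opt >= G_rand)  # True: the optimal initial condition beats a random one
```

Note that both eigenvalues of Phi lie inside the unit circle, so every perturbation eventually decays, yet the optimal initial condition still produces large transient growth, which is exactly why initial-value (non-modal) analysis matters here.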
Procedia PDF Downloads 421
1692 3D Numerical Simulation of Undoweled and Uncracked Joints in Short Paneled Concrete Pavements
Authors: K. Sridhar Reddy, M. Amaranatha Reddy, Nilanjan Mitra
Abstract:
Short paneled concrete pavement (SPCP), with a shorter panel size, can be an alternative to conventional jointed plain concrete pavement (JPCP) at the same cost as asphalt pavement, with all the advantages of concrete pavement together with reduced thickness and less chance of the mid-slab cracking and dowel bar locking so common in JPCP. Cast-in-situ short concrete panels (short slabs) laid on a strong foundation, consisting of a dry lean concrete base (DLC) and a cement treated subbase (CTSB), reduce the required thickness of the concrete slab to the order of 180 mm to 220 mm, whereas JPCP required 280 mm for the same traffic. During the construction of SPCP test sections on two Indian National Highways (NH), it was observed that the joints remained uncracked after a year of traffic. The load transfer variability and behavior of these undoweled and uncracked joints are of interest in anticipating the long-term performance of the SPCP. To investigate the effects of undoweled and uncracked joints on short slabs, the present study was conducted. A multilayer linear elastic model was developed using a 3D finite element package for different panel sizes with different thicknesses, resting on different types of solid elastic foundation, with and without a temperature gradient. Surface deflections were obtained from the 3D FE model and validated against field deflections measured with the falling weight deflectometer (FWD) test. The stress analysis indicates that flexural stresses in short slabs decrease with a decrease in panel size and an increase in thickness. A detailed evaluation of the stress analysis, covering the effects of curling behavior, the stiffness of the base layer and a variable degree of load transfer, is underway.
Keywords: joint behavior, short slabs, uncracked joints, undoweled joints, 3D numerical simulation
Procedia PDF Downloads 182
1691 Seismic Response of Reinforced Concrete Buildings: Field Challenges and Simplified Code Formulas
Authors: Michel Soto Chalhoub
Abstract:
Building code-related literature provides recommendations for normalizing approaches to the calculation of the dynamic properties of structures. Most building codes distinguish among types of structural systems, construction materials, and configurations through a numerical coefficient in the expression for the fundamental period. The period is then used with normalized response spectra to compute the base shear. The typical parameter used in simplified code formulas for the fundamental period is the overall building height raised to a power determined from analytical and experimental results. However, reinforced concrete buildings, which constitute the majority of built space in less developed countries, pose additional challenges compared with buildings of a homogeneous material such as steel, or of concrete built under stricter quality control. In the present paper, the particularities of reinforced concrete buildings are explored and related to current methods of equivalent static analysis. A comparative study is presented between the Uniform Building Code, commonly used for buildings within and outside the USA, and data from the Middle East used to model 151 reinforced concrete buildings of varying numbers of bays, numbers of floors, overall building height, and individual story height. The fundamental period was calculated using eigenvalue matrix computation. The results were also used in a separate regression analysis in which the computed period serves as the dependent variable and five building properties serve as independent variables. The statistical analysis shed light on important parameters that simplified code formulas need to account for, including individual story height, overall building height, floor plan, number of bays, and concrete properties. Such inclusions are important for reinforced concrete buildings in special conditions owing to the level of concrete damage, aging, or materials quality control during construction. 
Overall, the results of the present analysis show that simplified code formulas for the fundamental period and base shear may be applied, but they require revisions to account for multiple parameters. This conclusion is confirmed by the analytical model, in which fundamental periods were computed using numerical techniques and eigenvalue solutions. The recommendation is particularly relevant to code upgrades in less developed countries, where it is customary to adopt, and mildly adapt, international codes. We also note the necessity of further research using empirical data from buildings in Lebanon that were subjected to severe damage due to impulse loading or accelerated aging; however, we excluded this study from the present paper and left it for future research, as it has its own peculiarities and requires a different type of analysis.
Keywords: seismic behaviour, reinforced concrete, simplified code formulas, equivalent static analysis, base shear, response spectra
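The eigenvalue computation of the fundamental period mentioned above can be sketched for a lumped-mass shear building: solve the generalized eigenproblem K·φ = ω²·M·φ and take T₁ = 2π/ω₁ from the smallest eigenvalue. The storey mass and stiffness values below are assumptions for illustration, not data from the 151-building sample:

```python
import numpy as np

def fundamental_period(mass, stiffness):
    """Fundamental period T1 = 2*pi/omega_1 from the generalized eigenproblem
    K @ phi = omega^2 * M @ phi of a lumped-mass shear building."""
    omega_sq = np.linalg.eigvals(np.linalg.inv(mass) @ stiffness)
    return 2.0 * np.pi / np.sqrt(omega_sq.real.min())

# Illustrative 3-storey shear frame with equal floor masses and storey stiffnesses
m = 200e3   # kg per floor (assumed)
k = 150e6   # N/m per storey (assumed)
M = m * np.eye(3)
K = k * np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])
print(round(fundamental_period(M, K), 2))  # fundamental period in seconds
```

A regression of such computed periods against height, story height, bays and material properties is then what allows the simplified code coefficient to be re-calibrated.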
Procedia PDF Downloads 232
1690 Cooperative Robot Application in a Never Explored or an Abandoned Sub-Surface Mine
Authors: Michael K. O. Ayomoh, Oyindamola A. Omotuyi
Abstract:
Autonomous mobile robots deployed to explore or operate in a never-explored or abandoned sub-surface mine require extremely effective coordination and communication. To transmit information from the depths of the mine to the external surface in real time, amidst diverse physical, chemical and virtual impediments, the concept of unified cooperative robots is a proficient approach. This paper presents an effective [human → robot → task] coordination framework for the exploration of an abandoned underground mine. The problem addressed in this research is the development of a globalized optimization model, premised on time series differentiation and geometrical configurations, for the effective positioning of the two classes of robots in the cooperation, namely the outermost stationary master (OSM) robots and the innermost dynamic task (IDT) robots, for effective bi-directional signal transmission. In addition, the proposed concept is enhanced by the synchronization of a vision system and a wireless communication system for both categories of robots, a fiber optic system for the OSM robots in cases of steeply sloping or vertical mine channels, and an autonomous battery recharging capability for the IDT robots. The OSM robots are the master robots, positioned at strategic locations from the open surface of the mine down to its base using a fiber-optic cable or a wireless communication medium, subject to the identified geometrical configuration of the mine. The OSM robots are usually stationary and coordinate the transmission of signals from the IDT robots at the base of the mine to the surface, and in the reverse direction, based on human decisions at the surface control station. 
The proposed scheme also determines the optimal number of robots required to form the cooperation, in a bid to reduce overall operational cost and system complexity.
Keywords: sub-surface mine, wireless communication, outermost stationary master robots, innermost dynamic robots, fiber optic
Procedia PDF Downloads 213
1689 Traumatic Chiasmal Syndrome Following Traumatic Brain Injury
Authors: Jiping Cai, Ningzhi Wangyang, Jun Shao
Abstract:
Traumatic brain injury (TBI) is one of the major causes of morbidity and mortality; it leads to structural and functional damage in several parts of the brain, such as the cranial nerves, the optic nerve tract or other circuitry involved in vision, and the occipital lobe, depending on its location and severity. As a result, the functions associated with vision processing and perception are significantly affected, causing blurred vision, double vision, decreased peripheral vision and blindness. Here, two cases complaining of monocular vision loss (actually temporal hemianopia) due to traumatic chiasmal syndrome after frontal head injury are reported, and the findings are compared with individual case reports published in the literature. Reported cases of traumatic chiasmal syndrome appear to share some common features, such as injury to the frontal bone and fracture of the anterior skull base. The degree of bitemporal hemianopia and of visual acuity loss varies in presentation and is not necessarily related to the severity of the craniocerebral trauma. Chiasmal injury may occur even in the absence of bony chip impingement. Isolated bitemporal hemianopia is rare, and clinical improvement usually does not occur. Mechanisms of damage to the optic chiasm after trauma include direct tearing, contusion haemorrhage and contusion necrosis, as well as secondary mechanisms such as cell death, inflammation, edema, impaired neurogenesis and the axonal damage associated with TBI. Besides visual field testing, MRI evaluation of the optic pathways seems to be the strongest objective evidence for demonstrating impairment of the integrity of the visual system following TBI. 
Therefore, traumatic chiasmal syndrome should be considered as a differential diagnosis by both neurosurgeons and ophthalmologists in patients presenting with visual impairment, especially bitemporal hemianopia, after a head injury causing frontal and anterior skull base fractures.
Keywords: bitemporal hemianopia, brain injury, optic chiasma, traumatic chiasmal syndrome
Procedia PDF Downloads 79
1688 Cluster Analysis of Students’ Learning Satisfaction
Authors: Purevdolgor Luvsantseren, Ajnai Luvsan-Ish, Oyuntsetseg Sandag, Javzmaa Tsend, Akhit Tileubai, Baasandorj Chilhaasuren, Jargalbat Puntsagdash, Galbadrakh Chuluunbaatar
Abstract:
One of the indicators of the quality of university services is student satisfaction. Aim: We aimed to study the level of satisfaction of first-year premedical students with the Medical Physics course using the cluster method. Materials and Methods: Within this goal, a questionnaire was collected from a total of 324 students who took the Medical Physics course of the first-year premedical program at the Mongolian National University of Medical Sciences. When determining the level of satisfaction, the answers were obtained on five levels: "excellent", "good", "medium", "bad" and "very bad". A total of 39 questions were put to the students: 8 for course evaluation, 19 for teacher evaluation, and 12 for student evaluation. From the research, a database with 39 fields and 324 records was created. Results: On this database, cluster analysis was performed in the MATLAB and R programs using the k-means method of data mining. The Hopkins statistic calculated on the created database takes the values 0.88, 0.87, and 0.97, which shows that cluster analysis methods can be used. The course evaluation sub-base is divided into three clusters: cluster I has 150 objects with a "good" rating (46.2%), cluster II has 119 objects with a "medium" rating (36.7%), and cluster III has 54 objects with a "good" rating (16.6%). The teacher evaluation sub-base is divided into three clusters: cluster II has 179 objects with a "good" rating (55.2%), cluster III has 108 objects with an "average" rating (33.3%), and cluster I has 36 objects with an "excellent" rating (11.1%). The student evaluation sub-base is divided into two clusters: cluster II has 215 objects with an "excellent" rating (66.3%), and cluster I has 108 objects with an "excellent" rating (33.3%). 
Evaluating the resulting clusters with the silhouette coefficient gives 0.32 for the course evaluation clusters, 0.31 for the teacher evaluation clusters, and 0.30 for the student evaluation clusters, showing statistical significance. Conclusion: In the cluster analysis of the medical physics course model, the ratings were "good" 46.2%, "middle" 36.7% and "bad" 16.6%; in the teacher evaluation model, "good" 55.2%, "middle" 33.3% and "bad" 11.1%; and in the student evaluation model, "good" 66.3% and "bad" 33.3%.
Keywords: questionnaire, data mining, k-means method, silhouette coefficient
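The k-means clustering and silhouette evaluation described above can be sketched in a few lines. This is a plain NumPy stand-in for the MATLAB/R pipeline; the synthetic two-dimensional data replace the actual 39-field survey records:

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Plain k-means with a simple deterministic initialization: assign each point
    to the nearest centroid, then move each centroid to the mean of its cluster."""
    centers = X[np.linspace(0, len(X) - 1, k, dtype=int)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

def silhouette(X, labels):
    """Mean silhouette coefficient (b - a) / max(a, b), where a is the mean
    intra-cluster distance and b the mean distance to the nearest other cluster."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    scores = []
    for i in range(len(X)):
        same = labels == labels[i]
        a = d[i, same].sum() / max(same.sum() - 1, 1)
        b = min(d[i, labels == c].mean()
                for c in set(labels.tolist()) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Synthetic 2-D satisfaction profiles standing in for the 324 survey records
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in (0.0, 2.0, 4.0)])
labels, _ = kmeans(X, k=3)
print(round(silhouette(X, labels), 2))
```

Silhouette values near 1 indicate well-separated clusters; the modest 0.30-0.32 values reported above are typical of ordinal questionnaire data, where cluster boundaries are soft.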
Procedia PDF Downloads 51
1687 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping
Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo
Abstract:
Conventional methods for soil nutrient mapping are based on laboratory tests of samples obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons researchers use Predictive Soil Mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques for spatially interpolating values at unobserved locations from observations at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points; hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an artificial neural network (ANN) scheme was used to predict macronutrient values at unsampled points. The ANN has become a popular tool for prediction as it eliminates certain difficulties in soil property prediction, such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorus and potassium values in the soil of the study area. A limited number of samples were used in the training, validation and testing phases of the ANN (pattern recognition structures) to classify soil properties, and the trained network was used for prediction. The soil analysis results of samples collected from the soil survey of block C of Sawah Sempadan, Tanjung Karang rice irrigation project in Selangor, Malaysia, were used. Soil maps were produced by the kriging method using 236 samples (or values) that were a combination of actual values (obtained from real samples) and virtual values (neural-network-predicted values). 
For each macronutrient element, three types of maps were generated, with 118 actual and 118 virtual values, 59 actual and 177 virtual values, and 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element a base map using 236 actual samples and test maps using 118, 59 and 30 actual samples, respectively, were produced by the kriging method. A set of parameters was defined to measure the similarity of the maps generated with the proposed method, termed the sample reduction method. The results show that the maps generated through the sample reduction method were more accurate than the corresponding maps produced from the smaller numbers of real samples alone. For example, nitrogen maps produced from 118, 59 and 30 real samples have 78%, 62% and 41% similarity, respectively, with the base map (236 samples), and the sample reduction method increased the similarity to 87%, 77% and 71%, respectively. Hence, this method can reduce the number of real samples, substituting ANN-predicted samples, while achieving the specified level of accuracy.
Keywords: artificial neural network, kriging, macro nutrient, pattern recognition, precision farming, soil mapping
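The back-propagation feed-forward prediction of nutrient values at unsampled coordinates can be sketched with a one-hidden-layer network in NumPy. The coordinates, the synthetic nutrient surface, and all network settings below are illustrative assumptions, not the paper's data or architecture:

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.05, epochs=2000, seed=0):
    """One-hidden-layer feed-forward network trained by full-batch gradient descent,
    a minimal stand-in for the back-propagation networks used for prediction."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)          # hidden-layer activations
        err = (h @ W2 + b2) - y           # linear output minus target
        gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1.0 - h ** 2)
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xq: np.tanh(Xq @ W1 + b1) @ W2 + b2

# Hypothetical survey: nutrient value as a smooth function of (x, y) position
rng = np.random.default_rng(1)
coords = rng.uniform(0.0, 1.0, (118, 2))                       # "actual" sample sites
nutrient = np.sin(3.0 * coords[:, :1]) + 0.5 * coords[:, 1:]   # synthetic ground truth
predict = train_mlp(coords, nutrient)
virtual = predict(rng.uniform(0.0, 1.0, (118, 2)))             # "virtual" samples
print(virtual.shape)
```

In the sample reduction method, such virtual values would then be pooled with the remaining actual samples and passed to the kriging interpolator to build the full 236-point map.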
Procedia PDF Downloads 71
1686 Consumer Preferences when Buying Second Hand Luxury Items
Authors: K. A. Schuck, J. K. Perret, A. Mehn, K. Rommel
Abstract:
Consumers increasingly consider sustainability aspects in their consumption behavior, yet so far only a few fashion brands are active in the second-hand luxury market with their own online platforms. Separating base from high-end luxury brands, two online discrete choice experiments determine the drivers of consumers' willingness to pay for platform characteristics such as the type of ownership, giving brands the opportunity to elicit a financial scope they can operate within.
Keywords: choice experiment, luxury, preferences, second-hand, platform, online
Procedia PDF Downloads 128
1685 Reconceptualizing Evidence and Evidence Types for Digital Journalism Studies
Authors: Hai L. Tran
Abstract:
In the digital age, evidence-based reporting is touted as a best practice for seeking the truth and keeping the public well-informed. Journalists are expected to rely on evidence to demonstrate the validity of a factual statement and lend credence to an individual account. Evidence can be obtained from various sources and, given the rich supply of evidence types available, the definition of this important concept varies semantically. To promote clarity and understanding, it is necessary to break down the various types of evidence and categorize them in a more coherent, systematic way. There is a wide array of devices that digital journalists deploy as proof to back up or refute a truth claim. Evidence can take various formats, including verbal and visual materials. Verbal evidence encompasses quotes, soundbites, talking heads, testimonies, voice recordings, anecdotes, and statistics communicated through written or spoken language. In some instances evidence is simply non-verbal, as when natural sounds are provided without any verbalized words. Other language-free items exhibited in photos, video footage, data visualizations, infographics, and illustrations can serve as visual evidence. Moreover, there are different sources from which evidence can be cited. Supporting materials, such as public or leaked records and documents, data, research studies, surveys, polls, or reports compiled by governments, organizations, and other entities, are frequently included as informational evidence. Proof can also come from human sources via interviews, recorded conversations, public and private gatherings, or press conferences. Expert opinions, eyewitness insights, insider observations, and official statements are common examples of testimonial evidence. Digital journalism studies tend to make broad references when comparing qualitative versus quantitative forms of evidence.
Meanwhile, limited efforts are being undertaken to distinguish between sister terms, such as "data," "statistical," and "base-rate" on one side of the spectrum and "narrative," "anecdotal," and "exemplar" on the other. The present study seeks to develop an evidence taxonomy, which classifies evidence through the quantitative-qualitative juxtaposition and in a hierarchical order from broad to specific. According to this scheme, data, statistics, and base rate belong to the quantitative evidence group, whereas narrative, anecdote, and exemplar fall into the qualitative evidence group. Subsequently, the taxonomical classification arranges data versus narrative at the top of the hierarchy of types of evidence, followed by statistics versus anecdote and base rate versus exemplar. This research reiterates the central role of evidence in how journalists describe and explain social phenomena and issues. By defining the various types of evidence and delineating their logical connections, the study helps remove a significant degree of conceptual inconsistency, ambiguity, and confusion in digital journalism studies.
Keywords: evidence, evidence forms, evidence types, taxonomy
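The proposed taxonomy pairs the two groups level by level, from broad to specific, and can be encoded as a small data structure. The representation and lookup function below are an illustrative sketch, not part of the study itself:

```python
# Each tuple pairs the quantitative and qualitative term at one specificity
# level, ordered from broadest (level 0) to most specific (level 2).
TAXONOMY = [
    ("data", "narrative"),
    ("statistics", "anecdote"),
    ("base rate", "exemplar"),
]

def classify(evidence_type: str):
    """Return (group, specificity level) for a known evidence type."""
    for level, (quantitative, qualitative) in enumerate(TAXONOMY):
        if evidence_type == quantitative:
            return ("quantitative", level)
        if evidence_type == qualitative:
            return ("qualitative", level)
    raise ValueError(f"unknown evidence type: {evidence_type}")

print(classify("anecdote"))  # ('qualitative', 1)
```

The point of the encoding is that each term carries two coordinates at once: its group (quantitative versus qualitative) and its depth in the broad-to-specific hierarchy.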
Procedia PDF Downloads 69