Search results for: technical fields
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4158

798 Towards Competence-Based Regulatory Sciences Education in Sub-Saharan Africa: Identification of Competencies

Authors: Abigail Ekeigwe, Bethany McGowan, Loran C. Parker, Stephen Byrn, Kari L. Clase

Abstract:

There are growing calls in the literature to develop and implement competency-based regulatory sciences education (CBRSE) in sub-Saharan Africa to expand and create a pipeline of a competent workforce of regulatory scientists. A defined competence framework is an essential component in developing competency-based education. However, such a competence framework is not available for regulatory scientists in sub-Saharan Africa. The purpose of this research is to identify entry-level competencies for inclusion in a competency framework for regulatory scientists in sub-Saharan Africa as a first step in developing CBRSE. The team systematically reviewed the literature following the PRISMA guidelines for systematic reviews, based on a pre-registered protocol on the Open Science Framework (OSF). The protocol specifies the search strategy and the inclusion and exclusion criteria for publications. All included publications were coded to identify entry-level competencies for regulatory scientists. The team deductively coded the publications included in the study using the 'framework synthesis' model for systematic literature review. The World Health Organization’s conceptualization of competence guided the review and thematic synthesis. Topic and thematic coding were done using NVivo 12™ software. Based on the search strategy in the protocol, 2345 publications were retrieved. Twenty-two (n=22) of the retrieved publications met all the inclusion criteria for the research. Topic and thematic coding of the publications yielded three main domains of competence: knowledge, skills, and enabling behaviors. The knowledge domain has three sub-domains: administrative, regulatory governance/framework, and scientific knowledge. The skills domain has two sub-domains: functional and technical skills. Identification of competencies is the first step and serves as a bedrock for curriculum development and competency-based education. 
The competencies identified in this research will help policymakers, educators, institutions, and international development partners design and implement competence-based regulatory science education in sub-Saharan Africa, ultimately leading to access to safe, quality, and effective medical products.

Keywords: competence-based regulatory science education, competencies, systematic review, sub-Saharan Africa

Procedia PDF Downloads 176
797 From Theory to Practice: Harnessing Mathematical and Statistical Sciences in Data Analytics

Authors: Zahid Ullah, Atlas Khan

Abstract:

The rapid growth of data in diverse domains has created an urgent need for effective utilization of mathematical and statistical sciences in data analytics. This abstract explores the journey from theory to practice, emphasizing the importance of harnessing mathematical and statistical innovations to unlock the full potential of data analytics. Drawing on a comprehensive review of existing literature and research, this study investigates the fundamental theories and principles underpinning mathematical and statistical sciences in the context of data analytics. It delves into key mathematical concepts such as optimization, probability theory, statistical modeling, and machine learning algorithms, highlighting their significance in analyzing and extracting insights from complex datasets. Moreover, this abstract sheds light on the practical applications of mathematical and statistical sciences in real-world data analytics scenarios. Through case studies and examples, it showcases how mathematical and statistical innovations are being applied to tackle challenges in various fields such as finance, healthcare, marketing, and social sciences. These applications demonstrate the transformative power of mathematical and statistical sciences in data-driven decision-making. The abstract also emphasizes the importance of interdisciplinary collaboration, as it recognizes the synergy between mathematical and statistical sciences and other domains such as computer science, information technology, and domain-specific knowledge. Collaborative efforts enable the development of innovative methodologies and tools that bridge the gap between theory and practice, ultimately enhancing the effectiveness of data analytics. Furthermore, ethical considerations surrounding data analytics, including privacy, bias, and fairness, are addressed within the abstract. 
It underscores the need for responsible and transparent practices in data analytics and highlights the role of mathematical and statistical sciences in ensuring ethical data handling and analysis. In conclusion, by bridging the gap between theory and practice, mathematical and statistical sciences contribute to unlocking the full potential of data analytics, empowering organizations and decision-makers with valuable insights for informed decision-making.
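As a minimal illustration of the statistical modeling the abstract refers to, the sketch below fits an ordinary least squares line to synthetic data; all numbers are invented and the example is not tied to any case study mentioned in the abstract.

```python
import numpy as np

# Minimal sketch of statistical modeling on synthetic data (all values invented).
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.5 * x + 1.0 + rng.normal(0, 0.5, size=x.size)

# Closed-form ordinary least squares: beta = (X^T X)^{-1} X^T y
X = np.column_stack([np.ones_like(x), x])
intercept, slope = np.linalg.solve(X.T @ X, X.T @ y)
```

The same normal-equations pattern generalizes to any number of predictors, which is why it underlies much of the statistical modeling used in data analytics.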

Keywords: data analytics, mathematical sciences, optimization, machine learning, interdisciplinary collaboration, practical applications

Procedia PDF Downloads 73
796 Geochemical Study of the Bound Hydrocarbon in the Asphaltene of Biodegraded Oils of Cambay Basin

Authors: Sayani Chatterjee, Kusum Lata Pangtey, Sarita Singh, Harvir Singh

Abstract:

Biodegradation leads to a systematic alteration of the chemical and physical properties of crude oil, showing sequential depletion of n-alkanes, cycloalkanes, and aromatics, which increases its specific gravity, viscosity, and the abundance of heteroatom-containing compounds. Biodegradation changes the molecular fingerprints and geochemical parameters of degraded oils, thus making source and maturity identification inconclusive or ambiguous. Asphaltene is equivalent to the most labile part of the respective kerogen and generally has a high molecular weight. Its complex chemical structure, with substantial microporous units, makes it suitable to occlude the hydrocarbons expelled from the source. The occluded molecules are well preserved by the macromolecular structure and thus protected from secondary alterations; they retain primary organic geochemical information over geological time. The present study involves the extraction of these occluded hydrocarbons from the asphaltene cage through mild oxidative degradation, using mild oxidative reagents such as hydrogen peroxide (H₂O₂) and acetic acid (CH₃COOH), on purified asphaltene of the biodegraded oils of the Mansa, Lanwa and Santhal fields in the Cambay Basin. The extracted occluded hydrocarbons were studied to establish oil-to-oil and oil-to-source correlations in the Mehsana block of the Cambay Basin. The n-alkane and biomarker analyses of these occluded hydrocarbons through GC and GC-MS show a biomarker imprint similar to that of the normal oils in the area, and the oils are hence correlatable with them. The abundance of C29 steranes and the presence of oleanane, gammacerane and 4-methyl sterane indicate that the oils are derived from terrestrial organic matter deposited in a stratified saline water column in a marine environment, with moderate maturity (VRc 0.6-0.8). The oil-source correlation study suggests that the oils are derived from the Jotana-Warosan Low area. 
The developed geochemical technique to extract the occluded hydrocarbons has effectively resolved the ambiguity resulting from the inconclusive fingerprints of the biodegraded oils, and the method can also be applied to other biodegraded oils.
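A back-of-the-envelope sketch of the kind of biomarker bookkeeping described above; the peak areas and the interpretation rule are invented for illustration and are not values from the study.

```python
# Hypothetical biomarker bookkeeping; peak areas are invented for illustration.
def sterane_fractions(c27, c28, c29):
    """Relative abundances of the C27/C28/C29 regular steranes."""
    total = c27 + c28 + c29
    return c27 / total, c28 / total, c29 / total

f27, f28, f29 = sterane_fractions(120.0, 150.0, 330.0)
# A dominant C29 fraction is conventionally read as a terrestrial
# organic-matter input, as in the interpretation above.
terrestrial_signal = f29 > max(f27, f28)
```

In practice, these fractions would be computed from integrated GC-MS peak areas for each oil and compared across samples to support an oil-to-oil correlation.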

Keywords: asphaltene, biomarkers, correlation, mild oxidation, occluded hydrocarbon

Procedia PDF Downloads 143
795 Reverse Engineering of a Secondary Structure of a Helicopter: A Study Case

Authors: Jose Daniel Giraldo Arias, Camilo Rojas Gomez, David Villegas Delgado, Gullermo Idarraga Alarcon, Juan Meza Meza

Abstract:

The reverse engineering processes are widely used in the industry with the main goal of determining the materials and the manufacturing processes used to produce a component. A wide range of characterization techniques and computational tools is used to obtain this information. A study case of reverse engineering applied to a secondary sandwich hybrid-type structure used in a helicopter is presented. The methodology consists of five main steps, which can be applied to any other similar component: collecting information about the service conditions of the part, disassembly and dimensional characterization, functional characterization, material properties characterization, and manufacturing process characterization. Together these provide all the supports for the traceability of the materials and processes of aeronautical products that ensure their airworthiness. A detailed explanation of each step is covered. The criticality and functionality of each part were analyzed using information from the state of the art and from interviews with the technical groups of the helicopter’s operators. The 3D optical scanning technique, standard and advanced materials characterization techniques, and finite element simulation made it possible to obtain all the characteristics of the materials used in the manufacture of the component. It was found that most of the materials are quite common in the aeronautical industry, including Kevlar, carbon and glass fibers, aluminum honeycomb core, epoxy resin and epoxy adhesive. The stacking sequence and volumetric fiber fraction are critical for the mechanical behavior; an acid digestion method was used to determine them. This also helped in the determination of the manufacturing technique, which in this case was vacuum bagging. Samples of the material were manufactured and submitted to mechanical and environmental tests. These results were compared with those obtained during reverse engineering, allowing the conclusion that the materials and manufacturing process were correctly determined. Tooling for the manufacture was designed and built according to the geometry and process requirements. The part was manufactured, and the required mechanical and environmental tests were performed. Finally, geometric characterization and non-destructive techniques verified the quality of the part.
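The acid digestion step yields the fiber mass, from which the volumetric fiber fraction follows directly; a minimal sketch in the style of ASTM D3171-type calculations, with all masses and densities invented for illustration.

```python
def fiber_volume_fraction(m_composite, m_fiber, rho_composite, rho_fiber):
    """V_f = (m_f / rho_f) / (m_c / rho_c): ratio of fiber volume to specimen volume."""
    return (m_fiber / rho_fiber) / (m_composite / rho_composite)

# Invented sample: a 2.00 g specimen leaves 1.20 g of carbon fiber after
# matrix digestion; assumed densities 1.55 g/cm^3 (composite), 1.78 g/cm^3 (fiber).
vf = fiber_volume_fraction(2.00, 1.20, 1.55, 1.78)  # about 0.52
```

The measured densities and recovered fiber mass are the only experimental inputs, which is why the digestion step is so central to reconstructing the lay-up.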

Keywords: reverse engineering, sandwich-structured composite parts, helicopter, mechanical properties, prototype

Procedia PDF Downloads 395
794 Elements of Critical Event Management: A Qualitative Study of Trauma Teams

Authors: Tan Xin Zhong Timothy, Chang Chen Jie Victor, Yew Kwan Tong, Lim Geok Peng Sandy

Abstract:

Background: Leaders in crisis response teams such as Trauma Teams in hospitals are essential to the effective coordination and direction of the team. The response to emergency trauma situations must be accurate, rapid, and well executed. To this end, the team leader’s social, technical and leadership skills are essential factors that affect the success of an emergency trauma intervention. While each emergency trauma case varies in severity and complexity, and the experience and expertise of team leaders may vary, it would be productive to identify certain coordinative and directive functions that improve the capacity for leading a team. Methods: This qualitative study of Trauma Team physicians in Singapore General Hospital (SGH) involved 50 in-depth interviews with doctors and nurses involved in Trauma Team activations, observations of Trauma Teams managing emergency patients, and reviews of audio/video recordings of 65 trauma activations. The interviews were conducted with doctors of various ranks across the relevant departments, 12 from the Emergency Department (ED), 11 from General Surgery (GS) and 8 from Orthopaedics, while the 6 nurses were from the ED. In accordance with the grounded theory approach, the content of the interviews was coded and analysed to derive broad leadership themes that corresponded with certain behavioural traits exhibited by trauma team leaders, supplemented with the observational and audio/video data. Results: The leadership behaviours of the team leaders could be typified into three broad categories: team orientation, engagement and activeness. Team orientation corresponds with the source and form of cognitive responsibility, decision-making and informational contributions, divisible into individualistic and consultative sub-categories. Engagement refers to the type of activity that leaders prefer to engage in, which reflects their attentional focus, divisible into participatory and supervisory sub-categories. Activeness is a function of the leader’s attitude towards the behavioural regulation of the team, which manifests as activity or inactivity to augment, or merely align with, protocol. These factors are not exhaustive and are contextually sensitive, but collectively they account for a significant portion of the leadership activity observed in trauma teams.

Keywords: trauma team activations, critical event management, leadership, teamwork

Procedia PDF Downloads 308
793 Influence of Nanomaterials on the Properties of Shape Memory Polymeric Materials

Authors: Katielly Vianna Polkowski, Rodrigo Denizarte de Oliveira Polkowski, Cristiano Grings Herbert

Abstract:

The use of nanomaterials in the formulation of polymeric materials modifies their molecular structure, offering an infinite range of possibilities for the development of smart products, and is of great importance for science and contemporary industry. Shape memory polymers are generally lightweight, have high shape recovery capabilities, are easy to process, and have properties that can be adapted for a variety of applications. Shape memory materials are active materials that have attracted attention due to their superior damping properties when compared to conventional structural materials. The development of methodologies capable of preparing new materials that use graphene in their structure represents a technological innovation that transforms low-cost products into advanced materials with high added value. To improve the shape memory effect (SME) of polymeric materials, it is possible to use graphene in their composition, containing a low concentration by mass of graphene nanoplatelets (GNP), graphene oxide (GO) or other functionalized graphene, via different mixing processes. As a result, there was an improvement in the SME, regarding the increase in the values of maximum strain. In addition, the use of graphene contributes to obtaining nanocomposites with superior electrical properties, greater crystallinity, and resistance to material degradation. The methodology used in this research is a systematic review: a scientific investigation gathering relevant studies on the influence of nanomaterials on the properties of shape memory polymers, using literature databases as the source. In the present study, a systematic review was performed of all papers published from 2014 to 2022 regarding graphene and shape memory polymers through a search of three databases. This study allows for easy identification of the most relevant fields of study with respect to graphene and shape memory polymers, as well as the main gaps to be explored in the literature. The addition of graphene showed improvements in obtaining higher values of maximum deformation of the material, attributed to a possible slip between stacked or agglomerated nanostructures, as well as an increase in stiffness due to the increase in the degree of phase separation, which results in a greater amount of physical cross-links, referring to the formation of short-range rigid domains.
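The shape memory effect discussed above is commonly quantified by the shape fixity and shape recovery ratios; the sketch below uses these standard definitions with invented strain values (the numbers are illustrative, not taken from the reviewed papers).

```python
def shape_fixity(strain_fixed, strain_programmed):
    """R_f: fraction of the programmed strain retained after unloading."""
    return strain_fixed / strain_programmed

def shape_recovery(strain_programmed, strain_residual):
    """R_r: fraction of the programmed strain recovered on reheating."""
    return (strain_programmed - strain_residual) / strain_programmed

# Invented strains for a neat polymer vs. a graphene-filled one:
r_neat = shape_recovery(0.50, 0.06)  # 88 % recovery
r_gnp = shape_recovery(0.60, 0.03)   # 95 % recovery
```

Reporting both ratios per thermomechanical cycle is what allows the improvement in maximum strain and recovery attributed to graphene to be compared across studies.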

Keywords: graphene, shape memory, smart materials, polymers, nanomaterials

Procedia PDF Downloads 71
792 From Shallow Semantic Representation to Deeper One: Verb Decomposition Approach

Authors: Aliaksandr Huminski

Abstract:

Semantic Role Labeling (SRL), as a shallow semantic parsing approach, includes the recognition and labeling of the arguments of a verb in a sentence. Verb participants are linked with specific semantic roles (Agent, Patient, Instrument, Location, etc.). Thus, SRL can answer key questions such as ‘Who’, ‘When’, ‘What’, ‘Where’ in a text, and it is widely applied in dialog systems, question answering, named entity recognition, information retrieval, and other fields of NLP. However, SRL has the following flaw: two sentences with identical (or almost identical) meaning can have different semantic role structures. Consider two sentences: (1) John put butter on the bread. (2) John buttered the bread. SRL for (1) and (2) will be significantly different. For the verb put in (1) it is [Agent + Patient + Goal], but for the verb butter in (2) it is [Agent + Goal]. This happens because of one of the most interesting and intriguing features of a verb: its ability to capture participants, as in the case of the verb butter, or their features, as, say, in the case of the verb drink, where the participant’s feature of being liquid is shared with the verb. This capture looks like a total fusion of meaning and cannot be decomposed in a direct way (in comparison with compound verbs like babysit or breastfeed). From this perspective, SRL looks really shallow as a semantic representation. If the key point of semantic representation is the opportunity to use it for making inferences and finding hidden reasons, it assumes by default that two different but semantically identical sentences must have the same semantic structure. Otherwise, we will have different inferences from the same meaning. To overcome the above-mentioned flaw, the following approach is suggested. Assume that: P is a participant of a relation; F is a feature of a participant; Vcp is a verb that captures a participant; Vcf is a verb that captures a feature of a participant; Vpr is a primitive verb, i.e., a verb that does not capture any participant and represents only a relation. In other words, a primitive verb is a verb whose meaning does not include meanings from its surroundings. Then Vcp and Vcf can be decomposed as: Vcp = Vpr + P; Vcf = Vpr + F. If all Vcp and Vcf are represented this way, then the primitive verbs Vpr can be considered a canonical form for SRL. As a result, there will be no hidden participants caught by a verb, since all participants will be explicitly unfolded. An obvious example of a Vpr is the verb go, which represents pure movement. In this case, the verb drink can be represented as man-made movement of liquid in a specific direction. Extracting and using primitive verbs for SRL creates a canonical representation unique for semantically identical sentences. This leads to the unification of semantic representation and resolves the critical flaw of SRL.
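The decomposition scheme above can be sketched as a lookup from capturing verbs to their primitive form plus the captured element; the lexicon entries here are illustrative toys, not entries from any existing lexical resource.

```python
# Toy lexicon for the decomposition Vcp = Vpr + P and Vcf = Vpr + F.
LEXICON = {
    "butter": ("put", "butter"),   # Vcp: captures the Patient participant
    "drink": ("move", "liquid"),   # Vcf: captures a feature of the Patient
}

def decompose(verb):
    """Return (primitive verb, captured element); a primitive verb maps to itself."""
    return LEXICON.get(verb, (verb, None))
```

Under this scheme, both "John put butter on the bread" and "John buttered the bread" normalize to the same primitive verb put with butter made explicit, giving the canonical form the abstract argues for.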

Keywords: decomposition, labeling, primitive verbs, semantic roles

Procedia PDF Downloads 350
791 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning

Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga

Abstract:

Architects and urban planners, when designing and renewing cities, have to face a complex set of problems, including the issues of noise and air pollution, which are considered hot topics (i.e., the Clean Air Act of London and the Soundscape definition). It is usually taken for granted that these problems go together because the noise pollution present in cities is often linked to traffic and industries, and these produce air pollutants as well. Traffic congestion can create both noise pollution and air pollution, because NO₂ is mostly created from the oxidation of NO, and these two are notoriously produced by processes of combustion at high temperatures (i.e., car engines or thermal power stations). The same process applies to industrial plants. What has to be investigated – and this is the topic of this paper – is whether or not there really is a correlation between noise pollution and air pollution (taking into account NO₂) in urban areas. To evaluate whether there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise App will be installed on an Android phone. The smartphone will be positioned inside a waterproof box, to stay outdoors, with an external battery to allow it to collect data continuously. The box will have a small hole for an external microphone, connected to the smartphone, which will be calibrated to collect the most accurate data. For air pollution measurements, the AirMonitor device will be used: an Arduino board to which the sensors and all the other components are plugged. After assembling the sensors, they will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period of time to have input for both week and weekend days; in this way, it will be possible to see how the situation changes during the week. The novelty is that the data will be compared to check if there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw values obtained from the sensors. To do so, the data will be converted to fit a scale that goes up to 100% and will be shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help in choosing the right mitigation solutions for the area of analysis, because it will make it possible to address both the noise and the air pollution problem with a single intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper will describe in detail the methodology and the technical solutions adopted for the realization of the sensors, the data collection, and the noise and pollution mapping and analysis.
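The percentage conversion and correlation check described above can be sketched as follows; the readings are invented, and a real analysis would use the time series collected by the sensor pairs.

```python
import numpy as np

# Invented paired readings from one sensor couple: noise in dB(A), NO2 in ug/m3.
noise = np.array([55.0, 62.0, 70.0, 66.0, 58.0, 73.0])
no2 = np.array([28.0, 35.0, 48.0, 41.0, 30.0, 52.0])

def to_percent(x):
    """Min-max rescale a series onto the common 0-100 % scale used for the maps."""
    return 100.0 * (x - x.min()) / (x.max() - x.min())

noise_pct, no2_pct = to_percent(noise), to_percent(no2)
r = np.corrcoef(noise, no2)[0, 1]  # Pearson correlation of the two pollutants
```

The rescaled series can be mapped side by side in GIS, while the correlation coefficient answers the paper's core question of whether the two pollutants actually co-vary at a given location.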

Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter

Procedia PDF Downloads 200
790 Ionic Liquids as Substrates for Metal-Organic Framework Synthesis

Authors: Julian Mehler, Marcus Fischer, Martin Hartmann, Peter S. Schulz

Abstract:

During the last two decades, the synthesis of metal-organic frameworks (MOFs) has gained ever increasing attention. Based on their pore size and shape as well as host-guest interactions, they are of interest for numerous fields related to porous materials, like catalysis and gas separation. Usually, MOF synthesis takes place in an organic solvent between room temperature and approximately 220 °C, with mixtures of polyfunctional organic linker molecules and metal precursors as substrates. Reaction temperatures above the boiling point of the solvent, i.e. solvothermal reactions, are run in autoclaves or sealed glass vessels under autogenous pressure. A relatively new approach for the synthesis of MOFs is the so-called ionothermal synthesis route. It applies an ionic liquid as a solvent, which can serve as a structure-directing template and/or a charge-compensating agent in the final coordination polymer structure. Furthermore, this method often allows for less harsh reaction conditions than the solvothermal route. Here, a variation of the ionothermal approach is reported, where the ionic liquid also serves as the organic linker source. By using 1-ethyl-3-methylimidazolium terephthalates ([EMIM][Hbdc] and [EMIM]₂[bdc]), the one-step synthesis of MIL-53(Al)/Boehmite composites with interesting features is possible. The resulting material is already formed at moderate temperatures (90-130 °C) and is stabilized in the usually unfavored ht-phase. Additionally, in contrast to already published procedures for MIL-53(Al) synthesis, no further activation at high temperatures is mandatory. A full characterization of this novel composite material is provided, including XRD, SS-NMR, El-Al., SEM as well as sorption measurements, and its interesting features are compared to MIL-53(Al) samples produced by the classical solvothermal route. Furthermore, the syntheses of the applied ionic liquids and salts are discussed. 
The influence of the degree of ionicity of the linker source [EMIM]x[H(2-x)bdc] on the crystal structure and the achievable synthesis temperature are investigated and give insight into the role of the IL during synthesis. Aside from the synthesis of MIL-53 from EMIM terephthalates, the use of the phosphonium cation in this approach is discussed as well. Additionally, the employment of ILs in the preparation of other MOFs is presented briefly. This includes the ZIF-4 framework from the respective imidazolate ILs and chiral camphorate based frameworks from their imidazolium precursors.

Keywords: ionic liquids, ionothermal synthesis, material synthesis, MIL-53, MOFs

Procedia PDF Downloads 191
789 International Retirement Migration of Westerners to Thailand: Well-Being and Future Migration Plans

Authors: Kanokwan Tangchitnusorn, Patcharawalai Wongboonsin

Abstract:

Following the ‘Golden Age of Welfare’, which brought post-war prosperity to European citizens in the 1950s, the world has witnessed the increasing mobility across borders of older citizens of First World countries. In the 1990s, the international retirement migration (IRM) of older persons became a prominent trend whose explanation requires the integration of several fields of knowledge, i.e. migration studies, tourism studies, as well as social gerontology. However, while the IRM to developed destinations in Europe (e.g. Spain, Malta, Portugal, Italy) and to developing countries like Mexico, Panama, and Morocco has been widely studied in recent decades due to its massive migration volume, the study of the IRM to remoter destinations has remained relatively sparse and incomplete. Developing countries in Southeast Asia have noticed an increasing number of retired expats, particularly Thailand, where the number of foreigners applying for a retirement visa increased from 10,709 in 2005 to 60,046 in 2014. Additionally, it was evident that the majority of Thailand’s retirement visa applicants were Westerners, i.e. citizens of the United Kingdom, the United States, Germany, and the Nordic countries, respectively. As this trend has become popular in Thailand only in recent decades, little is known about the IRM populations, their well-being, and their future migration plans. This study aimed to examine the subjective well-being, or self-evaluations of their own well-being, among Western retirees in Thailand, as well as their future migration plans, including whether they plan to stay in Thailand for life. The author employed a mixed method to obtain both quantitative and qualitative data during October 2015 – May 2016, including 330 self-administered questionnaires (246 online and 84 hard-copy responses) and 21 in-depth interviews of Western residents in Nan (2), Pattaya (4), and Chiang Mai (15). Drawing on the integration of previous subjective well-being measurements (i.e. the Personal Wellbeing Index (PWI), the Global AgeWatch Index, and the OECD guideline on measuring subjective well-being), this study measures the subjective well-being of Western retirees in Thailand in 7 dimensions: standard of living, health status, personal relationships, social connections, environmental quality, personal security and local infrastructure.
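A hypothetical sketch of how a seven-dimension score in the spirit of the PWI might be aggregated; the 0-10 rating scale and equal weighting are assumptions for illustration, not the study's actual instrument.

```python
# Assumed: each dimension rated 0-10 and equally weighted (PWI-style aggregation).
DIMENSIONS = [
    "standard of living", "health status", "personal relationships",
    "social connections", "environmental quality", "personal security",
    "local infrastructure",
]

def wellbeing_score(ratings):
    """Mean of the seven 0-10 dimension ratings, rescaled to a 0-100 score."""
    if len(ratings) != len(DIMENSIONS):
        raise ValueError("one rating per dimension expected")
    return 10.0 * sum(ratings) / len(ratings)

score = wellbeing_score([8, 7, 9, 6, 7, 8, 6])
```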

Keywords: international retirement migration, ageing, mobility, wellbeing, Western, Thailand

Procedia PDF Downloads 327
788 Classification of Digital Chest Radiographs Using Image Processing Techniques to Aid in Diagnosis of Pulmonary Tuberculosis

Authors: A. J. S. P. Nileema, S. Kulatunga, S. H. Palihawadana

Abstract:

Computer aided detection (CAD) system was developed for the diagnosis of pulmonary tuberculosis using digital chest X-rays with MATLAB image processing techniques using a statistical approach. The study comprised of 200 digital chest radiographs collected from the National Hospital for Respiratory Diseases - Welisara, Sri Lanka. Pre-processing was done to remove identification details. Lung fields were segmented and then divided into four quadrants; right upper quadrant, left upper quadrant, right lower quadrant, and left lower quadrant using the image processing techniques in MATLAB. Contrast, correlation, homogeneity, energy, entropy, and maximum probability texture features were extracted using the gray level co-occurrence matrix method. Descriptive statistics and normal distribution analysis were performed using SPSS. Depending on the radiologists’ interpretation, chest radiographs were classified manually into PTB - positive (PTBP) and PTB - negative (PTBN) classes. Features with standard normal distribution were analyzed using an independent sample T-test for PTBP and PTBN chest radiographs. Among the six features tested, contrast, correlation, energy, entropy, and maximum probability features showed a statistically significant difference between the two classes at 95% confidence interval; therefore, could be used in the classification of chest radiograph for PTB diagnosis. 
With the resulting value ranges of the five normally distributed texture features, a classification algorithm was then defined to recognize and classify the quadrant images. If the texture feature values of the quadrant image being tested fall within the defined region, it is identified as a PTBP (abnormal) quadrant and labeled ‘Abnormal’ in red, with its border highlighted in red; if the values fall outside the defined range, it is identified as PTBN (normal) and labeled ‘Normal’ in blue, with no changes to the image outline. The developed classification algorithm showed a high sensitivity of 92%, which makes it an efficient CAD system, along with a modest specificity of 70%.
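The GLCM feature extraction step can be sketched with a hand-rolled co-occurrence matrix; this uses only NumPy rather than the MATLAB toolbox the study used, and the tiny 4×4 "quadrant" image and gray-level count are invented.

```python
import numpy as np

# Hand-rolled gray-level co-occurrence matrix (GLCM); invented 4x4 quadrant image.
def glcm(img, levels, dy=0, dx=1):
    """Normalized symmetric co-occurrence matrix for the pixel offset (dy, dx)."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            a, b = img[y, x], img[y + dy, x + dx]
            m[a, b] += 1
            m[b, a] += 1  # symmetric counting
    return m / m.sum()

img = np.array([[0, 1, 1, 2],
                [1, 2, 2, 3],
                [2, 3, 3, 3],
                [3, 3, 3, 3]])
p = glcm(img, levels=4)
i, j = np.indices(p.shape)
features = {
    "contrast": float(np.sum(p * (i - j) ** 2)),
    "energy": float(np.sqrt(np.sum(p ** 2))),
    "entropy": float(-np.sum(p[p > 0] * np.log2(p[p > 0]))),
    "max_probability": float(p.max()),
}
```

Classification then reduces to range checks: a quadrant whose feature vector falls inside the PTBP value ranges derived from the training radiographs is flagged abnormal.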

Keywords: chest radiographs, computer aided detection, image processing, pulmonary tuberculosis

Procedia PDF Downloads 104
787 Remote Sensing Application in Environmental Researches: Case Study of Iran Mangrove Forests Quantitative Assessment

Authors: Neda Orak, Mostafa Zarei

Abstract:

Environmental assessment is an important component of environmental management, and various methods and techniques have been developed and implemented for it. Remote sensing (RS) is widely used in many scientific and research fields, such as geology, cartography, geography, agriculture, forestry, land-use planning, and environmental science. It can reveal cyclical changes in objects on the Earth's surface and delineate the extent of terrestrial phenomena from recorded changes and deviations in electromagnetic reflectance. This research applied RS techniques to the assessment of mangrove forests. Its aim was a quantitative analysis of the mangrove forests in the Basatin and Bidkhoon estuaries, carried out with Landsat satellite images from 1975 to 2013 matched to ground control points. These mangroves are the northernmost stands in the hemisphere, so the study can provide a good basis for improving the management of this important ecosystem. Landsat has provided researchers with valuable images for detecting changes on Earth; this research used the MSS, TM, ETM+, and OLI sensors for 1975, 1990, 2000, and 2003-2013. After essential corrections such as error fixing, band combination, and georeferencing, with the 2012 image as the base, changes were studied by maximum likelihood supervised classification and the IPVI index. A 2004 Google Earth image and ground points collected by GPS (2010-2012) were used to validate the changes derived from the satellite images. Results showed that the mangrove area in Bidkhoon in 2012 was 1,119,072 m² by GPS, 1,231,200 m² by maximum likelihood supervised classification, and 1,317,600 m² by IPVI. The corresponding areas for Basatin were 466,644 m², 88,200 m², and 63,000 m². The final results show that the forests have declined naturally, and in Basatin also as a result of human activities. The loss was offset by planting over many years, although the trend has turned downward again in recent years. Overall, satellite images show a high ability to estimate environmental processes: the research found a high correlation between indexes such as IPVI and NDVI derived from the images and the ground control points.
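The two vegetation indexes named above are simple band ratios. As a minimal illustrative sketch (not the study's processing chain, and with a purely illustrative classification threshold), they can be computed per pixel from near-infrared (NIR) and red reflectance:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def ipvi(nir, red):
    """Infrared Percentage Vegetation Index: NIR / (NIR + Red).
    Algebraically, IPVI = (NDVI + 1) / 2, so both indexes rank pixels identically."""
    return nir / (nir + red)

def is_vegetation(nir, red, threshold=0.6):
    """Classify a pixel as vegetation when IPVI exceeds a chosen threshold.
    The threshold here is illustrative, not the one used in the study."""
    return ipvi(nir, red) >= threshold
```

Since IPVI is a rescaled NDVI, differences in mapped area between methods come from the thresholds and classifiers applied on top of the indexes, not from the indexes themselves.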

Keywords: IPVI index, Landsat sensor, maximum likelihood supervised classification, Nayband National Park

Procedia PDF Downloads 274
786 Development of Market Penetration for High Energy Efficiency Technologies in Alberta’s Residential Sector

Authors: Saeidreza Radpour, Md. Alam Mondal, Amit Kumar

Abstract:

Market penetration of high energy efficiency technologies has key impacts on energy consumption and GHG mitigation, and modelling it is useful for managing the policies formulated by public or private organizations to achieve energy or environmental targets. Energy intensity in Alberta's residential sector was 148.8 GJ per household in 2012, 39% more than the Canadian average of 106.6 GJ and the highest per-household energy consumption among the provinces. Alberta's appliance energy intensity was 15.3 GJ per household in 2012, 14% higher than the average of the other provinces and territories in Canada. In this research, a framework was developed to analyze the market penetration and market share of high energy efficiency technologies in the residential sector. The overall methodology was based on data-intensive models that estimate the market penetration of appliances in the residential sector over a time period. The models are functions of a number of macroeconomic and technical parameters; their equations were fitted to twenty-two years of historical data (1990-2011) and validated through a series of statistical tests. The market shares of high-efficiency appliances were estimated for the period 2015 to 2050 from variables such as capital and operating costs, discount rate, appliance lifetime, annual interest rate, incentives, and maximum achievable efficiency. Results show that the market penetration of refrigerators is higher than that of other appliances: the stock of refrigerators per household is anticipated to increase from 1.28 in 2012 to 1.314 in 2030 and 1.328 in 2050. Modelling results show that the market penetration rate of stand-alone freezers will decrease between 2012 and 2050; freezer stock per household will decline from 0.634 in 2012 to 0.556 in 2030 and 0.515 in 2050. The stock of dishwashers per household is expected to increase from 0.761 in 2012 to 0.865 in 2030 and 0.960 in 2050. The market penetration rates of clothes washers and clothes dryers increase nearly in parallel: their stocks per household are expected to rise from 0.893 and 0.979 in 2012, respectively, to 0.960 and 1.0 in 2050. The presentation will include a detailed discussion of the modelling methodology and results.
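The abstract estimates market shares from capital and operating costs, discount rates, and appliance lifetimes. A stylized sketch of that idea (not the authors' econometric model; the cost figures and the logit sensitivity `beta` are hypothetical) compares technologies by discounted life-cycle cost and splits the market with a logit function:

```python
import math

def life_cycle_cost(capital, annual_operating, discount_rate, lifetime_years):
    """Net present cost: capital plus discounted operating costs over the lifetime."""
    pv_operating = sum(annual_operating / (1 + discount_rate) ** t
                       for t in range(1, lifetime_years + 1))
    return capital + pv_operating

def market_shares(costs, beta=0.005):
    """Logit split: cheaper technologies capture larger shares (beta is illustrative)."""
    weights = [math.exp(-beta * c) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical standard vs. high-efficiency refrigerator
standard = life_cycle_cost(capital=900, annual_operating=60,
                           discount_rate=0.05, lifetime_years=15)
efficient = life_cycle_cost(capital=1100, annual_operating=30,
                            discount_rate=0.05, lifetime_years=15)
shares = market_shares([standard, efficient])
```

In this toy setup, the high-efficiency unit's lower operating cost outweighs its higher capital cost over the lifetime, so it captures the larger share; incentives would enter as reductions to the capital term.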

Keywords: appliances efficiency improvement, energy star, market penetration, residential sector

Procedia PDF Downloads 268
785 The Scientific Phenomenon Revealed in the Holy Quran - an Update

Authors: Arjumand Warsy

Abstract:

The Holy Quran was revealed to Prophet Mohammad (May Peace and Blessings of Allah be upon Him) over fourteen hundred years ago, at a time when the majority of the people in Arabia were illiterate and very few could read or write. Knowledge of medicine, anatomy, biology, astronomy, physics, geology, geophysics, and other sciences was almost non-existent, and many superstitious and groundless beliefs, passed down through past generations, were prevalent. At that time, the Holy Quran was revealed, presenting several phenomena that have been unveiled only recently, as scientists have worked endlessly to explain these physical and biological phenomena with scientific technologies. Many important discoveries were made during the 20th century, and it is interesting to note that many of them were already present in the Holy Quran fourteen hundred years ago. The scientific phenomena mentioned in the Holy Quran cover many different fields in the biological and physical sciences and have been a source of guidance for a number of scientists. A perfect description of the creation of the universe, the orbits in space, the development process, the development of hearing prior to sight, the importance of the skin in sensing pain, the uniqueness of fingerprints, and the role of males in the selection of the sex of the baby are just a few of the many statements in the Quran that have astonished many scientists. The Quran in Chapter 20, verse 50 states: قَالَ رَبُّنَا الَّذِيۤ اَعْطٰى كُلَّ شَيْءٍ خَلْقَهٗ ثُمَّ هَدٰى ۰۰ (He said, "Our Lord is He Who has given a distinctive form to everything and then guided it aright"). Read in the light of modern molecular genetics, this brief statement evokes the genetic basis of life and how guidance is stored in the genetic material (DNA) present in the nucleus. This thread-like structure, made of only six molecules (sugar, phosphate, adenine, thymine, cytosine, and guanine), is so brilliantly structured by the Creator that it holds all the information about every living thing, whether viruses, bacteria, fungi, plants, animals, humans, or any other living being. This paper will present an update on some of the physical and biological phenomena presented in the Holy Quran and unveiled using advanced technologies during the last century, and will discuss the need to incorporate this information into curricula.

Keywords: The Holy Quran, scientific facts, curriculum, Muslims

Procedia PDF Downloads 342
784 Design and Synthesis of Copper Doped Zeolite Composite for Antimicrobial Activity and Heavy Metal Removal from Waste Water

Authors: Feleke Terefe Fanta

Abstract:

The presence of heavy metals and microbial contaminants in the aquatic system of the Akaki river basin, in a sub-city of Addis Ababa, has become a public concern as the human population increases and land development continues, because effluents from chemical and pharmaceutical industries are discharged directly onto the surrounding land, irrigation fields, and surface water bodies. In the present study, we synthesised zeolite- and copper-zeolite-composite-based adsorbents through a cost-effective and simple approach to mitigate this problem. The study determines the heavy-metal content and microbial contamination level of wastewater samples collected from the Akaki river, using zeolites and copper-doped zeolites as adsorbents. The copper-zeolite X composite was prepared by ion exchange of copper ions into the zeolite framework. The optimum loading of copper ions into the zeolite framework was studied via the pore-size determination concept, using an iodine test. The copper-loaded zeolites were characterized by X-ray diffraction (XRD), which showed a clear difference in phase purity of the zeolite before and after copper ion exchange. The concentrations of Cd, Cr, and Pb in the wastewater sample were determined by atomic absorption spectrophotometry. The mean concentrations of Cd, Cr, and Pb in the untreated sample were 0.795, 0.654, and 0.7025 mg/L, respectively. They decreased to 0.005, 0.052, and BDL (below detection limit) mg/L for the sample treated with bare zeolite X, and further to 0.005, BDL, and BDL mg/L for the sample treated with the copper-zeolite composite. Antimicrobial activity was investigated by exposing total coliforms to zeolite X and copper-modified zeolite X, which achieved complete elimination of microbials after 90 and 50 minutes of contact time, respectively, demonstrating the effectiveness of the copper-zeolite composite as an efficient disinfectant. To understand the mode of heavy-metal removal and the antimicrobial activity of the copper-loaded zeolites, the adsorbent dose, contact time, and temperature were studied. Overall, the results show that the synthesized adsorbent has high antimicrobial-disinfection and heavy-metal-removal efficiencies.
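The reported concentration drops can be summarized as removal efficiencies. A small sketch using the Cd values from the abstract (the BDL results cannot be quantified this way without the detection limit, which is not given):

```python
def removal_efficiency(initial_mg_l, final_mg_l):
    """Percent heavy-metal removal from initial and treated concentrations (mg/L)."""
    return 100.0 * (initial_mg_l - final_mg_l) / initial_mg_l

# Cd values reported in the abstract: 0.795 mg/L untreated, 0.005 mg/L after zeolite X
cd_removal = removal_efficiency(0.795, 0.005)
```

For Cd this works out to roughly 99.4% removal, consistent with the abstract's conclusion of high removal efficiency.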

Keywords: waste water, copper doped zeolite x, adsorption heavy metal, disinfection

Procedia PDF Downloads 61
783 “I” on the Web: Social Penetration Theory Revised

Authors: Dionysis Panos, Department of Communication and Internet Studies, Cyprus University of Technology

Abstract:

The widespread use of new media, and particularly social media, through fixed or mobile devices has changed in a staggering way our perception of what is "intimate" and "safe" in interpersonal communication and social relationships, and what is not. The distribution of self- and identity-related information in communication now evolves under new and different conditions and contexts. Consequently, this new framework forces us to rethink processes and mechanisms: what "exposure" means in interpersonal communication contexts, how the distinction between the "private" and the "public" nature of information is negotiated online, and how the "audiences" we interact with are understood and constructed. Drawing on an interdisciplinary perspective that combines sociology, communication psychology, media theory, and new media and social networks research, as well as on the empirical findings of a longitudinal comparative study, this work proposes an integrative model for comprehending the mechanisms of personal information management in interpersonal communication, applicable to both online (computer-mediated) and offline (face-to-face) communication. The presentation is based on conclusions drawn from a longitudinal qualitative study of 458 new media users from 24 countries over almost a decade. The main conclusions include: (1) There is a clear, evidenced shift in users' perception of the "security" and "familiarity" of the Web between the pre- and post-Web 2.0 eras; the role of social media in this shift was catalytic. (2) Basic Web 2.0 applications dramatically changed the nature of the Internet itself, transforming it from a place reserved for "elite users / technical knowledge keepers" into a place of "open sociability" for anyone. (3) Web 2.0 and social media brought about a significant change in the concept of the "audience" we address in interpersonal communication: the former "general and unknown audience" of personal home pages was converted into an "individual and personal" audience chosen by the user under various criteria. (4) The way we negotiate the "private" and "public" nature of personal information has changed fundamentally. (5) The distinctive features of the mediated environment of online communication, and the critical changes that have occurred since the advance of Web 2.0, call for reconsidering and updating the theoretical models and analysis tools we use to comprehend the mechanisms of interpersonal communication and personal information management. A new model is therefore proposed here for understanding how interpersonal communication evolves, based on a revision of social penetration theory.

Keywords: new media, interpersonal communication, social penetration theory, communication exposure, private information, public information

Procedia PDF Downloads 350
782 A Survey of Mental and Personality Profiles of Malingerer Clients of an Iranian Forensic Medicine Center Based on the Revised NEO Personality Inventory and the Minnesota Multiphasic Personality Inventory Questionnaires

Authors: Morteza Rahbar Taramsari, Arya Mahdavi Baramchi, Mercedeh Enshaei, Ghazaleh Keshavarzi Baramchi

Abstract:

Introduction: Malingering is one of the most challenging issues in forensic psychology and imposes a heavy financial burden on health care and legal systems. Some mental and personality abnormalities appear to play a crucial role in this condition. Materials and Methods: In this cross-sectional study, we assessed 100 malingering clients of the Gilan province general office of forensic medicine, all of whom completed the related questionnaires. Psychometric characteristics were collected with the 71-item short form of the Minnesota Multiphasic Personality Inventory (MMPI), and personality traits were assessed with the NEO Personality Inventory-Revised (NEO PI-R), a 240-item, reliable, and accurate measure of the five domains of personality. Results: The 100 malingering clients (55 males and 45 females) were 23 to 45 (32 +/- 5.6) years old. Regarding marital status, 36% were single, 57% were married, and 7% were divorced. Almost two-thirds of the participants (64%) were unemployed, 21% were self-employed, and the rest were employed. The MMPI clinical scales showed mean (SD) T scores of 67 (9.2) for Hypochondriasis (Hs), 87 (7.9) for Depression (D), 74 (5.8) for Hysteria (Hy), 62 (8.5) for Psychopathic Deviate (Pd), 76 (8.4) for Masculinity-Femininity (Mf), 62 (4.5) for Paranoia (Pa), 80 (7.9) for Psychasthenia (Pt), 69 (6.8) for Schizophrenia (Sc), 64 (5.9) for Hypomania (Ma), and 58 (4.3) for Social Introversion (Si). On the NEO PI-R, the mean (SD) T scores for the five personality domains were 65 (9.2) for Neuroticism, 51 (7.9) for Extraversion, 43 (5.8) for Openness, 35 (3.4) for Agreeableness, and 42 (4.9) for Conscientiousness. Conclusion: On the MMPI, Hypochondriasis (Hs), Depression (D), Hysteria (Hy), Masculinity-Femininity (Mf), Psychasthenia (Pt), and Schizophrenia (Sc) had high scores (T >= 65), which is in the pathological range and psychologically significant. On the NEO PI-R, Neuroticism was in the high range; Openness, Agreeableness, and Conscientiousness were in the low range; and Extraversion was in the average range. It therefore seems that malingerers require basic evaluations across different psychological fields. Additional research is needed to provide stronger evidence of the possible effects of the mentioned factors on malingering.
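The conclusion applies the conventional cutoff of T >= 65 for clinical significance. A small sketch that flags elevated scales from the reported mean T scores (scale abbreviations as in the abstract):

```python
def elevated_scales(t_scores, cutoff=65):
    """Return MMPI clinical scales whose T score meets the conventional
    clinical-significance cutoff (T >= 65, as used in the abstract)."""
    return sorted(scale for scale, t in t_scores.items() if t >= cutoff)

# Mean T scores reported for the malingering sample
mmpi = {"Hs": 67, "D": 87, "Hy": 74, "Pd": 62, "Mf": 76,
        "Pa": 62, "Pt": 80, "Sc": 69, "Ma": 64, "Si": 58}
```

Applied to the sample means, this reproduces the six scales the authors single out as pathological (D, Hs, Hy, Mf, Pt, Sc); individual profiles would of course be flagged case by case, not from group means.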

Keywords: malingerers, mental profile, MMPI, NEO PI-R, personality profile

Procedia PDF Downloads 239
781 Data Model to Predict Customize Skin Care Product Using Biosensor

Authors: Ashi Gautam, Isha Shukla, Akhil Seghal

Abstract:

Biosensors are analytical devices that use a biological sensing element to detect and measure a specific chemical substance or biomolecule in a sample. Owing to their high specificity, sensitivity, and selectivity, they are widely used in fields such as medical diagnostics, environmental monitoring, and food analysis. In this paper, a machine learning model is proposed for predicting the suitability of skin care products from biosensor readings. The model takes features extracted from the readings, such as biomarker concentration, skin hydration level, presence of inflammation, sensitivity, and free radicals, and outputs the most appropriate skin care product for an individual. The model is based on supervised learning: it is trained on a labeled dataset of biosensor readings paired with skin care product information, from which it learns the patterns and relationships between readings and products. Once trained, it can predict the most suitable product for an individual from their biosensor readings alone. Performance is evaluated with several metrics, including accuracy, precision, recall, and F1 score. The aim of this research is a personalised skin care product recommendation system built on biosensor data, which is particularly useful in the skin care industry, where personalised recommendations can lead to better outcomes for consumers. The results show that the proposed model accurately predicts the most appropriate skin care product for an individual from their biosensor readings, and the evaluation metrics demonstrate its effectiveness. The model therefore has significant potential for practical use in personalised skin care recommendations; further research can improve its accuracy and effectiveness.
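The abstract does not specify the learning algorithm, so as a hypothetical stand-in, a k-nearest-neighbours classifier illustrates the supervised setup: labeled biosensor feature vectors in, a product label out. The feature names, readings, and product labels below are all invented for illustration:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train, features, k=3):
    """Predict a product label via k-nearest neighbours.
    `train` is a list of (feature_vector, product_label) pairs."""
    nearest = sorted(train, key=lambda item: euclidean(item[0], features))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)   # majority vote

# Hypothetical readings: (hydration, inflammation, sensitivity, free_radicals)
train = [
    ((0.2, 0.8, 0.7, 0.6), "soothing serum"),
    ((0.3, 0.9, 0.6, 0.5), "soothing serum"),
    ((0.9, 0.1, 0.2, 0.2), "light moisturizer"),
    ((0.8, 0.2, 0.1, 0.3), "light moisturizer"),
]
```

A new reading such as `(0.25, 0.85, 0.65, 0.55)` falls nearest the inflamed, dehydrated examples and is assigned their label; the accuracy, precision, recall, and F1 metrics from the abstract would be computed over a held-out labeled test set.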

Keywords: biosensors, data model, machine learning, skin care

Procedia PDF Downloads 80
780 Verification of Dosimetric Commissioning Accuracy of Flattening Filter Free Intensity Modulated Radiation Therapy and Volumetric Modulated Therapy Delivery Using Task Group 119 Guidelines

Authors: Arunai Nambi Raj N., Kaviarasu Karunakaran, Krishnamurthy K.

Abstract:

The purpose of this study was to create American Association of Physicists in Medicine (AAPM) Task Group 119 (TG 119) benchmark plans for flattening filter free (FFF) beam deliveries of intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) in the Eclipse treatment planning system. The planning data were compared with flattening filter (FF) IMRT and VMAT plan data to verify the dosimetric commissioning accuracy of FFF deliveries. AAPM TG 119 proposed a set of test cases, called multi-target, mock prostate, mock head and neck, and C-shape, to ascertain the overall accuracy of IMRT planning, measurement, and analysis. We used these test cases to investigate the performance of the Eclipse treatment planning system for FFF beam deliveries. For each test case, we generated two sets of treatment plans, the first using 7-9 IMRT fields and the second using a two-arc VMAT technique, for each of the four beams (6 MV FF, 6 MV FFF, 10 MV FF, and 10 MV FFF). The planning objectives and doses were set as described in TG 119; the dose prescriptions for multi-target, mock prostate, mock head and neck, and C-shape were 50, 75.6, 50, and 50 Gy, respectively. The point dose (mean dose to the contoured chamber volume) at the specified locations was measured with a compact (CC-13) ion chamber. The composite planar dose and per-field gamma analysis were measured with an IMatriXX Evaluation 2D array and OmniPro IMRT software (version 1.7b). FFF deliveries of the IMRT and VMAT plans were comparable to FF deliveries, and our planning and quality assurance results matched the TG 119 data. AAPM TG 119 test cases are thus useful for generating FFF benchmark plans. From the data obtained in this study, we conclude that the commissioning of FFF IMRT and FFF VMAT delivery was within the TG 119 limits and that the performance of the Eclipse treatment planning system for FFF plans was satisfactory.
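Per-field gamma analysis, mentioned above, scores agreement between measured and planned dose using combined dose-difference and distance-to-agreement (DTA) criteria. A simplified 1-D sketch follows (real QA uses 2-D/3-D dose grids with interpolation; the 3%/3 mm tolerances are common defaults assumed here, with doses taken as normalized):

```python
import math

def gamma_index(ref_pos, ref_dose, eval_pos, eval_dose,
                dose_tol=0.03, dist_tol_mm=3.0):
    """1-D gamma per reference point: the minimum over evaluated points of
    sqrt((dose diff / dose tolerance)^2 + (distance / DTA)^2).
    dose_tol is in the same normalized dose units (here 3% of maximum dose)."""
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        g = min(math.sqrt(((ed - rd) / dose_tol) ** 2
                          + ((ep - rp) / dist_tol_mm) ** 2)
                for ep, ed in zip(eval_pos, eval_dose))
        gammas.append(g)
    return gammas

def pass_rate(gammas):
    """Percentage of reference points with gamma <= 1 (the pass criterion)."""
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)
```

A profile identical to its reference yields gamma 0 everywhere (100% pass); a uniform 2% dose offset still passes under a 3% tolerance because the dose term stays below 1.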

Keywords: flattening filter free beams, intensity modulated radiation therapy, task group 119, volumetric modulated arc therapy

Procedia PDF Downloads 132
779 Investigation for Pixel-Based Accelerated Aging of Large Area Picosecond Photo-Detectors

Authors: I. Tzoka, V. A. Chirayath, A. Brandt, J. Asaadi, Melvin J. Aviles, Stephen Clarke, Stefan Cwik, Michael R. Foley, Cole J. Hamel, Alexey Lyashenko, Michael J. Minot, Mark A. Popecki, Michael E. Stochaj, S. Shin

Abstract:

Micro-channel plate photo-multiplier tubes (MCP-PMTs) have become ubiquitous and are widely considered candidates for next-generation high energy physics experiments due to their picosecond timing resolution, ability to operate in strong magnetic fields, and low noise rates. A key factor that determines the applicability of MCP-PMTs is their lifetime, especially when they are used in high-event-rate experiments. We have developed a novel method for investigating the aging behavior of an MCP-PMT on an accelerated basis. The method involves exposing a localized region of the MCP-PMT to photons at a high repetition rate. This pixel-based method was inspired by earlier results showing that damage to the photocathode of the MCP-PMT occurs primarily at the site of light exposure, while the surrounding region undergoes minimal damage. One advantage of the pixel-based method is that it allows the dynamics of photocathode damage to be studied at multiple locations within the same MCP-PMT under different operating conditions. In this work, we use the pixel-based accelerated lifetime test to investigate the aging behavior of a 20 cm x 20 cm Large Area Picosecond Photo Detector (LAPPD) manufactured by INCOM Inc. at multiple locations within the same device under different operating conditions. We compare the aging behavior obtained from the first lifetime test, conducted under high-gain conditions, to the lifetime obtained at a different gain. Through this work, we aim to correlate the lifetime of the MCP-PMT with the rate of ion feedback, which is a function of the gain of each MCP and which can also vary from point to point across a large-area (400 cm²) MCP. The tests were made possible by the uniqueness of the LAPPD design, which allows independent control of the gain of the chevron-stacked MCPs. We will further discuss the implications of our results for optimizing the operating conditions of the detector in high-event-rate experiments.

Keywords: electron multipliers (vacuum), LAPPD, lifetime, micro-channel plate photo-multiplier tubes, photoemission, time-of-flight

Procedia PDF Downloads 150
778 Solomon Islands Decentralization Efforts

Authors: Samson Viulu, Hugo Hebala, Duddley Kopu

Abstract:

The Constituency Development Fund (CDF) is a controversial fund that has existed in the Solomon Islands since the early 1990s. It is controversial largely because it is handled directly by members of parliament (MPs) of the Solomon Islands legislative chamber, and it is commonly described as a political slush fund because only the voters of MPs benefit from it, to retain their loyalty. The CDF was established by a legislative act in 2013; however, the act has no subsidiary regulations, so its governance is very weak. The CDF is intended to establish development projects in the rural areas of the Solomon Islands to spur economic growth, yet although almost USD 500M of CDF money was spent in the last decade, the economy of the Solomon Islands has not grown; rather, it has regressed. The Solomon Islands has now formulated its first home-grown policy aimed at guiding the overall development of the fifty constituencies, improving the delivery mechanisms of the CDF, and strengthening its governance through regulation of the CDF Act 2013. The Solomon Islands Constituency Development Policy is the country's first since gaining independence in 1978. It emphasizes a cross-sectoral approach through effective partnerships and collaborations and the decentralization of government services to the isolated rural areas of the country. The new policy drives the political government's efforts to decentralize services to isolated rural communities and to encourage the participation of rural dwellers in economic activities. The decentralization will see the establishment of constituency offices in all constituencies and the piloting of townships in constituencies that have met the statutory requirements of the state, and it encourages constituencies to become development agents of the national government rather than mere political boundaries. It will go hand in hand with the establishment of Solomon Islands Special Economic Zones (SEZ), in which investors will be given special privileges and exemptions from government taxes and permits to attract tangible development to rural constituencies. The design and formulation of the new development policy are supported by the UNDP office in the Solomon Islands. The new policy promotes a reorientation of resource allocation toward the productive and resource sectors, easier access to finance for entrepreneurs, and growth in rural entrepreneurship in agriculture, fisheries, downstream processing, and tourism across the Solomon Islands. This new policy approach will greatly assist the country in graduating from least developed country status in a few years' time.

Keywords: decentralization, constituency development fund, Solomon Islands constituency development policy, partnership, entrepreneurship

Procedia PDF Downloads 64
777 Optimal Data Selection in Non-Ergodic Systems: A Tradeoff between Estimator Convergence and Representativeness Errors

Authors: Jakob Krause

Abstract:

The past financial crisis has shown that contemporary risk management models provide an unjustified sense of security and fail miserably in the situations in which they are needed most. In this paper, we start from the assumption that risk is a notion that changes over time, so past data points have only limited explanatory power for the current situation. Our objective is to derive the optimal amount of representative information by optimizing between two adverse forces: estimator convergence, which incentivizes us to use as much data as possible, and non-representativeness, which does the opposite. In this endeavor, the cornerstone assumption of access to identically distributed random variables is weakened and replaced by the assumption that the law of the data-generating process changes over time. Hence, this paper gives a quantitative theory of how to perform statistical analysis in non-ergodic systems. As an application, we discuss the impact of a paragraph in the latest iteration of proposals by the Basel Committee on Banking Regulation. We start from the premise that the severity of assumptions should correspond to the robustness of the system they describe; in the formal description of physical systems, the level of assumptions can be much higher. It follows that every concept carried over from the natural sciences to economics must be checked for plausibility in its new surroundings. Most of probability theory has been developed for the analysis of physical systems and is based on the independent and identically distributed (i.i.d.) assumption. In economics, both parts of the i.i.d. assumption are inappropriate; however, only the independence part has, so far, been weakened to a sufficient degree. In this paper, an appropriate class of non-stationary processes is used, and their law is tied to a formal object measuring representativeness. Subsequently, we identify the data set that on average minimizes the estimation error stemming from both insufficient and non-representative data. Applications are far-reaching in a variety of fields. In the paper itself, we apply the results to analyze a paragraph in the Basel 3 framework on banking regulation with severe implications for financial stability. Beyond the realm of finance, other potential applications include the reproducibility crisis in the social sciences (but not in the natural sciences) and the modeling of limited understanding and learning behavior in economics.
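The convergence-versus-representativeness tradeoff can be caricatured with a scalar toy model (not the paper's semimartingale framework): estimation variance decays like 1/n while a drift-induced representativeness bias grows with the window length n. The noise and drift parameters below are illustrative:

```python
def expected_error(n, noise_var, drift):
    """Toy tradeoff: estimator variance falls as noise_var / n, while using
    older observations adds a representativeness bias growing as drift * n."""
    return noise_var / n + (drift * n) ** 2

def optimal_window(noise_var, drift, max_n=10_000):
    """Window length minimizing the combined expected error."""
    return min(range(1, max_n + 1),
               key=lambda n: expected_error(n, noise_var, drift))
```

In this caricature the optimum sits near n* = (noise_var / (2 * drift**2)) ** (1/3): faster drift shortens the optimal window, matching the intuition that in non-ergodic systems old data stops being representative.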

Keywords: banking regulation, non-ergodicity, risk management, semimartingale modeling

Procedia PDF Downloads 131
776 Analyzing the Risk Based Approach in General Data Protection Regulation: Basic Challenges Connected with Adapting the Regulation

Authors: Natalia Kalinowska

Abstract:

The adoption of the General Data Protection Regulation (GDPR) concluded four years of work by the European Commission in this area in the European Union. Considering the far-reaching changes the GDPR will introduce, the European legislator envisaged a two-year transitional period: member states and companies have to prepare for the new regulation by 25 May 2018. The idea at the heart of this new attitude to data protection in the European Union is the risk-based approach. So far, as a result of the implementation of Directive 95/46/EC, many European countries (including Poland) have adopted very particular regulations specifying technical and organisational security measures; the Polish implementing rules, for example, even prescribe how long a password should be. Under the new approach, from May 2018 controllers and processors will be obliged to apply security measures adequate to the level of risk associated with the specific data processing. Risk under the GDPR should be interpreted as the likelihood of a breach of the rights and freedoms of the data subject. According to Recital 76, the likelihood and severity of the risk to the rights and freedoms of the data subject should be determined by reference to the nature, scope, context, and purposes of the processing. The GDPR does not prescribe the security measures to be applied; the recitals give only examples, such as anonymization or encryption. It is the controller's decision what security measures are sufficient, and the controller is responsible if those measures prove insufficient or if the identification of the risk level is incorrect. The regulation distinguishes several levels of risk. Recital 76 mentions risk and high risk, but some lawyers identify one more category, low risk/no risk: data processing that is unlikely to result in a risk to the rights and freedoms of natural persons. The GDPR also lists types of processing for which a controller does not have to evaluate the level of risk because they are classified as "high risk" processing, e.g., large-scale processing of special categories of data or processing using new technologies. The methodology includes analysis of legal regulations, e.g., the GDPR and the Polish Act on the Protection of Personal Data, as well as ICO guidelines and articles concerning the risk-based approach in the GDPR. The main conclusion is that an appropriate risk assessment is the key to keeping data safe and avoiding financial penalties. On the one hand, this approach seems more equitable, not only for controllers and processors but also for data subjects; on the other hand, it increases controllers' uncertainty in the assessment, which could directly result in incorrect data protection and potential responsibility for infringement of the regulation.

Keywords: general data protection regulation, personal data protection, privacy protection, risk based approach

Procedia PDF Downloads 240
775 Design of Identification Based Adaptive Control for Fermentation Process in Bioreactor

Authors: J. Ritonja

Abstract:

The biochemical technology has been developing extremely fast since the middle of the last century. The main reason for such development represents a requirement for large production of high-quality biologically manufactured products such as pharmaceuticals, foods, and beverages. The impact of the biochemical industry on the world economy is enormous. The great importance of this industry also results in intensive development in scientific disciplines relevant to the development of biochemical technology. In addition to developments in the fields of biology and chemistry, which enable to understand complex biochemical processes, development in the field of control theory and applications is also very important. In the paper, the control for the biochemical reactor for the milk fermentation was studied. During the fermentation process, the biophysical quantities must be precisely controlled to obtain the high-quality product. To control these quantities, the bioreactor’s stirring drive and/or heating system can be used. Available commercial biochemical reactors are equipped with open loop or conventional linear closed loop control system. Due to the outstanding parameters variations and the partial nonlinearity of the biochemical process, the results obtained with these control systems are not satisfactory. To improve the fermentation process, the self-tuning adaptive control system was proposed. The use of the self-tuning adaptive control is suggested because the parameters’ variations of the studied biochemical process are very slow in most cases. To determine the linearized mathematical model of the fermentation process, the recursive least square identification method was used. Based on the obtained mathematical model the linear quadratic regulator was tuned. The parameters’ identification and the controller’s synthesis are executed on-line and adapt the controller’s parameters to the fermentation process’ dynamics during the operation. 
The proposed combination represents an original solution for the control of the milk fermentation process. The purpose of the paper is to contribute to the progress of control systems for biochemical reactors. The proposed adaptive control system was tested thoroughly. The results show that it assures much better tracking of the reference signal than a conventional linear control system with fixed control parameters.
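The on-line identification step described in the abstract can be sketched as follows: a minimal recursive least-squares (RLS) update with a forgetting factor, applied to a first-order ARX model with simulated data. The model orders, forgetting factor, and data are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One recursive least-squares update with forgetting factor lam.

    theta : current parameter estimate
    P     : covariance matrix of the estimate
    phi   : regressor vector
    y     : measured output
    """
    K = P @ phi / (lam + phi @ P @ phi)      # gain vector
    theta = theta + K * (y - phi @ theta)    # prediction-error correction
    P = (P - np.outer(K, phi @ P)) / lam     # covariance update
    return theta, P

# Identify y[k] = a*y[k-1] + b*u[k-1] from noiseless simulated data
# with true parameters a = 0.9, b = 0.5.
rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.5
theta = np.zeros(2)          # initial guess
P = 1e3 * np.eye(2)          # large initial covariance: low prior confidence
y_prev, u = 0.0, rng.standard_normal(500)
for k in range(1, 500):
    y = a_true * y_prev + b_true * u[k - 1]
    phi = np.array([y_prev, u[k - 1]])
    theta, P = rls_step(theta, P, phi, y)
    y_prev = y

print(theta)  # converges towards [0.9, 0.5]
```

In the scheme described by the abstract, the model identified this way would be relinearized at each step and fed to the LQR synthesis, so the controller tracks the slowly varying process dynamics.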

Keywords: adaptive control, biochemical reactor, linear quadratic regulator, recursive least square identification

Procedia PDF Downloads 105
774 Optimal Allocation of Battery Energy Storage Considering Stiffness Constraints

Authors: Felipe Riveros, Ricardo Alvarez, Claudia Rahmann, Rodrigo Moreno

Abstract:

Around the world, many countries have committed to decarbonizing their electricity systems. Under this global drive, converter-interfaced generators (CIG) such as wind and photovoltaic generation appear as cornerstones for achieving these energy targets. Despite its benefits, increasing use of CIG brings several technical challenges in power systems, especially from a stability viewpoint. Among the key differences are limited short-circuit current capacity, the inertia-less characteristic of CIG, and response times within the electromagnetic timescale. Along with the integration of CIG into the power system, one enabling technology for the energy transition towards low-carbon power systems is battery energy storage systems (BESS). Because of the flexibility that BESS provides in power system operation, its integration allows for mitigating the variability and uncertainty of renewable energies, thus optimizing the use of existing assets and reducing operational costs. Another characteristic of BESS is that they can also support power system stability by injecting reactive power during faults, providing short-circuit currents, and delivering fast frequency response. However, most methodologies for sizing and allocating BESS in power systems are based on economic aspects and do not exploit the benefits that BESS can offer to system stability. In this context, this paper presents a methodology for determining the optimal allocation of battery energy storage systems (BESS) in weak power systems with high levels of CIG. Unlike traditional economic approaches, this methodology incorporates stability constraints when allocating BESS, aiming to mitigate instability issues arising from weak grid conditions with low short-circuit levels. The proposed methodology offers valuable insights for power system engineers and planners seeking to maintain grid stability while harnessing the benefits of renewable energy integration.
The methodology is validated on the reduced Chilean electrical system. The results show that integrating BESS under stability criteria into a power system with high levels of CIG contributes to decarbonizing and strengthening the network in a cost-effective way while sustaining system stability. This paper lays a foundation for understanding the benefits of integrating BESS in electrical power systems and for coordinating their placement in future converter-dominated power systems.
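The flavor of an allocation problem with a stability constraint can be illustrated with a toy linear program: minimize BESS installation cost subject to a total storage target and a floor on a short-circuit-level proxy at a weak bus. The bus data, cost coefficients, and the single proxy constraint are hypothetical; the paper's actual formulation and stability model are more elaborate.

```python
from scipy.optimize import linprog

# Hypothetical 3-bus example: choose BESS capacity x_i (MWh) per bus to
# minimize cost while (i) meeting a total storage target of 150 MWh and
# (ii) keeping a short-circuit-level proxy at weak bus 1 above a floor.
cost = [1.0, 1.2, 0.9]          # assumed cost per MWh at each bus

# linprog minimizes cost @ x subject to A_ub @ x <= b_ub, so the >=
# constraints are negated.
A_ub = [
    [-1.0, -1.0, -1.0],         # x1 + x2 + x3 >= 150  (storage target)
    [-0.8, -0.1, -0.0],         # 0.8*x1 + 0.1*x2 >= 40 (SCL proxy at bus 1)
]
b_ub = [-150.0, -40.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 100)] * 3)
print(res.x, res.fun)
```

Without the stability row, everything would land on the cheapest bus 3; with it, the optimizer places 50 MWh at the weak bus 1 (the cheapest source of short-circuit support) and fills the rest at bus 3, which is the trade-off the abstract argues for.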

Keywords: battery energy storage, power system stability, system strength, weak power system

Procedia PDF Downloads 45
773 Evidence of a Negativity Bias in the Keywords of Scientific Papers

Authors: Kseniia Zviagintseva, Brett Buttliere

Abstract:

Science is fundamentally a problem-solving enterprise, and scientists pay more attention to negative things that cause them dissonance and a negative affective state of uncertainty or contradiction. While philosophers of science agree on this, there are few empirical demonstrations. Here we examine the keywords of papers published by PLOS in 2014 and show, with several sentiment analyzers, that negative keywords are studied more than positive ones. Our dataset is the 927,406 keywords of 32,870 scientific articles in all fields published in 2014 by the journal PLOS ONE (collected from Altmetric.com). By counting how often the 47,415 unique keywords are used, we can examine whether negative topics are studied more than positive ones. To determine the sentiment of the keywords, we utilized two sentiment analysis tools, Hu and Liu (2004) and SentiStrength (2014). The results below are for Hu and Liu, as these are the less convincing results. The average keyword was used 19.56 times, with half of the keywords used only once and a maximum of 18,589 uses. The keywords identified as negative were used 37.39 times on average, the positive keywords 14.72 times, and the neutral keywords 19.29 times. This difference is only marginally significant, with an F value of 2.82 and a p of .05, but one must keep in mind that more than half of the keywords are used only once, which artificially increases the variance and drives the effect size down. To examine more closely, we looked at the top 25 most used keywords that carry a sentiment. Among the top 25, there are only two positive words, ‘care’ and ‘dynamics’, in positions 5 and 13 respectively, with all the rest identified as negative. ‘Diseases’ is the most studied keyword with 8,790 uses, with ‘cancer’ and ‘infectious’ being the second and fourth most used sentiment-laden keywords.
The sentiment analysis is not perfect, though, as the words ‘diseases’ and ‘disease’ are split, taking the 1st and 3rd positions. Combined, they remain the most common sentiment-laden keyword, used 13,236 times. Beyond splitting words, the sentiment analyzer labels ‘regression’ and ‘rat’ as negative, and these should probably be considered false positives. Despite these potential problems, the effect is apparent, as even positive keywords like ‘care’ could or should be considered negative, since this word most commonly appears as part of ‘health care’, ‘critical care’, or ‘quality of care’ and is generally associated with how to improve it. All in all, the results suggest that negative concepts are studied more, providing support for the notion that science is, most generally, a problem-solving enterprise. The results also provide evidence that negativity and contradiction are related to greater productivity and positive outcomes.
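The counting logic described above can be sketched as follows: tally how often each unique keyword is used, label each with a sentiment lexicon, and compare mean usage per sentiment class. The lexicon and keyword list here are hand-made toys for illustration, not the Hu and Liu (2004) lexicon or the PLOS ONE dataset.

```python
from collections import Counter
from statistics import mean

# Toy sentiment lexicon (illustrative only).
LEXICON = {"cancer": "neg", "disease": "neg", "infection": "neg",
           "care": "pos", "dynamics": "pos"}

# Toy keyword occurrences, as if collected across many papers.
keywords = (["cancer"] * 40 + ["disease"] * 35 + ["infection"] * 25 +
            ["care"] * 15 + ["dynamics"] * 10 + ["genome"] * 20)

counts = Counter(keywords)                    # uses per unique keyword
by_sent = {"pos": [], "neg": [], "neutral": []}
for kw, n in counts.items():
    by_sent[LEXICON.get(kw, "neutral")].append(n)

# Mean number of uses per sentiment class, as in the abstract's comparison.
for sent, uses in by_sent.items():
    print(sent, mean(uses))
```

On this toy data the negative keywords average the most uses, mirroring the direction of the reported effect; the real analysis additionally needs the stemming/merging step the abstract discusses for ‘disease’/‘diseases’.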

Keywords: bibliometrics, keywords analysis, negativity bias, positive and negative words, scientific papers, scientometrics

Procedia PDF Downloads 167
772 Technical Option Brought Solution for Safe Waste Water Management in Urban Public Toilet and Improved Ground Water Table

Authors: Chandan Kumar

Abstract:

Background and Context: Population growth and rapid urbanization have resulted in nearly 2 lakh migrants, along with their families, moving to Delhi each year in search of jobs. Most of these poor migrant families end up living in slums, constituting an estimated population of 1.87 lakh every year. Further, more than half (52 per cent) of Delhi’s population resides in places such as unauthorized and resettled colonies. The slum population is fully dependent on public toilets. In public toilets, manholes are connected either to a sewer line or to a septic tank. Public toilets connected to septic tanks face major challenges in disposing of waste water: they have to discharge it into open drains outside, where it stagnates around the public toilet complex and near the slum area. As a result, diseases such as malaria, dengue, and chikungunya break out in the slum area due to the stagnant waste water. Save the Children carried out an intervention and innovation in 21 public toilet complexes of South Delhi and North Delhi. These complexes faced the same waste water disposal problem, each discharging a minimum of 1,800 liters of waste water into open drains every day, which caused stagnant-water-borne diseases in the nearby community. Construction of Soak Wells: Constructing soak wells in an urban context was an innovative approach to minimizing the waste water management problem and raising the water table of the existing borewell in each toilet complex. The technique provided a groundwater recharging solution, and the additional water was utilized for vegetable gardening within the complex premises. Each soak well was constructed with multiple filter media, an inlet, and a safeguarding bed on the surrounding surface. After construction, each soak well began absorbing 2,000 liters of waste water per day, raising the ground water level through the different filter media. Finally, we brought about a change in the communities by constructing soak wells with a zero-maintenance system.
These public toilet complexes were empowered with a safe waste water disposal mechanism and saw a reduction in stagnant-water-borne diseases.

Keywords: diseases, ground water recharging system, soak well, toilet complex, waste water

Procedia PDF Downloads 533
771 Digital Transformation and Digitalization of Public Administration

Authors: Govind Kumar

Abstract:

The concept of ‘e-governance’, brought about by the new wave of reforms, namely ‘LPG’, in the early 1990s, has been enabling governments across the globe to digitally transform themselves. Digital transformation equips governments for qualitative decisions, optimization in the rational use of resources, facilitation of cost-benefit analyses, and elimination of redundancy and corruption with the help of ICT-based application interfaces. ICT-based applications and technologies have enormous potential for effecting positive change in the social lives of the global citizenry. Supercomputers test and analyze millions of drug molecules for developing candidate vaccines to combat the global pandemic. Further, e-commerce portals help distribute and supply household items and medicines, while videoconferencing tools provide a visual interface between clients and hosts. Besides, crop yields are being maximized with the help of drones and machine learning, whereas satellite data, artificial intelligence, and cloud computing help governments with the detection of illegal mining, tackling deforestation, and managing freshwater resources. Such e-applications have the potential to take governance an extra mile by achieving the five Es (effective, efficient, easy, empower, and equity) of e-governance and the six Rs (reduce, reuse, recycle, recover, redesign, and remanufacture) of sustainable development. If such digital transformation gains traction within the government framework, it will replace traditional administration with the digitalization of public administration. On the other hand, it has brought a new set of challenges before governments, such as the digital divide, e-illiteracy, and the technological divide, as well as problems like handling e-waste, technological obsolescence, cyber terrorism, e-fraud, hacking, and phishing. Therefore, it would be essential to bring in the right mixture of technological and humanistic interventions to address these issues.
This is because technology lacks an emotional quotient, and the administration does not work like technology. Both are self-effacing unless a blend of technology and a humane face is brought into the administration. The paper will empirically analyze the significance of the technological framework of digital transformation within the government setup for the digitalization of public administration, on the basis of a synthesis of two case studies undertaken from two diverse fields of administration, and present a future framework for the study.

Keywords: digital transformation, electronic governance, public administration, knowledge framework

Procedia PDF Downloads 84
770 Composition Dependence of Ni 2p Core Level Shift in Fe1-xNix Alloys

Authors: Shakti S. Acharya, V. R. R. Medicherla, Rajeev Rawat, Komal Bapna, Deepnarayan Biswas, Khadija Ali, K. Maiti

Abstract:

The discovery of the Invar effect in the Fe1-xNix alloy with 35% Ni concentration has stimulated enormous experimental and theoretical research. Elemental Fe and low-Ni-concentration Fe1-xNix alloys, which possess a body-centred cubic (bcc) crystal structure at ambient temperature and pressure, transform to the hexagonally close-packed (hcp) phase at around 13 GPa. Magnetic order was found to be absent at 11 K for the Fe92Ni8 alloy when subjected to a high pressure of 26 GPa. Density functional theory calculations predicted substantial hyperfine magnetic fields, but these were not observed in Mössbauer spectroscopy. The bulk modulus of fcc Fe1-xNix alloys with Ni concentrations of more than 35% is found to be independent of pressure. The magnetic moment of Fe is also found to be almost the same in these alloys from 4 to 10 GPa. Fe1-xNix alloys exhibit a complex microstructure formed by a series of complex phase transformations, such as martensitic transformation, spinodal decomposition, ordering, a mono-tectoid reaction, and a eutectoid reaction, at temperatures below 400°C. Despite the existence of several theoretical models, the field is still in its infancy, lacking full knowledge of the anomalous properties exhibited by these alloys. The Fe1-xNix alloys studied here were prepared by arc melting the high-purity constituent metals in an argon ambient. The alloys were annealed at around 300°C in vacuum-sealed quartz tubes for two days to make the samples homogeneous. They were structurally characterized by x-ray diffraction and found to exhibit a transition from bcc to fcc for x > 0.3. The Ni 2p core levels of the alloys were measured using high-resolution (0.45 eV) x-ray photoelectron spectroscopy. The Ni 2p core level shifts to lower binding energy with respect to that of pure Ni metal, giving rise to negative core level shifts (CLSs). The measured CLSs exhibit a linear dependence in the fcc region (x > 0.3) and deviate slightly in the bcc region (x < 0.3).
The ESCA potential model fails to correlate the CLSs with site potentials or charges in metallic alloys. The CLSs in these alloys occur mainly due to a shift in the valence bands with composition, caused by intra-atomic charge redistribution.
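The reported linear CLS-versus-composition dependence in the fcc region (x > 0.3) is the kind of trend one would quantify with an ordinary least-squares fit. The (x, CLS) pairs below are hypothetical numbers chosen only to illustrate the procedure; they are not the measured data from the paper.

```python
import numpy as np

# Hypothetical Ni fractions (fcc region, x > 0.3) and CLS values relative
# to pure Ni metal, in eV. Negative CLS, shrinking towards zero at x = 1.
x = np.array([0.35, 0.50, 0.65, 0.80, 1.00])
cls_ev = np.array([-0.42, -0.30, -0.19, -0.08, 0.0])

# Degree-1 fit: polyfit returns [slope, intercept].
slope, intercept = np.polyfit(x, cls_ev, 1)
residuals = cls_ev - (slope * x + intercept)

print(slope, intercept)
print(np.max(np.abs(residuals)))  # small residuals => near-linear trend
```

A positive slope here means the magnitude of the negative CLS decreases as the Ni concentration grows, consistent with the CLS being defined relative to pure Ni; in the bcc region the abstract reports slight deviations from such a line, which would show up as larger residuals.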

Keywords: arc melting, core level shift, ESCA potential model, valence band

Procedia PDF Downloads 366
769 Self-Esteem on University Students by Gender and Branch of Study

Authors: Antonio Casero Martínez, María de Lluch Rayo Llinas

Abstract:

This work is part of an investigation into the relationship between romantic love and self-esteem in college students, carried out by the students of the subject "Methods and Techniques of Social Research" of the Master's in Gender at the University of the Balearic Islands during 2014-2015. In particular, we have investigated the relationships that may exist between self-esteem, gender, and field of study. Gender differences in self-esteem are well known, as is the relationship between gender and branch of study, observed annually in the distribution of university enrolment. Therefore, in this part of the study we focus on the differences in self-esteem between the sexes across the various branches of study. The study sample consists of 726 individuals (304 men and 422 women) from the 30 undergraduate degrees that the University of the Balearic Islands offered on its campus in the 2014-2015 academic year. The average age was 21.9 years for men and 21.7 years for women. The sampling procedure was random sampling stratified by degree with simple allocation, giving a sampling error of 3.6% for the whole sample at a confidence level of 95% under the most unfavorable assumption (p = q). The Spanish translation of the Rosenberg Self-Esteem Scale (RSE) by Atienza, Moreno, and Balaguer was applied. The psychometric properties of the translation reach a test-retest reliability of 0.80 and an internal consistency between 0.76 and 0.87. In this study we obtained an internal consistency of 0.82. The results confirm the expected gender differences in self-esteem, although not in all branches of study. Mean levels of self-esteem in women are lower in all branches of study, reaching statistical significance in the branches of Science, Social and Legal Sciences, and Engineering and Architecture.
However, when the variability of self-esteem across branches of study is analysed within each gender, the results show independence in the case of men, whereas for women there are statistically significant differences, arising from the lower self-esteem of Arts and Humanities students compared with Social and Legal Sciences students. These findings confirm the results of numerous investigations in which female self-esteem levels consistently appear below male levels, suggesting that perhaps the two populations should be considered separately rather than continually emphasizing the difference. The branch of study, for its part, did not emerge as a relevant explanatory factor, beyond the largest absolute gender difference being detected in the technical branch, one in which women are historically a minority; hence, it is not disciplinary or academic characteristics that explain the differences, but the differentiated social context within that branch.
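Two of the reported statistics can be sketched numerically: the worst-case sampling error for n = 726 (which reproduces the 3.6% figure) and Cronbach's alpha as the internal-consistency measure. The alpha computation below runs on hypothetical item scores, not the study's data.

```python
import math
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Sampling error at 95% confidence under the worst case p = q = 0.5,
# for n = 726 respondents (finite-population correction ignored).
n, z, p = 726, 1.96, 0.5
error = z * math.sqrt(p * (1 - p) / n)
print(round(100 * error, 1))  # 3.6 (per cent), matching the abstract

# Alpha on hypothetical item scores: 200 respondents, 10 items that all
# load on one latent trait plus noise.
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))
scores = latent + 0.8 * rng.normal(size=(200, 10))
print(round(cronbach_alpha(scores), 2))
```

The alpha formula is the standard k/(k-1) * (1 - sum of item variances / total-score variance); the reported 0.82 for the RSE translation would come out of exactly this computation applied to the 726 respondents' item scores.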

Keywords: study branch, gender, self-esteem, applied psychology

Procedia PDF Downloads 449