Search results for: test result
213 Magnetic Solid-Phase Separation of Uranium from Aqueous Solution Using High Capacity Diethylenetriamine Tethered Magnetic Adsorbents
Authors: Amesh P, Suneesh A S, Venkatesan K A
Abstract:
Magnetic solid-phase extraction is a relatively new method among solid-phase extraction techniques for separating metal ions from aqueous solutions such as mine water, groundwater, and contaminated wastes. However, bare magnetic particles (Fe3O4) exhibit poor selectivity due to the absence of target-specific functional groups for sequestering the metal ions. The selectivity of these magnetic particles can be remarkably improved by covalently tethering task-specific ligands onto the magnetic surfaces. Magnetic particles offer a number of advantages, such as quick phase separation aided by an external magnetic field. As a result, the solid adsorbent can be prepared with particle sizes ranging from a few micrometers down to the nanometer scale, which in turn offers advantages such as enhanced extraction kinetics and higher extraction capacity. Conventionally, magnetite (Fe3O4) particles are prepared by hydrolysis and co-precipitation of ferrous and ferric salts in aqueous ammonia solution. Since covalent linking of task-specific functionalities to Fe3O4 is difficult, and the material is also susceptible to redox reactions in the presence of acid or alkali, it is necessary to modify the Fe3O4 surface by silica coating. This silica coating is usually carried out by hydrolysis and condensation of tetraethyl orthosilicate over the surface of magnetite to yield a thin silica layer on the magnetite particles. Since the silica-coated magnetite particles are amenable to further surface modification, they can be reacted with task-specific functional groups to obtain functionalized magnetic particles. The surface area of such magnetic particles usually falls in the range of 50 to 150 m2.g-1, which offers advantages such as quick phase separation compared to other solid-phase extraction systems. In addition, magnetic (Fe3O4) particles covalently linked to a mesoporous silica matrix (MCM-41) bearing task-specific ligands offer further advantages in terms of extraction kinetics, high stability, longer reuse cycles, and metal extraction capacity, due to the large surface area, ample porosity, and greater number of functional groups per unit area of these adsorbents. In view of this, the present paper deals with the synthesis of a uranium-specific diethylenetriamine (DETA) ligand anchored on silica-coated magnetite (Fe-DETA) as well as on magnetic mesoporous silica (MCM-Fe-DETA), and with studies on the extraction of uranium from aqueous solutions spiked with uranium to mimic mine water or groundwater contaminated with uranium. The synthesized solid-phase adsorbents were characterized by FT-IR, Raman, TG-DTA, XRD, and SEM. The extraction behavior of uranium on the solid phase was studied under several conditions: the effect of pH, the initial concentration of uranium, the rate of extraction and its variation with pH and initial uranium concentration, and the effect of interfering ions such as CO32-, Na+, Fe2+, Ni2+, and Cr3+. A maximum extraction capacity of 233 mg.g-1 was obtained for Fe-DETA, and a remarkably high capacity of 1047 mg.g-1 for MCM-Fe-DETA. The mechanism of extraction, speciation of uranium, extraction studies, reusability, and the other results obtained in the present study suggest that Fe-DETA and MCM-Fe-DETA are potential candidates for the extraction of uranium from mine water and groundwater.
Keywords: diethylenetriamine, magnetic mesoporous silica, magnetic solid-phase extraction, uranium extraction, wastewater treatment
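The maximum extraction capacities reported above are typically derived by fitting batch equilibrium data to an adsorption isotherm. The snippet below is a minimal sketch of a Langmuir fit with SciPy; the equilibrium data points are hypothetical placeholders, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, kl):
    """Langmuir isotherm: equilibrium uptake qe (mg/g) as a function of
    equilibrium concentration ce (mg/L)."""
    return qmax * kl * ce / (1.0 + kl * ce)

# Hypothetical batch-equilibrium data (not measurements from the paper):
ce = np.array([5, 10, 25, 50, 100, 200, 400], dtype=float)      # mg/L
qe = np.array([60, 105, 160, 195, 215, 225, 230], dtype=float)  # mg/g

(qmax_fit, kl_fit), _ = curve_fit(langmuir, ce, qe, p0=(200.0, 0.05))
print(f"fitted q_max = {qmax_fit:.0f} mg/g, K_L = {kl_fit:.3f} L/mg")
```

The fitted q_max is the quantity reported as "maximum extraction capacity" in such studies.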
Procedia PDF Downloads 168
212 A Vision-Based Early Warning System to Prevent Elephant-Train Collisions
Authors: Shanaka Gunasekara, Maleen Jayasuriya, Nalin Harischandra, Lilantha Samaranayake, Gamini Dissanayake
Abstract:
One serious facet of the worsening human-elephant conflict (HEC) in nations such as Sri Lanka involves elephant-train collisions. Endangered Asian elephants are maimed or killed in such accidents, which also often result in orphaned or disabled elephants, contributing to the phenomenon of lone elephants. These lone elephants are found to be more likely to attack villages and show aggressive behaviour, which further exacerbates the overall HEC. Furthermore, railway services incur significant financial losses and disruptions to services annually due to such accidents. Most elephant-train collisions occur due to a lack of adequate reaction time. This is due to the significant stopping distances of trains, as full braking force needs to be avoided to minimise the risk of derailment. Thus, poor driver visibility at sharp turns, nighttime operation, and poor weather conditions are often contributing factors. Initial investigations also indicate that most collisions occur in localised “hotspots” where elephant pathways/corridors intersect railway tracks that border grazing land and watering holes. Taking these factors into consideration, this work proposes leveraging recent developments in Convolutional Neural Network (CNN) technology to detect elephants using an RGB/infrared-capable camera around known hotspots along the railway track. The CNN was trained using a curated dataset of elephants collected during field visits to elephant sanctuaries and wildlife parks in Sri Lanka. With this vision-based detection system at its core, a prototype unit of an early warning system was designed and tested. This weatherised and waterproofed unit consists of a Reolink security camera, which provides a wide field of view and range, an Nvidia Jetson Xavier computing unit, a rechargeable battery, and a solar panel for self-sufficient operation. The prototype unit was designed to be a low-cost, low-power and small-footprint device that can be mounted on infrastructure such as poles or trees. If an elephant is detected, an early warning message is communicated to the train driver using the GSM network. A mobile app was also designed for this purpose to ensure that the warning is clearly communicated. A centralized control station manages and communicates all information through the train station network to ensure coordination among important stakeholders. Initial results indicate that detection accuracy is sufficient under varying lighting conditions, provided comprehensive training datasets representing a wide range of challenging conditions are available. The overall hardware prototype was shown to be robust and reliable. We envision that a network of such units could help reduce the problem of elephant-train collisions and has the potential to act as an important surveillance mechanism in dealing with the broader issue of human-elephant conflict.
Keywords: computer vision, deep learning, human-elephant conflict, wildlife early warning technology
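As an illustration of the detection stage only, the sketch below runs a generic pretrained object detector over camera frames and raises an alert when the COCO 'elephant' class is found. It is a stand-in, not the authors' custom-trained CNN: the model, score threshold, and camera source are all assumptions, and the GSM/SMS alert is reduced to a print statement.

```python
import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Generic pretrained detector as a stand-in for the paper's custom CNN.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
ELEPHANT_COCO_ID = 22  # 'elephant' class in the COCO label map

def detect_elephant(frame_bgr, score_thresh=0.6):
    """Return True if an elephant is detected in a single BGR frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = model([tensor])[0]
    hits = (out["labels"] == ELEPHANT_COCO_ID) & (out["scores"] > score_thresh)
    return bool(hits.any())

cap = cv2.VideoCapture(0)          # in practice: RTSP stream from the field camera
ok, frame = cap.read()
if ok and detect_elephant(frame):
    print("ALERT: elephant detected")   # here the unit would push a GSM/SMS warning
```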
Procedia PDF Downloads 226
211 Hydraulic Headloss in Plastic Drainage Pipes at Full and Partially Full Flow
Authors: Velitchko G. Tzatchkov, Petronilo E. Cortes-Mejia, J. Manuel Rodriguez-Varela, Jesus Figueroa-Vazquez
Abstract:
Hydraulic headloss, expressed by the values of the friction factor f and Manning’s coefficient n, is an important parameter in designing drainage pipes. Their values are normally taken from manufacturer recommendations, often without sufficient experimental support. To our knowledge, there is currently no standard procedure for hydraulically testing such pipes. As a result of research carried out at the Mexican Institute of Water Technology, a laboratory testing procedure was proposed and applied to 6- and 12-inch diameter polyvinyl chloride (PVC) and dual-wall high-density polyethylene (HDPE) drainage pipes. While the PVC pipe is characterized by naturally smooth interior and exterior walls, the dual-wall HDPE pipe has a corrugated exterior wall and, although considered smooth, a slightly wavy interior wall. The pipes were tested at full and partially full pipe flow conditions. The tests for full pipe flow were carried out on a 31.47 m long pipe at flow velocities between 0.11 and 4.61 m/s. Water was supplied by gravity from a 10 m-high tank in some of the tests, and from a 3.20 m-high tank in the rest. Pressure was measured independently with piezometer readings and pressure transducers. The flow rate was measured by an ultrasonic meter. For partially full pipe flow, the pipe was placed inside an existing 49.63 m long zero-slope (horizontal) channel. The flow depth was measured by piezometers located along the pipe for flow rates between 2.84 and 35.65 L/s, measured by a rectangular weir. The observed flow profiles were then compared to computer-generated theoretical gradually varied flow profiles for different Manning’s n values. It was found that Manning’s n, which is normally assumed constant for a given pipe material, in fact depends on flow velocity and pipe diameter for full pipe flow, and on flow depth for partially full pipe flow. Contrary to the expected higher values of n and f for the HDPE pipe, virtually the same values were obtained for the smooth interior wall PVC pipe and the slightly wavy interior wall HDPE pipe. The explanation of this fact was found in Henry Morris’ theory for smooth turbulent conduit flow over isolated roughness elements. Following Morris, three categories of flow regime are possible in a rough conduit: isolated roughness (or semi-smooth turbulent) flow, wake interference (or hyper-turbulent) flow, and skimming (or quasi-smooth) flow. Isolated roughness flow is characterized by friction drag turbulence over the wall between the roughness elements, with independent vortex generation and dissipation around each roughness element. In this regime, the wake and vortex generation zones at each element develop and dissipate before reaching the next element. The longitudinal spacing of the roughness elements and their height are important influencing factors. Given the slightly wavy form of the HDPE pipe interior wall, the flow in this type of pipe belongs to this category. Based on that theory, an equation for the hydraulic friction factor was obtained. The obtained coefficient values will be used in the Mexican design standards.
Keywords: drainage plastic pipes, hydraulic headloss, hydraulic friction factor, Manning’s n
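For the full-pipe tests described above, the friction factor and Manning's n follow directly from the measured head loss, pipe geometry and velocity. The sketch below shows the standard Darcy-Weisbach and Manning (SI) relations; the numerical inputs are illustrative, not measurements from the study.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def darcy_f(head_loss_m, length_m, diameter_m, velocity_ms):
    """Darcy-Weisbach friction factor from a measured head loss:
    h_f = f * (L/D) * V^2 / (2g)  ->  f = 2 g D h_f / (L V^2)."""
    return 2.0 * G * diameter_m * head_loss_m / (length_m * velocity_ms ** 2)

def manning_n_full_pipe(head_loss_m, length_m, diameter_m, velocity_ms):
    """Manning's n for full circular pipe flow (SI units):
    V = (1/n) R^(2/3) S^(1/2), with hydraulic radius R = D/4 and slope S = h_f/L."""
    R = diameter_m / 4.0
    S = head_loss_m / length_m
    return (R ** (2.0 / 3.0)) * math.sqrt(S) / velocity_ms

# Illustrative numbers only (not the paper's measurements):
f = darcy_f(head_loss_m=0.35, length_m=31.47, diameter_m=0.3, velocity_ms=2.0)
n = manning_n_full_pipe(head_loss_m=0.35, length_m=31.47, diameter_m=0.3, velocity_ms=2.0)
print(f"f = {f:.4f}, Manning n = {n:.4f}")
```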
Procedia PDF Downloads 281
210 Tensile Behaviours of Sansevieria Ehrenbergii Fiber Reinforced Polyester Composites with Water Absorption Time
Authors: T. P. Sathishkumar, P. Navaneethakrishnan
Abstract:
This research work investigates the variation of tensile properties of sansevieria ehrenbergii fiber (SEF) and SEF-reinforced polyester composites with respect to water absorption time. The experiments were conducted according to the ASTM D3379-75 and ASTM D570 standards. The percentage of water absorption of the composite specimens was measured according to the ASTM D570 standard. The SE fiber was cut into 30 mm lengths for preparation of the composites. A simple hand lay-up method followed by compression moulding was adopted to prepare the randomly oriented SEF-reinforced polyester composites at a constant fiber weight fraction of 40%. Surface treatment of the SEFs was carried out with various chemicals, namely NaOH, KMnO4, benzoyl peroxide (BP), benzoyl chloride (BC) and stearic acid (SA), before preparing the composites; NaOH was used as a pre-treatment for all the other chemical treatments. The morphology of the tensile-fractured specimens was studied using scanning electron microscopy. Tensile tests of the SEF and SEF-reinforced polymer composites were carried out at water absorption times of 4, 8, 12, 16, 20 and 24 hours. The results show that the tensile strength drops off with increasing water absorption time for all composites. The raw fiber showed the highest tensile properties owing to its lowest moisture content; likewise, the chemical bonding between the cellulose and cementing materials such as lignin and wax was strongest at the lowest moisture content. Tensile load was lowest and elongation highest for the water-absorbed fibers across the various water absorption times. During this process, the fiber cellulose takes up water and the primary and secondary fiber walls expand, which increases the moisture content in the fibers and ultimately increases the hydrogen cations and hydroxide anions contributed by the water. In tensile testing, the water-absorbed fibers show the highest elongation through stretching of the expanded cellulose walls, while the bonding strength within the fiber cellulose is low. The load-carrying capability stabilized at 20 hours of water absorption time. This directly affects the interfacial bonding between fiber and matrix and hence the composite strength. The chemically treated fibers carry a higher load with lower elongation, which is due to the removal of lignin, hemicellulose and wax. Water absorption time decreases the tensile strength of the composites. The chemically treated SEF-reinforced composites show higher tensile strength than the untreated SEF-reinforced composites, owing to the larger bonding area between fiber and matrix; this was confirmed by the morphology of the fracture zone of the composites. Intra-fiber debonding occurred through water encapsulation in the fiber cellulose. Among all composites, the tensile strength was found to be highest for the KMnO4-treated SEF-reinforced composite, due to better fiber-matrix interfacial bonding compared to the other treated-fiber composites. The percentage of water absorption of the composites increased with water absorption time. The percentage weight gain of the chemically treated SEF composites at 4 hours relative to zero water absorption is 9, 9, 10, 10.8 and 9.5 for NaOH, BP, BC, KMnO4 and SA, respectively. The percentage weight gain of the chemically treated SEF composites at 24 hours relative to zero water absorption is 5.2, 7.3, 12.5, 16.7 and 13.5 for NaOH, BP, BC, KMnO4 and SA, respectively.
Hence, among the treatments, the NaOH-treated SEF composites showed the lowest weight gain, while the KMnO4-treated composites showed the highest water uptake. Overall, the chemically treated SEF-reinforced composites are possible materials for automotive applications such as body panels, bumpers and interior parts, and household applications such as tables and racks.
Keywords: fibres, polymer-matrix composites (PMCs), mechanical properties, scanning electron microscopy (SEM)
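The percentage weight gain figures above follow the usual ASTM D570 definition of water absorption, i.e. mass gain relative to the dry specimen mass. The sketch below shows the calculation; the specimen masses are hypothetical placeholders, not data from the study.

```python
def water_absorption_pct(dry_mass_g, wet_mass_g):
    """Water absorption (%) as in ASTM D570: mass gain relative to dry mass."""
    return (wet_mass_g - dry_mass_g) / dry_mass_g * 100.0

# Illustrative specimen masses (hypothetical, not the paper's raw data):
specimens = {"NaOH": (5.00, 5.45), "KMnO4": (5.00, 5.54), "SA": (5.00, 5.48)}
for treatment, (dry, wet) in specimens.items():
    print(f"{treatment}: {water_absorption_pct(dry, wet):.1f} % after 4 h immersion")
```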
Procedia PDF Downloads 410
209 An Evaluation of a Prototype System for Harvesting Energy from Pressurized Pipeline Networks
Authors: Nicholas Aerne, John P. Parmigiani
Abstract:
There is an increasing desire for renewable and sustainable energy sources to replace fossil fuels. This desire is the result of several factors. First is the role of fossil fuels in climate change. Scientific data clearly show that global warming is occurring, and it has been concluded that human activity, specifically the combustion of fossil fuels, is highly likely to be a major cause of this warming. Second, despite the current surplus of petroleum, fossil fuels are a finite resource and will eventually become scarce, and alternatives such as clean or renewable energy will be needed. Third, operations to obtain fossil fuels such as fracking, off-shore oil drilling, and strip mining are expensive and harmful to the environment. Given these environmental impacts, there is a need to replace fossil fuels with renewable energy sources as a primary energy source. Various sources of renewable energy exist. Many familiar sources obtain renewable energy from the sun and the natural environments of the earth. Common examples include solar, hydropower, geothermal heat, ocean waves and tides, and wind energy. Obtaining significant energy from these sources often requires physically large, sophisticated, and expensive equipment (e.g., wind turbines, dams, solar panels, etc.). Other sources of renewable energy are found in the man-made environment. An example is municipal water distribution systems. The movement of water through the pipelines of these systems typically requires the reduction of hydraulic pressure through the use of pressure reducing valves. These valves are needed to reduce upstream supply-line pressures to levels suitable for downstream users. The energy associated with this reduction of pressure is significant but is currently not harvested and is simply lost. While the integrity of municipal water supplies is of paramount importance, one can certainly envision means by which this lost energy could be safely accessed. This paper provides a technical description and analysis of one such means, proposed by the technology company InPipe Energy, to generate hydroelectricity by harvesting energy from municipal water distribution pressure reducing valve stations. Specifically, InPipe Energy proposes to install hydropower turbines in parallel with existing pressure reducing valves in municipal water distribution systems. InPipe Energy, in partnership with Oregon State University, has evaluated this approach and built a prototype system at the O. H. Hinsdale Wave Research Lab. The Oregon State University evaluation showed that the prototype system rapidly and safely initiates, maintains, and ceases power production as directed. The outgoing water pressure remained constant at the specified set point throughout all testing. The system replicates the functionality of the pressure reducing valve and ensures accurate control of downstream pressure. At a typical water-distribution-system pressure drop of 60 psi, the prototype, operating at an efficiency of 64%, produced approximately 5 kW of electricity. Based on the results of this study, the proposed method appears to offer a viable means of producing significant amounts of clean renewable energy from existing pressure reducing valves.
Keywords: pressure reducing valve, renewable energy, sustainable energy, water supply
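The reported figures (60 psi drop, 64% efficiency, roughly 5 kW) are tied together by the basic hydraulic power relation P = ΔP · Q · η. The sketch below back-calculates the flow rate implied by those numbers; the flow value is an inference for illustration, not a figure stated in the abstract.

```python
PSI_TO_PA = 6894.76  # pascals per psi

def hydro_power_kw(pressure_drop_psi, flow_m3s, efficiency):
    """Electrical power recovered from a pressure drop: P = dP * Q * eta."""
    return pressure_drop_psi * PSI_TO_PA * flow_m3s * efficiency / 1000.0

# Back-calculate the flow implied by the reported figures (60 psi, 64 %, ~5 kW).
target_kw, dp_psi, eta = 5.0, 60.0, 0.64
flow = target_kw * 1000.0 / (dp_psi * PSI_TO_PA * eta)
print(f"implied flow ~ {flow * 1000:.0f} L/s")
print(f"check: {hydro_power_kw(dp_psi, flow, eta):.2f} kW")
```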
Procedia PDF Downloads 204
208 Development and Characterization of Castor Oil-Based Biopolyurethanes for High-Performance Coatings and Waterproofing Applications
Authors: Julie Anne Braun, Leonardo D. da Fonseca, Gerson C. Parreira, Ricardo J. E. Andrade
Abstract:
Polyurethanes (PU) are multifunctional polymers used across various industries. In construction, thermosetting polyurethanes are applied as coatings for flooring, paints, and waterproofing. They are widely specified in Brazil for waterproofing concrete structures such as roof slabs and parking decks. Applied to concrete, they form a fully adhered membrane, providing a protective barrier with low water absorption, high chemical resistance, impermeability to liquids, and low vapor permeability. Their mechanical properties, including tensile strength (1 to 35 MPa) and Shore A hardness (83 to 88), depend on resin molecular weight and functionality, and formulations are often based on methylene diphenyl diisocyanate. PU production, reliant on fossil-derived isocyanates and polyols, contributes significantly to carbon emissions. Sustainable alternatives, such as biopolyurethanes from renewable sources, are needed. Castor oil is a viable option for synthesizing sustainable polyurethanes. As a bio-based feedstock, castor oil is extensively cultivated in Brazil, making it a feasible option for the national market, with the country ranking third internationally. This study aims to develop and characterize a castor oil-based biopolyurethane for high-performance waterproofing and coating applications. A comparative analysis between castor oil-based PU and polyether polyol-based PU was conducted. Mechanical tests (tensile strength, Shore A hardness, abrasion resistance) and surface properties (contact angle, water absorption) were evaluated. Thermal, chemical, and morphological properties were assessed using thermogravimetric analysis (TGA), Fourier transform infrared spectroscopy (FTIR), and scanning electron microscopy (SEM). The results demonstrated that both polyurethanes exhibited high mechanical strength. Specifically, the tensile strength of the castor oil-based PU was 19.18 MPa, compared to 12.94 MPa for the polyether polyol-based PU. Similarly, the elongation values were 146.90% for the castor oil-based PU and 135.50% for the polyether polyol-based PU. Both materials exhibited satisfactory abrasion resistance, with mass losses of 0.067% for the castor oil PU and 0.043% for the polyether polyol PU, and Shore A hardness values of 89 and 86, respectively, indicating high surface hardness. The results of the water absorption and contact angle tests confirmed the hydrophilic nature of the polyether polyol PU, with a contact angle of 58.73° and water absorption of 2.53%. Conversely, the castor oil-based PU exhibited hydrophobic properties, with a contact angle of 81.05° and water absorption of 0.45%. The FTIR analysis indicated the absence of a peak around 2275 cm-1, which suggests that all of the NCO groups were consumed in the stoichiometric reaction; this conclusion is supported by the high mechanical test results. The TGA results indicated that the polyether polyol PU demonstrated superior thermal stability, exhibiting a mass loss of 13% at the initial transition (around 310°C), compared to the castor oil-based PU, which experienced a higher initial mass loss of 25% at 335°C. In summary, the castor oil-based PU demonstrated mechanical properties comparable to the polyether polyol PU, making it suitable for applications such as trafficable coatings, while its higher hydrophobicity makes it more promising for watertightness. Increasing environmental concerns necessitate reducing reliance on non-renewable resources and mitigating the environmental impacts of polyurethane production.
Castor oil is a viable option for sustainable polyurethanes, aligning with emission reduction goals and the responsible use of natural resources.
Keywords: polyurethane, castor oil, sustainable, waterproofing, construction industry
Procedia PDF Downloads 41
207 Quantified Metabolomics for the Determination of Phenotypes and Biomarkers across Species in Health and Disease
Authors: Miroslava Cuperlovic-Culf, Lipu Wang, Ketty Boyle, Nadine Makley, Ian Burton, Anissa Belkaid, Mohamed Touaibia, Marc E. Surrette
Abstract:
Metabolic changes are one of the major factors in the development of a variety of diseases in various species. The metabolism of agricultural plants is altered following infection with pathogens, sometimes contributing to resistance; at the same time, pathogens use metabolites for infection and progression. In humans, altered metabolism is a hallmark of cancer development, for example. Quantified metabolomics data, combined with other omics or clinical data and analyzed using various unsupervised and supervised methods, can lead to better diagnosis and prognosis. It can also provide information about resistance as well as contribute knowledge of compounds significant for disease progression or prevention. In this work, different methods for metabolomics quantification and analysis from Nuclear Magnetic Resonance (NMR) measurements, used for the investigation of disease development in wheat and human cells, will be presented. One-dimensional 1H NMR spectra are used extensively for metabolic profiling due to their high reliability, wide range of applicability, speed, trivial sample preparation and low cost. This presentation will describe a new method for metabolite quantification from NMR data that combines alignment of the spectra of standards to the sample spectra, followed by multivariate linear regression optimization of the spectra of assigned metabolites against the samples’ spectra. Several different alignment methods were tested, and the multivariate linear regression results were compared with other quantification methods. Quantified metabolomics data can be analyzed in a variety of ways, and we will present different clustering methods used for phenotype determination, network analysis providing knowledge about the relationships between metabolites through the metabolic network, as well as biomarker selection providing novel markers. These analysis methods have been utilized for the investigation of fusarium head blight resistance in wheat cultivars as well as for analysis of the effect of estrogen receptor and carbonic anhydrase activation and inhibition on breast cancer cell metabolism. Metabolic changes in spikelets of the wheat cultivars FL62R1, Stettler, MuchMore and Sumai3 following Fusarium graminearum infection were explored. Extensive 1D 1H and 2D NMR measurements provided information for detailed metabolite assignment and quantification, leading to possible metabolic markers discriminating the resistance level in wheat subtypes. The quantification data are compared to results obtained using other published methods. Fusarium infection-induced metabolic changes in different wheat varieties are discussed in the context of the metabolic network and resistance. Quantitative metabolomics has also been used for the investigation of the effect of targeted enzyme inhibition in cancer. In this work, the effect of 17β-estradiol and ferulic acid on the metabolism of ER+ breast cancer cells has been compared to their effect on ER- control cells. The effect of carbonic anhydrase inhibitors on the metabolic changes resulting from ER activation has also been determined. Metabolic profiles were studied using 1D and 2D metabolomic NMR experiments, combined with the identification and quantification of metabolites, and the annotation of the results is provided in the context of biochemical pathways.
Keywords: metabolic biomarkers, metabolic network, metabolomics, multivariate linear regression, NMR quantification, quantified metabolomics, spectral alignment
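The quantification step described above amounts to expressing a sample spectrum as a linear combination of aligned reference spectra of the assigned metabolites. The sketch below illustrates this with synthetic data and a non-negative least-squares solver; the use of NNLS (rather than the authors' exact optimizer) and all numbers are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import nnls

# Columns of R are aligned reference spectra of assigned metabolites
# (one column per standard); s is the 1D 1H NMR sample spectrum.
rng = np.random.default_rng(0)
n_points, n_metabolites = 2000, 5
R = np.abs(rng.normal(size=(n_points, n_metabolites)))        # stand-in spectra
true_conc = np.array([1.0, 0.5, 0.0, 2.0, 0.2])
s = R @ true_conc + 0.01 * rng.normal(size=n_points)          # synthetic sample

coeffs, residual = nnls(R, s)   # concentrations constrained to be >= 0
print("estimated relative concentrations:", np.round(coeffs, 2))
```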
Procedia PDF Downloads 338
206 Spatio-Temporal Dynamic of Woody Vegetation Assessment Using Oblique Landscape Photographs
Authors: V. V. Fomin, A. P. Mikhailovich, E. M. Agapitov, V. E. Rogachev, E. A. Kostousova, E. S. Perekhodova
Abstract:
Ground-level landscape photos can be used as a source of objective data on woody vegetation and vegetation dynamics. We propose a method for processing, analyzing, and presenting ground photographs that comprises the following steps: 1) the researcher forms a holistic representation of the study area as a set of overlapping ground-level landscape photographs; 2) the characteristics of the landscape, objects, and phenomena present in the photographs are defined or obtained; 3) new textual descriptions and annotations for the ground-level landscape photographs are created, or existing ones are supplemented; 4) single or multiple ground-level landscape photographs are used to develop specialized geoinformation layers, schematic maps or thematic maps; 5) quantitative data describing both the images as a whole and the displayed objects and phenomena are determined using algorithms for automated image analysis. It is suggested to match each photo with a polygonal geoinformation layer, which is a sector consisting of areas corresponding to the parts of the landscape visible in the photo. Calculation of the visibility areas is performed in a geoinformation system within a sector, using a digital model of the study area relief and visibility analysis functions. Superposition of the visibility sectors corresponding to the various camera viewpoints allows landscape photos to be matched with each other to create a complete and coherent representation of the space in question. User-defined data or phenomena can then be marked on the images and superimposed on the visibility sector in the form of map symbols. The spatial superposition of geoinformation layers over the visibility sector creates opportunities for image geotagging using quantitative data obtained from raster or vector layers within the sector, with the ability to generate annotations in natural language. The proposed method has proven itself well for relatively open and clearly visible areas with well-defined relief, for example, in mountainous areas in the treeline ecotone. When the polygonal layers of visibility sectors for a large number of different photography points are topologically superimposed, a layer is formed showing which sections of the entire study area are displayed in the photographs. As a result of this overlapping of sectors, areas that do not appear in any photo are assessed as gaps. From this procedure it becomes possible to obtain information about which photos display a specific area and from which photography points it is visible. This information may be obtained either as a query on the map or as a query against the attribute table of the layer. The method was tested using repeated photos taken from forty camera viewpoints located on the Ray-Iz mountain massif (Polar Urals, Russia) from 1960 until 2023. It has been successfully used in combination with other ground-based and remote sensing methods for studying the climate-driven dynamics of woody vegetation in the Polar Urals. Acknowledgment: This research was collaboratively funded by the Russian Ministry for Science and Education project No. FEUG-2023-0002 (image representation) and Russian Science Foundation project No. 24-24-00235 (automated textual description).
Keywords: woody, vegetation, repeated, photographs
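Once the per-photo visibility sectors exist as polygons, the coverage, gap, and "which photos see this location" queries described above reduce to standard polygon operations. The sketch below shows this with Shapely on made-up sector polygons; in the actual workflow the sectors would come from a GIS viewshed analysis on a relief model, so all geometries here are placeholders.

```python
from shapely.geometry import Point, Polygon
from shapely.ops import unary_union

# Hypothetical visibility sectors, one polygon per camera viewpoint.
sectors = {
    "photo_01": Polygon([(0, 0), (60, 20), (60, -20)]),
    "photo_02": Polygon([(10, 30), (70, 60), (80, 10)]),
    "photo_03": Polygon([(0, -10), (50, -60), (90, -10)]),
}
study_area = Polygon([(-10, -70), (-10, 70), (100, 70), (100, -70)])

covered = unary_union(list(sectors.values()))
gaps = study_area.difference(covered)           # parts of the area in no photo
print(f"coverage: {covered.area / study_area.area:.1%}, gap area: {gaps.area:.0f}")

# Query: which photographs display a given location?
point = Point(40, 5)
visible_in = [name for name, poly in sectors.items() if poly.contains(point)]
print("location visible in:", visible_in)
```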
Procedia PDF Downloads 89
205 Iran’s Sexual and Reproductive Rights Roll-Back: An Overview of Iran’s New Population Policies
Authors: Raha Bahreini
Abstract:
This paper discusses the roll-back of women’s sexual and reproductive rights in the Islamic Republic of Iran, which has come in the wake of a striking shift in the country’s official population policies. Since the late 1980s, Iran has won worldwide praise for its sexual and reproductive health and services, which have contributed to a steady decline in the country’s fertility rate: from 7.0 births per woman in 1980 to 5.5 in 1988, 2.8 in 1996 and 1.85 in 2014. This is owed to a significant increase in the voluntary use of modern contraception in both rural and urban areas. In 1976, only 37 per cent of women were using at least one method of contraception; by 2014 this figure had reportedly risen to a high of nearly 79 per cent for married girls and women living in urban areas and 73.78 per cent for those living in rural areas. Such progress may soon be halted. In July 2012, Iran’s Supreme Leader Ayatollah Sayed Ali Khamenei denounced Iran’s family planning policies as an imitation of Western lifestyle. He exhorted the authorities to increase Iran’s population to 150 to 200 million (from around 78.5 million), including by cutting subsidies for contraceptive methods and dismantling the state’s Family and Population Planning Programme. Shortly thereafter, Iran’s Minister of Health and Medical Education announced the scrapping of the budget for the state-funded Family and Population Planning Programme. Iran’s Parliament subsequently introduced two bills: the Comprehensive Population and Exaltation of Family Bill (Bill 315), and the Bill to Increase Fertility Rates and Prevent Population Decline (Bill 446). Bill 446 outlaws voluntary tubectomies, which are believed to be the second most common method of modern contraception in Iran, and blocks access to information about contraception, denying women the opportunity to make informed decisions about the number and spacing of their children. Coupled with the elimination of state funding for Iran’s Family and Population Planning Programme, the move would undoubtedly result in greater numbers of unwanted pregnancies, forcing more women to seek illegal and unsafe abortions. Bill 315 proposes various discriminatory measures in the areas of employment, divorce, and protection from domestic violence in order to promote a culture wherein wifedom and child-bearing are seen as women’s primary duty. The Bill, for example, instructs private and public entities to prioritize, in sequence, men with children, married men without children and married women with children when hiring for certain jobs. It also bans the recruitment of single individuals as family law lawyers, public and private school teachers and members of the academic boards of universities and higher education institutes. The paper discusses the consequences of these initiatives, which would, if continued, set the human rights of women and girls in Iran back by decades, leaving them with a future shaped by increased inequality, discrimination, poor health, limited choices and restricted freedoms, in breach of Iran’s international human rights obligations.
Keywords: family planning and reproductive health, gender equality and empowerment of women, human rights, population growth
Procedia PDF Downloads 307
204 Lessons Learnt from Industry: Achieving Net Gain Outcomes for Biodiversity
Authors: Julia Baker
Abstract:
Development plays a major role in stopping biodiversity loss. But the ‘silo species’ protection of legislation (where certain species are protected while many are not) means that development can be ‘legally compliant’ and still result in biodiversity loss. ‘Net Gain’ (NG) policies can help overcome this by making it an absolute requirement that development causes no overall loss of biodiversity and brings a benefit. However, offsetting biodiversity losses in one location with gains elsewhere is controversial because people suspect ‘offsetting’ to be an easy way for developers to buy their way out of conservation requirements. Yet the good practice principles (GPP) of offsetting provide several advantages over existing legislation for protecting biodiversity from development. This presentation describes the learning from implementing NG approaches based on GPP. It concerns major upgrades of the UK’s transport networks, which involved removing vegetation in order to construct and safely operate new infrastructure. While low-lying habitats were retained, trees and other habitats disrupting the running or safety of transport networks could not be. Consequently, achieving NG within the transport corridor was not possible and offsetting was required. The first ‘lessons learnt’ were on obtaining a commitment from business leaders to go beyond legislative requirements and deliver NG, and on the institutional change necessary to embed GPP within daily operations. These issues can only be addressed when the challenges that biodiversity poses for business are overcome. These challenges included the fact that biodiversity cannot be measured easily, unlike other sustainability factors such as carbon and water that have metrics for target-setting and measuring progress, and the mindset that biodiversity costs money and does not generate cash in return, the opposite of carbon or waste, for example, where people can see how ‘sustainability’ actions save money. The challenges were overcome by presenting the GPP of NG as a cost-efficient solution to specific, critical risks facing the business that also boosts industry recognition, and by using government-issued NG metrics to develop business-specific toolkits charting NG progress whilst ensuring that NG decision-making was based on rich ecological data. Institutional change was best achieved by supporting, mentoring and training sustainability/environmental managers so that these ‘frontline’ staff could embed GPP within the business. The second learning was from implementing the GPP where business partnered with local governments, wildlife groups and landowners to support their priorities for nature conservation, and where these partners had a say in decisions about where and how best to achieve NG. Through this inclusive approach, offsetting contributed towards conservation priorities when all collaborated to manage trade-offs between: delivering ecologically equivalent offsets or compensating for losses of one type of biodiversity by providing another; achieving NG locally to the development whilst contributing towards national conservation priorities through landscape-level planning; and not just protecting the extent and condition of existing biodiversity but ‘doing more’. The multi-sector collaborations identified practical, workable solutions to ‘in perpetuity’.
Key, however, was strengthening the linkages between biodiversity measures implemented for development and conservation work undertaken by local organizations, so that developers support NG initiatives that really count.
Keywords: biodiversity offsetting, development, nature conservation planning, net gain
Procedia PDF Downloads 195
203 Managing Inter-Organizational Innovation Project: Systematic Review of Literature
Authors: Lamin B Ceesay, Cecilia Rossignoli
Abstract:
Inter-organizational collaboration is a growing phenomenon in both research and practice. Partnership between organizations enables firms to leverage external resources, experience, and technology that lie with other firms. This collaborative practice is a source of improved business model performance, technological advancement, and increased competitive advantage for firms. However, the competitive intents, and even the diverse institutional logics, of firms make inter-firm innovation-based partnerships even more complex, and their governance more challenging. The purpose of this paper is to present a systematic review of research linking the inter-organizational relationships of firms with their innovation practice and to specify the different project management issues and gaps addressed in previous research. To do this, we employed a systematic review of the literature on inter-organizational innovation using two complementary scholarly databases: ScienceDirect and Web of Science (WoS). Article scoping relied on a combination of keywords based on similar terms used in the literature: (1) inter-organizational relationship, (2) business network, (3) inter-firm project, and (4) innovation network. These searches were conducted in the title, abstract, and keywords of conceptual and empirical research papers written in English. Our search covers the period from 2010 to 2019. We applied several exclusion criteria: papers published outside the years under review, papers in a language other than English, papers listed in neither WoS nor ScienceDirect, and papers not sharply related to inter-organizational innovation-based partnership were removed. After all relevant search criteria were applied, a final list of 84 papers constitutes the data for this review. Our review revealed a steady growth of inter-organizational relationship research during the period under review. The descriptive analysis of papers by journal outlet finds that the International Journal of Project Management (IJPM), the Journal of Industrial Marketing, and the Journal of Business Research (JBR), among others, are the leading journal outlets for research on inter-organizational innovation projects. The review also finds that qualitative methods, followed by quantitative approaches, are the leading research methods adopted by scholars in the field, whereas literature reviews and conceptual papers constitute the smallest share. During the content analysis of the selected papers, we read the content of each paper and found that the selected papers address one of three phenomena in inter-organizational innovation research: (1) project antecedents; (2) project management; and (3) project performance outcomes. We found that these categories are not mutually exclusive, but rather interdependent. This categorization also helped us to organize the fragmented literature in the field. While a significant percentage of the literature discusses project management issues, we found less extant literature on project antecedents and performance. As a result, we organized the future research agendas addressed in several papers by linking them with the under-researched themes in the field, thus providing great potential to advance future research, especially on those under-researched themes. Finally, our paper reveals that research on inter-organizational innovation projects is generally fragmented, which hinders a better understanding of the field.
Thus, this paper contributes to the understanding of the field by organizing and discussing the extant literature to advance the theory and application of inter-organizational relationships.
Keywords: inter-organizational relationship, inter-firm collaboration, innovation projects, project management, systematic review
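The screening protocol above (years 2010-2019, English only, indexed in WoS or ScienceDirect, topical relevance) is straightforward to express as a record filter. The sketch below shows one way to do so with pandas on a hypothetical export of search hits; the column names and example records are assumptions, not the review's actual dataset.

```python
import pandas as pd

# Hypothetical export of search hits merged from Web of Science and ScienceDirect.
records = pd.DataFrame({
    "title":    ["A", "B", "C", "D"],
    "year":     [2008, 2012, 2015, 2019],
    "language": ["English", "English", "French", "English"],
    "relevant": [True, True, True, False],   # sharply related to the topic
})

# Apply the inclusion/exclusion criteria in one pass.
screened = records[
    records["year"].between(2010, 2019)
    & (records["language"] == "English")
    & records["relevant"]
]
print(f"{len(screened)} of {len(records)} records retained for review")
```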
Procedia PDF Downloads 113
202 Potential Benefits and Adaptation of Climate Smart Practices by Small Farmers Under Three-Crop Rice Production System in Vietnam
Authors: Azeem Tariq, Stephane De Tourdonnet, Lars Stoumann Jensen, Reiner Wassmann, Bjoern Ole Sander, Quynh Duong Vu, Trinh Van Mai, Andreas De Neergaard
Abstract:
The rice-growing area is increasing to meet the food demand of a growing population. In most parts of the world, rice is grown on lowland, smallholder fields, and flooded rice cultivation is one of the major sources of greenhouse gas (GHG) emissions from agricultural fields. Strategies such as altering water and residue (carbon) management practices are considered essential to mitigate GHG emissions from flooded rice systems. The actual implementation and potential of these measures on small farmers' fields, however, remain challenging. A field study was conducted in the Red River Delta in northern Vietnam to identify the potential challenges and barriers small rice farmers face in implementing climate-smart rice practices. The objective of this study was to develop climate-smart rice prototypes and assess their feasibility under actual farmer conditions. A field- and science-oriented framework was used to meet this objective. The methodological framework was composed of six steps: i) identification of stakeholders and possible options, ii) assessment of barriers, drawbacks and advantages of new technologies, iii) prototype design, iv) assessment of the mitigation potential of each prototype, v) scenario building and vi) scenario assessment. A farm survey was conducted to identify the existing farm practices and major constraints of small rice farmers. We proposed two water management options (pre-transplant + midseason drainage and early + midseason drainage) and one straw management option (full residue incorporation), keeping in view the farmers' constraints and barriers to implementation. To test the new typologies against existing prototypes (midseason drainage, partial residue incorporation) under local farmer conditions, a participatory field experiment was conducted for two consecutive rice seasons on farmers' fields. Following the results of each season, a workshop was conducted with stakeholders (farmers, village leaders, cooperatives, irrigation staff, extensionists, agricultural officers) at local and district level to get feedback on the newly tested prototypes and to develop possible scenarios for climate-smart rice production practices. The farm survey showed that the non-availability of cheap labor and the lack of alternatives for straw management lead small farmers to burn the residues in the fields, apart from what is used for composting or other purposes. Our field results revealed that the application of early season drainage significantly mitigates (by 40-60%) the methane emissions from residue incorporation. Early season drainage was more efficient and easier to control under a cooperatively managed water system than under individually managed systems, and it leads to both economic (9-11% higher rice yield, lower cost of production, reduced nutrient losses) and environmental (mitigated methane emissions) benefits. The participatory field study allows the assessment of the adaptation potential and possible benefits of climate-smart practices on small farmers' fields. If farmers have no other residue management option, full residue incorporation with early plus midseason drainage is an adaptable and beneficial (both environmentally and economically) management option for small rice farmers.
Keywords: adaptation, climate smart agriculture, constraints, smallholders
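The 40-60% methane reduction from drainage can be illustrated with an IPCC Tier 1-style calculation, where a baseline daily emission factor is scaled by water-regime and organic-amendment factors. The sketch below uses indicative scaling factors chosen only to reproduce a reduction in that range; they are illustrative defaults, not values measured in this study.

```python
def seasonal_ch4_kg_per_ha(days, ef_baseline=1.30, sf_water=1.0, sf_organic=1.0):
    """IPCC-style Tier 1 estimate: daily baseline emission factor (kg CH4/ha/day)
    scaled by water-regime and organic-amendment factors over the season."""
    return ef_baseline * sf_water * sf_organic * days

season_days = 110  # illustrative season length
continuous = seasonal_ch4_kg_per_ha(season_days, sf_water=1.0,  sf_organic=2.0)
drained    = seasonal_ch4_kg_per_ha(season_days, sf_water=0.55, sf_organic=2.0)
print(f"continuous flooding: {continuous:.0f} kg CH4/ha")
print(f"early + midseason drainage: {drained:.0f} kg CH4/ha "
      f"({(1 - drained / continuous):.0%} reduction)")
```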
Procedia PDF Downloads 266
201 Achieving the Status of Total Sanitation in the Rural Nepalese Context: A Case Study from Amarapuri, Nepal
Authors: Ram Chandra Sah
Abstract:
A few years back, Nepal, a naturally very beautiful country, was facing serious problems related to the practice of open defecation (having no toilet) by almost 98% of the country's people. Now the scenario has changed. The Government of Nepal set the target of achieving basic-level sanitation (toilet) facilities by 2017 AD, for which the Sanitation and Hygiene Master Plan (SHMP) was introduced in 2011 AD. Its major strengths are institutional set-up formation, leadership by local formal authorities, locally formulated strategic plans, a partnership-based, harmonized and coordinated approach to working, no blanket subsidy or support, and mobilization of communities and local institutions or organizations. The Open Defecation Free (ODF) movement in the country is now in full swing. The SHMP has clearly defined Total Sanitation, which is accepted as achieved if all the households within the relevant boundary have achieved six indicators: access to and regular use of toilet(s), regular use of soap and water at the critical moments, regular practice of food hygiene behavior, regular practice of water hygiene behavior including household-level purification of locally available drinking water, maintenance of regular personal hygiene with household-level waste management, and an overall clean environment at the concerned level of boundary. Nepal has 3,158 Village Development Committees (VDCs) in its rural areas. Amarapuri VDC was selected for the purpose of achieving Total Sanitation. Based on the SHMP, different methodologies were adopted for achieving 100% coverage of each indicator: updating the Village Water, Sanitation and Hygiene Coordination Committee (V-WASH-CC); forming a Total Sanitation team including one volunteer for each indicator; campaigning through settlement meetings; a midterm evaluation, which revealed the need for 45 additional ward-level volunteers (5 for each of the 9 wards); ward-wise awareness creation with the help of the volunteers; informative notice boards and hoarding boards with related messages at important locations; management of separate waste disposal rings for decomposable and non-decomposable wastes; dissemination of related messages through different types of local cultural programs; construction and management of public toilets at the community level; mobilization of local schools, offices and health posts; and rewards and recognition for contributors. The VDC was in a very poor situation in 2010, with just 50, 30, 60, 60, 40 and 30 percent coverage of the respective indicators, and it became the first VDC of the country to be declared as having achieved Total Sanitation. The expected result of 100 percent coverage of all the indicators was achieved in 2 years, 10 months and 19 days. The experiences of Amarapuri were replicated successfully in different parts of the country, and many VDCs have been declared as having achieved Total Sanitation. Thus, the community-mobilized Total Sanitation movement in Nepal has contributed greatly to achieving Total Sanitation in the country at minimal cost, and it is believed that the approach can be very useful for other developing or underdeveloped countries of the world.
Keywords: community mobilized, open defecation free, sanitation and hygiene master plan, total sanitation
Procedia PDF Downloads 199
200 Understanding the Impact of Spatial Light Distribution on Object Identification in Low Vision: A Pilot Psychophysical Study
Authors: Alexandre Faure, Yoko Mizokami, éRic Dinet
Abstract:
In recent years, the potential of light to assist visually impaired people in their indoor mobility has been demonstrated by different studies. Implementing smart lighting systems for selective visual enhancement, especially designed for low-vision people, is an approach that breaks with existing visual aids. The appearance of an object's surface is significantly influenced by the lighting conditions and the constituent materials of the object, and objects may therefore appear different from what is expected. Lighting conditions are thus an important part of accurate material recognition. The main objective of this work was to investigate the effect of the spatial distribution of light on object identification in the context of low vision. The purpose was to determine whether, and which, specific lighting approaches should be preferred for visually impaired people. A psychophysical experiment was designed to study the ability of individuals to identify the smaller cube of a pair under different lighting diffusion conditions. Participants were divided into two distinct groups: a reference group of observers with normal or corrected-to-normal visual acuity, and a test group, in which observers were required to wear visual impairment simulation glasses. All participants were presented with pairs of cubes in a "miniature room" and were instructed to estimate the relative size of the two cubes. The miniature room replicates real-life settings, adorned with decorations and separated from external light sources by black curtains. The correlated color temperature was set to 6000 K, and the horizontal illuminance at object level to approximately 240 lux. The objects presented for comparison consisted of 11 white cubes and 11 black cubes of different sizes manufactured with a 3D printer. Participants were seated 60 cm away from the objects. Two different levels of light diffuseness were implemented. After receiving instructions, participants were asked to judge whether the two presented cubes were the same size or whether one was smaller. They provided one of five possible answers: "Left one is smaller," "Left one is smaller but unsure," "Same size," "Right one is smaller," or "Right one is smaller but unsure." The method of constant stimuli was used, presenting stimulus pairs in random order to prevent learning and expectation biases. Each pair consisted of a comparison stimulus and a reference cube. A psychometric function was constructed to link stimulus value with the frequency of correct detection, aiming to determine the 50% correct detection threshold. The collected data were analyzed through graphs illustrating participants' responses to stimuli, with accuracy increasing as the size difference between cubes grew. Statistical analyses, including two-way ANOVA tests, showed that light diffuseness had no significant impact on the difference threshold, whereas object color had a significant influence in low-vision scenarios. The first results and trends derived from this pilot experiment strongly suggest that future investigations could explore extreme diffusion conditions to comprehensively assess the impact of diffusion on object identification. For example, the first findings related to light diffuseness may be attributed to the range of manipulation, emphasizing the need to explore how other lighting-related factors interact with diffuseness.
Keywords: lighting, low vision, visual aid, object identification, psychophysical experiment
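The 50% threshold estimation described above is usually done by fitting a sigmoidal psychometric function to the proportion of correct responses at each size difference. The sketch below fits a simple logistic (without guess or lapse rates) with SciPy; the response data are hypothetical, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, threshold, slope):
    """Logistic psychometric function: probability of a correct 'smaller'
    judgement as a function of the size difference x (mm)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

# Hypothetical pooled responses (size difference in mm, proportion correct):
size_diff = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0])
p_correct = np.array([0.10, 0.25, 0.45, 0.70, 0.85, 0.97])

(threshold, slope), _ = curve_fit(psychometric, size_diff, p_correct, p0=(2.0, 1.0))
print(f"50% detection threshold ~ {threshold:.2f} mm")
```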
Procedia PDF Downloads 64
199 Large Scale Method to Assess the Seismic Vulnerability of Heritage Buildings: Modal Updating of Numerical Models and Vulnerability Curves
Authors: Claire Limoge Schraen, Philippe Gueguen, Cedric Giry, Cedric Desprez, Frédéric Ragueneau
Abstract:
The Mediterranean area is characterized by numerous monumental or vernacular masonry structures illustrating old ways of building and living. These precious buildings are often poorly documented, present complex shapes and loadings, and are protected by the state, leading to legal constraints. The area also presents moderate to high seismic activity, and even moderate earthquakes can be magnified by local site effects and cause collapse or significant damage. Moreover, the structural resistance of masonry buildings, especially the less famous ones or those located in rural zones, has generally been lowered by many factors: poor maintenance, unsuitable restoration, ambient pollution, and previous earthquakes. Recent earthquakes prove that any damage to these architectural witnesses to our past is irreversible, leading to the necessity of acting preventively. This means providing preventive assessments for hundreds of structures with no or few documents. In this context, we propose a general method, based on hierarchized numerical models, to provide preliminary structural diagnoses at a regional scale, indicating whether more precise investigations and models are necessary for each building. To this aim, we adapt different tools, some being developed, such as photogrammetry, and some to be created, such as a preprocessor that builds meshes for FEM software starting from pictures, in order to allow dynamic studies of the buildings in the panel. We made an inventory of 198 baroque chapels and churches situated in the French Alps. Their structural characteristics were then determined through field surveys and the MicMac photogrammetric software. Using structural criteria, we determined eight types of churches and seven types of chapels. We studied their dynamic behavior with CAST3M, using the EC8 spectrum and accelerograms of the studied zone. This allowed us to quantify the effect of the needed simplifications in the most sensitive zones and to choose the most effective ones. We also proposed damage threshold criteria based on the damage observed in the in situ surveys, old pictures and the Italian code; these criteria are relevant for linear models. To validate the structural types, we carried out a vibration measurement campaign using ambient vibration noise and velocimeters. It also allowed us to validate this method on old masonry and to identify the modal characteristics of 20 churches. We then proceeded to a dynamic identification between numerical and experimental modes, and updated the linear models through material and geometrical parameters that are often unknown because of the complexity of the structures and materials. The numerically optimized values were verified against the measurements we made on the masonry components in situ and in the laboratory. We are now working on non-linear models that redistribute the strains, in order to validate the damage threshold criteria that we use to compute the vulnerability curves of each defined structural type. Our current results show a good correlation between experimental and numerical data, validating the final modeling simplifications and the global method. We now plan to use non-linear analysis in the critical zones in order to test reinforcement solutions.
Keywords: heritage structures, masonry numerical modeling, seismic vulnerability assessment, vibratory measure
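Pairing experimental and numerical modes before updating is commonly done with the Modal Assurance Criterion (MAC); the abstract does not name the criterion, so its use here is an assumption for illustration. The sketch below computes the MAC for one pair of real-valued mode shapes sampled at the measurement points.

```python
import numpy as np

def mac(phi_exp, phi_num):
    """Modal Assurance Criterion between two mode-shape vectors:
    MAC = |phi_e . phi_n|^2 / ((phi_e . phi_e)(phi_n . phi_n))."""
    return np.abs(phi_exp @ phi_num) ** 2 / ((phi_exp @ phi_exp) * (phi_num @ phi_num))

# Hypothetical mode shapes sampled at the velocimeter locations:
phi_measured = np.array([0.12, 0.45, 0.81, 1.00, 0.77])
phi_model    = np.array([0.10, 0.43, 0.85, 1.00, 0.70])
print(f"MAC = {mac(phi_measured, phi_model):.3f}")   # close to 1 means well-paired modes
```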
Procedia PDF Downloads 492
198 Remote BioMonitoring of Mothers and Newborns for Temperature Surveillance Using a Smart Wearable Sensor: Techno-Feasibility Study and Clinical Trial in Southern India
Authors: Prem K. Mony, Bharadwaj Amrutur, Prashanth Thankachan, Swarnarekha Bhat, Suman Rao, Maryann Washington, Annamma Thomas, N. Sheela, Hiteshwar Rao, Sumi Antony
Abstract:
The disease burden among mothers and newborns is caused mostly by a handful of avoidable conditions occurring around the time of childbirth and within the first month following delivery. Real-time monitoring of the vital parameters of mothers and neonates offers a potential opportunity to improve access to, as well as the quality of, care in vulnerable populations. We describe the design, development and testing of an innovative wearable device for remote biomonitoring (RBM) of body temperature in mothers and neonates in a hospital in southern India. The architecture consists of: [1] a low-cost, wearable sensor tag; [2] a gateway device providing a ‘real-time’ communication link; [3] piggy-backing on a commercial GSM communication network; and [4] an algorithm-based data analytics system. The requirements for the device were: long battery life of up to 28 days (with a sampling frequency of 5/hr); robustness; IP 68 hermetic sealing; and a human-centric design. We undertook pre-clinical laboratory testing followed by clinical trial phases I and IIa for evaluation of safety and efficacy in the following sequence: seven healthy adult volunteers; 18 healthy mothers; and three sets of babies: 3 healthy babies, 10 stable babies in the Neonatal Intensive Care Unit (NICU) and 1 baby with hypoxic ischaemic encephalopathy (HIE). The pebble-design sensor, about the thickness of three coins and weighing about 8 g, was secured onto the abdomen for the babies and over the upper arm for the adults. In the laboratory setting, the response time of the sensor device to attain thermal equilibrium with the surroundings was 4 minutes, vis-a-vis 3 minutes observed with a precision-grade digital thermometer used as the reference standard. The accuracy was within ±0.1°C of the reference standard over the temperature range of 25-40°C. The adult volunteers, aged 20 to 45 years, contributed a total of 345 hours of readings over a 7-day period, and the postnatal mothers provided a total of 403 paired readings. The mean skin temperatures measured in the adults by the sensor were about 2°C lower than the axillary temperature readings (sensor = 34.1 vs digital = 36.1); this difference was statistically significant (t-test = 13.8; p < 0.001). The healthy neonates provided a total of 39 paired readings; the mean difference in temperature was 0.13°C (sensor = 36.9 vs digital = 36.7; p = 0.2). The neonates in the NICU provided a total of 130 paired readings. Their mean skin temperature measured by the sensor was 0.6°C lower than that measured by the radiant warmer probe (sensor = 35.9 vs warmer probe = 36.5; p < 0.001). The neonate with HIE provided a total of 25 paired readings, with the mean sensor reading not differing from the radiant warmer probe reading (sensor = 33.5 vs warmer probe = 33.5; p = 0.8). No major adverse events were noted in either the adults or the neonates; four adult volunteers reported mild sweating under the device/arm band and one volunteer developed a mild skin allergy. This proof-of-concept study shows that real-time monitoring of temperature is technically feasible and that this innovation appears to be promising in terms of both safety and accuracy (with appropriate calibration) for improved maternal and neonatal health.
Keywords: public health, remote biomonitoring, temperature surveillance, wearable sensors, mothers and newborns
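The paired sensor-versus-reference comparisons above boil down to a mean bias and a paired t-test (limits of agreement are added here as a common companion check, which the abstract does not mention). The sketch below shows the calculation on hypothetical paired readings, not the trial data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired readings (deg C): wearable sensor vs reference probe.
sensor    = np.array([36.8, 36.9, 37.0, 36.7, 36.9, 36.6])
reference = np.array([36.7, 36.8, 36.9, 36.6, 36.7, 36.5])

diff = sensor - reference
lo = diff.mean() - 1.96 * diff.std(ddof=1)   # lower limit of agreement
hi = diff.mean() + 1.96 * diff.std(ddof=1)   # upper limit of agreement
t_stat, p_value = stats.ttest_rel(sensor, reference)

print(f"mean bias = {diff.mean():+.2f} C, limits of agreement = {lo:+.2f} to {hi:+.2f} C")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```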
Procedia PDF Downloads 208197 Technology of Electrokinetic Disintegration of Virginia Fanpetals (Sida hermaphrodita) Biomass in a Biogas Production System
Authors: Mirosław Krzemieniewski, Marcin Zieliński, Marcin Dębowski
Abstract:
Electrokinetic disintegration is one of the high-voltage electric methods. The design of systems is exceptionally simple. Biomass flows through a system of pipes with electrodes mounted alongside that generate an electric field. Discharges in the electric field deform cell walls and lead to their successive perforation, thereby making their contents easily available to bacteria. The spark-over occurs between the electrode surface and the pipe jacket, which is the second pole and closes the circuit. The value of the voltage ranges from 10 to 100 kV. Electrodes are supplied by normal “power grid” single-phase electric current (230 V, 50 Hz). Next, the electric current changes into direct current of 24 V in modules serving particular electrodes, and this current directly feeds the electrodes. The installation is completely safe because the value of the generated current does not exceed 250 mA and because the conductors are grounded. Therefore, there is no risk of electric shock posed to the personnel, even in the case of failure or incorrect connection. Low values of the electric current mean small energy consumption by the electrode, which is extremely low – only 35 W per electrode – compared to other methods of disintegration. Pipes with electrodes, with a diameter of DN150, are made of acid-proof steel and connected from both sides with 90° elbows ended with flanges. The available S and U types of pipes enable very convenient fitting with system construction in the existing installations and rooms or facilitate space management in new applications. The system of pipes for electrokinetic disintegration may be installed horizontally, vertically, askew, on special stands or also directly on the wall of a room. The number of pipes and electrodes is determined by operating conditions as well as the quantity of substrate, type of biomass, content of dry matter, method of disintegration (single or circulatory), mounting site etc. The most effective method involves pre-treatment of substrate that may be pumped through the disintegration system on the way to the fermentation tank or recirculated in a buffered intermediate tank (substrate mixing tank). Biomass structure destruction in the process of electrokinetic disintegration causes shortening of substrate retention time in the tank and acceleration of biogas production. A significant intensification of the fermentation process was observed in the systems operating at the technical scale, with the greatest increase in biogas production reaching 18%. The secondary, but highly significant for the energy balance, effect is a tangible decrease of energy input by agitators in tanks. It is due to reduced viscosity of the biomass after disintegration, and may result in energy savings reaching even 20-30% of the earlier noted consumption. Other observed phenomena include reduction in the layer of surface scum, reduced sewage capability for foaming and successive decrease in the quantity of bottom sludge banks. Considering the above, the system for electrokinetic disintegration seems a very interesting and valuable solution among the specialist equipment offered for the processing of plant biomass, including Virginia fanpetals, before the process of methane fermentation.Keywords: electrokinetic disintegration, biomass, biogas production, fermentation, Virginia fanpetals
Procedia PDF Downloads 377196 On the Bias and Predictability of Asylum Cases
Authors: Panagiota Katsikouli, William Hamilton Byrne, Thomas Gammeltoft-Hansen, Tijs Slaats
Abstract:
An individual who demonstrates a well-founded fear of persecution or faces a real risk of being subjected to torture is eligible for asylum. In Danish law, the exact legal thresholds reflect those established by international conventions, notably the 1951 Refugee Convention and the 1950 European Convention on Human Rights. These international treaties, however, remain largely silent when it comes to how states should assess asylum claims. As a result, national authorities are typically left to determine an individual’s legal eligibility on a narrow basis consisting of an oral testimony, which may itself be hampered by several factors, including imprecise language interpretation, insecurity or lacking trust towards the authorities among applicants. The shaky ground on which authorities must base their subjective perceptions of asylum applicants’ credibility calls into question whether, in all cases, adjudicators make the correct decision. Moreover, the subjective element in these assessments raises questions on whether individual asylum cases could be afflicted by implicit biases or stereotyping amongst adjudicators. In fact, recent studies have uncovered significant correlations between decision outcomes and the experience and gender of the assigned judge, as well as correlations between asylum outcomes and entirely external events such as weather and political elections. In this study, we analyze a publicly available dataset containing approximately 8,000 summaries of asylum cases that were initially rejected and then re-tried by the Refugee Appeals Board (RAB) in Denmark. First, we look for variations in the recognition rates with regard to a number of applicants’ features: their country of origin/nationality, their identified gender, their identified religion, their ethnicity, whether torture was mentioned in their case and, if so, whether it was supported or not, and the year the applicant entered Denmark. In order to extract those features from the text summaries, as well as the final decision of the RAB, we applied natural language processing and regular expressions, adjusting for the Danish language. We observed interesting variations in recognition rates related to the applicants’ country of origin, ethnicity, year of entry and the support or not of torture claims, whenever those were made in the case. The appearance (or not) of significant variations in the recognition rates does not necessarily imply (or rule out) bias in the decision-making process. None of the considered features, with the possible exception of the torture claims, should be decisive factors for an asylum seeker’s fate. We therefore investigate whether the decision can be predicted on the basis of these features, and consequently, whether biases are likely to exist in the decision-making process. We employed a number of machine learning classifiers, and found that when using the applicant’s country of origin, religion, ethnicity and year of entry with a random forest classifier or a decision tree, the prediction accuracy is as high as 82% and 85%, respectively, indicating that these features carry potentially predictive properties with regard to the outcome of an asylum case. Our analysis and findings call for further investigation of the predictability of the outcome on a larger dataset of 17,000 cases, which is currently underway.Keywords: asylum adjudications, automated decision-making, machine learning, text mining
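The classification step described above — categorical case features fed to a random forest or decision tree — can be sketched roughly as follows. The feature names, toy data and train/test split are assumptions for illustration, not the RAB dataset or the study's pipeline.

```python
# Illustrative sketch: predicting asylum-case outcomes from categorical features
# with a random forest. Data below are invented placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

cases = pd.DataFrame({
    "country_of_origin": ["A", "B", "A", "C", "B", "C", "A", "B"],
    "religion":          ["x", "y", "x", "z", "y", "x", "z", "y"],
    "ethnicity":         ["e1", "e2", "e1", "e3", "e2", "e3", "e1", "e2"],
    "year_of_entry":     [2015, 2016, 2015, 2017, 2016, 2018, 2015, 2017],
    "recognised":        [1, 0, 1, 0, 0, 1, 1, 0],   # outcome label
})

X = pd.get_dummies(cases.drop(columns="recognised"))  # one-hot encode categorical features
y = cases["recognised"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```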
Procedia PDF Downloads 95195 Guiding Urban Development in a Traditional Neighbourhood: Case Application of Kolkata
Authors: Nabamita Nath, Sanghamitra Sarkar
Abstract:
Urban development in traditional neighbourhoods of cities is undergoing a sea change due to the imposition of irregular development patterns on a predominantly inclusive urban fabric. In recent times, traditional neighbourhoods of Kolkata have experienced irregular urban development which has resulted in transformation of their immediate urban character. The goal is to study and analyse the impact of new urban developments within traditional neighbourhoods of Kolkata and establish development guidelines to balance the old with the new. Various cities, predominantly in third world countries, are also experiencing similar development patterns in their traditional neighbourhoods. Existing literature surveys of development patterns in such neighbourhoods have established 9 major parameters viz. edge, movement, node, landmark, size-density, pattern-grain-texture, open spaces, urban spaces, urban form and views-vistas of the neighbourhood. To evaluate the impact of urban development in traditional neighbourhoods of Kolkata, 3 different areas have been chronologically selected based on their settlement patterns. Parameters established through literature surveys have been applied to the selected areas to study and analyse the existing patterns of development. The main sources of this study included extensive on-site surveys, academic archives, census data, organisational records and informational websites. Applying the established parameters, 5 major conclusions were derived. Firstly, it was found that pedestrian-friendly neighbourhoods of the city were becoming more car-centric. This has resulted in loss of the interactive and social spaces which defined the cultural heritage of Kolkata. Secondly, the urban pattern which was composed of a dense and compact fabric is gradually losing its character due to incorporation of new building typologies. Thirdly, the new building typologies include gated communities with private open spaces, which is a stark departure from the existing built typology. However, these open spaces have not contributed to the creation of inclusive public places for the community, which are a significant part of such heritage neighbourhood precincts. Fourthly, commercial zones that primarily developed along major access routes have now infiltrated these neighbourhoods. Gated communities do not favour the formation of on-street commercial activities, generating haphazard development patterns. Lastly, individual residential buildings that reflected Indo-Saracenic and Neo-Gothic architectural styles are being converted into multi-storeyed residential apartments. As a result, the axis that created a definite visual identity for a neighbourhood is progressively following an irregular pattern. Thus, uniformity of the old skyline is gradually becoming inconsistent. The major issue currently is the threat posed by irregular urban development to heritage zones and buildings of traditional neighbourhoods. Streets, lanes, courtyards, open spaces and buildings of old neighbourhoods imparted a unique cultural identity to the city that is disappearing with emerging urban development patterns. It has been concluded that specific guidelines for urban development should be regulated primarily based on the existing urban form of traditional neighbourhoods. Such neighbourhood development strategies should be formulated for various cities of third world countries to control irregular developments, thereby balancing heritage and development.Keywords: heritage, Kolkata, traditional neighbourhood, urban development
Procedia PDF Downloads 179194 Improving Preconception Health and Lifestyle Behaviours through Digital Health Intervention: The OptimalMe Program
Authors: Bonnie R. Brammall, Rhonda M. Garad, Helena J. Teede, Cheryce L. Harrison
Abstract:
Introduction: Reproductive aged women are at high risk for accelerated weight gain and obesity development, with pregnancy recognised as a critical contributory life phase. Healthy lifestyle interventions during the preconception and antenatal period improve maternal and infant health outcomes. Yet, interventions from preconception through to postpartum, and their translation and implementation into real-world healthcare settings, remain limited. OptimalMe is a randomised, hybrid implementation effectiveness study of an evidence-based healthy lifestyle intervention. Here, we report engagement, acceptability of the intervention during preconception, and self-reported behaviour change outcomes as a result of the preconception phase of the intervention. Methods: Reproductive aged women who upgraded their private health insurance to include pregnancy and birth cover, signalling a pregnancy intention, were invited to participate. Women received access to an online portal with preconception health and lifestyle modules, goal-setting and behaviour change tools, monthly SMS messages, and two coaching sessions (randomised to video or phone) prior to pregnancy. Results: Overall, n=527 expressed interest in participating. Of these, n=33 did not meet inclusion criteria, n=8 were not contactable for eligibility screening, and n=177 failed to engage after the screening, leaving n=309 who were enrolled in OptimalMe and randomised to intervention delivery method. Engagement with coaching sessions dropped by 25% for session two, with no difference between intervention groups. Women had a mean (SD) age of 31.7 (4.3) years and, at baseline, a self-reported mean BMI of 25.7 (6.1) kg/m², with 55.8% (n=172) in the healthy BMI range. Behaviour was sub-optimal, with infrequent self-weighing (38.1%), prevalent alcohol consumption (57.1%), sub-optimal pre-pregnancy supplementation (61.5%), and incomplete medical screening. Post-intervention, 73.2% of women reported engagement with a GP for preconception care and improved lifestyle behaviour (85.5%) since starting OptimalMe. Direct pre- and post-intervention comparison of individual participant data showed that of 322 points of potential change (up-to-date cervical screening, elimination of high-risk behaviours [alcohol, drugs, smoking], uptake of preconception supplements and improved weighing habits), 158 (49.1%) points of change were achieved. Health coaching sessions were found to improve accountability and confidence, yet further personalisation and support were desired. Engagement with video and phone sessions was comparable, having similar impacts on behaviour change, and both methods were well accepted and increased women's accountability. Conclusion: A low-intensity digital health and lifestyle program with embedded health coaching can improve the uptake of preconception care and lead to self-reported behaviour change. This is the first program of its kind to reach an otherwise healthy population of women planning a pregnancy. Women who were otherwise healthy showed divergence from preconception health and lifestyle objectives and benefited from the intervention. OptimalMe shows promising results for population-based behaviour change interventions that can improve preconception lifestyle habits and increase engagement with clinical health care for pregnancy preparation.Keywords: preconception, pregnancy, preventative health, weight gain prevention, self-management, behaviour change, digital health, telehealth, intervention, women's health
Procedia PDF Downloads 91193 Neural Correlates of Diminished Humor Comprehension in Schizophrenia: A Functional Magnetic Resonance Imaging Study
Authors: Przemysław Adamczyk, Mirosław Wyczesany, Aleksandra Domagalik, Artur Daren, Kamil Cepuch, Piotr Błądziński, Tadeusz Marek, Andrzej Cechnicki
Abstract:
The present study aimed to evaluate the neural correlates of the humor comprehension impairments observed in schizophrenia. To investigate the nature of this deficit in schizophrenia and to localize the cortical areas involved in humor processing, we used functional magnetic resonance imaging (fMRI). The study included chronic schizophrenia outpatients (SCH; n=20) and sex-, age- and education-level-matched healthy controls (n=20). The task consisted of 60 stories (setup) of which 20 had funny, 20 nonsensical and 20 neutral (not funny) punchlines. After the punchlines were presented, the participants were asked to indicate whether the story was comprehensible (yes/no) and how funny it was (1-9 Likert-type scale). fMRI was performed on a 3T scanner (Magnetom Skyra, Siemens) using a 32-channel head coil. Three contrasts in accordance with the three stages of humor processing were analyzed in both groups: abstract vs neutral stories - incongruity detection; funny vs abstract - incongruity resolution; funny vs neutral - elaboration. Additionally, parametric modulation analysis was performed using both subjective ratings separately in order to further differentiate the areas involved in incongruity resolution processing. Statistical analysis of the behavioral data used the Mann-Whitney U test with Bonferroni correction; fMRI data analysis utilized whole-brain voxel-wise t-tests with a 10-voxel extent threshold and with Family Wise Error (FWE) correction at alpha = 0.05, or uncorrected at alpha = 0.001. Between-group comparisons revealed that the SCH subjects had attenuated activation in: the right superior temporal gyrus in the case of irresolvable incongruity processing of nonsensical puns (nonsensical > neutral); the left medial frontal gyrus in the case of incongruity resolution processing of funny puns (funny > nonsensical); and the interhemispheric ACC in the case of elaboration of funny puns (funny > neutral). Additionally, the SCH group revealed weaker activation during funniness ratings in the left ventro-medial prefrontal cortex, the medial frontal gyrus, the angular and the supramarginal gyrus, and the right temporal pole. In comprehension ratings the SCH group showed suppressed activity in the left superior and medial frontal gyri. Interestingly, these differences were accompanied by longer response times for both types of ratings in the SCH group, a lower level of comprehension for funny punchlines and higher funniness ratings for absurd punchlines. The presented results indicate that, in comparison to healthy controls, schizophrenia is characterized by difficulties in humor processing revealed by longer reaction times, impairments in understanding jokes and a tendency to find nonsensical punchlines funnier. This is accompanied by attenuated brain activations, especially in the left fronto-parietal and the right temporal cortices. Humor processing seems to be impaired at all three stages of the comprehension process, from incongruity detection, through its resolution, to elaboration. The neural correlates revealed diminished neural activity in the schizophrenia group, as compared with the control group. The study was supported by the National Science Centre, Poland (grant no 2014/13/B/HS6/03091).Keywords: communication skills, functional magnetic resonance imaging, humor, schizophrenia
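The behavioural comparison described above (Mann-Whitney U test with Bonferroni correction) can be sketched as below; the ratings and the number of comparisons are assumed placeholders, not the study data.

```python
# Minimal sketch: group comparison of behavioural ratings with a Mann-Whitney U
# test and a Bonferroni adjustment. All numbers are hypothetical.
from scipy.stats import mannwhitneyu

controls = [7, 8, 6, 7, 9, 8, 7]   # hypothetical funniness ratings, healthy controls
patients = [5, 4, 6, 5, 4, 5, 6]   # hypothetical funniness ratings, SCH group

n_comparisons = 3                  # e.g., one test per punchline condition
u_stat, p_raw = mannwhitneyu(controls, patients, alternative="two-sided")
p_bonferroni = min(1.0, p_raw * n_comparisons)  # Bonferroni-adjusted p-value

print(f"U = {u_stat:.1f}, raw p = {p_raw:.4f}, Bonferroni-corrected p = {p_bonferroni:.4f}")
```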
Procedia PDF Downloads 213192 Health Reforms in Central and Eastern European Countries: Results, Dynamics, and Outcomes Measure
Authors: Piotr Romaniuk, Krzysztof Kaczmarek, Adam Szromek
Abstract:
Background: A number of approaches to assess the performance of health systems have been proposed so far. Nonetheless, they lack a consensus regarding the key components of the assessment procedure and the criteria of evaluation. The WHO and OECD have developed methods of assessing health systems to counteract the underlying issues, but they are not free of controversies and did not manage to produce a commonly accepted consensus. The aim of the study: On the basis of the WHO and OECD approaches, we decided to develop our own methodology to assess the performance of health systems in Central and Eastern European countries. We have applied the method to compare the effects of health system reforms in 20 countries of the region, in order to evaluate the dynamics of change in terms of health system outcomes. Methods: Data were collected for a 25-year time period after the fall of communism, subsetted into different post-reform stages. Datasets collected from individual countries underwent one-, two- or multi-dimensional statistical analyses, and the Synthetic Measure of health system Outcomes (SMO) was calculated on the basis of the method of zeroed unitarization. A map of the dynamics of changes over time across the region was constructed. Results: When making a comparative analysis of the tested group in terms of the average SMO value throughout the analyzed period, we noticed some differences, although the gaps between individual countries were small. The countries with the highest SMO were the Czech Republic, Estonia, Poland, Hungary and Slovenia, while the lowest was in Ukraine, Russia, Moldova, Georgia, Albania, and Armenia. Countries differ in terms of the range of SMO value changes throughout the analyzed period. The dynamics of change is high in the case of Estonia and Latvia, moderate in the case of Poland, Hungary, Czech Republic, Croatia, Russia and Moldova, and small when it comes to Belarus, Ukraine, Macedonia, Lithuania, and Georgia. This information reveals the dynamics of fluctuation of the measured value over time, yet it does not necessarily mean that within such a dynamic range an improvement appears in a given country. In reality, some of the countries moved along the scale with different effects. Albania decreased its level of health system outcomes, while Armenia and Georgia made progress but lost distance to the leaders in the region. On the other hand, Latvia and Estonia showed the most dynamic progress in improving the outcomes. Conclusions: Countries that have decided to implement comprehensive health reform have achieved a positive result in terms of further improvements in health system efficiency levels. Besides, a higher level of efficiency during the initial transition period generally positively determined the subsequent value of the efficiency index, but not the dynamics of change. The paths of health system outcomes improvement are highly diverse between different countries. The instrument we propose constitutes a useful tool to evaluate the effectiveness of reform processes in post-communist countries, but more studies are needed to identify factors that may determine the results obtained by individual countries, as well as to eliminate the limitations of the methodology we applied.Keywords: health system outcomes, health reforms, health system assessment, health system evaluation
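The SMO rests on zeroed unitarization, i.e. min-max normalisation of each outcome indicator followed by averaging. A minimal sketch is given below, assuming hypothetical indicators and the usual convention of inverting 'lower is better' variables; it is not the study's actual computation.

```python
# Illustrative sketch of a synthetic measure built by zeroed unitarization
# (min-max normalisation). Indicator values are invented placeholders.
import numpy as np

# rows = countries, columns = health-system outcome indicators
indicators = np.array([
    [75.0, 4.2, 82.0],
    [71.5, 6.8, 76.0],
    [78.2, 3.1, 88.0],
])
is_stimulant = np.array([True, False, True])  # higher-is-better vs lower-is-better columns

mins, maxs = indicators.min(axis=0), indicators.max(axis=0)
z = np.where(is_stimulant,
             (indicators - mins) / (maxs - mins),   # stimulants
             (maxs - indicators) / (maxs - mins))   # destimulants

smo = z.mean(axis=1)   # synthetic measure: average of unitised indicators per country
print(np.round(smo, 3))
```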
Procedia PDF Downloads 290191 Precocious Puberty Due to an Autonomous Ovarian Cyst in a 3-Year-Old Girl: Case Report
Authors: Aleksandra Chałupnik, Zuzanna Chilimoniuk, Joanna Borowik, Aleksandra Borkowska, Anna Torres
Abstract:
Background: Precocious puberty is the occurrence of secondary sexual characteristics in girls before the age of 8. Given the diverse etiology of premature puberty, it is crucial to determine whether it is true precocious puberty, which depends on the activation of the hypothalamic-pituitary-gonadal axis, or pseudo-precocious puberty, which is independent of the activation of this axis. Whatever the cause, premature action of the sex hormones leads to the common symptoms of the various forms of puberty. These include the development of sexual characteristics, acne, acceleration of growth rate and acceleration of skeletal maturation. Due to the possible genetic basis of the disorders, an interdisciplinary search for the cause is needed. Case report: The case report concerns a patient of a pediatric gynecology clinic who, at the age of two years, developed advanced thelarche (M3) and started recurrent vaginal bleeding. In August 2019, suppressed gonadotropins, both initially and after LHRH stimulation, together with high estradiol levels, were reported at the Endocrinology Department. Imaging examinations showed a cyst in the right ovary projection. The bone age was six years. The entire clinical picture indicated pseudo- (peripheral) precocious puberty in the course of an autonomous ovarian cyst. In the follow-up ultrasound performed in September, the image of the cyst was stationary, and normalization of estradiol levels and clinical symptoms was noted. In December 2019, cyst regression and normal gonadotropin and estradiol concentrations were found. In June 2020, white mucus tinged with blood on the underwear, without any other disturbing symptoms, was observed for several days. Two consecutive ultrasound examinations carried out in the same month confirmed the lesion in the right ovary, the diameter of which was 25 mm, with a very high level of estradiol. Germ cell tumor markers were normal. On the Tanner scale, the patient scored M2P1. The labia and hymen showed pubertal features. A normal vaginal opening was visible. Another episode of active vaginal bleeding occurred in the first week of July 2020. The considered laparoscopic treatment was abandoned due to the lack of oncological indications. Treatment with Tamoxifen was recommended in July 2020. In the initial period of treatment, no progression of maturation (and even a reduction of symptoms), no acceleration of growth and a marked reduction in the size of the cyst were noted. There was no bleeding. After the size of the cyst and its hormonal activity increased again, the treatment was changed to Anastrozole, which led to a reduction in the size of the cyst. Conclusions: The entire clinical picture indicates pseudo- (peripheral) precocious puberty. Premature puberty in girls, manifested as enlarged mammary glands with high levels of estrogens secreted by autonomous ovarian cysts and prepubertal levels of gonadotropins, may indicate McCune-Albright syndrome. Vaginal bleeding may also occur in this syndrome. Cancellation of surgical treatment of the cyst made it impossible to perform a molecular test that would allow confirmation of the diagnosis. Taking into account the fact that cysts are often one of the first symptoms of McCune-Albright syndrome, it is important to remember about multidisciplinary care for the patient and careful search for skin and bone changes or other hormonal disorders.Keywords: McCune-Albright syndrome, ovarian cyst, pediatric gynaecology, precocious puberty
Procedia PDF Downloads 190190 Detection of Mustard Traces in Food by an Official Food Safety Laboratory
Authors: Clara Tramuta, Lucia Decastelli, Elisa Barcucci, Sandra Fragassi, Samantha Lupi, Enrico Arletti, Melissa Bizzarri, Daniela Manila Bianchi
Abstract:
Introduction: Food allergies affect, in the Western World, 2% of adults and up to 8% of children. The protection of allergic consumers is guaranteed, in Europe, by Regulation (EU) No 1169/2011 of the European Parliament, which governs the consumer's right to information and identifies 14 food allergens that must be indicated on the label. Among these, mustard is a popular spice added to enhance the flavour and taste of foods. It is frequently present as an ingredient in spice blends, marinades, salad dressings, sausages, and other products. Hypersensitivity to mustard is a public health problem since the ingestion of even low amounts can trigger severe allergic reactions. In order to protect the allergic consumer, high-performance methods are required for the detection of allergenic ingredients. Food safety laboratories rely on validated methods that detect hidden allergens in food to ensure the safety and health of allergic consumers. Here we present the test results for the validation and accreditation of a Real Time PCR assay (RT-PCR: SPECIALfinder MC Mustard, Generon) for the detection of mustard traces in food. Materials and Methods. The method was tested on five classes of food matrices: bakery and pastry products (chocolate cookies), meats (ragù), ready-to-eat (mixed salad), dairy products (yogurt), grains, and milling products (rice and barley flour). Blank samples were spiked with the mustard sample (Sinapis alba), lyophilized and stored at -18 °C, at a concentration of 1000 ppm. Serial dilutions were then prepared to a final concentration of 0.5 ppm, using the DNA extracted by ION Force FAST (Generon) from the blank samples. The Real Time PCR reaction was performed with RT-PCR SPECIALfinder MC Mustard (Generon), using the CFX96 System (Bio-Rad). Results. Real Time PCR showed a limit of detection (LOD) of 0.5 ppm in grains and milling products, ready-to-eat, meats, bakery, pastry products, and dairy products (Ct range 25-34). To determine the exclusivity parameter of the method, the ragù matrix was contaminated with Prunus dulcis (almonds), peanut (Arachis hypogaea), Glycine max (soy), Apium graveolens (celery), Allium cepa (onion), Pisum sativum (peas), Daucus carota (carrots), and Theobroma cacao (cocoa), and no cross-reactions were observed. Discussion. In terms of sensitivity, the Real Time PCR confirmed, even in complex matrices, an LOD of 0.5 ppm in the five classes of food matrices tested; these values are compatible with the current regulatory situation, which does not, at the international level, establish a quantitative criterion for the allergen considered in this study. The Real Time PCR SPECIALfinder kit for the detection of mustard proved to be easy to use and was particularly appreciated for its rapid response times, considering that the amplification and detection phase lasts less than 50 minutes. Method accuracy was rated satisfactory for sensitivity (100%) and specificity (100%), and the method was fully validated and accredited. It was found adequate for the needs of the laboratory as it met the purpose for which it was applied. This study was funded in part within a project of the Italian Ministry of Health (IZS PLV 02/19 RC).Keywords: allergens, food, mustard, real time PCR
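As a rough illustration of the dilution scheme described above (a 1000 ppm spiked stock diluted down to the 0.5 ppm level reported as the LOD), a small sketch is shown below; the ten-fold dilution factor is an assumption, not taken from the validation protocol.

```python
# Hypothetical sketch: a serial dilution series from a 1000 ppm stock down to
# the 0.5 ppm level. The dilution factor is assumed for illustration.
stock_ppm = 1000.0
dilution_factor = 10.0

series = []
conc = stock_ppm
while conc >= 0.5:
    series.append(round(conc, 3))
    conc /= dilution_factor

series.append(0.5)  # final tested level
print(series)       # [1000.0, 100.0, 10.0, 1.0, 0.5]
```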
Procedia PDF Downloads 167189 Municipalities as Enablers of Citizen-Led Urban Initiatives: Possibilities and Constraints
Authors: Rosa Nadine Danenberg
Abstract:
In recent years, bottom-up urban development has started growing as an alternative to conventional top-down planning. Citizens and communities initiate small-scale interventions in large numbers, which suddenly seem to form a trend. As a result, more and more cities are witnessing not only the growth of but also an interest in these initiatives, as they bear the potential to reshape urban spaces. Such alternative city-making efforts cause new dynamics in urban governance, with inevitable consequences for controlled city planning and its administration. The emergence of enabling relationships between top-down and bottom-up actors signals an increasingly common urban practice. Various case studies show that an enabling relationship is possible, yet how it can be optimally realized remains rather underexamined. Therefore, the seemingly growing worldwide phenomenon of ‘municipal bottom-up urban development’ necessitates an adequate governance structure. As such, the aim of this research is to contribute knowledge to how municipalities can enable citizen-led urban initiatives from a governance innovation perspective. Empirical case-study research in Stockholm and Istanbul, derived from interviews with founders of four citizen-led urban initiatives and one municipal representative in each city, provided valuable insights into possibilities and constraints for enabling practices. On the one hand, diverging outcomes emphasize the extreme oppositional features of the two cases (Stockholm and Istanbul). Firstly, both cities’ characteristics are drastically different. Secondly, the ideologies and motives for the initiatives to emerge vary widely. Thirdly, the major constraints for citizen-led urban initiatives to relate to the municipality are considerably different. Two types of municipal organizational structures produce different underlying mechanisms which give rise to the constraints. The first municipal organizational structure is steered by bureaucracy (Stockholm). It produces an administrative division that brings up constraints such as a lack of responsibility, transparency and continuity among municipal representatives. The second structure is dominated by municipal politics and governmental hierarchy (Istanbul). It produces informality, a lack of transparency and a fragmented civil society. In order to cope with the constraints produced by both types of organizational structures, the initiatives have adjusted their organization to the municipality’s underlying structures. On the other hand, this paper has in fact also come to a rather unifying conclusion. Interestingly, the suggested possibilities for an enabling relationship underline converging new urban governance arrangements. This could imply that for the two varying types of municipal organizational structures there is a suitable governance structure. Namely, the combination of a neighborhood council and a municipal guide, with allowance for the initiatives to adopt a politicizing attitude, is found to coincide in both cases. This combination in particular appears key to overcoming the varying constraints. A municipal guide steers the initiatives through bureaucratic struggles, supported by co-production methods, while balancing out municipal politics. Next, a neighborhood council that is politically neutral and run by local citizens can function as an umbrella for citizen-led urban initiatives. What is crucial is that it should cater for a more entangled relationship between municipalities and initiatives, with enhanced involvement of the initiatives in decision-making processes and limited influence of the prevailing constraints pointed out in this research.Keywords: bottom-up urban development, governance innovation, Istanbul, Stockholm
Procedia PDF Downloads 219188 The Role of Creative Works Dissemination Model in EU Copyright Law Modernization
Authors: Tomas Linas Šepetys
Abstract:
In online content-sharing service platforms, the ability of creators to restrict illicit use of audiovisual creative works has effectively been abolished, largely due to a specific infrastructure in which a huge volume of copyrighted audiovisual content can be made available to the public. The European Union legislator has attempted to strengthen the position of creators in the realm of online content-sharing services. Article 17 of the new Digital Single Market Directive deems online content-sharing service providers to carry out acts of communication to the public of any creative content uploaded to their platforms by users and imposes requirements to obtain licensing agreements. While such regulation intends to assert authors' ability to effectively control the dissemination of their creative works, it also creates threats of parody content overblocking through automated content monitoring. Such a potentially paradoxical outcome of the EU legislator's efforts to deliver economic safeguards for creators on online content-sharing service platforms suggests a lack of information on the legislator's part regarding the economic exploitation opportunities that the online content-sharing infrastructure provides to creators. The analysis conducted in this research discloses that the aforementioned irregularities of parody and other creative content dissemination are caused by the EU legislator's failure to assess the value extraction conditions for parody creators on online content-sharing service platforms. The application of historical and modeling research methods reveals the existence of two creative content dissemination models and their unique mechanisms of commercial value creation. The obligations to obtain licenses and the liability for creative content uploaded by users, set out in Article 17 of the Digital Single Market Directive, represent a technological replication of the proprietary dissemination model, in which the creator is able to restrict access to creative content outside licensed retail channels. The online content-sharing service platforms represent an open dissemination model, in which the economic potential of creative content is based on an infrastructure of unrestricted access by users and partnership with the advertising services offered by the platform. Balanced modeling of proprietary dissemination models in such an infrastructure requires not only automated content monitoring measures but also additional regulatory monitoring solutions to separate parody from other types of creative content. The example of the Digital Single Market Directive proves that regulation can dictate not only the technological establishment of a proprietary dissemination model but also a partial reduction of the open dissemination model, and can cause an imbalance between the economic interests of creators relying on these models. The results of this research point to the informative role of the creative works dissemination model in the EU copyright law modernization process. A thorough understanding of the commercial prospects of the open dissemination model intrinsic to the online content-sharing service platform structure requires and encourages EU legislators to regulate safeguards for parody content dissemination. Implementing such safeguards would result in a common application of proprietary and open dissemination models in the online content-sharing service platforms and balanced protection of creators' economic interests explicitly based on those creative content dissemination models.Keywords: copyright law, creative works dissemination model, digital single market directive, online content-sharing services
Procedia PDF Downloads 74187 Protocol for Dynamic Load Distributed Low Latency Web-Based Augmented Reality and Virtual Reality
Authors: Rohit T. P., Sahil Athrij, Sasi Gopalan
Abstract:
Currently, the content entertainment industry is dominated by mobile devices. As the trends slowly shift towards Augmented/Virtual Reality applications, the computational demands on these devices are increasing exponentially, and we are already reaching the limits of hardware optimizations. This paper proposes a software solution to this problem. By leveraging the capabilities of cloud computing, we can offload the work from mobile devices to dedicated rendering servers that are far more powerful. But this introduces the problem of latency. This paper introduces a protocol that can achieve a high-performance, low-latency Augmented/Virtual Reality experience. There are two parts to the protocol. 1) In-flight compression: The main cause of latency in the system is the time required to transmit the camera frame from client to server. The round trip time is directly proportional to the amount of data transmitted. This can therefore be reduced by compressing the frames before sending. Using standard compression algorithms like JPEG results in only a minor size reduction. Since the images to be compressed are consecutive camera frames, there won't be many changes between two consecutive images. So inter-frame compression is preferred. Inter-frame compression can be implemented efficiently using WebGL, but WebGL implementations limit the precision of floating-point numbers to 16 bits on most devices. This can introduce noise to the image due to rounding errors, which will add up eventually. This can be solved using an improved inter-frame compression algorithm. The algorithm detects changes between frames and reuses unchanged pixels from the previous frame. This eliminates the need for floating-point subtraction, thereby cutting down on noise. The change detection is also improved drastically by taking the weighted average difference of pixels instead of the absolute difference. The kernel weights for this comparison can be fine-tuned to match the type of image to be compressed. 2) Dynamic load distribution: Conventional cloud computing architectures work by offloading as much work as possible to the servers, but this approach can cause a hit on bandwidth and server costs. The optimal solution is obtained when the device utilizes 100% of its resources and the rest is done by the server. The protocol balances the load between the server and the client by doing a fraction of the computing on the device, depending on the power of the device and network conditions. The protocol will be responsible for dynamically partitioning the tasks. Special flags will be used to communicate the workload fraction between the client and the server and will be updated at a constant interval of time (or frames). The whole protocol is designed so that it can be client-agnostic. Flags are available to the client for resetting the frame, indicating latency, switching mode, etc. The server can react to client-side changes on the fly and adapt accordingly by switching to different pipelines. The server is designed to effectively spread the load and thereby scale horizontally. This is achieved by isolating client connections into different processes.Keywords: 2D kernelling, augmented reality, cloud computing, dynamic load distribution, immersive experience, mobile computing, motion tracking, protocols, real-time systems, web-based augmented reality application
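The weighted-average change detection sketched in part 1 can be illustrated in Python: a 3x3 kernel averages per-pixel differences against the previous frame, and only pixels whose weighted difference exceeds a threshold are marked as changed (and hence re-sent). The kernel weights and threshold are assumptions for illustration, not values taken from the protocol.

```python
# Sketch of weighted-average inter-frame change detection. Kernel and threshold
# are illustrative assumptions.
import numpy as np
from scipy.ndimage import convolve

def changed_mask(prev_frame: np.ndarray, curr_frame: np.ndarray,
                 threshold: float = 8.0) -> np.ndarray:
    """Boolean mask of pixels considered 'changed' between two frames."""
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    kernel = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]], dtype=np.float32)
    kernel /= kernel.sum()                       # weighted average over a 3x3 neighbourhood
    weighted_diff = convolve(diff, kernel, mode="nearest")
    return weighted_diff > threshold

prev = np.random.randint(0, 200, (120, 160)).astype(np.uint8)
curr = prev.copy()
curr[40:60, 50:80] += 30                         # simulate a moving region
mask = changed_mask(prev, curr)
print("changed pixels:", int(mask.sum()), "of", mask.size)
```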
Procedia PDF Downloads 73186 Tailoring Workspaces for Generation Z: Harmonizing Teamwork, Privacy, and Connectivity
Authors: Maayan Nakash
Abstract:
The modern workplace is undergoing a revolution, with Generation Z (Gen-Z) at the forefront of this transformative shift. However, empirical investigations specifically targeting the workplace preferences of this generation remain limited. Through direct examination of their tendencies via a survey approach, this study offers vital insights for aligning organizational policies and practices. The results presented in this paper are part of a comprehensive study that explored Gen Z's viewpoints on various employment market aspects, likely to decisively influence the design of future work environments. Data were collected via an online survey distributed among a cohort of 461 individuals from Gen-Z, born between the mid-1990s and 2010, consisting of 241 males (52.28%) and 220 females (47.72%). Responses were gauged using Likert scale statements that probed preferences for teamwork versus individual work, virtual versus personal interactions, and open versus private workspaces. Descriptive statistics and analytical analyses were conducted to pinpoint key patterns. We discovered that a high proportion of respondents (81.99%, n=378) exhibited a preference for teamwork over individual work. Correspondingly, the data indicate strong support for the recognition of team-based tasks as a tool contributing to personal and professional development. In terms of communication, the majority of respondents (61.38%) either disagreed (n=154) or slightly agreed (n=129) with the exclusive reliance on virtual interactions with their organizational peers. This finding underscores that despite technological progress, digital natives place significant value on physical interaction and non-mediated communication. Moreover, we understand that they also value a quiet and private work environment, clearly preferring it over open and shared workspaces. Considering that Gen-Z does not necessarily experience high levels of stress within social frameworks in the workplace, this can be attributed to a desire for a space that allows for focused engagement with work tasks. A One-Sample Chi-Square Test was performed on the observed distribution of respondents' reactions to each examined statement. The results showed statistically significant deviations from a uniform distribution (p<.001), indicating that the response patterns did not occur by chance and that there were meaningful tendencies in the participants' responses. The findings expand the theoretical knowledge base on human resources in the dynamics of a multi-generational workforce, illuminating the values, approaches, and expectations of Gen-Z. Practically, the results may lead organizations to equip themselves with tools to create policies tailored to Gen-Z in the context of workspaces and social needs, which could potentially foster a fertile environment and aid in attracting and retaining young talent. Future studies might include investigating potential mitigating factors, such as cultural influences or individual personality traits, which could further clarify the nuances in Gen-Z's work style preferences. Longitudinal studies tracking changes in these preferences as the generation matures may also yield valuable insights. 
Ultimately, as the landscape of the workforce continues to evolve, ongoing investigations into the unique characteristics and aspirations of emerging generations remain essential for nurturing harmonious, productive, and future-ready organizational environments.Keywords: workplace, future of work, generation Z, digital natives, human resources management
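The one-sample chi-square goodness-of-fit test mentioned above — checking whether responses deviate from a uniform distribution across the Likert options — can be sketched as follows; the counts are hypothetical, not the survey data.

```python
# Minimal sketch: one-sample chi-square test against a uniform distribution.
# Observed counts are invented placeholders.
from scipy.stats import chisquare

observed = [15, 30, 60, 170, 186]   # hypothetical counts across 5 Likert options
chi2, p = chisquare(observed)       # expected frequencies default to uniform
print(f"chi2 = {chi2:.2f}, p = {p:.5f}")
```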
Procedia PDF Downloads 53185 Infrared Spectroscopy Fingerprinting of Herbal Products- Application of the Hypericum perforatum L. Supplements
Authors: Elena Iacob, Marie-Louise Ionescu, Elena Ionescu, Carmen Elena Tebrencu, Oana Teodora Ciuperca
Abstract:
Infrared spectroscopy (FT-IR) is an advanced technique frequently used to authenticate both raw materials and final products using their specific fingerprints and to determine plant extract biomarkers based on their functional groups. In recent years the market for Hypericum has grown rapidly, and so have the cases of adulteration/replacement, especially for the species Hypericum perforatum L. The presence/absence of the same biomarkers provides preliminary identification of the Hypericum species in safe use in the manufacture of food supplements. The main objective of the work was to characterize the main biomarkers of Hypericum perforatum L. (St. John's wort) and to identify this species in herbal food supplements by its specific FT-IR fingerprint. An experimental program was designed in order to test: (1) raw material (St. John's wort); (2) intermediate raw materials (St. John's wort dry extract); (3) the finished products: tablets based on powders, on extracts, on powder and extract, and a hydroalcoholic solution from an herbal mixture based on St. John's wort. The FT-IR analysis yielded the spectra of raw materials, intermediates and finished products, with absorption bands corresponding to aliphatic and aromatic structures; examination was done individually and through comparison between the Hypericum perforatum L. plant species and the finished product. The tests were done in correlation with the phytochemical markers for authenticating the species Hypericum perforatum L.: hyperoside, rutin, quercetin, isoquercetin, luteolin, apigenin, hypericin, hyperforin, and chlorogenic acid. Samples were analyzed using a Shimadzu FT-IR spectrometer; the infrared spectrum of each sample was recorded in the MIR region, from 4000 to 1000 cm-1, and the fingerprint region was then selected for data analysis. The following functional groups were identified; the stretching vibrations suggest groups existing in the compounds of interest (flavones - rutin, hyperoside; polyphenolcarboxylic acids - chlorogenic acid; naphthodianthrones - hypericin): hydroxyl (OH) groups of free alcohol type: rutin, hyperoside, chlorogenic acid; C=O bonds from structures with free carbonyl groups of aldehyde, ketone, carboxylic acid and ester type: hypericin; free carbonyl C=O bonds present in chlorogenic acid; C=C bonds of the aromatic ring (condensed aromatic hydrocarbons, heterocyclic compounds): present in all compounds of interest; phenolic OH groups: present in all compounds of interest; C-O-C groups from glycoside structures: rutin, hyperoside, chlorogenic acid. The experimental results show that: (I) the analysis of the six fingerprint regions indicated the presence of specific functional groups: (1) 1000-1130 cm-1 (C-O-C of glycoside structures); (2) 1200-1380 cm-1 (carbonyl C-O or phenolic O-H); (3) 1400-1450 cm-1 (aromatic C=C); (4) 1600-1730 cm-1 (carbonyl C=O); (5) 2850-2930 cm-1 (-CH3, -CH2-, =CH-); (6) 3380-3920 cm-1 (OH of free alcohol type); (II) comparative FT-IR spectral analysis indicates the authenticity of the finished products (tablets) in terms of Hypericum perforatum L. content; (III) infrared spectroscopy is an adequate technique for the identification and authentication of the medicinal herbs, intermediate raw materials and food supplements, except in the form of solutions, where the results are not conclusive.Keywords: Authentication, FT-IR fingerprint, Herbal supplements, Hypericum perforatum L.
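As a small illustration of how the six fingerprint regions listed above can be used, the sketch below assigns hypothetical absorption peaks to those regions; the region boundaries follow the abstract, while the peak positions and the helper itself are assumptions, not part of the study.

```python
# Hypothetical helper: map FT-IR peak positions (cm-1) to the six fingerprint
# regions and their functional-group assignments quoted in the abstract.
REGIONS = [
    ((1000, 1130), "C-O-C of glycoside structures"),
    ((1200, 1380), "carbonyl C-O or phenolic O-H"),
    ((1400, 1450), "aromatic C=C"),
    ((1600, 1730), "carbonyl C=O"),
    ((2850, 2930), "-CH3, -CH2-, =CH-"),
    ((3380, 3920), "OH of free alcohol type"),
]

def assign_region(wavenumber_cm1: float) -> str:
    for (low, high), label in REGIONS:
        if low <= wavenumber_cm1 <= high:
            return label
    return "outside the six fingerprint regions"

for peak in (1075, 1340, 1610, 2900, 3400):   # hypothetical peak positions
    print(peak, "->", assign_region(peak))
```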
Procedia PDF Downloads 375184 Understanding New Zealand’s 19th Century Timber Churches: Techniques in Extracting and Applying Underlying Procedural Rules
Authors: Samuel McLennan, Tane Moleta, Andre Brown, Marc Aurel Schnabel
Abstract:
The development of Ecclesiastical buildings within New Zealand has produced some unique design characteristics that take influence from both international styles and local building methods. What this research looks at is how procedural modelling can be used to define such common characteristics and understand how they are shared and developed within different examples of a similar architectural style. This will be achieved through the creation of procedural digital reconstructions of the various timber Gothic Churches built during the 19th century in the city of Wellington, New Zealand. ‘Procedural modelling’ is a digital modelling technique that has been growing in popularity, particularly within the game and film industry, as well as other fields such as industrial design and architecture. Such a design method entails the creation of a parametric ‘ruleset’ that can be easily adjusted to produce many variations of geometry, rather than a single geometry as is typically found in traditional CAD software. Key precedents within this area of digital heritage include work by Haegler, Müller, and Gool, Nicholas Webb and Andre Brown, and most notably Mark Burry. What these precedents all share is how the forms of the reconstructed architecture have been generated using computational rules and an understanding of the architects’ geometric reasoning. This is also true within this research, as Gothic architecture makes use of only a select range of forms (such as the pointed arch) that can be accurately replicated using the same standard geometric techniques originally used by the architect. The methodology of this research involves firstly establishing a sample group of similar buildings, documenting the existing samples, researching any lost samples to find evidence such as architectural plans, photos, and written descriptions, and then consolidating all the findings into a single 3D procedural asset within the software ‘Houdini’. The end result will be an adjustable digital model that contains all the architectural components of the sample group, such as the various naves, buttresses, and windows. These components can then be selected and arranged to create visualisations of the sample group. Because timber Gothic churches in New Zealand share many details between designs, the created collection of architectural components can also be used to approximate similar designs not included in the sample group, such as designs found beyond the Wellington Region. This creates an initial library of architectural components that can be further expanded on to encapsulate as wide a sample size as desired. Such a methodology greatly improves upon the efficiency and adjustability of digital modelling compared to current practices found in digital heritage reconstruction. It also gives greater accuracy to speculative design, as lost structures with scant evidence can be approximated using components from still-existing or better-documented examples. This research will also bring attention to the cultural significance these types of buildings have within the local area, addressing the public’s general unawareness of architectural history that is identified in the Wellington-based research ‘Moving Images in Digital Heritage’ by Serdar Aydin et al.Keywords: digital forensics, digital heritage, gothic architecture, Houdini, procedural modelling
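As an illustration of the kind of parametric rule such a procedural ruleset encodes, the sketch below generates the profile of an equilateral pointed (Gothic) arch from a single span parameter; it is a speculative stand-in written in Python, not the project's Houdini asset.

```python
# Speculative sketch: left half of an equilateral pointed arch, generated as an
# arc of radius equal to the span, centred on the opposite springing point.
import math

def pointed_arch_profile(width: float, n: int = 24):
    """Sweep from the left springing point (180 deg) to the apex (120 deg);
    the right half of the arch is the mirror image."""
    radius = width
    right_springing_x = width
    points = []
    for i in range(n + 1):
        angle = math.pi - (math.pi / 3.0) * (i / n)
        points.append((right_springing_x + radius * math.cos(angle),
                       radius * math.sin(angle)))
    return points

for x, y in pointed_arch_profile(2.0, n=4):
    print(f"({x:.3f}, {y:.3f})")
```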
Procedia PDF Downloads 131