Search results for: indigenous knowledge systems
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16338

1458 Ecological Evaluation and Conservation Strategies of Economically Important Plants in Indian Arid Zone

Authors: Sher Mohammed, Purushottam Lal, Pawan K. Kasera

Abstract:

The Thar Desert of Rajasthan covers a wide geographical area between 23.3° and 30.12° N latitude and 69.3° and 76° E longitude and supports a unique spectrum of arid zone vegetation. Spreading over 12 districts, this desert hosts a rich diversity of economically important and threatened plants growing under the adverse climatic conditions of the area. Owing to variable geological, physiographic, climatic, edaphic and biotic factors, the arid zone medicinal flora spans a wide range of angiosperm families. The herbal diversity of this arid region is medicinally important both in household remedies among tribal communities and in traditional systems of medicine. Increasing disturbances in natural ecosystems arise from climatic and biological factors, including anthropogenic pressure. The unique flora of the desert ecosystem, and the faunal diversity that depends on it, is losing its biotic potential. A large number of plants have no future unless immediate steps are taken to arrest the causes of decline. The ongoing loss in ecological amplitude of various genera and species has placed several species on the red list of arid zone vegetation, such as Commiphora wightii, Tribulus rajasthanensis, Calligonum polygonoides, Ephedra foliata, Leptadenia reticulata, Tecomella undulata, Blepharis sindica, Peganum harmala, and Sarcostemma viminale. Most arid zone species are under serious pressure from prevailing ecosystem factors as they attempt to complete their life cycles. Genetic, molecular, cytological, biochemical, metabolic, reproductive and germination processes are among the levels at which the floral diversity of the arid zone faces severe ecological influences, so there is an urgent need to conserve these species.
There are ample opportunities for field work at these levels to protect native plants in their natural habitats rather than relying solely on in vitro multiplication.

Keywords: ecology, evaluation, xerophytes, economically, threatened plants, conservation

Procedia PDF Downloads 267
1457 Data Management System for Environmental Remediation

Authors: Elizaveta Petelina, Anton Sizo

Abstract:

Environmental remediation projects deal with a wide spectrum of data, including data collected during site assessment, execution of remediation activities, and environmental monitoring. Appropriate data management is therefore a key requirement for well-grounded decision making. The Environmental Data Management System (EDMS) was developed to address all necessary data management aspects, including efficient data handling and interoperability, access to historical and current data, spatial and temporal analysis, 2D and 3D data visualization, mapping, and data sharing. The system focuses on supporting well-grounded decisions about required mitigation measures and the assessment of remediation success. The EDMS combines enterprise- and desktop-level data management and Geographic Information System (GIS) tools to assist environmental remediation, project planning and evaluation, and environmental monitoring of mine sites. The EDMS consists of seven main components: a Geodatabase, a spatial database to store and query spatially distributed data; a GIS and Web GIS component that combines desktop and server-based GIS solutions; a Field Data Collection component that contains tools for field work; a Quality Assurance (QA)/Quality Control (QC) component that combines operational procedures for QA with measures for QC; a Data Import and Export component that includes tools and templates to support project data flow; a Lab Data component that connects the EDMS to laboratory information management systems; and a Reporting component that includes server-based services for real-time report generation. The EDMS has been successfully implemented for Project CLEANS (Clean-up of Abandoned Northern Mines), a multi-year, multimillion-dollar project aimed at assessing and reclaiming 37 uranium mine sites in northern Saskatchewan, Canada.
The EDMS has effectively facilitated integrated decision-making for CLEANS project managers and transparency amongst stakeholders.
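As a minimal illustration of the kind of data handling described above, the sketch below models an environmental sample record and a simple spatial query over QC-passed results. The field names, site name, and values are illustrative assumptions, not the actual CLEANS project schema.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    site: str          # site name (hypothetical)
    x: float           # easting, m
    y: float           # northing, m
    date: str          # ISO date
    analyte: str
    value: float       # concentration, mg/L
    qc_passed: bool    # QA/QC flag

def within_radius(samples, cx, cy, radius_m):
    # Spatial query: QC-passed samples within a radius of a point of interest.
    return [s for s in samples
            if s.qc_passed
            and ((s.x - cx) ** 2 + (s.y - cy) ** 2) ** 0.5 <= radius_m]

data = [Sample("SiteA", 500.0, 500.0, "2021-06-01", "U", 0.03, True),
        Sample("SiteA", 900.0, 900.0, "2021-06-01", "U", 0.10, False)]
hits = within_radius(data, 450.0, 450.0, 200.0)  # only the QC-passed, nearby sample
```

In a real EDMS, this kind of filter would run inside the geodatabase, but the record-plus-query pattern is the same.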

Keywords: data management, environmental remediation, geographic information system, GIS, decision making

Procedia PDF Downloads 161
1456 An ANOVA-based Sequential Forward Channel Selection Framework for Brain-Computer Interface Application based on EEG Signals Driven by Motor Imagery

Authors: Forouzan Salehi Fergeni

Abstract:

A brain-computer interface (BCI) system converts a person's movement intentions into action commands using brain signals such as the electroencephalogram (EEG). When left- or right-hand motions are imagined, distinct patterns of brain activity appear that can be employed as control signals. To improve BCI systems, effective and accurate techniques for increasing the classification precision of motor imagery (MI) based on EEG are greatly needed. EEG signals are subject-dependent and non-stationary, so they must be carefully processed before use in BCI applications. In the present study, after applying an 8-30 Hz band-pass filter, a common average reference (CAR) spatial filter is applied for denoising, and analysis of variance (ANOVA) is then used to select the more appropriate and informative channels from a large set of candidates. After ordering channels by their efficiency, sequential forward channel selection is employed to choose just a few reliable ones. Features from the time and wavelet domains are extracted and shortlisted with the help of a statistical technique, the t-test. Finally, the selected features are classified with several machine learning and neural network classifiers, namely k-nearest neighbor, probabilistic neural network, support vector machine (SVM), extreme learning machine, decision tree, multi-layer perceptron, and linear discriminant analysis, in order to compare their performance in this application. Using ten-fold cross-validation, tests are performed on a motor imagery dataset from BCI Competition III. The SVM classifier achieved the highest classification accuracy of 97% compared with the other approaches.
These findings confirm that the suggested framework is reliable and computationally efficient for the construction of BCI systems and surpasses existing methods.
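The channel-ranking step of the pipeline can be sketched as follows: score each channel with a one-way ANOVA F-statistic between the two motor-imagery classes, then order channels from most to least discriminative (sequential forward selection would then add channels from this ranking while accuracy improves). The data below are synthetic, and the real pipeline also applies the band-pass and CAR filters first; this is a sketch of the selection idea, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels = 40, 8
X = rng.standard_normal((n_trials, n_channels))  # one feature per channel per trial
y = np.repeat([0, 1], n_trials // 2)             # left- vs right-hand imagery
X[y == 1, 2] += 2.0                              # make channel 2 informative (synthetic)

def f_score(x, y):
    # One-way ANOVA F-statistic for two classes.
    g0, g1 = x[y == 0], x[y == 1]
    grand = x.mean()
    ss_between = len(g0) * (g0.mean() - grand) ** 2 + len(g1) * (g1.mean() - grand) ** 2
    ss_within = ((g0 - g0.mean()) ** 2).sum() + ((g1 - g1.mean()) ** 2).sum()
    return (ss_between / 1) / (ss_within / (len(x) - 2))  # df_between=1, df_within=n-2

scores = [f_score(X[:, c], y) for c in range(n_channels)]
ranking = np.argsort(scores)[::-1]               # most informative channel first
```

Sequential forward selection would then evaluate classifiers on `ranking[:1]`, `ranking[:2]`, and so on, keeping the smallest subset that preserves accuracy.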

Keywords: brain-computer interface, channel selection, motor imagery, support vector machine

Procedia PDF Downloads 50
1455 A Systematic Review on Orphan Drug Pricing and Price Challenges

Authors: Seyran Naghdi

Abstract:

Background: Orphan drug development is constrained by very high research and development costs and a small market. How health policymakers address this challenge on both the supply and demand sides needs to be explored to direct policies and plans in the right way. Price is an important signal for pharmaceutical companies' profitability as well as for patients' access. Objective: This study aims to identify price-setting patterns and approaches for orphan drugs in health systems through a systematic review of the available evidence. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) approach was used. MedLine, Embase, and Web of Science were searched via appropriate search strategies. Using Medical Subject Headings (MeSH), the terms for pricing were 'cost and cost analysis', and the terms for orphan drugs were 'orphan drug production' and 'orphan drug'. Critical appraisal was performed with the Joanna Briggs tool, and a Cochrane data extraction form was used to record the studies' characteristics, results, and conclusions. Results: In total, 1,197 records were found: 640 from Embase, 327 from Web of Science, and 230 from MedLine. After removing duplicates, 1,056 studies remained, of which 924 were excluded in the primary screening phase; 26 of the remaining studies were included for data extraction. The majority of the studies (>75%) are from developed countries, and among these approximately 80% are from European countries. Approximately 85% of the evidence was produced in the last decade. Conclusions: There is wide variation in price setting among countries, related to the specific structure of the pharmaceutical market and the thresholds at which governments are willing to intervene in the pricing process.
On the other hand, there is some evidence that room exists to reduce the very high costs of orphan drug development through early agreements between pharmaceutical firms and governments. Further studies should focus on how governments could incentivize companies to agree to provide these drugs at lower prices.

Keywords: orphan drugs, orphan drug production, pricing, costs, cost analysis

Procedia PDF Downloads 163
1454 The Study of Fine and Nanoscale Gold in the Ores of Primary Deposits and Gold-Bearing Placers of Kazakhstan

Authors: Omarova Gulnara, Assubayeva Saltanat, Tugambay Symbat, Bulegenov Kanat

Abstract:

The article discusses the development of a methodology for studying fine and nanoscale gold in ores and placers of primary deposits, which will allow schemes to be developed for revealing dispersed gold inclusions and thus improve the recovery rate, increasing the gold reserves of the Republic of Kazakhstan. The type of gold studied is characterized by a number of distinctive features: the conditions of its concentration and distribution in ore bodies and formations, as well as the possibility of determining it reliably by "traditional" methods, differ significantly from those of fine gold (less than 0.25 microns), and even more so from those of larger grains. The mineral composition of rocks (metasomatites) and of the gold ore and associated mineralization was studied in detail on the Kalba ore field in Kazakhstan. Mineralized zones were identified, and samples were taken from them for analytical studies. The research revealed paragenetic relationships of newly formed minerals at the nanoscale, which makes it possible to clarify the conditions of formation of deposits with a particular type of mineralization and will greatly assist in developing a scheme of study. Typomorphic features of the gold were revealed, and mechanisms of formation and aggregation of gold nanoparticles were proposed. The presence of a large number of particles isolated at the laboratory stage from gravity concentrates can serve as an indicator of the presence of even smaller particles in the object. Even the most advanced gravity concentration devices recover metal at a level of around 50%; pulverized metal is recovered much less effectively, and gold finer than 1 micron is recovered at only a few percent. Therefore, when gold particles smaller than 10 microns are detected, their actual abundance may be significantly higher than observed.
In particular, at the studied sites, slurry and samples with volumes up to 1 m³ were enriched using a screw sluice or separator to produce a final concentrate weighing up to several kilograms. Free gold particles were extracted from the concentrates in the laboratory by a series of processes (magnetic and electromagnetic separation, washing with bromoform in a cup to obtain an ultraconcentrate, etc.) and examined under electron microscopes to investigate the nature of their surfaces and their chemical composition. The main result of the study was the detection of gold nanoparticles on the surface of loose metal grains. The most characteristic forms of gold are individual nanoparticles and aggregates of different configurations. Sometimes aggregates form solid dense films, deposits, and crusts, all confined to negative forms of the nano- and microrelief on the surfaces of gold grains. The results will provide significant knowledge about the prevalence and distribution of fine and nanoscale gold in Kazakhstan deposits and support the development of methods for studying it, minimizing losses of this type of gold during extraction. Acknowledgments: This publication has been produced within the framework of the Grant "Development of methodology for studying fine and nanoscale gold in ores of primary deposits, placers and products of their processing" (АР23485052, №235/GF24-26).

Keywords: electron microscopy, micromineralogy, placers, fine and nanoscale gold

Procedia PDF Downloads 21
1453 Common Space Production as a Solution to the Affordable Housing Problem: Its Relationship with the Squatting Process in Turkey

Authors: Gözde Arzu Sarıcan

Abstract:

Contemporary urbanization processes and spatial transformations are intensely debated across various fields of social science. One prominent concept in these discussions is "common spaces," which offer a critical theoretical framework, particularly for addressing the social and economic inequalities brought about by urbanization. This study examines the processes of commoning and their impacts through the lens of squatter neighborhoods in Turkey, emphasizing the importance of affordable housing. It focuses on the role and significance of these neighborhoods in the formation of common spaces, analyzing the collective actions and resistance strategies of residents. The process began with the construction of shelters to meet the housing needs of low-income households migrating from rural to urban areas and turned over time into low-quality squatter settlements. For low-income households lacking the economic power to rent or buy homes in the city, these areas provided an affordable housing solution. Squatter neighborhoods reflect the efforts of local communities to protect and develop their communal living spaces through collective action and resistance. This collective creation process involves the appropriation of occupied land as a common resource through rules established by the commoners. Organized occupations subdivide these lands, which are shaped through collective creation processes. For squatter communities striving for economic and social adaptation, these areas serve as buffer zones for urban integration. In squatter neighborhoods, bonds of friendship, kinship, and common regional origin are strong, playing a significant role in the creation and dissemination of collective knowledge. Squatter areas can thus be described as common spaces that emerge out of necessity for low-income and marginalized groups. The design and construction of housing in squatter neighborhoods are shaped by the collective participation and skills of the residents.
Streets are formed through collective decision-making and labor. Over time, the demands for housing are communicated to local authorities, enhancing the potential for commoning. Common spaces are shaped by collective needs and demands, appropriated, and transformed into potential new spaces. Common spaces are continually redefined and recreated. In this context, affordable housing becomes an essential aspect of these common spaces, providing a foundation for social and economic stability. This study evaluates the processes of commoning and their effects through the lens of squatter neighborhoods in Turkey. Communities living in squatter neighborhoods have managed to create and protect communal living spaces, especially in situations where official authorities have been inadequate. Common spaces are built on values such as solidarity, cooperation, and collective resistance. In urban planning and policy development processes, it is crucial to consider the concept of common spaces. Policies that support the collective efforts and resistance strategies of communities can contribute to more just and sustainable living conditions in urban areas. In this context, the concept of common spaces is considered an important tool in the fight against urban inequalities and in the expression and defense mechanisms of communities. By emphasizing the importance of affordable housing within these spaces, this study highlights the critical role of common spaces in addressing urban social and economic challenges.

Keywords: affordable housing, common space, squatting process, Turkey

Procedia PDF Downloads 32
1452 Fire Resilient Cities: The Impact of Fire Regulations, Technological and Community Resilience

Authors: Fanny Guay

Abstract:

Building resilience, sustainable buildings, urbanization, climate change, and resilient cities are just a few examples of where research has focused in the last few years. It is obvious that we need to rethink how we build our cities and renovate our existing buildings. The question remaining, however, is how we can ensure that we are building sustainable yet resilient cities. There are many aspects to resilience in cities, but after the Grenfell Tower fire in June 2017, it has become clear that fire resilience must be a priority. We define resilience as a holistic approach encompassing communities, society and systems, focusing not only on resisting the effects of a disaster but also on coping with and recovering from it. A city is an example of such a system, in which components such as buildings have an important role to play. A building on fire affects the community, the economy, the environment, and thus the entire system. Therefore, we believe that fire and resilience go hand in hand when discussing resilient cities. This article discusses the current state of the concept of fire resilience and suggests actions to support the construction of more fire-resilient buildings. Using the case of Grenfell and the fire safety regulations in the UK, we briefly compare the fire regulations of other European countries, namely France, Germany and Denmark, to underline the differences and make suggestions for increasing fire resilience via regulation. We also consider other types of resilience, such as technological resilience, discussing the structure of buildings themselves, and community resilience, considering the role of communities in building resilience.
Our findings demonstrate that increasing fire resilience may require amending existing regulations, for example, in how reaction-to-fire tests are performed and how building products are classified. However, as we examine national regulations, we can only make general suggestions for improvement. Another finding of this research is that the capacity of a community to recover and adapt after a fire is also an essential factor. Fundamentally, fire resilience, technological resilience and community resilience are closely connected. Building resilient cities is not only about sustainable buildings or energy efficiency; it is about ensuring that all aspects of resilience are included when building or renovating. We must ask ourselves questions such as: Who are the users of this building? Where is it located? What are its components, how was it designed, and which construction products were used? If we want resilient cities, we must answer these basic questions and ensure that fundamental factors such as fire resilience are included in our assessments.

Keywords: buildings, cities, fire, resilience

Procedia PDF Downloads 170
1451 An Alternative Approach to Machine Vision System Operation for Solving an Industrial Control Issue

Authors: M. S. Nikitenko, S. A. Kizilov, D. Y. Khudonogov

Abstract:

The paper considers an approach to a machine vision operating system combined with a grid of light markers. This approach is used to solve several scientific and technical problems, such as measuring the capacity of an apron feeder delivering coal from a lining return port to a conveyor in high-seam mining technology, and prototyping an obstacle detection system for an autonomous vehicle. Primary verification of a method for calculating bulk material volume using three-dimensional modeling was carried out, with validation in laboratory conditions and calculation of relative errors. A method of calculating apron feeder capacity based on machine vision, together with a simplified technique for three-dimensional modeling of the measured area, is proposed. The proposed method allows the volume of rock mass moved by an apron feeder to be measured using machine vision, solving the problem of controlling the volume of coal produced by a feeder during high-seam extraction by longwall complexes with release to a conveyor, with accuracy sufficient for practical application. The developed mathematical apparatus for measuring feeder productivity in kg/s uses only basic mathematical operations: addition, subtraction, multiplication, and division. This simplifies software development and expands the variety of microcontrollers and microcomputers suitable for calculating feeder capacity. A feature of the obstacle detection problem is that obstacles distort the laser grid, which simplifies their detection. The paper presents algorithms for processing video camera images and for controlling an autonomous vehicle model based on an obstacle detection machine vision system, and demonstrates a sample fragment of obstacle detection at the moment the laser grid is distorted.
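The throughput calculation described above, using only the four basic operations, can be sketched as follows. The height grid, cell sizes, bulk density, and feeder speed are illustrative assumptions standing in for values the vision system would derive from light-marker displacement; this is not the authors' actual apparatus.

```python
def feeder_capacity_kg_s(heights_m, cell_w_m, cell_l_m,
                         bulk_density_kg_m3, feeder_speed_m_s, profile_length_m):
    # Volume of material over the measured area: sum of column volumes.
    volume_m3 = 0.0
    for row in heights_m:
        for h in row:
            volume_m3 += h * cell_w_m * cell_l_m
    # Mass in the measured area, then throughput as mass per transit time.
    mass_kg = volume_m3 * bulk_density_kg_m3
    transit_time_s = profile_length_m / feeder_speed_m_s
    return mass_kg / transit_time_s

# Hypothetical 2x2 height grid (metres) reconstructed from marker displacement.
heights = [[0.10, 0.12],
           [0.11, 0.09]]
rate = feeder_capacity_kg_s(heights, cell_w_m=0.5, cell_l_m=0.5,
                            bulk_density_kg_m3=900.0,
                            feeder_speed_m_s=0.2, profile_length_m=1.0)
```

Only addition, multiplication, and division appear, which is what makes the method portable to modest microcontrollers.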

Keywords: machine vision, machine vision operating system, light markers, measuring capability, obstacle detection system, autonomous transport

Procedia PDF Downloads 114
1450 Artificial Intelligence in Management Simulators

Authors: Nuno Biga

Abstract:

Artificial Intelligence (AI) has the potential to transform management in several impactful ways. It allows machines to interpret information, find patterns in big data, learn from context analysis, optimize operations, make predictions sensitive to each specific situation, and support data-driven decision making. The introduction of an 'artificial brain' into an organization also enables learning through the complex information and data provided by those who train it, namely its users. The "Assisted-BIGAMES" version of the Accident & Emergency (A&E) simulator introduces the concept of a context-sensitive "Virtual Assistant" (VA) that offers users useful suggestions, such as: a) relocating workstations to shorten travelled distances and minimize the stress of those involved; b) identifying existing bottlenecks in the operations system in real time so they can be acted upon quickly; c) identifying resources that should be made polyvalent so that the system can be more efficient; d) identifying the specific processes in which it may be advantageous to establish partnerships with other teams; and e) assessing possible solutions based on the suggested KPIs, allowing action monitoring to guide the (re)definition of future strategies. This paper builds on the BIGAMES© simulator and presents the conceptual AI model developed and demonstrated through a pilot project (BIG-AI). Each Virtual Assisted BIGAME is a management simulator, developed by the author, that guides operational and strategic decision making by providing users with management recommendations that make it possible to predict the actual outcome of alternative strategic actions.
The pilot project incorporates results from 12 editions of the BIGAME A&E held between 2017 and 2022 at AESE Business School, based on a compilation of data that allows causal relationships to be established between decisions taken and results obtained. The systematic analysis and interpretation of data is powered in Assisted-BIGAMES by a computer application called the "BIGAMES Virtual Assistant" (VA) that players can use during the game. Throughout the game, each participant continually asks which decisions to make in order to win the competition; the role of each team's VA is to guide the players toward more effective decision making by presenting recommendations based on AI methods. It is important to note that the VA's suggestions can be accepted or rejected by the managers of each team, as they gain a better understanding of the issues over time, reflect on good practice, and rely on their own experience, capability and knowledge to support their decisions. Preliminary results show that the introduction of the VA speeds up learning of the decision-making process. A facilitator, designated the "Serious Game Controller" (SGC), is responsible for supporting the players with further analysis. The actions recommended by the SGC may differ from or resemble those previously provided by the VA, ensuring a higher degree of robustness in decision making. Additionally, all the information should be jointly analyzed and assessed by each player, who is expected to add "Emotional Intelligence", an essential component absent from the machine learning process.

Keywords: artificial intelligence, gamification, key performance indicators, machine learning, management simulators, serious games, virtual assistant

Procedia PDF Downloads 104
1449 Dermatomyositis: It Is Not Always an Allergic Reaction

Authors: Irfan Abdulrahman Sheth, Sohil Pothiawala

Abstract:

Dermatomyositis is an idiopathic inflammatory myopathy, traditionally characterized by progressive, symmetrical proximal muscle weakness and pathognomonic or characteristic cutaneous manifestations. We report the case of a 60-year-old Chinese female who was referred from a polyclinic for an allergic rash over the body after applying hair dye 3 weeks earlier. It was associated with facial puffiness, shortness of breath and a hoarse voice over the preceding 2 weeks, with decreased effort tolerance. She also complained of dysphagia and myalgia, with progressive weakness of the proximal muscles and palpitations. She denied chest pain, loss of appetite, weight loss, orthopnea or fever. She had stable vital signs and appeared cushingoid. She was noted to have a rash over the scalp and face, ecchymosis over the right arm, and facial puffiness with periorbital oedema. There was symmetrical muscle weakness; the rest of the neurological examination was normal. The initial impression was of an allergic reaction with underlying nephrotic syndrome and Cushing's syndrome from traditional Chinese medicine (TCM) use. Diagnostic tests showed a high creatine kinase (CK) of 1463 U/L, CK-MB of 18.7 µg/L and troponin-T of 0.09 µg/L. The full blood count and renal panel were normal. EMG showed inflammatory myositis. The patient was managed by a rheumatologist and discharged on oral prednisolone with methotrexate, ergocalciferol, calcium carbonate and vitamin D, with outpatient follow-up. In some patients, cutaneous disease exists in the absence of objective evidence of muscle inflammation. Management of dermatomyositis begins with careful investigation for the presence of muscle disease or additional systemic involvement, particularly of the pulmonary, cardiac or gastrointestinal systems, and for the possibility of an accompanying malignancy. Muscle disease and systemic involvement can be refractory and may require multiple sequential therapeutic interventions or, at times, combinations of therapies.
We therefore want to highlight to physicians that the cutaneous disease of dermatomyositis should not be confused with an allergic reaction. It can be particularly challenging to diagnose, and early recognition aids the appropriate management of this group of patients.

Keywords: dermatomyositis, myopathy, allergy, cutaneous disease

Procedia PDF Downloads 335
1448 Digital Holographic Interferometric Microscopy for the Testing of Micro-Optics

Authors: Varun Kumar, Chandra Shakher

Abstract:

Micro-optical components such as microlenses and microlens arrays have numerous engineering and industrial applications: collimation of laser diodes; imaging devices for sensor systems (CCD/CMOS, document copiers, etc.); beam homogenization for high-power lasers; as the critical component of the Shack-Hartmann sensor; and fiber-optic coupling and optical switching in communication technology. Micro-optical components have also become an alternative for applications where miniaturization and reduction of alignment and packaging costs are necessary. Compliance with high quality standards in the manufacturing of micro-optical components is a precondition for competitiveness in worldwide markets, so high demands are placed on quality assurance. For quality assurance of these lenses, an economical measurement technique is needed; for cost and time reasons it should be fast, simple (for production reasons), and robust, with high resolution. The technique should provide non-contact, non-invasive, full-field information about the shape of the micro-optical component under test. Interferometric techniques are non-contact and non-invasive and provide full-field information about the shape of optical components. Conventional interferometric techniques such as holographic interferometry or Mach-Zehnder interferometry are available for characterizing microlenses, but they demand more experimental effort and are time-consuming. Digital holography (DH) overcomes these problems. Digital holographic microscopy (DHM) allows both the amplitude and the phase of a wavefront transmitted through a transparent object (a microlens or microlens array) to be extracted from a single recorded digital hologram using numerical methods, and the complex object wavefront can be reconstructed numerically at different depths.
Digital holography provides axial resolution in the nanometer range, while lateral resolution is limited by diffraction and the size of the sensor. In this paper, a Mach-Zehnder-based digital holographic interferometric microscope (DHIM) is used for testing transparent microlenses. The advantage of the DHIM is that distortions due to aberrations in the optical system are avoided by interferometric comparison of the reconstructed phase with and without the object (the microlens array). In the experiment, a first digital hologram is recorded in the absence of the sample as a reference, and a second in the presence of the microlens array, which induces a phase change in the transmitted laser light. The complex amplitude of the object wavefront in the presence and absence of the microlens array is reconstructed using the Fresnel reconstruction method, and from it the phase of the object wave in each state is evaluated. The phase difference between the two states gives the optical path length change due to the shape of the microlens. From the known refractive indices of the microlens array material and air, the surface profile of the microlens array is evaluated, and the sag and radius of curvature of the microlenses are reported. The measured sag agrees with the manufacturer's specification within experimental limits.
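The phase-to-surface conversion described above can be sketched as follows: thickness t = Δφ·λ / (2π·(n_lens − n_air)), sag taken as the peak thickness variation, and radius of curvature from the spherical-cap relation R = (r² + s²)/(2s). The wavelength, refractive indices, phase values, and semi-diameter below are illustrative assumptions, not the paper's measured data.

```python
import numpy as np

wavelength = 632.8e-9          # He-Ne laser, metres (assumed)
n_lens, n_air = 1.46, 1.0      # e.g. fused silica microlens in air (assumed)

def thickness_from_phase(delta_phi):
    # Optical path difference -> physical thickness of the lens profile.
    return delta_phi * wavelength / (2.0 * np.pi * (n_lens - n_air))

def radius_of_curvature(sag, semi_diameter):
    # Spherical-cap geometry relating sag and aperture to R.
    return (semi_diameter ** 2 + sag ** 2) / (2.0 * sag)

# Hypothetical unwrapped phase difference across one lens (radians).
phase_map = np.array([0.0, 20.0, 40.0, 20.0, 0.0])
t = thickness_from_phase(phase_map)            # surface profile, metres
sag = t.max() - t.min()                        # lens sag
R = radius_of_curvature(sag, semi_diameter=50e-6)
```

With these assumed values the sag comes out in the micrometre range and R well above it, which is the regime typical of refractive microlenses.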

Keywords: micro-optics, microlens array, phase map, digital holographic interferometric microscopy

Procedia PDF Downloads 498
1447 Shear Strength Characterization of Coal Mine Spoil in Very-High Dumps with Large Scale Direct Shear Testing

Authors: Leonie Bradfield, Stephen Fityus, John Simmons

Abstract:

The shearing behavior of current and planned coal mine spoil dumps up to 400 m in height is studied using large-sample, high-stress direct shear tests performed on a range of spoils common to the coalfields of Eastern Australia. The motivation for the study is to address industry concerns that some constructed spoil dump heights (> 350 m) are exceeding the scale (≤ 120 m) for which reliable design information exists, and because modern geotechnical laboratories are not equipped to test representative spoil specimens at field-scale stresses. For more than two decades, shear strength estimation for spoil dumps has been based either on infrequent, very small-scale tests, in which oversize particles are scalped to comply with device specimen size capacity so that the influence of prototype-sized particles on shear strength is not captured; or on published guidelines that provide linear shear strength envelopes derived from small-scale test data and verified in practice by the slope performance of dumps up to 120 m in height. To date, these published guidelines appear to have been reliable. However, in the field of rockfill dam design there is broad acceptance of a curvilinear shear strength envelope, and if this is applicable to coal mine spoils, then these industry-accepted guidelines may overestimate the strength and stability of dumps at higher stress levels. The pressing need to rationally define the shearing behavior of more representative spoil specimens at field-scale stresses led to the successful design, construction and operation of a large direct shear machine (LDSM) and its subsequent application to provide reliable design information for current and planned very-high dumps. The LDSM can test at a much larger scale, in terms of combined specimen size (720 mm × 720 mm × 600 mm) and stress (σn up to 4.6 MPa), than has ever previously been achieved using a direct shear machine for geotechnical testing of rockfill.
The results of an extensive LDSM testing program on a wide range of coal-mine spoils are compared to a published framework that is widely accepted by the Australian coal mining industry as the standard for shear strength characterization of mine spoil. A critical outcome is that the LDSM data highlight several non-compliant spoils, and stress-dependent shearing behavior, for which the correct application of the published framework will not provide reliable shear strength parameters for design. Shear strength envelopes developed from the LDSM data are also compared with dam engineering knowledge, where the failure envelopes of rockfills are curved in a concave-down manner. The LDSM data indicate that shear strength envelopes for coal-mine spoils abundant in rock fragments are not in fact curved, and that the shape of the failure envelope is ultimately determined by the strength of the rock fragments. Curvilinear failure envelopes were found to be appropriate for soil-like spoils containing minor or no rock fragments, or hard-soil aggregates.
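The two envelope forms compared above can both be fitted by ordinary least squares, the power law after a log transform. The stresses and strengths below are invented for illustration, not the LDSM measurements:

```python
import numpy as np

# Illustrative (synthetic) direct-shear data: normal stress sigma_n (MPa)
# and peak shear stress tau (MPa) -- hypothetical values, not LDSM results.
sigma_n = np.array([0.2, 0.5, 1.0, 2.0, 3.0, 4.6])
tau     = np.array([0.25, 0.52, 0.92, 1.62, 2.2, 3.05])

# Linear Mohr-Coulomb envelope: tau = c + sigma_n * tan(phi)
A_lin = np.vstack([np.ones_like(sigma_n), sigma_n]).T
(c, tan_phi), *_ = np.linalg.lstsq(A_lin, tau, rcond=None)

# Curvilinear (power-law) envelope, as used for rockfills: tau = a * sigma_n**b
# Linearized in log space: log tau = log a + b * log sigma_n
A_log = np.vstack([np.ones_like(sigma_n), np.log(sigma_n)]).T
(log_a, b), *_ = np.linalg.lstsq(A_log, np.log(tau), rcond=None)
a = np.exp(log_a)

print(f"Mohr-Coulomb: c = {c:.3f} MPa, phi = {np.degrees(np.arctan(tan_phi)):.1f} deg")
print(f"Power law:    tau = {a:.3f} * sigma_n**{b:.3f}")
```

A fitted exponent b < 1 corresponds to the concave-down (curvilinear) envelope of the rockfill dam literature; b approaching 1 recovers a straight envelope through the origin.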

Keywords: coal mine, direct shear test, high dump, large scale, mine spoil, shear strength, spoil dump

Procedia PDF Downloads 161
1446 Conjunctive Management of Surface and Groundwater Resources under Uncertainty: A Retrospective Optimization Approach

Authors: Julius M. Ndambuki, Gislar E. Kifanyi, Samuel N. Odai, Charles Gyamfi

Abstract:

Conjunctive management of surface and groundwater resources is a challenging task due to the spatial and temporal variability of the hydrology, as well as the hydrogeology, of the water storage systems. Surface water-groundwater hydrogeology is highly uncertain; it is therefore imperative that this uncertainty is explicitly accounted for when managing water resources. Various methodologies have been developed and applied by researchers in an attempt to account for this uncertainty. For example, simulation-optimization models are often used for conjunctive water resources management. However, direct application of such an approach, in which all realizations are considered at each iteration of the optimization process, leads to a very expensive optimization in terms of computational time, particularly when the number of realizations is large. The aim of this paper, therefore, is to introduce and apply an efficient approach, referred to as Retrospective Optimization Approximation (ROA), that can be used for optimizing conjunctive use of surface water and groundwater over multiple hydrogeological model simulations. This work is based on a stochastic simulation-optimization framework using the recently emerged sample average approximation (SAA) technique, a sampling-based method implemented within the ROA approach. The ROA approach solves and evaluates a sequence of generated optimization sub-problems with an increasing number of realizations (sample size). A response matrix technique was used to link the simulation model with the optimization procedure, and the k-means clustering sampling technique was used to map the realizations. The methodology is demonstrated through application to a hypothetical example, in which the generated optimization sub-problems were solved and analysed using the "Active-Set" core optimizer implemented in the MATLAB 2014a environment.
Through the k-means clustering sampling technique, the ROA Active-Set procedure was able to arrive at a (nearly) converged maximum expected total optimal conjunctive water withdrawal rate within relatively few iterations (6 to 7). The results indicate that the ROA approach is a promising technique for optimizing conjunctive surface water and groundwater withdrawal rates under hydrogeological uncertainty.
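The ROA iteration described above can be sketched in miniature. The toy sub-problem below (a quadratic expected-cost objective whose SAA minimizer is the sample mean, with Gaussian "realizations" and an assumed sample-size schedule) is purely illustrative, not the paper's groundwater model:

```python
import numpy as np

rng = np.random.default_rng(0)

def solve_saa_subproblem(realizations, x_start):
    """SAA sub-problem: minimize (1/n) * sum (x - W_i)**2 over x.
    The closed-form minimizer is the sample mean; in general x_start would
    warm-start an iterative solver (e.g. an active-set method)."""
    return realizations.mean()

def retrospective_optimization(sample_sizes, tol=1e-3):
    """Solve a sequence of SAA sub-problems with increasing sample size,
    warm-starting each from the previous solution, until solutions converge."""
    x = 0.0
    for n in sample_sizes:
        w = rng.normal(loc=10.0, scale=2.0, size=n)  # hydrogeological realizations
        x_new = solve_saa_subproblem(w, x_start=x)
        if abs(x_new - x) < tol:                      # retrospective convergence check
            return x_new, n
        x = x_new
    return x, sample_sizes[-1]

x_opt, n_used = retrospective_optimization([8, 16, 32, 64, 128, 256, 512])
print(f"converged withdrawal rate ~ {x_opt:.2f} at sample size {n_used}")
```

In the paper, the sub-problem is a constrained withdrawal-rate optimization linked to the simulation model through a response matrix, and the realizations are selected by k-means clustering rather than drawn independently; only the increasing-sample-size ROA loop is reproduced here.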

Keywords: conjunctive water management, retrospective optimization approximation approach, sample average approximation, uncertainty

Procedia PDF Downloads 231
1445 Hindrances to Effective Delivery of Infrastructural Development Projects in Nigeria’s Built Environment

Authors: Salisu Gidado Dalibi, Sadiq Gumi Abubakar, JingChun Feng

Abstract:

Nigeria’s population is about 190 million and increasing annually, making it the seventh most populated nation in the world and the first in Africa. This population growth comes with its own prospects, needs, and challenges, especially for existing and future infrastructure. Infrastructure refers to the structures, systems, and facilities serving the economy of a country, city, town, business, or industry. These include roads, railway lines, bridges, tunnels, ports, stadiums, dams and water projects, power generation plants and distribution grids, information and communication technology (ICT), etc. The Nigerian government embarked on several infrastructural development projects (IDPs) to address the deficit, as the present infrastructure can neither cater to these needs nor sustain the country. However, the delivery of such IDPs has not been smooth; it comes with challenges from within and outside the projects, including frequent delays and abandonment, affecting all the stakeholders involved. Hence, the aim of this paper is to identify and assess the factors that are hindering the effective delivery of IDPs in Nigeria’s built environment, with a view to offering more insight into such factors and ways to address them. The methodology adopted in this study involves the review of secondary data sources (official publications, journals, newspapers, the internet, etc.) within the IDP field, with emphasis on Nigerian cases. The hindrance factors identified in this review form the backbone of a questionnaire. A pilot survey was used to test its suitability, after which it was randomly administered to various project professionals in Nigeria’s construction industry using a 5-point Likert scale format to ascertain the impact of these hindrances. Cronbach’s Alpha reliability tests, mean item score computations, relative importance indices, T-tests, and Chi-Square statistics were used for data analysis.
The results outline the impact of the various internal, external, and project-related factors that are hindering IDPs within Nigeria’s built environment.
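Two of the statistics named above are compact enough to sketch directly. A minimal implementation of Cronbach's alpha and the relative importance index, applied to hypothetical Likert ratings (the matrix below is invented for illustration):

```python
import numpy as np

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of totals),
    for a respondents-by-items matrix of Likert ratings."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def relative_importance_index(scores, max_point=5):
    """RII = sum of ratings / (max_point * number of respondents), per factor."""
    scores = np.asarray(scores, dtype=float)
    return scores.sum(axis=0) / (max_point * scores.shape[0])

# Hypothetical ratings: 5 respondents x 3 hindrance factors, 5-point scale
ratings = np.array([[4, 5, 4],
                    [3, 3, 4],
                    [5, 5, 5],
                    [2, 3, 2],
                    [4, 4, 5]])
alpha = cronbach_alpha(ratings)
rii = relative_importance_index(ratings)
print(f"Cronbach's alpha = {alpha:.3f}, RII per factor = {np.round(rii, 2)}")
```

An alpha above roughly 0.7 is conventionally taken as acceptable internal consistency; the RII values then rank the hindrance factors by perceived impact.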

Keywords: built environment, development, factors, hindrances, infrastructure, Nigeria, project

Procedia PDF Downloads 177
1444 Formulating a Definition of Hate Speech: From Divergence to Convergence

Authors: Avitus A. Agbor

Abstract:

Numerous incidents, ranging from trivial to catastrophic, come to mind when one reflects on hate. The victims of these belong to specific identifiable groups within communities. These experiences evoke discussions on Islamophobia, xenophobia, homophobia, anti-Semitism, racism, ethnic hatred, hatred of atheists, and other brutal forms of bigotry. Common to all of these is an invisible but potent force that drives them: hatred. Such hatred is usually fueled by a profound degree of intolerance (of diversity) and the zeal of some to impose on others the beliefs and practices which they consider to be the conventional norm. More importantly, the perpetuation of these hateful acts is the unfortunate outcome of an overplay of invectives and hate speech which, to a great extent, cannot be divorced from hate. From a legal perspective, acknowledging the existence of an undeniable link between hate speech and hate is quite easy. However, both within and outside legal scholarship, the notion of “hate speech” remains a conundrum: a phrase more easily explained through experience than captured in a watertight definition of its entire essence and nature. The problem is further compounded by a few factors. First, within the international human rights framework, the notion of hate speech is not used: in limiting the right to freedom of expression, the ICCPR simply excludes specific kinds of speech (but does not refer to them as hate speech). Regional human rights instruments are not so different, except for subsequent developments in the European Union, in which the notion has been carefully delineated so that a much clearer picture of what constitutes hate speech is now provided. The legal architecture of domestic legal systems clearly shows differences in approach and regulation, making matters more difficult: in short, what may be hate speech in one legal system may very well be acceptable speech in another.
Lastly, the cornucopia of academic voices on the issue of hate speech exudes this divergence. Yet, in the absence of a well-formulated and universally acceptable definition, it is important to consider how hate speech can be defined. Taking an evidence-based approach, this research looks into the issue of defining hate speech in legal scholarship, and into how and why such a formulation is of critical importance in the prohibition and prosecution of hate speech.

Keywords: hate speech, international human rights law, international criminal law, freedom of expression

Procedia PDF Downloads 76
1443 Advancing Urban Sustainability through the Integration of Planning Evaluation Methodologies

Authors: Natalie Rosales

Abstract:

Based on an ethical vision which recognizes the vital role of human rights, shared values, social responsibility and justice, and environmental ethics, planning may be interpreted as a process aimed at reducing inequalities and overcoming marginality. Seen from this sustainability perspective, planning evaluation must utilize critical-evaluative and narrative-receptive models which assist different stakeholders in their understanding of the urban fabric while triggering reflexive processes that catalyze wider transformations. In this paper, this approach serves as a guide for the evaluation of Mexico's urban planning systems and underpins a proposed framework to better integrate sustainability notions into planning evaluation. The paper opens with an overview of the current debate on evaluation in urban planning. The state of the art presented covers the different perspectives and paradigms of planning evaluation, with their fundamentals and scope, which have focused on three main aspects: goal attainment (did planning instruments do what they were supposed to?); the performance and effectiveness of planning (retrospective analysis of the planning process and policy assessment); and the effects of process, considering decision problems and contexts rather than techniques and methods. Methodological innovations and improvements in planning evaluation are also reviewed. This comprehensive literature review provides the background for the authors' proposal of a set of general principles for evaluating urban planning, grounded in a sustainability perspective. In the second part, a description of the shortcomings of the approaches used to evaluate urban planning in Mexico sets the basis for highlighting the need for regulatory and instrumental, but also explorative and collaborative, approaches.
These isolated methods have proved unable to capture planning complexity or to strengthen the usefulness of the evaluation process in improving the coherence and internal consistency of planning practice itself. The third section describes the main aspects of the general proposal for evaluating planning. It presents an innovative methodology for a more holistic and integrated assessment which considers the interdependence between values, levels, roles and methods, and incorporates different stakeholders in the evaluation process. By doing so, this piece of work sheds light on how to advance urban sustainability through the integration of evaluation methodologies into planning.

Keywords: urban planning, evaluation methodologies, urban sustainability, innovative approaches

Procedia PDF Downloads 476
1442 Nanopack: A Nanotechnology-Based Antimicrobial Packaging Solution for Extension of Shelf Life and Food Safety

Authors: Andy Sand, Naama Massad – Ivanir, Nadav Nitzan, Elisa Valderrama, Alfred Wegenberger, Koranit Shlosman, Rotem Shemesh, Ester Segal

Abstract:

Microbial spoilage of food products is of great concern in the food industry due to its direct impact on the shelf life of foods and the risk of foodborne illness. Food packaging can therefore make a crucial contribution to keeping food fresh and suitable for consumption. Active packaging solutions that are able to inhibit the development of microorganisms in food products attract considerable interest, and many efforts have been made to engineer such solutions and apply them to various food products. NanoPack is an EU-funded international project aiming to develop state-of-the-art antimicrobial packaging systems for perishable foods. The project is based on natural essential oils, which possess significant antimicrobial activity against many bacteria, yeasts and molds. The essential oils are encapsulated in natural aluminosilicate clays, halloysite nanotubes (HNTs), which serve as carriers for the volatile essential oils and enable their incorporation into polymer films. During the course of the project, several polyethylene films with diverse essential oil combinations were designed, based on the characteristics of their target food products. The antimicrobial activity of the produced films was examined in vitro on a broad spectrum of microorganisms, including gram-positive and gram-negative bacteria, aerobic and anaerobic bacteria, yeasts and molds. The films that showed promising in vitro results were then successfully applied as active packaging of several food products such as cheese, bread, fruits and raw meat. The in vivo analyses showed significant inhibition of microbial spoilage, indicating the strong contribution of the NanoPack packaging solutions to the extension of shelf life and the reduction of food waste caused by early spoilage throughout the supply chain.

Keywords: food safety, food packaging, essential oils, nanotechnology

Procedia PDF Downloads 138
1441 Role of Autophagic Lysosome Reformation for Cell Viability in an in vitro Infection Model

Authors: Muhammad Awais Afzal, Lorena Tuchscherr De Hauschopp, Christian Hübner

Abstract:

Introduction: Autophagy is an evolutionarily conserved, lysosome-dependent degradation pathway which can be induced by extrinsic and intrinsic stressors in living systems to adapt to fluctuating environmental conditions. In the context of inflammatory stress, autophagy contributes to the elimination of invading pathogens, the regulation of innate and adaptive immune mechanisms, the regulation of inflammasome activity, and tissue damage repair. Lysosomes can be recycled from autolysosomes by the process of autophagic lysosome reformation (ALR), which depends on the presence of several proteins including Spatacsin. ALR thus contributes to the replenishment of lysosomes available for fusion with autophagosomes in situations of increased autophagic turnover, e.g., during bacterial infections, inflammatory stress or sepsis. Objectives: We aimed to assess whether ALR plays a role in cell survival in an in-vitro bacterial infection model. Methods: Mouse embryonic fibroblasts (MEFs) were isolated from wild-type mice and Spatacsin (Spg11-/-) knockout mice. Wild-type and Spg11-/- MEFs were infected with Staphylococcus aureus at a multiplicity of infection (MOI) of 10. After 8 and 16 hours of infection, cell viability was assessed on a BD flow cytometer via propidium iodide uptake. Bacterial uptake by the cells was also quantified by plating cell lysates on blood agar plates. Results: In-vitro infection of MEFs with Staphylococcus aureus showed a marked decrease of cell viability in ALR-deficient Spatacsin knockout (Spg11-/-) MEFs after 16 hours of infection as compared to wild-type MEFs (n=3 independent experiments; p < 0.0001), although no difference in bacterial uptake was observed between the two genotypes. Conclusion: The marked increase of cell death in cells with compromised ALR in this in-vitro infection model suggests that ALR is important for the defense against invading pathogens such as S. aureus.

Keywords: autophagy, autophagic lysosome reformation, bacterial infections, Staphylococcus aureus

Procedia PDF Downloads 144
1440 Price Prediction Line, Investment Signals and Limit Conditions Applied for the German Financial Market

Authors: Cristian Păuna

Abstract:

In the first decades of the 21st century, in the electronic trading environment, algorithmic capital investment became the primary tool for profiting from speculation in financial markets. A significant number of traders and private or institutional investors participate in the capital markets every day using automated algorithms. Autonomous trading software is today a considerable part of the business intelligence system of any modern financial activity. Trading decisions and orders are made automatically by computers using different mathematical models. This paper presents one of these models, called the Price Prediction Line. A mathematical algorithm is revealed for building a reliable trend line, which is the basis for limit conditions and automated investment signals and the core of a computerized investment system. The paper shows how to apply these tools to generate entry and exit investment signals and limit conditions that form a mathematical filter for investment opportunities, and presents the methodology for integrating all of these into automated investment software. The paper also presents trading results obtained for the leading German financial market index with the presented methods, in order to analyze and compare different automated investment algorithms. It was found that a specific mathematical algorithm can be optimized and integrated into an automated trading system with good and sustained results for the leading German market. Investment results are compared in order to qualify the presented model. In conclusion, a 1:6.12 risk-to-reward ratio was obtained by applying the trigonometric method to the DAX Deutscher Aktienindex over a 24-month investment period. These results are superior to those obtained with other similar models, as this paper reveals.
The general idea advanced by this paper is that the Price Prediction Line model presented is a reliable capital investment methodology that can be successfully applied to build an automated investment system with excellent results.
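The entry/exit logic built on a trend line can be illustrated generically. The paper's actual Price Prediction Line formula is not reproduced here; a least-squares trend line and a symmetric percentage band (both assumptions) stand in for it:

```python
import numpy as np

def prediction_line(prices):
    """Least-squares linear trend over the price series -- a generic
    stand-in for the paper's Price Prediction Line."""
    t = np.arange(len(prices))
    slope, intercept = np.polyfit(t, prices, 1)
    return intercept + slope * t

def signals(prices, band=0.01):
    """Entry when price dips `band` (fraction) below the line, exit when it
    rises `band` above it; the band plays the role of the limit condition."""
    line = prediction_line(prices)
    entries = prices < line * (1 - band)
    exits = prices > line * (1 + band)
    return entries, exits

# Hypothetical closing prices (not DAX data)
prices = np.array([100, 101, 99.5, 102, 100.2, 103, 101.5, 104, 105.5, 104.8])
entries, exits = signals(prices, band=0.01)
print("entry bars:", np.flatnonzero(entries))
print("exit bars: ", np.flatnonzero(exits))
```

In a live system the line would be recomputed on a rolling window and the signals passed through the additional limit-condition filters the paper describes before any order is placed.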

Keywords: algorithmic trading, automated trading systems, high-frequency trading, DAX Deutscher Aktienindex

Procedia PDF Downloads 130
1439 Development of a Tilt-Rotor Aircraft Model Using System Identification Technique

Authors: Ferdinando Montemari, Antonio Vitale, Nicola Genito, Giovanni Cuciniello

Abstract:

The introduction of tilt-rotor aircraft into the existing civilian air transportation system will provide beneficial effects due to the tilt-rotor's capability to combine the characteristics of a helicopter and a fixed-wing aircraft in one vehicle. The availability of reliable tilt-rotor simulation models supports the development of such vehicles. Indeed, simulation models are required to design automatic control systems that increase safety, reduce the pilot's workload and stress, and ensure the optimal aircraft configuration with respect to flight envelope limits, especially during the most critical flight phases, such as conversion from helicopter to aircraft mode and vice versa. This article presents a process for building a simplified tilt-rotor simulation model derived from the analysis of flight data. The model aims to reproduce the complex dynamics of the tilt-rotor during the in-flight conversion phase. It uses a set of scheduled linear transfer functions to relate the autopilot reference inputs to the most relevant rigid-body state variables. The model also computes information about the rotor flapping dynamics, which is useful for evaluating the aircraft control margin in terms of rotor collective and cyclic commands. The rotor flapping model is derived through a mixed theoretical-empirical approach which includes physical analytical equations (applicable to the helicopter configuration) and parametric corrective functions; the latter are introduced to best fit the actual rotor behavior and balance the differences existing between the helicopter and the tilt-rotor in flight. Time-domain system identification from flight data is exploited to optimize the model structure and to estimate the model parameters. The presented model-building process was applied to simulated flight data of the ERICA tilt-rotor, generated using a high-fidelity simulation model implemented in the FlightLab environment.
The validation of the obtained model was very satisfactory, confirming the validity of the proposed approach.
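The time-domain identification step can be illustrated at its simplest: fitting one first-order discrete transfer function by least squares. The system, model order, and data below are invented stand-ins for the scheduled transfer functions of the tilt-rotor model:

```python
import numpy as np

# Identify a first-order discrete transfer function y[k] = a*y[k-1] + b*u[k-1]
# from input/output data -- the simplest instance of time-domain least-squares
# identification of a linear transfer function.
rng = np.random.default_rng(1)
a_true, b_true = 0.9, 0.5

u = rng.standard_normal(500)          # excitation (e.g. autopilot reference input)
y = np.zeros(500)
for k in range(1, 500):               # simulated (noise-free) response
    y[k] = a_true * y[k-1] + b_true * u[k-1]

# Least-squares estimate from the regressor matrix [y[k-1], u[k-1]]
Phi = np.column_stack([y[:-1], u[:-1]])
a_hat, b_hat = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
print(f"a = {a_hat:.3f} (true {a_true}), b = {b_hat:.3f} (true {b_true})")
```

The tilt-rotor model repeats this kind of fit for each scheduled operating point (nacelle angle, airspeed) and for higher-order structures; here the data are noise-free, so the estimates recover the true parameters exactly.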

Keywords: flapping dynamics, flight dynamics, system identification, tilt-rotor modeling and simulation

Procedia PDF Downloads 199
1438 Poly (3,4-Ethylenedioxythiophene) Prepared by Vapor Phase Polymerization for Stimuli-Responsive Ion-Exchange Drug Delivery

Authors: M. Naveed Yasin, Robert Brooke, Andrew Chan, Geoffrey I. N. Waterhouse, Drew Evans, Darren Svirskis, Ilva D. Rupenthal

Abstract:

Poly(3,4-ethylenedioxythiophene) (PEDOT) is a robust conducting polymer (CP) exhibiting high conductivity and environmental stability. It can be synthesized by chemical, electrochemical or vapour phase polymerization (VPP). Dexamethasone sodium phosphate (dexP) is an anionic drug molecule which has previously been loaded onto PEDOT as a dopant via electrochemical polymerization; however, this technique requires conductive surfaces from which polymerization is initiated. VPP, on the other hand, produces highly organized, biocompatible CP structures, and polymerization can be achieved on a range of surfaces with a relatively straightforward scale-up process. Following VPP of PEDOT, dexP can be loaded and subsequently released via ion-exchange. This study aimed at preparing and characterizing both non-porous and porous VPP PEDOT structures, including examining drug loading and release via ion-exchange. Porous PEDOT structures were prepared by first depositing a sacrificial polystyrene (PS) colloidal template on a substrate, heat-curing this deposit, and then spin coating it with the oxidant solution (iron tosylate) at 1500 rpm for 20 sec. VPP of both porous and non-porous PEDOT was achieved by exposure to monomer vapours in a vacuum oven at 40 mbar and 40 °C for 3 hrs. Non-porous structures were prepared similarly on the same substrate but without any sacrificial template. Surface morphology, composition and electrochemical behaviour were then characterized by atomic force microscopy (AFM), scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS) and cyclic voltammetry (CV), respectively. Drug loading was achieved by 50 CV cycles in a 0.1 M dexP aqueous solution. For drug release, each sample was exposed to 20 mL of phosphate-buffered saline (PBS) in a water bath operating at 37 °C and 100 rpm. The film was stimulated (a continuous ±1 V pulse at 0.5 Hz for 17 min) while immersed in PBS.
Samples were collected at 1, 2, 6, 23, 24, 26 and 27 hrs and analysed for dexP by high performance liquid chromatography (HPLC, Agilent 1200 series). AFM and SEM revealed the honeycomb nature of the prepared porous structures. XPS data showed the elemental composition of the dexP-loaded film surface, which corresponded well with that of PEDOT and indicated approximately one dexP molecule per three EDOT monomer units. The reproducible electroactive nature of the films was shown by several cycles of reduction and oxidation via CV. The drug release data confirmed successful loading via ion-exchange, with stimulated porous and non-porous structures exhibiting a proof-of-concept burst release upon application of an electrical stimulus. A similar drug release pattern was observed for porous and non-porous structures, with no statistically significant difference, possibly due to the thin nature of these structures. To our knowledge, this is the first report to explore the potential of VPP-prepared PEDOT for stimuli-responsive drug delivery via ion-exchange. The produced porous structures were ordered and highly porous, as indicated by AFM and SEM, and exhibited good electroactivity, as shown by CV. Future work will investigate porous structures as nano-reservoirs to increase drug loading while sealing these structures to minimize spontaneous drug leakage.

Keywords: PEDOT for ion-exchange drug delivery, stimuli-responsive drug delivery, template based porous PEDOT structures, vapour phase polymerization of PEDOT

Procedia PDF Downloads 231
1437 Risk-Sharing Financing of Islamic Banks: Better Shielded against Interest Rate Risk

Authors: Mirzet SeHo, Alaa Alaabed, Mansur Masih

Abstract:

In theory, risk-sharing-based financing (RSF) is considered a cornerstone of Islamic finance and is argued to render Islamic banks more resilient to shocks. In practice, however, this feature of Islamic financial products is almost negligible: instead, debt-based instruments with conventional-like features have overwhelmed the nascent industry. In addition, the framework of present-day economic, regulatory and financial reality inevitably exposes Islamic banks in dual banking systems to the problems of conventional banks, including, but not limited to, interest rate risk. Empirical evidence has thus far confirmed such exposures, despite Islamic banks’ interest-free operations. This study applies system GMM to model the determinants of RSF and finds that RSF is insensitive to changes in interest rates. Hence, our results provide support to the “stability” view of risk-sharing-based financing. This suggests RSF as the way forward for risk management at Islamic banks in the absence of widely acceptable Shariah-compliant hedging instruments. Further support to the stability view is given by evidence of counter-cyclicality: unlike debt-based lending, which inflates artificial asset bubbles through credit expansion during the upswing of business cycles, RSF is negatively related to GDP growth. Our results also imply a significantly strong relationship between risk-sharing deposits and RSF. However, the pass-through of these deposits to RSF is economically low: only about 40% of risk-sharing deposits are channeled to risk-sharing financing. This raises questions about the validity of the industry’s claim that depositors accustomed to conventional banking shy away from risk sharing, and signals potential for better balance sheet management at Islamic banks. Overall, our findings suggest that, on the one hand, Islamic banks can gain ‘independence’ from conventional banks and interest rates through risk-sharing products, the potential for which is enormous.
On the other hand, RSF could enable policy makers to improve systemic stability and restrain excessive credit expansion through its countercyclical features.

Keywords: Islamic banks, risk-sharing, financing, interest rate, dynamic system GMM

Procedia PDF Downloads 316
1436 Asymmetric Price Transmission in Rice: A Regional Analysis in Peru

Authors: Renzo Munoz-Najar, Cristina Wong, Daniel De La Torre Ugarte

Abstract:

The literature on price transmission usually deals with asymmetries related to different commodities and/or to the short and long term; the role of regional differences within a country, and their relationship with asymmetries, is usually left out. This paper looks at the asymmetry in the transmission of rice prices from the international price to farm-gate prices in four northern regions of Peru, namely San Martín, Piura, Lambayeque and La Libertad, for the period 2001-2016. The relevance of the study lies in its ability to assess the need for policies aimed at improving the competitiveness of the market and ensuring that producers benefit. Differences in planting and harvesting dates, as well as in geographic location, justify the hypothesis that price transmission asymmetries differ between these regions. Those differences are due to at least three factors: geography, infrastructure development, and distribution systems. For the analysis, the Threshold Vector Error Correction Model and the Threshold Vector Autoregressive Model are used; both models capture asymmetric effects in price adjustments. In this way, the paper seeks to verify that farm-gate prices react more strongly to falls than to increases in international prices, due to the high bargaining power of intermediaries. The results suggest that price transmission is significant, and asymmetric, only for Lambayeque and La Libertad. San Martín and Piura, despite being the main rice-producing regions of Peru, do not present a significant transmission of international prices; a high degree of self-sufficient supply might be at the center of the logic for this result.
An additional finding concerns the short-term adjustment to international prices, which is faster in La Libertad than in Lambayeque; this could be explained by the greater bargaining power of intermediaries in the latter region, due to the greater technological development of its mills.
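The asymmetry being tested can be sketched with a minimal error-correction model in which positive and negative deviations from the international price adjust at different speeds. All data and coefficients below are synthetic, and the actual TVECM estimation involves threshold search and cointegration testing not shown here:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 400
p_int = np.cumsum(rng.normal(0, 1, T))   # international price (random walk)
alpha_pos, alpha_neg = -0.1, -0.4        # slower correction of positive deviations

# Simulate a farm-gate price that error-corrects asymmetrically
p_farm = np.zeros(T)
for t in range(1, T):
    ect = p_farm[t-1] - p_int[t-1]       # error-correction term (deviation)
    alpha = alpha_pos if ect > 0 else alpha_neg
    p_farm[t] = p_farm[t-1] + alpha * ect + rng.normal(0, 0.1)

# Estimate separate adjustment speeds by splitting the ECT at a zero threshold
ect = p_farm[:-1] - p_int[:-1]
dp = np.diff(p_farm)
X = np.column_stack([np.where(ect > 0, ect, 0.0), np.where(ect <= 0, ect, 0.0)])
a_pos, a_neg = np.linalg.lstsq(X, dp, rcond=None)[0]
print(f"adjustment speed, positive deviations: {a_pos:.2f}, negative: {a_neg:.2f}")
```

A larger magnitude for the negative-deviation coefficient is the signature of farm-gate prices reacting more strongly to falls in the international price than to increases.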

Keywords: asymmetric price transmission, rice prices, price transmission, regional economics

Procedia PDF Downloads 228
1435 Four-Electron Auger Process for Hollow Ions

Authors: Shahin A. Abdel-Naby, James P. Colgan, Michael S. Pindzola

Abstract:

A time-dependent close-coupling method is developed to calculate total, double, and triple autoionization rates for hollow atomic ions of four-electron systems. This work was motivated by recent observations of the four-electron Auger process in near K-edge photoionization of C+ ions. The time-dependent close-coupled equations are solved using lattice techniques to obtain a discrete representation of the radial wave functions and all operators on a four-dimensional grid with uniform spacing. Initial excited states are obtained by relaxation of the Schrödinger equation in imaginary time, using a Schmidt orthogonalization method involving interior subshells. The radial wave function grids are partitioned over the cores of a massively parallel computer, which is essential due to the large memory required to store the coupled wave functions and the long run times needed to reach convergence of the ionization process. Total, double, and triple autoionization rates are obtained by propagating the time-dependent close-coupled equations in real time, using integration over bound and continuum single-particle states. These states are generated by matrix diagonalization of one-electron Hamiltonians. The total autoionization rate for each L excited state is found to be slightly above the single autoionization rate for the excited configuration obtained using configuration-average distorted-wave theory. As expected, we find the double and triple autoionization rates to be much smaller than the total autoionization rates. Future work can extend this approach to study electron-impact triple ionization of atoms or ions. The work was supported in part by grants from the American University of Sharjah and the US Department of Energy. Computational work was carried out at the National Energy Research Scientific Computing Center (NERSC) in Berkeley, California, USA.
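The imaginary-time relaxation step mentioned above can be illustrated on a toy problem. The sketch below relaxes a one-electron, one-dimensional harmonic-oscillator wave function on a uniform lattice; it is not the four-electron close-coupled system, and it omits the Schmidt orthogonalization needed to reach excited states, but it shows the core mechanism: propagating dψ/dτ = -Hψ with repeated renormalization drives any trial state toward the lowest eigenstate of H.

```python
import numpy as np

# Uniform lattice for a 1D harmonic oscillator (atomic units)
n, L = 401, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
V = 0.5 * x**2

psi = np.exp(-(x - 1.0) ** 2)             # arbitrary trial wave function
psi /= np.sqrt(np.sum(psi**2) * dx)

dtau = 0.5 * dx**2                        # stable step for the explicit scheme
for _ in range(20000):
    lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
    psi += dtau * (0.5 * lap - V * psi)   # psi <- psi - dtau * H psi
    psi /= np.sqrt(np.sum(psi**2) * dx)   # renormalize each step

# Rayleigh quotient gives the ground-state energy; exact value is 0.5 hartree
lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
E = np.sum(psi * (-0.5 * lap + V * psi)) * dx
print(f"E0 ~ {E:.4f}")
```

Because every eigencomponent of the trial state decays as exp(-E τ), the lowest-energy component dominates after sufficient imaginary time; relaxing to higher states, as the abstract describes, requires projecting out the already-converged lower states at each step.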

Keywords: hollow atoms, autoionization, Auger rates, time-dependent close-coupling method

Procedia PDF Downloads 153
1433 pH-Triggered Cationic Solid Lipid Nanoparticles Mitigated Colitis in Mice

Authors: Muhammad Naeem, Juho Lee, Jin-Wook Yoo

Abstract:

In this study, we hypothesized that prolonged gastrointestinal transit at the inflamed colon, conferred by a pH-triggered mucoadhesive smart nanoparticulate drug delivery system, aids in achieving selective and sustained drug levels within the inflamed colon for the treatment of ulcerative colitis. We developed budesonide-loaded pH-sensitive charge-reversal solid lipid nanoparticles (SLNs) using a hot homogenization method. Polyethylenimine (PEI) was used to render the SLNs cationic (PEI-SLNs), and Eudragit S100 (ES) was coated on the PEI-SLNs to obtain pH-triggered charge-reversal SLNs (ES-PEI-SLNs). The therapeutic potential of the prepared SLN formulations was evaluated in ulcerative colitis in mice. Transmission electron microscopy, zeta size, and zeta potential data showed the successful formation of the SLN formulations. SLNs and PEI-SLNs showed burst drug release in acidic pH conditions mimicking the stomach and early small intestine environment, which limits their application as oral delivery systems. However, ES-PEI-SLNs prevented burst drug release in acidic conditions and showed sustained release at colonic pH. Most importantly, the surface charge of ES-PEI-SLNs switched from negative to positive in colonic conditions through pH-triggered removal of the ES coating, and the particles accumulated selectively in the inflamed colon. Furthermore, the charge-reversal ES-PEI-SLNs showed superior mitigation of dextran sulfate sodium (DSS)-induced acute colitis in mice compared with the SLN- and PEI-SLN-treated groups. Moreover, histopathological analysis of distal colon sections stained with hematoxylin/eosin and E-cadherin immunostaining revealed attenuated inflammation in the ES-PEI-SLNs-treated group. We also found that ES-PEI-SLNs markedly reduced the myeloperoxidase level and the expression of TNF-alpha in colon tissue. Our results suggest that the pH-triggered charge-reversal SLNs presented in this study would be a promising approach for ulcerative colitis therapy.

Keywords: solid lipid nanoparticles, stimuli-triggered charge-reversal, ulcerative colitis, methacrylate copolymer, budesonide

Procedia PDF Downloads 248
1433 The Feminine Disruption of Speech and Refounding of Discourse: Kristeva’s Semiotic Chora and Psychoanalysis

Authors: Kevin Klein-Cardeña

Abstract:

For Julia Kristeva, contra Lacan, the instinctive body refuses to go away within discourse. Neither is the pre-Oedipal stage of maternal fusion vanquished by the emergence of language and, with it, the law of the father. On the contrary, Kristeva argues, the pre-symbolic ambivalently haunts the society of speech, simultaneously animating and threatening the very foundations of signification. Kristeva invents the term “the semiotic” to refer to this continual breaking-through of the material unconscious onto the scene of meaning. This presentation examines Kristeva’s semiotic as a theoretical gesture that is itself a disruption of discourse, re-presenting the ‘return of the repressed’ body in theory: the breaking-through of the unconscious onto the science of meaning. Faced with linguistic theories concerned with abstract sign-systems, as well as Lacanian doctrine privileging the linguistic sign unequivocally over the bodily drive, Kristeva’s theoretical corpus issues the message of a psychic remainder that disrupts with a view toward replenishing theoretical accounts of language and sense. Reviewing the semiotic’s challenge across these two levels (the sense and the science of language), the presentation suggests that Kristeva’s offerings constitute a coherent gestalt, and provides an account of the feminist nature of her dual intervention. In contrast to other feminist critiques, Kristeva’s gesture hinges on its restoration of the maternal contribution to subjectivity. Against the backdrop of ‘phallogocentric’ and ‘necrophilic’ theories that strip language of a subject and strip the subject of a body, Kristeva recasts linguistic study through a metaphor of life and birthing. Yet the semiotic fragments the subject it produces, dialoguing with an unconscious curtailed by, but also exceeding, the symbolic order of signification. Linguistics, too, becomes fragmented in the same measure as it is more meaningfully renewed by its confrontation with the semiotic body.
It is Kristeva’s own body that issues this challenge, on both sides of the boundary between the theory and the theorized. The semiotic becomes comprehensible as a project unified by its concern to disrupt and rehabilitate language, the subject, and the scholarly discourses that treat them.

Keywords: Julia Kristeva, the semiotic, French feminism, psychoanalytic theory, linguistics

Procedia PDF Downloads 74
1432 The Effect of Expanding the Early Pregnancy Assessment Clinic and COVID-19 on Emergency Department and Urgent Care Visits for Early Pregnancy Bleeding

Authors: Harley Bray, Helen Pymar, Michelle Liu, Chau Pham, Tomislav Jelic, Fran Mulhall

Abstract:

Background: Our study assesses the impact of the COVID-19 pandemic on early pregnancy assessment clinic (EPAC) referrals and the use of virtual consultation in Winnipeg, Manitoba. Our clinic expanded to accept referrals from all Winnipeg Emergency Department (ED)/Urgent Care (UC) sites from November 2019 to April 2020. By May 2020, the COVID-19 pandemic had reached Manitoba, and EPAC virtual care was expanded by performing hCG testing remotely and reviewing blood work and ED/UC ultrasound results by phone. Methods: Emergency Department Information Systems (EDIS) and EPAC data were reviewed for ED/UC visits for pregnancy <20 weeks and vaginal bleeding over one year pre-COVID (March 12, 2019 to March 11, 2020) and during COVID (March 12, 2020, the date of the first case in Manitoba, to March 11, 2021). Results: There were fewer patient visits for vaginal bleeding or pregnancy of <20 weeks (4264 vs. 5180), diagnoses of threatened abortion (1895 vs. 2283), and ectopic pregnancy (78 vs. 97) during COVID compared with pre-COVID, respectively. ICD-10 codes were missing in 849 (20%) and 1183 (23%) of patients during COVID and pre-COVID, respectively. Wait times for all patient visits improved during COVID compared to pre-COVID (5.1 ± 4.4 hours vs. 5.5 ± 3.8 hours), more patients received obstetrical ultrasounds (761 (18%) vs. 787 (15%)), and fewer patients returned within 30 days (1360 (32%) vs. 1848 (36%); p<0.01). EPAC saw 708 patients (218; 31% new ED/UC) during COVID compared to 552 (37; 7% new ED/UC) pre-COVID. Fewer operative interventions for pregnancy loss (346 vs. 456) and retained products (236 vs. 272) were noted, while surgeries to treat ectopic pregnancy (106 vs. 113) remained stable over the study interval. Conclusion: Accurate identification of pregnancy complications was difficult, with over 20% of visits missing ICD-10 diagnostic codes. There were fewer ED/UC visits and less surgical management for threatened abortion during COVID-19, but operative management of ectopic pregnancy remained unchanged.
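The reported drop in 30-day returns can be checked with a standard two-proportion z-test. The sketch below takes the total visit counts quoted in the abstract as the denominators (an assumption, since the abstract does not state which test the authors used):

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

# 30-day returns: 1360/4264 during COVID vs. 1848/5180 pre-COVID
z, p = two_proportion_z(1360, 4264, 1848, 5180)
print(f"z = {z:.2f}, p = {p:.4g}")
```

The resulting p-value is well below 0.01, consistent with the significance level reported in the abstract.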

Keywords: early pregnancy, ultrasound, COVID-19, obstetrics

Procedia PDF Downloads 20
1431 The Admissibility of Evidence Obtained in Contravention of the Right to Privacy in a Criminal Trial: A Comparative Study of Poland and Germany

Authors: Konstancja Syller

Abstract:

International law and European regulations remain largely silent about the admissibility of evidence obtained illegally in a criminal trial. Although Article 6 of the European Convention on Human Rights guarantees the right to a fair trial, it does not directly regulate the procedural status of specific sources or means of proof. Therefore, it is the preserve of national legislation and national law enforcement authorities to decide on this matter. In most countries, and especially in Germany and Poland, a rather complex normative approach to the issue of proof obtained in violation of the right to privacy is evident, which in practice gives rise to many interpretive doubts. In Germany, jurisprudence has a significant impact in this area. The Constitutional Court and the Supreme Court of Germany protect the right to privacy quite firmly: they have ruled evidence in the form of a diary or a journal inadmissible, as a measure protecting a constitutionally guaranteed right. At the same time, however, the Supreme Court is less settled on whether materials collected as a result of an inspection, call recordings, or the bugging of premises carried out in breach of the law can be used in a criminal trial. Generally speaking, German courts attach crucial importance to the principle of truth and the principle of proportionality, which together enable a judgement to be made as to the possibility of using evidence obtained unlawfully. By comparison, in Poland there is almost no jurisprudence of the Constitutional Tribunal relating directly to the issue of illegal evidence. This is somewhat surprising, considering that the doctrinal analysis of the admissibility of such proof in a criminal trial is performed in relation to standards resulting from the Constitution.
Moreover, a crucial de lege lata provision, which allows the use of proof obtained in infringement of the rules of criminal procedure or through a prohibited act, is widely criticised within the legal profession, and therefore many courts give it their own interpretation, at odds with the legislator’s intentions. The comparison of the two civil law systems’ standards regarding the admissibility of evidence obtained in contravention of the right to privacy in a criminal trial, taking into account EU legislation and judicature, is the conclusive aim of this article.

Keywords: criminal trial, evidence, Germany, right to privacy, Poland

Procedia PDF Downloads 156
1430 The Impact of the COVID-19 on the Cybercrimes in Hungary and the Possible Solutions for Prevention

Authors: László Schmidt

Abstract:

Technological and digital innovation is constantly and dynamically evolving, which poses an enormous challenge to both lawmaking and law enforcement. To lawmaking, because artificial intelligence permeates many areas of people’s daily lives that the legislator must regulate; one need only consider how challenging it is to regulate, e.g., self-driving cars, taxis, trucks, etc., not to mention cryptocurrencies and ChatGPT, the use of which also requires legislative intervention. Artificial intelligence also poses an extraordinary challenge to law enforcement. In criminal cases, police and prosecutors can make great use of AI in investigations, e.g., in forensics, DNA sampling, reconstruction, identification, etc. It can also be of great help in the detection of crimes committed in cyberspace. Cybercrime, on the one hand, can be viewed as a new type of crime that can only be committed with the help of information systems and that has a specific protected legal object, such as an information system or data. On the other hand, it also includes traditional crimes that are much easier to commit with the help of new tools. According to Section 375(1) of the Hungarian Criminal Code, any person who, for unlawful financial gain, introduces data into an information system, or alters or deletes data processed therein, or renders data inaccessible, or otherwise interferes with the functioning of the information system, and thereby causes damage, is guilty of a felony punishable by imprisonment not exceeding three years. The COVID-19 epidemic has had a significant impact on our daily lives, and it was no different in the world of crime. With people staying at home for months, schools, restaurants, theatres, and cinemas closed, and travel suspended, criminals had to change their ways, and they committed crimes online in even greater numbers than before.
These crimes were very diverse, ranging from false fundraising, the collection and misuse of personal data, and extortion to fraud on various online marketplaces. The most vulnerable age groups (minors and the elderly) could be made more aware, and prevented from becoming victims of this type of crime, through targeted programmes. The aim of the study is to present Hungarian judicial practice in relation to cybercrime and possible preventive solutions.

Keywords: cybercrime, COVID-19, Hungary, criminal law

Procedia PDF Downloads 60
1429 A Comprehensive Planning Model for Amalgamation of Intensification and Green Infrastructure

Authors: Sara Saboonian, Pierre Filion

Abstract:

The dispersed-suburban model has been the dominant one across North America for the past seventy years, characterized by automobile reliance, low density, and land-use specialization. Two planning models have emerged as possible alternatives to address the ills inflicted by this development pattern. First, there is intensification, which promotes efficient infrastructure by connecting high-density, multi-functional, and walkable nodes with public transit services within the suburban landscape. Second is green infrastructure, which provides environmental health and human well-being by preserving and restoring ecosystem services. This research studies the incompatibilities between, and the possibility of amalgamating, these two alternatives, in an attempt to develop a comprehensive alternative to the suburban model that advocates density, multi-functionality, and transit- and pedestrian-conduciveness, with measures capable of mitigating the adverse environmental impacts of compactness. The research investigates three Canadian urban growth centres where intensification is the current planning practice and awareness of green infrastructure benefits is on the rise. These three centres are contrasted by their development stage, the presence or absence of protected natural land, their environmental approach, and their adverse environmental consequences according to the planning canons of different periods. The methods include reviewing the literature on green infrastructure planning, critiquing the Ontario provincial plans for intensification, surveying residents’ preferences for alternative models, and interviewing officials who deal with local planning for the centres. Moreover, the research draws on the debates between New Urbanism and Landscape/Ecological Urbanism. The case studies expose the difficulties in creating urban growth centres that accommodate green infrastructure while adhering to intensification principles.
First, the dominant status of intensification and the obstacles confronting it have monopolized planners’ concerns. Second, the tension between green infrastructure and intensification explains the absence of green infrastructure typologies that correspond to intensification-compatible forms and dynamics. Finally, the lack of highlighted socio-economic benefits of green infrastructure reduces residents’ participation. The results also provide insight into the predominant urbanization theories, New Urbanism and Landscape/Ecological Urbanism. Understanding the political, planning, and ecological dynamics of such a blending requires dexterous, context-specific planning. Findings suggest the influence of the following factors on amalgamating intensification and green infrastructure. First, producing ecosystem-services-based justifications for green infrastructure development in the intensification context provides an expert-driven backbone for implementation programs; this knowledge base should be translated effectively to reach different urban stakeholders. Moreover, given the limited greenfields in intensified areas, the spatial distribution and development of multi-level corridors, such as pedestrian-hospitable settings and transportation networks, alongside green infrastructure measures are required. Finally, to ensure the long-term integrity of implemented green infrastructure measures, significant investment in public engagement and education, as well as clarification of management responsibilities, is essential.

Keywords: ecosystem services, green infrastructure, intensification, planning

Procedia PDF Downloads 355