Search results for: process approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25004

374 Widely Diversified Macroeconomies in the Super-Long Run Casts a Doubt on Path-Independent Equilibrium Growth Model

Authors: Ichiro Takahashi

Abstract:

One of the major assumptions of mainstream macroeconomics is the path independence of capital stock. This paper challenges this assumption by employing an agent-based approach. The simulation results showed the existence of multiple "quasi-steady state" equilibria of the capital stock, which may cast serious doubt on the validity of the assumption. The finding would give a better understanding of many phenomena that involve hysteresis, including the causes of poverty. The "market-clearing view" has been widely shared among major schools of macroeconomics. They understand that the capital stock, the labor force, and technology determine the "full-employment" equilibrium growth path, and that demand/supply shocks can move the economy away from the path only temporarily: the dichotomy between short-run business cycles and the long-run equilibrium path. The view thus implicitly assumes the long-run capital stock to be independent of how the economy has evolved. In contrast, "Old Keynesians" have recognized fluctuations in output as arising largely from fluctuations in real aggregate demand. It is then an interesting question to ask whether an agent-based macroeconomic model, which is known to exhibit path dependence, can generate multiple full-employment equilibrium trajectories of the capital stock in the super-long run. If the answer is yes, the equilibrium level of capital stock, an important supply-side factor, would no longer be independent of the business cycle phenomenon. This paper attempts to answer the above question by using the agent-based macroeconomic model developed by Takahashi and Okada (2010). The model serves this purpose well because it has neither population growth nor technological progress. The objective of the paper is twofold: (1) to explore the causes of long-term business cycles, and (2) to examine the super-long-run behavior of the capital stock of full-employment economies.
(1) The simulated behaviors of key macroeconomic variables such as output, employment, and real wages showed widely diversified macroeconomies. They were often remarkably stable but exhibited both short-term and long-term fluctuations. The long-term fluctuations occur through two adjustments: the quantity adjustment and the relative cost adjustment of capital stock. The first is obvious and assumed by many business cycle theorists. As for the second, reduced aggregate demand lowers prices, which raises real wages, thereby decreasing the relative cost of capital stock with respect to labor. (2) The long-term business cycles/fluctuations were intertwined with hysteresis in real wages, interest rates, and investment. In particular, a sequence of simulation runs with a super-long simulation period generated a wide range of perfectly stable paths, many of which achieved full employment: all the macroeconomic trajectories, including capital stock, output, and employment, were perfectly horizontal over 100,000 periods. Moreover, the full-employment level of capital stock was influenced by the history of unemployment, which was itself path-dependent. Thus, an experience of severe unemployment in the past kept the real wage low, which discouraged relatively costly investment in capital stock. Meanwhile, a history of good performance sometimes brought about a low capital stock, due to a high interest rate that was consistent with strong investment.
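The hysteresis mechanism described above can be illustrated with a deliberately minimal sketch. This is not the Takahashi-Okada agent-based model; all parameters and functional forms here are invented for illustration. The point it makes is the paper's central one: a purely temporary unemployment episode can permanently lower the real wage and, through it, the long-run capital stock.

```python
def simulate(shock_at_start: bool, periods: int = 2000) -> float:
    """Toy economy in which the real wage carries the memory of past
    unemployment (hysteresis) and the desired capital stock tracks the wage.
    All coefficients are illustrative, not estimated."""
    capital, real_wage = 100.0, 1.0
    natural_unemployment = 0.05
    for t in range(periods):
        # a purely temporary demand shock confined to the first 50 periods
        unemployment = 0.15 if (shock_at_start and t < 50) else natural_unemployment
        # the wage ratchets down under excess unemployment and, since
        # unemployment never falls below its natural rate here, never recovers
        real_wage = max(0.5, real_wage - 0.01 * (unemployment - natural_unemployment))
        # cheap labor makes capital relatively less worth installing
        target_capital = 100.0 * real_wage
        capital += 0.05 * (target_capital - capital)  # partial adjustment
    return capital

baseline = simulate(False)  # never shocked: capital settles near 100
scarred = simulate(True)    # shocked only in periods 0-49: capital settles near 95
```

Both economies face identical conditions after period 50, yet settle on different quasi-steady states; this permanent mark of a transitory shock is the path dependence at issue.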

Keywords: agent-based macroeconomic model, business cycle, hysteresis, stability

Procedia PDF Downloads 186
373 Use of Sewage Sludge Ash as Partial Cement Replacement in the Production of Mortars

Authors: Domagoj Nakic, Drazen Vouk, Nina Stirmer, Mario Siljeg, Ana Baricevic

Abstract:

Wastewater treatment processes generate significant quantities of sewage sludge that need to be adequately treated and disposed of. In many EU countries, the problem of adequate disposal of sewage sludge has not been solved, nor is it governed by uniform rules, instructions, or guidelines. Disposal of sewage sludge is important not only for satisfying regulations but also for choosing the optimal wastewater and sludge treatment technology. Among the solutions that seem reasonable, recycling of sewage sludge and its byproducts is the top recommendation. Within the framework of sustainable development, recycling of sludge almost completely closes the cycle of wastewater treatment, in which only negligible amounts of waste that require landfilling are generated. In many EU countries, significant amounts of sewage sludge are incinerated, resulting in a new byproduct in the form of ash. Sewage sludge ash has three to five times smaller volume than stabilized and dehydrated sludge, but it also requires further management. The combustion process also destroys hazardous organic components in the sludge and minimizes unpleasant odors. The basic objective of the presented research is to explore the possibilities of recycling sewage sludge ash as a supplementary cementitious material. This is because the main oxides present in the sewage sludge ash (SiO2, Al2O3 and CaO) are similar to those in cement, so the ash can be considered a latent hydraulic and pozzolanic material. Physical and chemical characteristics of ashes, generated by sludge collected from different wastewater treatment plants and incinerated in laboratory conditions at different temperatures, are investigated, since this is a prerequisite for the subsequent recycling and eventual use of the ash in other industries.
Research was carried out by replacing up to 20% of cement by mass in cement mortar mixes with the different obtained ashes and examining the characteristics of the resulting mixes in the fresh and hardened state. The mixtures with the highest ash content (20%) showed an average drop in workability of about 15%, which is attributed to the increased water demand when ash was used. Although some mixes containing added ash showed compressive and flexural strengths equivalent to those of the reference mixes, generally a slight decrease in strength was observed. However, it is important to point out that the compressive strengths always remained above 85% of the reference mix, while flexural strengths remained above 75%. The ecological impact of innovative construction products containing sewage sludge ash was determined by analyzing the leaching concentrations of heavy metals. Results demonstrate that sewage sludge ash can satisfy technical and environmental criteria for use in cementitious materials, which represents a new recycling application for an increasingly important waste material that is normally landfilled. Particular emphasis is placed on linking the composition of the generated ashes, depending on their origin and the applied treatment processes (stage of wastewater treatment, sludge treatment technology, incineration temperature), with the characteristics of the final products. Acknowledgement: This work has been fully supported by Croatian Science Foundation under the project '7927 - Reuse of sewage sludge in concrete industry - from infrastructure to innovative construction products'.

Keywords: cement mortar, recycling, sewage sludge ash, sludge disposal

Procedia PDF Downloads 227
372 Educational Knowledge Transfer in Indigenous Mexican Areas Using Cloud Computing

Authors: L. R. Valencia Pérez, J. M. Peña Aguilar, A. Lamadrid Álvarez, A. Pastrana Palma, H. F. Valencia Pérez, M. Vivanco Vargas

Abstract:

This work proposes a cooperation-competition ("coopetitive") approach that allows coordinated work among the Secretary of Public Education (SEP), the Autonomous University of Querétaro (UAQ) and government funds from the National Council for Science and Technology (CONACYT) or other international organizations on an overall knowledge transfer strategy with e-learning over the cloud, in which experts in junior high and high school education, working in multidisciplinary teams, perform analysis, evaluation, design, production, validation and knowledge transfer at large scale using a cloud computing platform. This allows teachers and students to have all the information required to ensure nationally standardized knowledge of topics such as mathematics, statistics, chemistry, history, ethics, civics, etc. The work will start with a pilot test in Spanish and, initially, in two indigenous languages, Otomí and Náhuatl. Otomí has more than 285,000 speakers in Querétaro and Mexico's central region. Náhuatl is the most widely spoken indigenous language in Mexico, with more than 1,550,000 speakers. Phase one of the project takes into account negotiations with indigenous tribes from different regions and the information and communication technologies needed to deliver the knowledge to the indigenous schools in their native language.
The methodology includes the following main milestones: identification of the indigenous areas where Otomí and Náhuatl are spoken, locating existing indigenous schools with the SEP, analysis and inventory of current school conditions, negotiation with tribe chiefs, analysis of the technological communication requirements to reach the indigenous communities, identification and inventory of local teachers' technology knowledge, selection of a pilot topic, analysis of current student competence under the traditional education system, identification of local translators, design of the e-learning platform, design of the multimedia resources and storage strategy for cloud computing, translation of the topic into both languages, indigenous teacher training, pilot test, course release, project follow-up, analysis of student requirements for the new technological platform, and definition of a new and improved proposal with greater reach in topics and regions. Phase one of the project is important in multiple ways: it proposes a working technological scheme and focuses on the cultural impact in Mexico, so that indigenous tribes can improve their knowledge of new forms of crop improvement, home storage technologies, proven home remedies for common diseases, and ways of preparing foods containing major nutrients; it also discloses strengths and weaknesses of each region, communicates through cloud computing platforms offering regional products, and opens communication spaces for inter-indigenous cultural exchange.

Keywords: Mexican indigenous tribes, education, knowledge transfer, cloud computing, Otomí, Náhuatl, language

Procedia PDF Downloads 374
371 The Effect of Ionic Liquid Anion Type on the Properties of TiO2 Particles

Authors: Marta Paszkiewicz, Justyna Łuczak, Martyna Marchelek, Adriana Zaleska-Medynska

Abstract:

In recent years, photocatalytic processes have been intensively investigated for the destruction of pollutants, hydrogen evolution, disinfection of water, air and surfaces, and the construction of self-cleaning materials (tiles, glass, fibres, etc.). Titanium dioxide (TiO2) is the most popular material used in heterogeneous photocatalysis due to its excellent properties, such as high stability, chemical inertness, non-toxicity and low cost. It is well known that the morphology and microstructure of TiO2 significantly influence the photocatalytic activity. These characteristics, as well as other physical and structural properties of photocatalysts, i.e., specific surface area or density of crystalline defects, can be controlled by the preparation route. In this regard, TiO2 particles can be obtained by sol-gel, hydrothermal and sonochemical methods, chemical vapour deposition and, alternatively, by ionothermal synthesis using ionic liquids (ILs). In TiO2 particle synthesis, ILs may act as a solvent, soft template, reagent, agent promoting reduction of the precursor, or particle stabilizer during synthesis of inorganic materials. In this work, the effect of the IL anion type on the morphology and photoactivity of TiO2 is presented. The preparation of TiO2 microparticles with spherical structure was successfully achieved by a solvothermal method, using tetra-tert-butyl orthotitanate (TBOT) as the precursor. The reaction process was assisted by the ionic liquids 1-butyl-3-methylimidazolium bromide [BMIM][Br], 1-butyl-3-methylimidazolium tetrafluoroborate [BMIM][BF4] and 1-butyl-3-methylimidazolium hexafluorophosphate [BMIM][PF6]. Various molar ratios of all ILs to TBOT (IL:TBOT) were chosen. For comparison, reference TiO2 was prepared using the same method without IL addition.
Scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), Brunauer-Emmett-Teller surface area analysis (BET), NCHS analysis, and FTIR spectroscopy were used to characterize the surface properties of the samples. The photocatalytic activity was investigated by means of phenol photodegradation in the aqueous phase as a model pollutant, as well as by the formation of hydroxyl radicals, based on detection of the fluorescent product of coumarin hydroxylation. The analysis results showed that the TiO2 microspheres had a spherical structure with diameters ranging from 1 to 6 µm. The TEM micrographs provided clear images of the samples, in which the particles consisted of inter-aggregated crystals. It could also be observed that the IL-assisted TiO2 microspheres are not hollow, which provides additional information about the possible formation mechanism. Application of the ILs results in an increase in the photocatalytic activity as well as the BET surface area of TiO2 as compared to pure TiO2. The results for the formation of 7-hydroxycoumarin indicated that the increased amount of ·OH produced at the surface of excited TiO2 for the TiO2_IL samples correlated well with the more efficient degradation of phenol. NCHS analysis showed that the ionic liquids remained on the TiO2 surface, confirming the structure-directing role of these compounds.

Keywords: heterogeneous photocatalysis, IL-assisted synthesis, ionic liquids, TiO2

Procedia PDF Downloads 246
370 Association between Polygenic Risk of Alzheimer's Dementia, Brain MRI and Cognition in UK Biobank

Authors: Rachana Tank, Donald M. Lyall, Kristin Flegal, Joey Ward, Jonathan Cavanagh

Abstract:

Alzheimer’s Research UK estimates that by 2050, 2 million individuals will be living with Late Onset Alzheimer’s Disease (LOAD). However, individuals experience considerable cognitive deficits and brain pathology over decades before reaching clinically diagnosable LOAD, and studies have used candidate gene studies, genome-wide association studies (GWAS) and polygenic risk (PGR) scores to identify high-risk individuals and potential pathways. This investigation aims to determine whether high genetic risk of LOAD is associated with worse brain MRI measures and cognitive performance in healthy older adults within the UK Biobank cohort. Previous studies investigating associations of PGR for LOAD with measures of MRI or cognitive functioning have focused on specific aspects of hippocampal structure, in relatively small sample sizes and with limited adjustment for confounders such as smoking. To our knowledge, both the sample size of this study and the discovery GWAS sample are larger than in previous studies. Genetic interaction between the loci showing the largest effects in GWAS has not been extensively studied, and it is known that APOE e4 poses the largest genetic risk of LOAD, with potential gene-gene and gene-environment interactions of e4; for this reason, we also analyse genetic interactions of PGR with the APOE e4 genotype. We hypothesise that high genetic loading, based on a polygenic risk score of 21 SNPs for LOAD, is associated with worse brain MRI and cognitive outcomes in healthy individuals within the UK Biobank cohort. Summary statistics from the Kunkle et al. GWAS meta-analysis (cases: n=30,344; controls: n=52,427) will be used to create polygenic risk scores based on 21 SNPs, and analyses will be carried out on N=37,000 participants in the UK Biobank. This will be the largest study to date investigating PGR of LOAD in relation to MRI. MRI outcome measures include white matter (WM) tracts and structural volumes.
Cognitive function measures include reaction time, pairs matching, trail making, digit symbol substitution and prospective memory. Interaction of the APOE e4 alleles and PGR will be analysed by including APOE status as an interaction term coded as 0, 1 or 2 e4 alleles. Models will be partially adjusted for age, BMI, sex, genotyping chip, smoking, depression and social deprivation. Preliminary results suggest the PGR score for LOAD is associated with decreased hippocampal volumes, including the hippocampal body (standardised beta = -0.04, P = 0.022) and tail (standardised beta = -0.037, P = 0.030), but not the hippocampal head. There were also associations of genetic risk with worse cognitive performance, including lower fluid intelligence (standardised beta = -0.08, P<0.01) and slower reaction time (standardised beta = 2.04, P<0.01). No genetic interactions were found between APOE e4 dose and PGR score for MRI or cognitive measures. The generalisability of these results is limited by selection bias within the UK Biobank, as participants are less likely to be obese, to smoke, or to be socioeconomically deprived, and have fewer self-reported health conditions than the general population. The lack of a unified approach or standardised method for calculating genetic risk scores may also be a limitation of these analyses. Further discussion and results are pending.
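For readers unfamiliar with polygenic risk scoring, the standard construction is a weighted sum of per-SNP risk-allele dosages (0, 1 or 2 copies) multiplied by the corresponding GWAS effect sizes (log odds ratios). A minimal sketch follows; the SNP identifiers and odds ratios are hypothetical placeholders, not the Kunkle et al. weights.

```python
import math

def polygenic_risk_score(allele_counts: dict, effect_sizes: dict) -> float:
    """Weighted-sum PGR: risk-allele dosage times log odds ratio, summed over SNPs."""
    assert set(allele_counts) == set(effect_sizes), "same SNP panel required"
    return sum(allele_counts[snp] * effect_sizes[snp] for snp in effect_sizes)

# hypothetical weights for 3 of the 21 SNPs (log of an assumed odds ratio)
weights = {
    "rs0000001": math.log(1.20),
    "rs0000002": math.log(0.90),
    "rs0000003": math.log(1.10),
}
# one participant's risk-allele dosages at the same SNPs
person = {"rs0000001": 2, "rs0000002": 1, "rs0000003": 0}
score = polygenic_risk_score(person, weights)
```

In practice such raw scores are standardised across the cohort before being entered into regression models, optionally with an APOE e4 dose term (0, 1 or 2 alleles) as an interaction, as described above.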

Keywords: Alzheimer's dementia, cognition, polygenic risk, MRI

Procedia PDF Downloads 93
369 Probability Modeling and Genetic Algorithms in Small Wind Turbine Design Optimization: Mentored Interdisciplinary Undergraduate Research at LaGuardia Community College

Authors: Marina Nechayeva, Malgorzata Marciniak, Vladimir Przhebelskiy, A. Dragutan, S. Lamichhane, S. Oikawa

Abstract:

This presentation is a progress report on a faculty-student research collaboration at CUNY LaGuardia Community College (LaGCC) aimed at designing a small horizontal axis wind turbine optimized for the wind patterns on the roof of our campus. Our project combines statistical and engineering research. Our wind modeling protocol is based upon a recent wind study by a faculty-student research group at MIT, and some of our blade design methods are adopted from a senior engineering project at CUNY City College. Our use of genetic algorithms has been inspired by the work on small wind turbine design by David Wood. We combine these diverse approaches in our interdisciplinary project in a way that has not been done before and improve upon certain techniques used by our predecessors. We employ several estimation methods to determine the best-fitting parametric probability distribution model for the local wind speed data, obtained by correlating short-term on-site measurements with a long-term time series at the nearby airport. The model serves as a foundation for engineering research that focuses on adapting and implementing genetic algorithms (GAs) for engineering optimization of the wind turbine design using Blade Element Momentum Theory. GAs are used to create new airfoils with desirable aerodynamic specifications. Small-scale models of the best-performing designs are 3D printed and tested in the wind tunnel to verify the accuracy of the relevant calculations. Genetic algorithms are applied to selected airfoils to determine the blade design (radial chord and pitch distribution) that would optimize the coefficient-of-power profile of the turbine. Our approach improves upon traditional blade design methods in that it lets us dispense with assumptions necessary to simplify the system of Blade Element Momentum Theory equations, thus resulting in more accurate aerodynamic performance calculations.
Furthermore, it enables us to design blades optimized for a whole range of wind speeds rather than a single value. Lastly, we improve upon known GA-based methods in that our algorithms are constructed to work with XFoil-generated airfoil data, which enables us to optimize blades using our own high glide ratio airfoil designs, without having to rely on available empirical data from existing airfoils, such as the NACA series. Beyond its immediate goal, this ongoing project serves as a training and selection platform for the CUNY Research Scholars Program (CRSP) through its annual Aerodynamics and Wind Energy Research Seminar (AWERS), an undergraduate summer research boot camp designed to introduce prospective researchers to the relevant theoretical background and methodology, get them up to speed with the current state of our research, and test their abilities and commitment to the program. Furthermore, several aspects of the research (e.g., writing code for 3D printing of airfoils) are adapted in the form of classroom research activities to enhance Calculus sequence instruction at LaGCC.
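The GA-based optimization loop described above can be sketched generically. The following is a minimal real-coded genetic algorithm (elitist selection, uniform crossover, Gaussian mutation), not the group's actual BEM-coupled implementation; the quadratic stand-in fitness merely takes the place of a Blade Element Momentum coefficient-of-power evaluation of a candidate chord/pitch distribution.

```python
import random

def genetic_optimize(fitness, bounds, pop_size=30, generations=60, seed=0):
    """Generic real-coded GA: keep the elite, breed children by uniform
    crossover of two elite parents, then mutate one gene per child.
    `bounds` gives (low, high) per design variable."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(x, lo, hi):
        return max(lo, min(hi, x))

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        elite = ranked[: pop_size // 5]          # keep the best 20% unchanged
        children = list(elite)
        while len(children) < pop_size:
            a, b = rng.sample(elite, 2)          # two elite parents
            child = [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]
            i = rng.randrange(dim)               # Gaussian mutation of one gene
            lo, hi = bounds[i]
            child[i] = clip(child[i] + rng.gauss(0.0, 0.1 * (hi - lo)), lo, hi)
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# stand-in fitness: smooth surrogate with its maximum at (0.5, 0.25)
best = genetic_optimize(
    lambda x: -((x[0] - 0.5) ** 2 + (x[1] - 0.25) ** 2),
    bounds=[(0.0, 1.0), (0.0, 1.0)],
)
```

In the real project, each "individual" would encode a blade's radial chord and pitch distribution, and the fitness call would run the BEM calculation over the site's wind speed distribution rather than evaluate a closed-form function.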

Keywords: engineering design optimization, genetic algorithms, horizontal axis wind turbine, wind modeling

Procedia PDF Downloads 197
368 3D Seismic Acquisition Challenges in the NW Ghadames Basin Libya, an Integrated Geophysical Sedimentological and Subsurface Studies Approach as a Solution

Authors: S. Sharma, Gaballa Aqeelah, Tawfig Alghbaili, Ali Elmessmari

Abstract:

There were abrupt discontinuities in the brute stack at the northernmost locations during the acquisition of 2D (2007) and 3D (2021) seismic data in the northwest region of the Ghadames Basin, Libya. In both campaigns, complete loss of fluid circulation was observed in these regions during up-hole drilling. Geophysics, sedimentology and shallow subsurface geology were integrated to investigate what was causing the seismic signal to disappear at shallow depths. The Upper Cretaceous Nalut Formation is the near-surface or surface formation in the studied area. It is distinguished by abnormally high resistivity in all the neighboring wells. The Nalut Formation in all the nearby wells from the present study, together with a previous outcrop study, suggests a lithology of dolomite and chert/flint in nodular or layered forms. There are also reports of karstic caverns, vugs and thick cracks, which together produce the high resistivity. Four up-hole samples analyzed for microfacies revealed a near-coastal to tidal environment. Monotonous, very porous, algal (Chara)-infested deposits up to 30 feet thick are seen in two up-hole sediment samples; these deposits are interpreted as scattered continental algal travertine mounds. Varying amounts of chert/flint, dolomite and calcite are confirmed by XRD analysis. The high resistivity of the Nalut Formation, which is thought to be connected to the sea-level drop that created the paleokarst layer, can be tracked regionally. It is abruptly overlain by a blanket marine transgressive deposit caused by rapid sea-level rise: a regional, relatively highly radioactive layer of argillaceous limestone. The examined area's proximity to the mountainous, E-W trending ridges of northern Libya facilitated recent freshwater circulation, which later enhanced cavern development and mineralization in the paleokarst layer.
Seismic signal loss at shallow depth is caused by the extremely heterogeneous mineralogy of the pore filling, or the lack thereof. The scattering effect of a shallow karstic layer on the seismic signal is well documented. Higher-velocity inflection points at shallower depths in the northern part and at deeper intervals in the southern part, in both cases at the Nalut level, demonstrate the layer's influence on the seismic signal. During the Permian-Carboniferous, the Ghadames Basin underwent uplift and extensive erosion, which left this karstic layer of the Nalut Formation at a shallow depth in the northern part of the studied area, weakening the acoustic signal, whereas in the southern part of the 3D acquisition area the Nalut Formation remained at a deeper interval without affecting the seismic signal. Measures taken during seismic processing to deal with this signal loss produced visible improvements. This study recommends using denser spacing or dynamite to circumvent the karst layer in comparable geological settings in order to prevent signal loss at shallower depths.

Keywords: well logging, seismic data acquisition, seismic data processing, up-holes

Procedia PDF Downloads 53
367 Evaluation of Biological and Confinement Properties of a Bone Substitute to in Situ Preparation Based on Demineralized Bone Matrix for Bone Tissue Regeneration

Authors: Aura Maria Lopera Echavarria, Angela Maria Lema Perez, Daniela Medrano David, Pedronel Araque Marin, Marta Elena Londoño Lopez

Abstract:

Bone regeneration is the process by which the formation of new bone is stimulated. Bone fractures can occur at any time due to trauma, infections, tumors, congenital malformations or skeletal diseases. Currently, there are different strategies to treat bone defects in which regeneration does not occur on its own. Such defects are treated with bone substitutes, which provide the environment necessary for cells to synthesize new bone. Demineralized bone matrix (DBM) is widely used as a bone implant due to its good properties, such as osteoinduction and bioactivity. However, the use of DBM is limited because it is supplied as a powder, which is difficult to implant with precision and is susceptible to migrating to other sites through blood flow. That is why DBM is commonly incorporated into a variety of vehicles or carriers. The objective of this project is to evaluate the bioactive and confinement properties of a bone substitute based on demineralized bone matrix (DBM). Structural and morphological properties were also evaluated. The bone substitute was obtained from the EIA Biomaterials Laboratory of EIA University, and the DBM was provided by the Tissue Bank Foundation. Morphological and structural properties were evaluated by scanning electron microscopy (SEM), X-ray diffraction (XRD) and Fourier transform infrared spectroscopy with attenuated total reflection (FTIR-ATR). Water absorption capacity and degradation were also evaluated over three months. Cytotoxicity was evaluated by the MTT test. The bioactivity of the bone substitute was evaluated through immersion of the samples in simulated body fluid for four weeks. Confinement tests were performed on tibial fragments of a human donor with bone defects of determined size, to ensure that the substitute remains in the defect despite continuous fluid flow.
To the authors' knowledge, this methodology for evaluating samples in a confined environment has not previously been applied to real human bones. The morphology of the samples showed an irregular surface and some porosity. XRD confirmed a semi-crystalline structure. FTIR-ATR determined the organic and inorganic phases of the sample. The degradation and absorption measurements established a mass loss of 3% and a water absorption of 150% in one month, respectively. The MTT test showed that the system is not cytotoxic. Apatite clusters formed from the first week onwards were visualized by SEM and confirmed by EDS. These calcium phosphates are necessary to stimulate bone regeneration, and thanks to the porosity of the developed material, osteoinduction and osteoconduction are possible. The results of the in vitro confinement evaluation showed that migration of the bone filler to other sites is negligible, even though the samples were subjected to a flow of simulated body fluid. The putty-type bone substitute showed stability, is bioactive and non-cytotoxic, and has handling properties suitable for specialists at the time of implantation. The obtained system maintains the osteoinductive properties of DBM and can completely fill defects of any shape; however, it does not provide structural support, that is, it should only be used to treat fractures that do not require mechanical load-bearing.

Keywords: bone regeneration, cytotoxicity, demineralized bone matrix, hydrogel

Procedia PDF Downloads 94
366 Redox-labeled Electrochemical Aptasensor Array for Single-cell Detection

Authors: Shuo Li, Yannick Coffinier, Chann Lagadec, Fabrizio Cleri, Katsuhiko Nishiguchi, Akira Fujiwara, Soo Hyeon Kim, Nicolas Clément

Abstract:

The need for single-cell detection and analysis techniques has increased in recent decades because the heterogeneity of individual living cells increases the complexity of the pathogenesis of malignant tumors. In the search for early cancer detection and high-precision medicine and therapy, the technologies most used today for sensitive detection of target analytes and for monitoring the variation of these species mainly fall into two types. One is based on identifying molecular differences at the single-cell level, for example flow cytometry, fluorescence-activated cell sorting, next-generation proteomics and lipidomic studies; the other is based on capturing or detecting single tumor cells from fresh or fixed primary tumors and metastatic tissues, and rare circulating tumor cells (CTCs) from blood or bone marrow, for example the dielectrophoresis technique, microfluidic micropost-based chips and electrochemical (EC) approaches. Compared to other methods, EC sensors have the merits of easy operation, high sensitivity and portability. However, despite various demonstrations of low limits of detection (LOD), including aptamer sensors, arrayed EC sensors for detecting single cells have not been demonstrated. In this work, a new technique is introduced, based on an array of 20-nm-thick nanopillars that supports cells and keeps them at the ideal recognition distance for redox-labeled aptamers grafted on the surface. The key advantages of this technology are not only suppression of the false-positive signal arising from the downward pressure that all (including non-target) cells exert on the aptamers, but also stabilization of the aptamer in the ideal hairpin configuration thanks to a confinement effect. With the first implementation of this technique, a LOD of 13 cells (with 5.4 μL of cell suspension) was estimated.
Subsequently, the nanosupported cell technology using redox-labeled aptasensors was pushed forward and fully integrated into a single-cell electrochemical aptasensor array. To reach this goal, the LOD was reduced by more than one order of magnitude by suppressing parasitic capacitive electrochemical signals, minimizing the sensor area and localizing the cells. Statistical analysis at the single-cell level is demonstrated for the recognition of cancer cells. The future of this technology is discussed, and the potential for scaling over millions of electrodes, thus pushing integration further to the sub-cellular level, is highlighted. Despite several demonstrations of electrochemical devices with a LOD of 1 cell/mL, the implementation of single-cell bioelectrochemical sensor arrays has remained elusive due to the challenge of implementing them at a large scale. Here, the introduced nanopillar array technology, combined with redox-labeled aptamers targeting the epithelial cell adhesion molecule (EpCAM), is perfectly suited for such implementation. By combining nanopillar arrays with microwells designed for single-cell trapping directly on the sensor surface, single target cells are successfully detected and analyzed. This first implementation of a single-cell electrochemical aptasensor array based on Brownian-fluctuating redox species opens new opportunities for large-scale implementation and statistical analysis of early cancer diagnosis and cancer therapy in clinical settings.

Keywords: bioelectrochemistry, aptasensors, single-cell, nanopillars

Procedia PDF Downloads 73
365 National Core Indicators - Aging and Disabilities: A Person-Centered Approach to Understanding Quality of Long-Term Services and Supports

Authors: Stephanie Giordano, Rosa Plasencia

Abstract:

In the USA, in 2013, public service systems such as Medicaid and state aging and disability systems undertook an effort to measure the quality of service delivery by examining the experiences and outcomes of those receiving public services. The goal was to develop a survey measuring these experiences and outcomes so that system performance could be assessed for quality improvement. The performance indicators were developed with input from directors of state aging and disability service systems, along with experts and stakeholders in the field across the United States. This effort, National Core Indicators – Aging and Disabilities (NCI-AD), grew out of National Core Indicators – Intellectual and Developmental Disabilities, an effort to measure developmental disability (DD) systems across the states. The survey tool and administration protocol underwent multiple rounds of testing and revision between 2013 and 2015. The measures in the final tool, called the Adult Consumer Survey (ACS), cover not just important indicators of healthcare access and personal safety but also indicators of system quality based on person-centered outcomes. These measures indicate whether service systems support older adults and people with disabilities to live where they want, maintain relationships, engage in their communities, and have choice and control in their everyday lives. Launched in 2015, the NCI-AD Adult Consumer Survey is now used in 23 US states. Surveys are conducted by NCI-AD-trained surveyors via direct conversation with a person receiving public long-term services and supports (LTSS). Until 2020, surveys were conducted only in person; after a pilot testing the reliability of videoconference and telephone survey modes, however, these modes were adopted as acceptable practice.
The survey is administered as a “guided conversation,” which allows the surveyor to use wording and terminology best understood by the person surveyed. The survey includes a subset of questions that may be answered by a proxy respondent who knows the person well if the person receiving services is unable to provide valid responses on their own. Surveyors undergo standardized training on survey administration to ensure its fidelity. In addition to the main survey section, a Background Information section collects data on personal and service-related characteristics of the person receiving services; these data are typically collected from state administrative records. This information helps provide greater context on the characteristics of people receiving services. It has also been used in conjunction with outcome measures to examine disparities (including by race and ethnicity, gender, disability, and living arrangement). These measures of quality are critical for public service delivery systems seeking to understand the unique needs of older adults and people with disabilities and to improve their lives. Participating states may use these data to identify areas for quality improvement within their service delivery systems, to advocate for specific policy changes, and to better understand the experiences of specific populations of people served.

Keywords: quality of life, long term services and supports, person-centered practices, aging and disability research, survey methodology

Procedia PDF Downloads 87
364 Ordered Mesoporous Carbons of Different Morphology for Loading and Controlled Release of Active Pharmaceutical Ingredients

Authors: Aleksander Ejsmont, Aleksandra Galarda, Joanna Goscianska

Abstract:

Smart porous carriers with a defined structure and physicochemical properties are required to release a therapeutic drug with precise control of delivery time and location in the body. Due to their non-toxicity, ordered structure, and chemical and thermal stability, mesoporous carbons can be considered modern carriers for active pharmaceutical ingredients (APIs) that require frequent dosing regimens. Such an API-carrier system, if programmed precisely, may stabilize the pharmaceutical and increase its dissolution, leading to enhanced bioavailability. The substance conjugated with the material, through its prior adsorption, can later be successfully applied internally to the organism, as well as externally if API release is feasible under those conditions. In the present study, ordered mesoporous carbons of different morphologies and structures, prepared by the hard-template method, were applied as carriers in the adsorption and controlled release of active pharmaceutical ingredients. In the first stage, the carbon materials were synthesized and functionalized, first with carboxylic groups by chemical oxidation using ammonium persulfate solution and then with amine groups. The materials obtained were thoroughly characterized with respect to morphology (scanning electron microscopy), structure (X-ray diffraction, transmission electron microscopy), characteristic functional groups (FT-IR spectroscopy), acid-base nature of surface groups (Boehm titration), porous-structure parameters (low-temperature nitrogen adsorption), and thermal stability (TG analysis). This was followed by a series of adsorption and release tests with paracetamol, benzocaine, and losartan potassium. Drug release experiments were performed in simulated gastric fluid of pH 1.2 and phosphate buffer of pH 7.2 or 6.8 at 37.0 °C.
The XRD patterns in the small-angle range and TEM images revealed that functionalizing the mesoporous carbons with carboxylic or amine groups decreases the ordering of their structure. Moreover, the modification caused a considerable reduction of the carbons' specific surface area and pore volume, but it simultaneously changed their acid-base properties. The mesoporous carbon materials exhibit different morphologies, which affect the host-guest interactions during the adsorption of active pharmaceutical ingredients. All the mesoporous carbons show high adsorption capacity towards the drugs. The sorption capacity of the materials is mainly governed by the BET surface area and the structure/size matching between adsorbent and adsorbate. The selected APIs are bound to the surface of the carbon materials mainly by hydrogen bonds, van der Waals forces, and electrostatic interactions. The release behavior of an API is highly dependent on the physicochemical properties of the mesoporous carbons; its release rate could be regulated by introducing functional groups and by changing the pH of the receptor medium. Acknowledgments: This research was supported by the National Science Centre, Poland (project SONATA-12 no: 2016/23/D/NZ7/01347).
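Release profiles of the kind described above are often quantified by fitting a semi-empirical kinetic model. The sketch below fits the Korsmeyer-Peppas equation (Mt/M∞ = k·tⁿ) by linear regression in log-log space; the model choice and the release data are illustrative assumptions, not results from the study.

```python
from math import log, exp

# Hypothetical cumulative release data (fraction released vs. time in hours);
# illustrative values only, not measurements from the study.
t = [0.5, 1, 2, 4, 8]
frac = [0.18, 0.26, 0.38, 0.55, 0.80]

# Korsmeyer-Peppas: Mt/Minf = k * t**n  =>  ln(frac) = ln(k) + n * ln(t)
xs = [log(ti) for ti in t]
ys = [log(fi) for fi in frac]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
# Least-squares slope gives the release exponent n; intercept gives ln(k)
n = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
    / sum((x - mx) ** 2 for x in xs)
k = exp(my - n * mx)
print(f"n = {n:.2f}, k = {k:.2f}")
```

The exponent n is what distinguishes diffusion-controlled from anomalous release, which is why this model is a common first pass for carrier materials like these.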

Keywords: ordered mesoporous carbons, sorption capacity, drug delivery, carbon nanocarriers

Procedia PDF Downloads 152
363 Executive Function and Attention Control in Bilingual and Monolingual Children: A Systematic Review

Authors: Zihan Geng, L. Quentin Dixon

Abstract:

It has been proposed that early bilingual experience confers a number of advantages in the development of executive control mechanisms. Although the literature provides empirical evidence for bilingual benefits, some studies have reported null or mixed results. To make sense of these contradictory findings, the current review synthesizes recent empirical studies investigating bilingual effects on children's executive function and attention control. The publication dates of the studies included in the review range from 2010 to 2017. The key search terms were bilingual, bilingualism, children, executive control, executive function, and attention, combined within each of the following databases: ERIC (EBSCO), Education Source, PsycINFO, and Social Science Citation Index. Studies involving both children and adults were also included, but the analysis was based only on the data generated by the child groups. The initial search yielded 137 distinct articles; twenty-eight studies from 27 articles with a total of 3,367 participants were finally included based on the selection criteria. The selected studies were then coded in terms of (a) the setting (i.e., the country where the data were collected), (b) the participants (i.e., age and languages), (c) sample size (i.e., the number of children in each group), (d) cognitive outcomes measured, (e) data collection instruments (i.e., cognitive tasks and tests), and (f) statistical analysis models (e.g., t-test, ANOVA). The results show that the majority of the studies were undertaken in Western countries, mainly the U.S., Canada, and the UK. A variety of languages, such as Arabic, French, Dutch, Welsh, German, Spanish, Korean, and Cantonese, were involved. In relation to cognitive outcomes, the studies examined children's overall planning and problem-solving abilities, inhibition, cognitive complexity, working memory (WM), and sustained and selective attention.
The results indicate that though bilingualism is associated with several cognitive benefits, the advantages seem to be weak, at least for children. Additionally, the nature of the cognitive measures was found to greatly moderate the results. No significant differences were observed between bilinguals and monolinguals in overall planning and problem-solving ability, indicating no bilingual benefit in the cooperation of executive function components at an early age. In terms of inhibition, the mixed results suggest that bilingual children, especially young children, may have better conceptual inhibition, as measured in conflict tasks, but not better response inhibition, as measured by delay tasks. Further, bilingual children showed better inhibitory control with bivalent displays, which resemble the process of maintaining two language systems. Null results were obtained for both cognitive complexity and WM, suggesting no bilingual advantage in these two cognitive components. Finally, findings on children's attention systems associate bilingualism with heightened attention control. Together, these findings support the hypothesis of cognitive benefits for bilingual children. Nevertheless, whether these advantages are observable appears to depend heavily on the cognitive assessments. Therefore, future research should be more specific about the cognitive outcomes (e.g., the type of inhibition) and should consistently report the validity of the cognitive measures.

Keywords: attention, bilingual advantage, children, executive function

Procedia PDF Downloads 161
362 Mental Health Promotion for Children of Mentally Ill Parents in Schools: Assessment and Promotion of Teacher Mental Health Literacy in Order to Promote Child-Related Mental Health (Teacher-MHL)

Authors: Dirk Bruland, Paulo Pinheiro, Ullrich Bauer

Abstract:

Introduction: Every year in Germany, over 3 million children, about one quarter of all students, experience at least one parent with a mental disorder. Children of mentally ill parents are at considerably higher risk of developing serious mental health problems, and their burden patterns and coping attempts often become manifest in school life. In this context, schools can have an important protective function, but they can also create risk potentials. Following Jorm, pupil-related teachers' mental health literacy (Teacher-MHL) includes the ability to recognize behaviour change, knowledge of risk factors, the implementation of first-aid interventions, and seeking professional help (the teacher as gatekeeper). Although teachers' knowledge and increased awareness of this topic are essential, the literature provides little information on the extent of teachers' abilities. As part of a Germany-wide research consortium on health literacy, this project, launched in March for 3 years, will conduct evidence-based mental health literacy research. The primary objective is to measure Teacher-MHL in the context of pupil-related psychosocial factors at primary and secondary schools (grades 5 & 6), while also focussing on children's social living conditions. Methods: (1) A systematic review of different databases to identify papers on Teacher-MHL (completed). (2) Based on these results, an interview guide was developed. This step includes a qualitative pre-study to inductively survey the general profiles of teachers (n=24); the evaluation will be presented at the conference. (3) These findings will be translated into a quantitative teacher survey (n=2,500) to assess the extent of teachers' socio-analytical skills in relation to institutional and individual characteristics. (4) Based on results 1-3, a training programme for teachers will be developed.
Results: The review highlights a lack of information on Teacher-MHL and the associated skills, especially with regard to high-risk groups such as children of mentally ill parents; the literature is limited to a few studies. According to these, teachers are not good at identifying burdened children, and when they do identify them, they do not know how to handle the situation in school. They are not sufficiently trained to deal with these children, there are great uncertainties in handling such teaching situations, and institutional means and resources are missing as well. Such a mismatch can result in insufficient support and missed opportunities for children at risk. First impressions from the interviews confirm these results and allow greater insight into everyday school life in relation to critical life events in families. Conclusions: For the first time, schools will be addressed as a setting where children are especially "accessible" to health promotion measures. Addressing Teacher-MHL gives reason to expect high effectiveness. Targeting professionals' abilities to deal with this high-risk group relieves teachers in handling such situations and strengthens school health promotion. In view of the fact that only 10-30% of such high-risk families accept offers of therapy and assistance, this will be the first primary preventive and health-promoting approach to protect the health of a yet unaffected, but particularly burdened, high-risk group.

Keywords: children of mentally ill parents, health promotion, mental health literacy, school

Procedia PDF Downloads 515
361 Development of a Bead-Based, Fully Automated Multiplex Tool to Simultaneously Diagnose FIV, FeLV and FIP/FCoV

Authors: Andreas Latz, Daniela Heinz, Fatima Hashemi, Melek Baygül

Abstract:

Introduction: Feline leukemia virus (FeLV), feline immunodeficiency virus (FIV), and feline coronavirus (FCoV) are serious infectious diseases affecting cats worldwide. Transmission of these viruses occurs primarily through close contact with infected cats (via saliva, nasal secretions, faeces, etc.). FeLV, FIV, and FCoV infections can occur in combination and are expressed in similar clinical symptoms. Diagnosis can therefore be challenging: symptoms are variable and often non-specific, and sick cats show very similar clinical signs such as apathy, anorexia, fever, immunodeficiency syndrome, and anemia. The sample volume that can be collected from small companion animals for diagnostic purposes is also limited. In addition, multiplex diagnosis can contribute to an easier, cheaper, and faster laboratory workflow as well as to better differential diagnosis. For these reasons, we set out to develop a new diagnostic tool that uses less sample volume, fewer reagents, and fewer consumables than multiple singleplex ELISA assays. Methods: The Multiplier from Dynex Technologies (USA) was used as the platform to develop a multiplex diagnostic tool for the detection of antibodies against FIV and FCoV/FIP and of antigen for FeLV. The Dynex Multiplier is a fully automated chemiluminescence immunoassay analyzer that significantly simplifies laboratory workflow: its ease of use reduces pre-analytical steps by combining efficient multiplexing of several assays with the simplicity of automated microplate processing. Plastic beads were coated with antigens for FIV and FCoV/FIP, as well as capture antibodies for FeLV. Feline blood samples are incubated with the beads, and results are read out via chemiluminescence. Results: Bead coating was optimized for each individual antigen or capture antibody and then combined in the multiplex diagnostic tool.
HRP-antibody conjugates for FIV and FCoV antibodies, as well as detection antibodies for the FeLV antigen, were adjusted and mixed. Three individual prototype batches of the assay were produced. For each disease, we analyzed 50 well-defined positive and negative samples. The results show excellent diagnostic performance of the simultaneous detection of antibodies or antigens against these feline diseases in a fully automated system, with 100% concordance with singleplex methods such as ELISA or IFA. Intra- and inter-assay tests showed high precision, with CV values below 10% for each individual bead. Accelerated stability testing indicates a shelf life of at least 1 year. Conclusion: The new tool can be used for multiplex diagnostics of the most important feline infectious diseases. Only a very small sample volume is required, and full automation results in a very convenient and fast method for diagnosing animal diseases. With a specimen capacity of over 576 samples per 8-hour shift, providing up to 3,456 results, very high laboratory productivity and reagent savings can be achieved.
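The precision figures quoted above are coefficients of variation (CV = standard deviation / mean). A minimal sketch of the calculation, using invented replicate chemiluminescence readings for a single bead (not data from the study):

```python
import statistics

# Hypothetical replicate readings for one bead type (arbitrary units);
# illustrative values, not measurements from the study.
replicates = [1020, 980, 1005, 995, 1010, 990]

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)  # sample standard deviation
cv_percent = 100 * sd / mean       # coefficient of variation in percent
print(round(cv_percent, 2))
```

A bead passes the precision criterion reported in the abstract when this value stays below 10%.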

Keywords: multiplex, FIV, FeLV, FCoV, FIP

Procedia PDF Downloads 75
360 Small Town, Big Urban Issues: The Case of Kiryat Ono, Israel

Authors: Ruth Shapira

Abstract:

Introduction: The rapid urbanization of the last century confronts planners, regulatory bodies, developers, and most of all the public with seemingly unsolved conflicts regarding the values, capital, and wellbeing of the built and un-built urban space. This is reflected in the quality of urban form and life, which has seen no significant progress in the last two to three decades despite the growing urban population. The objective of this paper is to analyze some of these fundamental issues through the case study of a relatively small town in the center of Israel (Kiryat Ono, 100,000 inhabitants), unfold the deep structure of qualities versus disruptors, present some cures that we have developed to bridge the gap, and humbly suggest a practice that may be generic for similar cases. Basic methodology: The OBJECT, the town of Kiryat Ono, is examined through a series of four action processes: de-composition, re-composition, the centering process, and, finally, controlled structural disintegration. Each stage is based on facts, analysis of previous multidisciplinary interventions on various layers, and the inevitable reaction of the OBJECT, leading to conclusions based on innovative theoretical and practical methods that we have developed and that we believe suit the open-ended network, setting the rules for the contemporary urban society to cluster by. The study: Kiryat Ono was founded 70 years ago as an agricultural settlement and rapidly turned into an urban entity. In spite of the massive intensification, the original DNA of the old small town remained deeply embedded, mostly in the quality of the public space and in the sense of clustered communities. In the past 20 years, the demand for housing has been addressed at the national level with recent master plans and urban regeneration policies that mostly encourage individual economic initiatives.
Unfortunately, due to the obsolete existing planning platform, the present urban renewal is characterized by developer pressure, a dramatic change in building scale, and widespread disintegration of the existing urban and social tissue. Our office was commissioned to conceptualize two master plans for the two contradictory processes of Kiryat Ono's future: intensification and conservation. Following a comprehensive investigation into the deep structures and qualities of the existing town, we developed a new vocabulary of conservation terms, thus redefining the sense of PLACE. The main challenge was to create master plans that offer a regulatory basis for the accelerated and sporadic development while providing for the public good and preserving the characteristics of the PLACE, consisting of a toolbox of design guidelines able to reorganize space along the time axis in a coherent way. In conclusion: The system of rules we have developed can generate endless possible patterns, making sure that at each implementation fragment an event is created and a better place is revealed. It takes time and perseverance, but it seems to be the way to provide a healthy framework for the accelerated urbanization of our chaotic present.

Keywords: housing, architecture, urban qualities, urban regeneration, conservation, intensification

Procedia PDF Downloads 335
359 Identification of Odorant Receptors through the Antennal Transcriptome of the Grapevine Pest, Lobesia botrana (Lepidoptera: Tortricidae)

Authors: Ricardo Godoy, Herbert Venthur, Hector Jimenez, Andres Quiroz, Ana Mutis

Abstract:

In agriculture, grape production has great economic importance at the global level: in 2013 it reached 7.4 million hectares (ha) of plantations worldwide, and Chile is the number one exporter in the world with 800,000 tons. However, these figures have been threatened by the attack of the grapevine moth, Lobesia botrana (Denis & Schiffermuller) (Lepidoptera: Tortricidae), since its detection in 2008. Nowadays, the use of semiochemicals, in particular the major component of the sex pheromone, (E,Z)-7,9-dodecadienyl acetate, is part of the mating disruption methods used to control L. botrana. Understanding how insect pests recognize these molecules is part of a huge effort to deorphanize their olfactory mechanism at the molecular level. Thus, an interesting group of proteins has been identified in the antennae of insects, where odorant-binding proteins (OBPs) are known to transport molecules to odorant receptors (ORs) and a co-receptor (ORCO), causing a behavioral change in the insect. Other proteins, such as chemosensory proteins (CSPs), ionotropic receptors (IRs), odorant-degrading enzymes (ODEs), and sensory neuron membrane proteins (SNMPs), also seem to be involved, but few studies have been performed so far. The above has led to increasing interest in insect communication at the molecular level, which has contributed both to a better understanding of the olfaction process and to the design of new pest management strategies. To date, it has been reported that ORs can detect one or a small group of odorants in a specific way. Therefore, the objective of this study is the identification of the genes that encode these ORs using the antennal transcriptome of L. botrana. Total RNA was extracted from females and males of L. botrana, and the antennal transcriptome was sequenced by a Next Generation Sequencing service using an Illumina HiSeq2500 platform with 50 million reads per sample.
Unigenes were assembled using the Trinity v2.4.0 package, and transcript abundance was obtained using edgeR. Genes were identified using BLASTN and BLASTX installed locally on a Unix system and run against our own Tortricidae database. The unigenes related to ORs were characterized using ORFfinder and the protein BLASTp server. Finally, a phylogenetic analysis was performed with the candidate amino acid sequences for LbotORs, including amino acid sequences of ORs from other moths, such as Bombyx mori and Cydia pomonella. Our findings suggest 61 genes encoding ORs and one gene encoding an ORCO in both sexes, where the greatest difference was found for OR6, whose transcript abundance (FPKM value) was 1.48 in females versus 324.00 in males. In addition, according to the phylogenetic analysis, OR6 is closely related to OR1 in Cydia pomonella and to OR6 and OR7 in Epiphyas postvittana, which have been described as pheromone receptors (PRs). These results represent the first evidence of the ORs present in the antennae of L. botrana and a suitable starting point for further functional studies with selected ORs, such as OR6, which is potentially related to pheromone recognition.
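The female/male contrast for OR6 follows from the standard FPKM definition (fragments per kilobase of transcript per million mapped reads). The sketch below applies that definition; the read counts and transcript length are hypothetical, chosen only to reproduce the 1.48 vs 324.00 contrast reported above, and are not the study's raw data.

```python
def fpkm(mapped_reads, transcript_len_bp, total_mapped_reads):
    """Fragments Per Kilobase of transcript per Million mapped reads."""
    return mapped_reads * 1e9 / (transcript_len_bp * total_mapped_reads)

# Hypothetical inputs: transcript length and per-sex read counts are assumed;
# the 50 million total reads per sample matches the sequencing depth above.
length = 1200            # assumed OR6 transcript length in bp
total = 50_000_000       # total mapped reads per sample
female_fpkm = fpkm(89, length, total)
male_fpkm = fpkm(19_440, length, total)
print(round(female_fpkm, 2), round(male_fpkm, 2))
```

Normalizing by both transcript length and library size is what makes the two sexes' abundances directly comparable.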

Keywords: antennal transcriptome, Lobesia botrana, odorant receptors (ORs), phylogenetic analysis

Procedia PDF Downloads 171
358 Solar and Galactic Cosmic Ray Impacts on Ambient Dose Equivalent Considering a Flight Path Statistic Representative to World-Traffic

Authors: G. Hubert, S. Aubry

Abstract:

The Earth is constantly bombarded by cosmic rays of either galactic or solar origin, and humans are exposed to elevated levels of galactic radiation at aircraft altitudes. The typical total ambient dose equivalent for a transatlantic flight is about 50 μSv during quiet solar activity. By contrast, estimations of the contribution induced by certain solar particle events differ by one order of magnitude. Indeed, during a Ground Level Enhancement (GLE) event, the Sun can emit particles of sufficient energy and intensity to raise radiation levels on the Earth's surface. Analyses of the characteristics of GLEs occurring since 1942 showed that for the worst of them, the dose level is of the order of 1 mSv or more. The largest of these events was observed in February 1956, for which the ambient dose equivalent rate was on the order of 10 mSv/hr; the extra dose at aircraft altitudes for a flight during this event might have been about 20 mSv, i.e., comparable with the annual limit for aircrew. The most recent GLE occurred in September 2017, resulting from an X-class solar flare, and was measured on the surfaces of both Earth and Mars using the Radiation Assessment Detector on the Mars Science Laboratory's Curiosity rover. Recently, Hubert et al. proposed a GLE model included in a particle transport platform (named ATMORAD) that describes extensive air shower characteristics and allows the ambient dose equivalent to be assessed. In this approach, the GCR is based on the force-field approximation model, while the physical description of the solar cosmic rays (SCR) considers the primary differential rigidity spectrum and the distribution of primary particles at the top of the atmosphere. ATMORAD determines the spectral fluence rate of secondary particles induced by extensive showers, considering altitudes from ground level to 45 km; the ambient dose equivalent can then be determined using fluence-to-ambient-dose-equivalent conversion coefficients.
The objective of this paper is to analyze the GCR and SCR impacts on ambient dose equivalent considering a large statistical sample of world flight paths. Flight trajectories are based on the Eurocontrol Demand Data Repository (DDR) and consider realistic flight plans with and without regulations, or updated with radar data from the CFMU (Central Flow Management Unit). The final paper will present exhaustive analyses of solar impacts on ambient dose equivalent levels, with detailed analyses considering route and airplane characteristics (departure, arrival, continent, airplane type, etc.) and the phasing of the solar event. Preliminary results show an important impact of the flight path, particularly the latitude, which drives the cutoff rigidity variations. Moreover, dose values vary drastically during GLE events, on the one hand with the route path (latitude, longitude, altitude), and on the other hand with the phasing of the solar event. Considering the GLE that occurred on 23 February 1956, the average ambient dose equivalent evaluated for a Paris - New York flight is around 1.6 mSv, which is consistent with previous works. This point highlights the importance of monitoring these solar events and of developing semi-empirical and particle transport methods to obtain reliable calculations of dose levels.
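The fluence-to-dose conversion step described above amounts to integrating the secondary-particle spectral fluence rate, weighted by the conversion coefficients, over energy. The sketch below does this with a trapezoidal rule on a coarse grid; the spectrum and coefficient values are invented for illustration and are not ATMORAD output.

```python
# Hypothetical secondary-particle spectral fluence rate and
# fluence-to-ambient-dose-equivalent coefficients h(E); all values
# are illustrative assumptions, not data from the study.
E = [1.0, 10.0, 100.0, 1000.0]          # energy grid, MeV
phi = [2.0e-2, 1.0e-2, 3.0e-3, 5.0e-4]  # fluence rate, cm^-2 s^-1 MeV^-1
h = [400.0, 440.0, 520.0, 600.0]        # conversion coefficients, pSv cm^2

# Ambient dose equivalent rate: integrate phi(E) * h(E) dE (trapezoidal rule)
y = [p * c for p, c in zip(phi, h)]
dose_psv_s = sum(0.5 * (y[i] + y[i + 1]) * (E[i + 1] - E[i])
                 for i in range(len(E) - 1))
dose_usv_h = dose_psv_s * 3600 / 1e6  # pSv/s -> uSv/h
print(round(dose_usv_h, 2))
```

In a full calculation the same integral is summed over all secondary-particle species (neutrons, protons, muons, etc.), each with its own spectrum and coefficient set.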

Keywords: cosmic ray, human dose, solar flare, aviation

Procedia PDF Downloads 189
357 Benefits of Rainbow School Programmes: Students' and Teachers' Perceptions and Attitudes Towards Gender-Fair Language in Gender-Inclusive Schools

Authors: Teresa Naves, Katy Pallas, Carme Florit, Cristina Anton, Joan Collado, Diana Millan

Abstract:

Although gender-fair language is relatively novel in Spain, in Catalonia both the Department of Education and LGBT associations have been promoting several innovative programmes aimed at implementing gender-inclusive schools. These Rainbow School communities are ideal for examining how such programmes affect the use of gender-fair language and the balanced representation of gender. Students' and teachers' perceptions and attitudes were compared with those in schools that have never implemented such programmes in primary or secondary education. Spanish and Catalan, unlike English, are gendered languages in which masculine forms have traditionally been used as the unmarked gender and have been claimed to be inclusive of all genders. While the Royal Spanish Academy (RAE) rejects the use of inclusive language and thus deems all double-gender forms of inclusion unnecessary, the vast majority of universities are promoting not only inclusive language but also gender-inclusive curricula. Adopting gender-fair language policies and including a gender perspective in the curricula is an innovative trend at the university level and in primary and secondary education. Inclusion in education is a basic human right and the foundation for a more just and equal society. Educators can facilitate the process of welcoming by ensuring that handbooks, forms, and other communications are inclusive of all family structures and gender identities. Using gendered language such as 'girls and boys' can be alienating for gender-non-conforming and gender-diverse students; on the other hand, non-gendered words like 'students' are regarded as inclusive of all identities. The paper discusses the results of mixed-methods research (survey, interviews, and an experiment) conducted in Rainbow and non-Rainbow schools in Alacant and Barcelona (Spain). The experiment aimed to check the role of gender-fair language in learners' perception of gender balance.
It was conducted in Spanish, Catalan, and English. Students aged 10 to 16 (N > 600) were asked to draw pictures of people using specific prompts. The prompts in Spanish and Catalan were written using the generic masculine, 'los presidentes', 'els presidents' (presidents); using double-gendered language such as 'niños y niñas', 'nens i nenes' (boys and girls); and using non-gendered words like 'alumnado', 'alumnat' (students). The prompts were subdivided into people in school contexts participants could identify with, such as students and teachers; occupations mostly associated with men, such as pilots and firefighters; and occupations mostly associated with women, such as ballet dancers and nurses. As could be expected, the participants drew approximately the same percentage of female and male characters only when double-gendered language or non-gendered words such as 'students' or 'teachers' were used, regardless of the language of the experiment. When they were asked to draw people using the so-called generic masculine in Spanish or Catalan, 'los estudiantes', 'els estudiants' (students), fewer than 35% of the drawings contained female characters. The differences between the results for Rainbow and non-Rainbow schools will be discussed in the light of the innovative coeducation programmes and the learners' perceptions of gender-fair language gathered in the surveys and interviews.
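A contrast like the one above (fewer than 35% female characters under the generic masculine versus near parity under non-gendered prompts) is typically checked with a two-proportion z-test. The counts below are hypothetical, chosen only to illustrate the test; they are not the study's data.

```python
from math import sqrt

# Hypothetical counts (illustrative only): drawings containing at least
# one female character under two prompt conditions.
fem_masc, n_masc = 98, 300    # generic-masculine prompt (~33%)
fem_neut, n_neut = 147, 300   # non-gendered prompt (~49%)

p1, p2 = fem_masc / n_masc, fem_neut / n_neut
# Pooled proportion under the null hypothesis of no difference
p_pool = (fem_masc + fem_neut) / (n_masc + n_neut)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_masc + 1 / n_neut))
z = (p2 - p1) / se
print(f"masc: {p1:.0%}, neutral: {p2:.0%}, z = {z:.2f}")
```

A |z| above 1.96 would reject equality of the two proportions at the 5% level, which is the kind of evidence the percentages in the abstract imply.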

Keywords: gender-fair language, gender-inclusive schools, learners’ and teachers’ perceptions and attitudes, rainbow coeducation programmes

356 The Biosphere as a Supercomputer Directing and Controlling Evolutionary Processes

Authors: Igor A. Krichtafovitch

Abstract:

The evolutionary processes are not linear. Long periods of quiet and slow development turn into rather rapid emergences of new species and even phyla. During the Cambrian explosion, 22 new phyla were added to the 3 previously existing phyla. Contrary to common belief, natural selection, or survival of the fittest, cannot account for the dominant evolutionary vector, which is the steady and accelerating advent of more complex and more intelligent living organisms. Neither Darwinism nor alternative concepts, including panspermia and intelligent design, propose a satisfactory solution for these phenomena. The proposed hypothesis offers a logical and plausible explanation of the evolutionary processes in general. It is based on two postulates: a) the Biosphere is a single living organism, all parts of which are interconnected, and b) the Biosphere acts as a giant biological supercomputer, storing and processing information in digital and analog forms. Such a supercomputer surpasses all human-made computers by many orders of magnitude. Living organisms are the product of the intelligent creative action of the biosphere supercomputer. Biological evolution is driven by the growing amount of information stored in living organisms and the increasing complexity of the biosphere as a single organism. The main evolutionary vector is not survival of the fittest but an accelerated growth of the computational complexity of living organisms. The following postulates summarize the proposed hypothesis: biological evolution, as natural life origin and development, is a reality; evolution is a coordinated and controlled process; one of evolution’s main development vectors is the growing computational complexity of living organisms and the biosphere’s intelligence; the intelligent matter which conducts and controls global evolution is a gigantic bio-computer combining all living organisms on Earth; and the information acts like software stored in and controlled by the biosphere. 
Random mutations trigger this software, as is stipulated by Darwinian evolution theories, and it is further stimulated by the growing demand for the Biosphere’s global memory storage and computational complexity. A greater memory volume requires a greater number of more intellectually advanced organisms for storing and handling it. More intricate organisms require greater computational complexity of the biosphere in order to keep control over the living world. This is an endless recursive endeavor with an accelerating evolutionary dynamic. New species emerge when two conditions are met: a) crucial environmental changes occur and/or global memory storage volume reaches its limit, and b) biosphere computational complexity reaches a critical mass capable of producing more advanced creatures. The hypothesis presented here is a naturalistic concept of life creation and evolution. It logically resolves many puzzling problems of the current state of evolutionary theory: speciation, as a result of GM purposeful design; the evolutionary development vector, as a need for growing global intelligence; punctuated equilibrium and the Cambrian explosion, happening when the two conditions a) and b) above are met; and mass extinctions, happening when more intelligent species should replace outdated creatures.

Keywords: supercomputer, biological evolution, Darwinism, speciation

355 Generating Individualized Wildfire Risk Assessments Utilizing Multispectral Imagery and Geospatial Artificial Intelligence

Authors: Gus Calderon, Richard McCreight, Tammy Schwartz

Abstract:

Forensic analysis of community wildfire destruction in California has shown that reducing or removing flammable vegetation in proximity to buildings and structures is one of the most important wildfire defenses available to homeowners. State laws specify the requirements for homeowners to create and maintain defensible space around all structures. Unfortunately, this decades-long effort has had limited success due to noncompliance and minimal enforcement. As a result, vulnerable communities continue to experience escalating human and economic costs along the wildland-urban interface (WUI). Quantifying vegetative fuels at both the community and parcel scale requires detailed imaging from an aircraft with remote sensing technology to reduce uncertainty. FireWatch has been delivering high-spatial-resolution (5” ground sample distance) wildfire hazard maps annually to the community of Rancho Santa Fe, CA, since 2019. FireWatch uses a multispectral imaging system mounted onboard an aircraft to create georeferenced orthomosaics and spectral vegetation index maps. Using proprietary algorithms, the vegetation type, condition, and proximity to structures are determined for 1,851 properties in the community. Secondary data processing combines object-based classification of vegetative fuels, assisted by machine learning, to prioritize mitigation strategies within the community. The remote sensing data for the 10 sq. mi. community are divided into parcels and sent to all homeowners in the form of defensible space maps and reports. Follow-up aerial surveys are performed annually using repeat-station imaging of fixed GPS locations to address changes in defensible space, vegetation fuel cover, and condition over time. These maps and reports have increased wildfire awareness and mitigation efforts from 40% to over 85% among homeowners in Rancho Santa Fe. 
To assist homeowners facing increasing insurance premiums and non-renewals, FireWatch has partnered with Black Swan Analytics, LLC, to leverage the multispectral imagery and increase homeowners’ understanding of wildfire risk drivers. For this study, a subsample of 100 parcels was selected to gain a comprehensive understanding of wildfire risk and the elements that can be mitigated. Geospatial data from FireWatch’s defensible space maps were combined with Black Swan’s patented approach, using 39 other risk characteristics, into a 4score Report. The 4score Report helps property owners understand risk sources and potential mitigation opportunities by assessing four categories of risk: fuel sources, ignition sources, susceptibility to loss, and hazards to fire protection efforts (FISH). This study has shown that susceptibility to loss is the category on which residents and property owners must focus their efforts. The 4score Report also provides a tool to measure the impact of homeowner actions on risk levels over time. Resiliency is the only solution to breaking the cycle of community wildfire destruction, and it starts with high-quality data and education.
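FireWatch's spectral-index algorithms are proprietary, so as a hedged illustration only, the sketch below computes NDVI, a standard spectral vegetation index of the kind produced from multispectral bands like those described above. The reflectance values are invented.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Healthy green vegetation reflects strongly in near-infrared and absorbs
    red, so NDVI approaches 1; dry fuel and bare soil fall near zero.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero

# Illustrative per-pixel reflectances: dense vegetation vs. drier grass.
print(ndvi([0.50, 0.30], [0.10, 0.25]))
```

In practice such an index would be computed per pixel across the georeferenced orthomosaic and then classified by vegetation condition.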

Keywords: defensible space, geospatial data, multispectral imaging, Rancho Santa Fe, susceptibility to loss, wildfire risk

354 Gandhi and the Judicial Discourse on Moral Rights

Authors: Sunayana Basu Mallik, Shishira Prakash

Abstract:

The inclusion of the rights of authors (moral and personal rights) resonates with the century-long battle for the rights of authors, composers, and performers across developed and developing countries (whether following civil law or common law systems). But the juxtaposition of the author’s special, moral, and personal rights within the legislative framework of copyright statutes (the Indian Copyright Act, 1957, and applicable statutes) underscores the foundational role of the right, which goes to the root of the constitutional structure of India and the philosophies of political and literary leaders like Mahatma Gandhi and Gurudev Rabindranath Tagore. In the pre-independence era, when the concept of moral rights was unknown to the statutory laws of both England and India, Gandhi’s strategic deployment method, ideologies, and thoughts scripted the concept of moral rights for authors and composers. The preservation of the Rabindric style (characteristic of Tagore’s vocal renditions) by Vishwa Bharati University (successor in interest to Tagore’s literary and musical compositions) prior to the Copyright Amendment of 1999, which recognized the author’s special rights in line with Article 6bis of the Berne Convention, reinforces the fact that the right existed intrinsically prior to the legislative amendment. The paper would, in addition to the academic probe, carry out an empirical enquiry into the institutions’ (Navjivan Trust’s and Vishwa Bharati University’s) reasoning on the same. The judicial discourse and transforming constitutional ideals in India from the 1950s to date suggest moral rights to be an essential legal right, which has been reasoned by Indian courts based on the underlying philosophies in culture, customs, and religion, wherein composers and literary figures played key roles in enlightening and encouraging the members of society through their literary, musical, and artistic work during the pre-independence renaissance of India. 
The discourses have been influenced by the philosophies reflected in the preamble of the Indian Constitution, a ‘socialist, secular, democratic republic’, and by the laws of other civil law countries. Lastly, the paper would analyze the adjudication process and witness involvement in ascertaining violations of moral rights, and further summarize the indigenous and country-specific economic thoughts that often chisel decisions on the moral rights of authors, composers, and performers, which sometimes intersect with the author’s right to privacy and protection against defamation. The exclusivity contracts or other arrangements between authors, composers, and publishing companies not only have an erosive effect on each thread of moral rights but also irreparably dent the factors that promote creativity. The paper would also review these arrangements in view of the principles of unjust enrichment, unfair trade practices, anti-competitive behavior, and breach of Section 27 (Restraint of Trade) of the Indian Contract Act, 1872. The paper will thus lay down the three pillars on which authors’ rights in India should rest, namely: (a) the political and judicial discourse evolving principles supporting the moral rights of authors; (b) the amendment and insertion of Section 57 of the Copyright Act, 1957; and (c) the overall constitutional framework supporting authors’ rights.

Keywords: copyright, moral rights, performer’s rights, personal rights

353 Prevalence of Antibiotic-Resistant Bacteria Isolated from Fresh Vegetables Retailed in Eastern Spain

Authors: Miguel García-Ferrús, Yolanda Domínguez, M Angeles Castillo, M Antonia Ferrús, Ana Jiménez-Belenguer

Abstract:

Antibiotic resistance is a growing public health concern worldwide, and it is now regarded as a critical issue within the "One Health" approach, which spans human and animal health, agriculture, and environmental waste management. This concept focuses on the interconnected nature of human, animal, and environmental health, and the WHO highlights zoonotic diseases, food safety, and antimicrobial resistance as three particularly relevant areas for this framework. Fresh vegetables are garnering attention in the food chain due to the presence of pathogens and because they can act as a reservoir for antibiotic-resistant bacteria (ARB) and antibiotic resistance genes (ARG). These fresh products are frequently consumed raw, thereby contributing to the spread and transmission of antibiotic resistance. Therefore, the aim of this research was to study the microbiological quality of fresh vegetables intended for human consumption, the prevalence of ARB, and their role in the dissemination of ARG. For this purpose, 102 samples of fresh vegetables (30 lettuce, 30 cabbage, 18 strawberry and 24 spinach) from different retail establishments in Valencia (Spain) were analyzed to determine their microbiological quality and their role in spreading ARB and ARG. The samples were collected and examined according to standardized methods for total viable bacteria, coliforms, Shiga toxin-producing Escherichia coli (STEC), Listeria monocytogenes, and Salmonella spp. Isolation was performed in culture media supplemented with antibiotics (cefotaxime and meropenem). A total of 239 strains resistant to beta-lactam antibiotics (third-generation cephalosporins and carbapenems) were isolated. Thirty Gram-negative isolates were selected and identified biochemically or by partial sequencing of 16S rDNA. Their sensitivity to 12 antibiotics from different therapeutic groups was determined using the Kirby-Bauer disc diffusion technique. 
To determine the presence of ARG, PCR assays were performed on DNA from both direct samples and selected isolates for the main extended-spectrum beta-lactamase (ESBL)- and carbapenemase-encoding genes and for plasmid-mediated quinolone resistance genes. Of the total samples, 68% (24/24 spinach, 28/30 lettuce, and 17/30 cabbage) showed total viable bacteria levels over the accepted standard range of 10²–10⁵ cfu/g, and 48% (24/24 spinach, 19/30 lettuce, and 6/30 cabbage) showed coliform levels over the accepted standard range of 10²–10⁴ cfu/g. In 9 samples (3/24 spinach, 3/30 lettuce, 3/30 cabbage; 9/102, 9%), E. coli levels were higher than the standard limit of 10³ cfu/g. Listeria monocytogenes, Salmonella, and STEC were not detected. Six different bacterial species were isolated from the samples. Stenotrophomonas maltophilia (64%) was the most prevalent species, followed by Acinetobacter pitii (14%) and Burkholderia cepacia (7%). All the isolates were resistant to at least one tested antibiotic, including meropenem (85%) and ceftazidime (46%). Of the total isolates, 86% were multidrug-resistant and 68% were ESBL producers. PCR results showed the presence of resistance genes to beta-lactams, blaTEM (4%) and blaCMY-2 (4%); to carbapenems, blaOXA-48 (25%), blaVIM (7%), blaIMP (21%), and blaKPC (32%); and to quinolones, qnrA (7%), qnrB (11%), and qnrS (18%). Thus, fresh vegetables harboring ARB and ARG constitute a potential risk to consumers. Further studies must be done to detect ARG and determine how they propagate in non-medical environments.
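The multidrug-resistance figure above rests on classifying each isolate's resistance profile. As a hedged sketch, the snippet below applies the commonly used definition of MDR (non-susceptibility to at least one agent in three or more antimicrobial categories); the isolate profile shown is hypothetical, not one of the study's isolates.

```python
def is_mdr(resistance_profile):
    """True if the isolate resisted >= 1 agent in >= 3 antimicrobial categories."""
    categories_hit = sum(1 for agents in resistance_profile.values() if agents)
    return categories_hit >= 3

# Hypothetical profile: category -> antibiotics the isolate tested resistant to.
isolate = {
    "3rd-generation cephalosporins": ["cefotaxime", "ceftazidime"],
    "carbapenems": ["meropenem"],
    "quinolones": ["ciprofloxacin"],
    "aminoglycosides": [],
}
print(is_mdr(isolate))  # resistant in three categories -> True
```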

Keywords: ESBL, β-lactams, resistances, fresh vegetables

352 Accountability of Artificial Intelligence: An Analysis Using Edgar Morin’s Complex Thought

Authors: Sylvie Michel, Sylvie Gerbaix, Marc Bidan

Abstract:

Can artificial intelligence (AI) be held accountable for its detrimental impacts? This question gains heightened relevance given AI's pervasive reach across various domains, which magnifies its power and potential. The expanding influence of AI raises fundamental ethical inquiries, primarily centering on biases, responsibility, and transparency. These encompass discriminatory biases arising from algorithmic criteria or data, accidents attributed to autonomous vehicles or other systems, and the imperative of transparent decision-making. This article aims to stimulate reflection on AI accountability, denoting the necessity to elucidate the effects it generates. Accountability comprises two integral aspects: adherence to legal and ethical standards, and the imperative to explain the underlying operational rationale. The objective is to initiate a reflection on the obstacles to this "accountability" in the face of the complexity of artificial intelligence systems and their effects. The article then proposes to mobilize Edgar Morin's complex thought to encompass and face the challenges of this complexity. The first contribution is to point out the challenges posed by the complexity of AI, with accountability fractioned among a myriad of human and non-human actors, such as software and equipment, which ultimately contribute to the decisions taken and are multiplied in the case of AI. Accountability faces three challenges resulting from the complexity of the ethical issues combined with the complexity of AI. The challenge of the non-neutrality of algorithmic systems, as actors that are not ethically neutral, is put forward by a revealing-ethics approach that calls for assigning responsibilities to these systems. The challenge of the dilution of responsibility is induced by the multiplicity of, and the distance between, the actors. 
Thus, a dilution of responsibility is induced by a split in decision-making between developers, who feel they fulfill their duty by strictly respecting the requests they receive, and management, which does not consider itself responsible for technology-related flaws. Accountability is also confronted with the challenge of the transparency of complex and scalable algorithmic systems: non-human actors self-learning via big data. A second contribution involves leveraging Edgar Morin's principles, providing a framework to grasp the multifaceted ethical dilemmas and subsequently paving the way for establishing accountability in AI. When addressing the ethical challenge of biases, the "hologrammatic" principle underscores the imperative of acknowledging that algorithmic systems are not ethically neutral, being inherently imbued with the values and biases of their creators and of society. The "dialogic" principle advocates for the responsible consideration of ethical dilemmas, encouraging the integration of complementary and contradictory elements in solutions from the very inception of the design phase. The principle of organizational recursiveness, akin to the "transparency" of the system, promotes a systemic analysis that accounts for the induced effects and guides the incorporation of modifications into the system to rectify its drifts. In conclusion, this contribution serves as an inception for contemplating the accountability of artificial intelligence systems in the face of their evident ethical implications and potential deviations. Edgar Morin's principles, providing a lens through which to contemplate this complexity, offer valuable perspectives for addressing these challenges concerning accountability.

Keywords: accountability, artificial intelligence, complexity, ethics, explainability, transparency, Edgar Morin

351 Development of the Drug Abuse Health Information System in Thai Community

Authors: Waraporn Boonchieng, Ekkarat Boonchieng, Sivaporn Aungwattana, Decha Tamdee, Wongamporn Pinyavong

Abstract:

Drug addiction represents one of the most important public health issues in both developed and developing countries. The purpose of this study was to develop a drug abuse health information system in a community in Northern Thailand using a developmental research design. The researchers performed four phases to develop the system: 1) synthesizing knowledge related to drug abuse prevention and identifying the components of the system; 2) developing the system as a mobile application and website; 3) implementing the system in the rural community; and 4) evaluating the feasibility of the system. Data collection involved both qualitative and quantitative procedures. The qualitative and quantitative data were analyzed using content analysis and descriptive statistics, respectively. The findings of this study showed that the drug abuse health information system consisted of five sections: drug-related prevention knowledge for teens, drug-related knowledge for adults and professionals, a database of drug dependence treatment centers, self-administered questionnaires, and a supportive counseling section. First, for drug-related prevention knowledge for teens, the researchers designed four infographics and an animation to provide drug-related prevention knowledge, covering types of illegal drugs, causes of drug abuse, consequences of drug abuse, drug abuse diagnosis and treatment, and drug abuse prevention. Second, for drug-related knowledge for adults and professionals, the researchers developed many documents in the form of PDF files to provide drug-related knowledge, including types of illegal drugs, causes of drug abuse, drug abuse prevention, and a relapse prevention guideline. Third, the database of drug dependence treatment centers included the location, direction map, operating hours, and contact details of all drug dependence treatment centers in Thailand. 
Fourth, the self-administered questionnaires comprised a preventive drug behaviors questionnaire, a drug abuse knowledge questionnaire, the stages of change readiness and treatment eagerness to drug use scale, a substance use behaviors questionnaire, a tobacco use behaviors questionnaire, a stress screening, and a depression screening. Finally, for supportive counseling, the researchers designed a chat box through which each user could write and send their concerns to counselors individually. Results from the evaluation process showed that 651 participants used the drug abuse health information system via the mobile application and website. Among all users, 48.8% were male and 51.2% were female. More than half (55.3%) were 15-20 years old, and most of them (88.0%) were Buddhists. Most users reported having received knowledge related to drugs (86.1%) and having drunk alcohol (94.2%), while some of them (6.9%) reported having used tobacco. Regarding satisfaction with the drug abuse health information system, more than half of the users rated the contents as interesting (59%), up to date (61%), and highly useful for their self-study (59%) at a high level. In addition, half of them were satisfied with the design in terms of the infographics (54%) and the animation (51%). Thus, this drug abuse health information system can be adopted to explore the drug abuse situation and serves as a tool to prevent drug abuse and addiction among Thai community people.

Keywords: drug addiction, health informatics, big data, development research

350 Interpretation of Time Series Groundwater Monitoring Data Using Analytical Impulse Response Function Method to Understand Groundwater Processes Along the Murray River Floodplain at Gunbower Forest, Victoria, Australia

Authors: Mark Hocking

Abstract:

There is concern about the potential impact environmental flooding may have on groundwater levels and salinity processes in the Murray-Darling Basin. A study was undertaken to determine whether environmental flooding of Gunbower Forest, which is in Victoria, Australia, has an impact on groundwater level and salinity. To assess the impact, impulse response functions (IRFs) were applied to time-series groundwater monitoring well data in the area surrounding Gunbower Forest. It was found that rainfall is the primary driver of seasonal water table fluctuation, and the Murray River water level is a secondary contributor to the water table fluctuations. The dominant process influencing the long-term water table level and salinity conditions is associated with pressure changes in the deep regional aquifer. The study demonstrates that groundwater level fluctuations in the vicinity of Gunbower Forest do not correlate with flooding (natural or managed). Groundwater recharge was calculated by applying the bore hydrograph method to the rainfall-attributed forcing function fluctuations. Data collected from thirty-three bores between 1990 and 2020 were processed to determine a 30-year average groundwater recharge rate. A specific yield of 5% for the unconfined aquifer was assumed based on previously published data. The rainfall-attributed mean annual groundwater recharge varied between 2 mm/year and 189 mm/year, with a median of 33.6 mm/year. Surface water recharge was also calculated by analysing the surface-water-attributed forcing function fluctuations and was found to be as high as 37 mm/year, with most of the high values in the vicinity of rivers or agricultural land. There is a long-term regional aquifer declining trend, with most water table bores showing an average falling trend of 20 cm/year independent of rainfall over the past 30 years. The groundwater level beneath Gunbower Forest was found to be dominated by groundwater evapotranspiration. 
Evapotranspiration lowers the water table by as much as 0.5 m within the forest, thereby causing a relative groundwater level depression under Gunbower Forest. Historical data show that groundwater salinity in the area varies and has an electrical conductivity of up to 45,000 µS/cm (comparable to seawater). High groundwater salinity occurs both within and outside Gunbower Forest, as well as adjacent to the Murray River. Available groundwater salinity data suggest trends are generally stable; however, data quality and collection frequency could be improved. This study shows that at the majority of locations analyzed, groundwater recharge occurred due to both rainfall and water loss from the Murray River. Deep groundwater pressures were found to determine the base groundwater level, and the fluctuation of the deeper aquifer pressures determined the environmental interaction at the water surface. Local groundwater processes, such as high evapotranspiration rates in Gunbower Forest, have the capacity to lower the water table locally. The rise or fall of the regional aquifer water level has the greatest influence on the groundwater salinity in and around Gunbower Forest.
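The two calculations described in the abstract can be sketched briefly. The snippet below models the water-table response as the convolution of a forcing series with a decaying impulse response, and estimates recharge as specific yield times the rainfall-attributed water-table rise (the study assumes a 5% specific yield). The exponential IRF, its parameters, and the rainfall values are illustrative assumptions, not the study's calibrated functions or data.

```python
import numpy as np

def simulate_head(forcing, A=0.02, a=0.05):
    """Head response as a convolution of forcing with an exponential IRF.

    A is the gain and a the decay rate of the assumed impulse response;
    both are placeholders for a calibrated response function.
    """
    t = np.arange(len(forcing))
    irf = A * np.exp(-a * t)               # response to a unit impulse
    return np.convolve(forcing, irf)[:len(forcing)]

def recharge_mm(rise_mm, sy=0.05):
    """Bore-hydrograph estimate: recharge = specific yield x water-table rise."""
    return sy * rise_mm

rain = np.zeros(100)
rain[10] = 50.0                            # a single illustrative 50 mm event
head = simulate_head(rain)                 # head rises at t=10, then decays
print(round(recharge_mm(672), 1))          # a 672 mm rise gives 33.6 mm/yr
```

With these numbers, a rainfall-attributed rise of 672 mm reproduces the study's median recharge of 33.6 mm/year.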

Keywords: groundwater data interpretation, groundwater monitoring, hydrogeology, impulse response function

349 Genetic Polymorphism and in Silico Study of the Block 2 Epitope of the MSP1 Gene of Plasmodium falciparum Isolates Endemic to Jayapura

Authors: Arsyam Mawardi, Sony Suhandono, Azzania Fibriani, Fifi Fitriyah Masduki

Abstract:

Malaria is an infectious disease caused by Plasmodium sp. The disease has a high prevalence in Indonesia, especially in Jayapura. The vaccines currently being developed have not been effective in overcoming malaria. This is due to the high polymorphism in the Plasmodium genome, especially in regions that encode Plasmodium surface proteins. Merozoite Surface Protein 1 (MSP1) of Plasmodium falciparum is a surface protein that plays a role in the invasion of human erythrocytes through the interaction of the glycophorin A protein receptors and sialic acid on erythrocytes with the Reticulocyte Binding Protein (RBP) and Duffy Adhesion Protein (DAP) ligands on merozoites. MSP1 can be targeted as a specific antigen, with predicted epitope regions to be used for the development of malaria diagnostics and vaccine therapy. MSP1 consists of 17 blocks; each block is dimorphic and has been marked as the K1 and MAD20 alleles. The exception is block 2, which has 3 alleles: K1, MAD20, and RO33. These polymorphisms cause allelic variations and have implications for the severity of disease in patients infected with P. falciparum. In addition, MSP1 polymorphism in Jayapura isolates has not been reported, so it is interesting to identify it further and project it as a specific antigen. Therefore, in this study, we analyzed allele polymorphism and detected candidate epitope antigens in block 2 of MSP1 of P. falciparum. Clinical samples of malaria patients were selected following a consecutive sampling method, with malaria parasites examined in blood smears on glass slides observed through a microscope. Plasmodium DNA was isolated from the blood of malaria-positive patients. The block 2 region of the MSP1 gene was amplified using the PCR method, cloned using the pGEM-T Easy vector, and then transformed into E. coli TOP10. Positive colony selection was performed with blue-white screening. The presence of the target DNA was confirmed by colony PCR and DNA sequencing. 
Furthermore, DNA sequence analysis was done through alignment and the construction of a phylogenetic tree using MEGA 6 software, and in silico analysis was performed using the IEDB software to predict epitope candidates for P. falciparum. Plasmodium DNA was isolated from a total of 15 patient samples. PCR amplification yielded products of the expected target size of approximately 1049 bp. The results of the MSP1 nucleotide alignment analysis revealed that the block 2 MSP1 genes derived from the malaria patient samples were distributed in four different allele family groups: the K1 (7), MAD20 (1), RO33 (0), and MSP1_Jayapura (10) alleles. The most frequently detected allele was the MSP1_Jayapura allele. There was no significant association between the variables sex, age, or parasitemia density and allele variation (Mann-Whitney, p > 0.05), while symptomatic signs showed a significant difference as a trigger of detectable allele variation (p < 0.05). In this research, the in silico study shows that there is a new candidate epitope antigen from the MSP1_Jayapura allele, predicted to be recognized by B cells, with a length of 17 amino acids spanning the amino acid sequence from positions 187 to 203.
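The allele tallies above reduce to a simple frequency calculation. The sketch below reproduces the reported counts (10 MSP1_Jayapura, 7 K1, 1 MAD20, 0 RO33) as an illustration; it is not the study's analysis pipeline, which relied on MEGA 6 and IEDB.

```python
from collections import Counter

# Allele calls matching the counts reported in the abstract (18 typed alleles).
calls = ["MSP1_Jayapura"] * 10 + ["K1"] * 7 + ["MAD20"] * 1

def allele_frequencies(calls):
    """Relative frequency of each block 2 MSP1 allele family."""
    counts = Counter(calls)
    n = len(calls)
    return {allele: count / n for allele, count in counts.items()}

for allele, freq in sorted(allele_frequencies(calls).items()):
    print(f"{allele}: {freq:.2f}")
```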

Keywords: epitope candidate, in silico analysis, MSP1 P. falciparum, polymorphism

348 Improving the Budget Distribution Procedure to Ensure Smooth and Efficient Public Service Delivery

Authors: Rizwana Tabassum

Abstract:

Introductory Statement: Delay in budget releases is often cited as one of the biggest bottlenecks to smooth and efficient service delivery. While budget release from the Ministry of Finance to the line ministries has been expedited by simplifying the procedure, budget distribution within the line ministries remains one of the major causes of slow budget utilization. While budget preparation is a bottom-up process in which all drawing and disbursing officers (DDOs) submit their proposals to their controlling officers (for example, an Upazila Civil Surgeon sends the proposal to the Director General of Health), who consolidate the budget proposals in the iBAS++ budget preparation module, the approved budget is not disaggregated by DDO. Instead, it is left to the discretion of the controlling officers to distribute the approved budget to their subordinate offices over the course of the year. Though there are some need-based criteria/formulae for distributing the approved budget among DDOs in some sectors, there is little evidence that these criteria are actually used. This means that the majority of DDOs do not know their yearly allocations upfront, which would enable yearly planning of activities and expenditures. This delays the implementation of critical activities and the payment of suppliers of goods and services, and sometimes leads to undocumented arrears to suppliers of essential goods and services. In addition, social sector budgets are fragmented because of the vertical programs and externally financed interventions that pose several management challenges at the level of the budget holders and frontline service providers. Slow procurement processes further delay the provision of necessary goods and services. For example, it takes an average of 15-18 months for drugs to reach the Upazila Health Complex and below, while it should not take more than 9 months to procure and distribute them. Aim of the Study: This paper aims to investigate the budget distribution practices of an emerging economy, Bangladesh. 
The paper identifies the challenges of timely distribution, as well as ways to deal with these problems. Methodology: The study draws its conclusions on the basis of document analysis, a branch of the qualitative research method. Major Findings: Upon approval of the national budget, the Ministry of Finance is required to distribute the budget to budget holders at the department level; however, the budget is distributed to drawing and disbursing officers much later. Conclusions: Timely and predictable budget releases assist the completion of development schemes on time and on budget, with sufficient recurrent resources for effective operation. ADP implementation is usually very low at the beginning of the fiscal year and is expedited dramatically during the last few months, leading to inefficient use of resources. Timely budget release will resolve this issue and deliver economic benefits faster, better, and more reliably. It will also give the project directors and DDOs the freedom to think about and plan budget execution in a predictable manner, thereby ensuring value for money by reducing time overruns, expediting the completion of capital investments, and improving infrastructure utilization through the timely payment of recurrent costs.

Keywords: budget distribution, challenges, digitization, emerging economy, service delivery

Procedia PDF Downloads 56
347 Towards a Better Understanding of Planning for Urban Intensification: Case Study of Auckland, New Zealand

Authors: Wen Liu, Errol Haarhoff, Lee Beattie

Abstract:

In 2010, New Zealand’s central government re-organised local government arrangements in Auckland, New Zealand, by amalgamating the previous regional council and seven supporting local government units into a single unitary council, the Auckland Council. The Auckland Council is charged with providing local government services to approximately 1.5 million people (a third of New Zealand’s total population). This includes addressing Auckland’s strategic urban growth management and setting its urban planning policy directions for the next 40 years, as expressed in the first ever spatial plan for the region – the Auckland Plan (2012). The Auckland Plan supports a compact city model by concentrating the larger part of future urban growth and development in, and around, existing and proposed transit centres, with the intention that Auckland become a globally competitive city and achieve the status of ‘the most liveable city in the world’. Turning that vision into reality is operationalised through the statutory land use plan, the Auckland Unitary Plan. The Unitary Plan replaced the previous regional and local statutory plans when it became operative in 2016, becoming the ‘rule book’ on how to manage and develop the natural and built environment, using land use zones and zone standards. Across the broad literature on urban growth management, one significant issue stands out regarding intensification: the ‘gap’ between strategic planning and what has actually been achieved is evident in the argument for the ‘compact’ urban form. Although the compact city model may have a wide range of merits, the extent to which these are actualized relies largely on how intensification is actually delivered. The transformation of the rhetoric of residential intensification into reality is of profound importance, yet has received limited empirical analysis. In Auckland, the Auckland Plan set out strategies to deliver intensification across diversified arenas.
Nonetheless, planning policy by itself does not necessarily achieve the envisaged objectives; building a planning system with a high capacity to enhance and sustain plan implementation is another demanding agenda. Though the Auckland Plan provides a wide-ranging strategic context, its actual delivery depends on the Unitary Plan. However, questions have been asked as to whether the Unitary Plan has the necessary statutory tools to deliver the Auckland Plan’s policy outcomes. In Auckland, there is likely to be continuing tension between the strategies for intensification and their envisaged objectives, making it doubtful whether the main principles of the intensification strategies can be realized. This raises questions over whether the Auckland Plan’s policy goals, including a ‘quality compact city’ and residential intensification, can be achieved in practice. Taking Auckland as an example of a traditionally sprawling city, this article investigates the efficacy of plan making and implementation directed towards higher density development. It explores the process of plan development and the plan making and implementation frameworks of the first ever spatial plan in Auckland, so as to explicate the objectives and processes involved and to consider whether these will facilitate decision-making processes that realize the anticipated intensive urban development.

Keywords: urban intensification, sustainable development, plan making, governance and implementation

Procedia PDF Downloads 527
346 Raman Spectral Fingerprints of Healthy and Cancerous Human Colorectal Tissues

Authors: Maria Karnachoriti, Ellas Spyratou, Dimitrios Lykidis, Maria Lambropoulou, Yiannis S. Raptis, Ioannis Seimenis, Efstathios P. Efstathopoulos, Athanassios G. Kontos

Abstract:

Colorectal cancer is the third most common cancer diagnosed in Europe, according to the latest incidence data provided by the World Health Organization (WHO), and early diagnosis has proved to be key in reducing cancer-related mortality. In cases where surgical intervention is required for cancer treatment, accurate discrimination between healthy and cancerous tissues is critical for the postoperative care of the patient. The current study focuses on the ex vivo handling of surgically excised colorectal specimens and the acquisition of their spectral fingerprints using Raman spectroscopy. Acquired data were analyzed in an effort to discriminate, at microscopic scale, between healthy and malignant margins. Raman spectroscopy is a spectroscopic technique with high detection sensitivity and a spatial resolution of a few micrometers. The spectral fingerprint produced during laser-tissue interaction is unique and characterizes the biostructure and its inflammatory or cancerous state. Numerous published studies have demonstrated the potential of the technique as a tool for discriminating between healthy and malignant tissues and cells, either ex vivo or in vivo. However, the handling of excised human specimens and the Raman measurement conditions remain challenging, unavoidably affecting measurement reliability and repeatability, as well as the technique’s overall accuracy and sensitivity. Therefore, tissue handling has to be optimized and standardized to ensure preservation of cell integrity and hydration level. Various strategies have been implemented in the past, including the use of balanced salt solutions, small humidifiers, or pump-reservoir-pipette systems. In the current study, human colorectal specimens of 10 × 5 mm were collected from the 5 patients enrolled so far, who underwent open surgery for colorectal cancer. A novel, non-toxic zinc-based fixative (Z7) was used for tissue preservation.
Z7 demonstrates excellent protein preservation and protection against tissue autolysis. Micro-Raman spectra were recorded with a Renishaw inVia spectrometer from successive random 2-micrometer spots upon excitation at 785 nm to decrease the fluorescent background and avoid tissue photodegradation. A temperature-controlled approach was adopted to stabilize the tissue at 2 °C, thus minimizing dehydration effects and consequent focus drift during measurement. A broad spectral range, 500–3200 cm⁻¹, was covered with five consecutive full scans lasting 20 minutes in total. The averaged spectra were used for least-squares fitting analysis of the Raman modes. Subtle Raman differences were observed between normal and cancerous colorectal tissues, mainly in the intensities of the 1556 cm⁻¹ and 1628 cm⁻¹ Raman modes, which correspond to ν(C=C) vibrations in porphyrins, as well as in the range of 2800–3000 cm⁻¹ due to CH₂ stretching of lipids and CH₃ stretching of proteins. Raman spectra evaluation was supported by histological findings from twin specimens. This study demonstrates that Raman spectroscopy may constitute a promising tool for real-time verification of clear margins in colorectal cancer open surgery.
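The kind of band-intensity comparison described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors’ analysis pipeline: only the band position (1556 cm⁻¹) comes from the abstract, while the linear-baseline subtraction, the ±10 cm⁻¹ window, and the synthetic spectrum are illustrative assumptions.

```python
# Hedged sketch: baseline-subtracted intensity of a Raman band, as one might
# use to compare the 1556 cm^-1 porphyrin mode between tissue spectra.
# Assumptions (not from the paper): linear baseline drawn between the edges
# of a +/-10 cm^-1 window, and a synthetic spectrum for demonstration.

def peak_intensity(wavenumbers, intensities, center, half_window=10.0):
    """Height of the band at `center` above a linear baseline drawn
    between the edges of a +/-half_window interval around it."""
    # Collect and sort the (wavenumber, intensity) points inside the window.
    window = sorted(
        (w, i) for w, i in zip(wavenumbers, intensities)
        if center - half_window <= w <= center + half_window
    )
    (w_lo, i_lo), (w_hi, i_hi) = window[0], window[-1]
    slope = (i_hi - i_lo) / (w_hi - w_lo)
    # Maximum height above the straight line joining the window edges.
    return max(i - (i_lo + slope * (w - w_lo)) for w, i in window)

# Synthetic spectrum: sloped background plus a narrow band at 1556 cm^-1.
wn = [1540 + 2 * k for k in range(17)]          # 1540..1572 cm^-1, 2 cm^-1 step
counts = [5.0 + 0.01 * (w - 1540) + (3.0 if w == 1556 else 0.0) for w in wn]
height = peak_intensity(wn, counts, 1556.0)     # approx. 3.0 above baseline
```

In practice, the least-squares fitting the authors mention would replace this crude baseline estimate with a fit of Lorentzian or Gaussian band shapes to the averaged spectra.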

Keywords: colorectal cancer, Raman spectroscopy, malignant margins, spectral fingerprints

Procedia PDF Downloads 69
345 Preparedness of Health System in Providing Continuous Health Care: A Case Study From Sri Lanka

Authors: Samantha Ramachandra, Avanthi Rupasinghe

Abstract:

Demographic transition from a lower to a higher percentage of elderly population is eventually coupled with an epidemiological transition from communicable to non-communicable diseases (NCD). A higher percentage of NCD overloads the health system, as NCD survivors require continuous health care. These demands are challenging in a resource-constrained setting, but reorganizing the system may yield solutions. The study focused on the facilities available, and their utilization, at the outpatient department (OPD) setting of the public hospitals of Sri Lanka for continuous medical care. This will help in identifying steps for reorganizing the system to provide better care with maximum utilization of available facilities. The study was conducted as a situation analysis using secondary data from hospital planning units. Variables were identified according to the World Health Organization (WHO) recommendations on continuous health care for elders in the “age-friendly primary health care toolkit”. Data were collected from secondary and tertiary care hospitals of Sri Lanka, where most of the continuous care services are available. Out of 58 secondary and tertiary care hospitals, 16 were included in the study to represent each hospital category. The average number of patients attending for episodic treatment at the OPD and for clinical follow-up of chronic conditions shows vast disparity according to the category of the hospital, ranging from 3,750 down to 800 per day at the OPD and from 1,250 down to 200 per clinic session. The average time spent per person at an OPD session is low, ranging from 1.54 to 2.28 minutes, increasing as the hospital category goes down. 93.7% of hospitals had special arrangements for providing acute care for chronic conditions, such as catheter, feeding tube, and wound care. 25% of hospitals had special clinics for elders, 81.2% had healthy lifestyle clinics (HLC), 75% had physical rehabilitation facilities, and 68.8% had facilities for counselling.
Elderly clinics and HLC were mostly available at lower-grade hospitals, whereas rehabilitation and counselling facilities were mostly available at bigger hospitals. HLC provide health education for both patients and their family members and refer patients for screening of complications, but do not provide medical examinations, investigations, or treatments, even though they operate in the hospital setting. Physical rehabilitation is offered primarily for patients with rheumatological conditions, but utilization of the centers for injury rehabilitation and for rehabilitation of survivors of major illnesses such as myocardial infarction, stroke, and cancer is not satisfactory (12.5%). Human resource distribution across hospitals shows vast disparity: there are 103 physiotherapists in the biggest hospital, while only 36 physiotherapists are available at the next-level hospital. Counselling facilities are likewise provided mainly for patients with psychological conditions (100%) and not for newly diagnosed patients with major illnesses (0%). According to the results, most public-sector hospitals in Sri Lanka have the basic facilities required for providing continuous care, but the utilization of services needs more focus. Hospital administrations or the government need to take initial steps towards proper utilization of these facilities, improving continuous health care by incorporating a team approach to rehabilitation. The author wishes to acknowledge that this paper was made possible by the support and guidance given by the “Australia Awards Fellowships Program for Sri Lanka – 2017,” which was funded by the Department of Foreign Affairs and Trade, Australia, and co-hosted by Monash University, Australia, and the Sri Lanka Institute of Development Administration.

Keywords: continuous care, outpatient department, non-communicable diseases, rehabilitation

Procedia PDF Downloads 137