Search results for: affective states
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3241

91 Centrality and Patent Impact: Coupled Network Analysis of Artificial Intelligence Patents Based on Co-Cited Scientific Papers

Authors: Xingyu Gao, Qiang Wu, Yuanyuan Liu, Yue Yang

Abstract:

In the era of the knowledge economy, the relationship between scientific knowledge and patents has garnered significant attention. Understanding the intricate interplay between the foundations of science and technological innovation has emerged as a pivotal challenge for both researchers and policymakers. This study establishes a coupled network of artificial intelligence patents based on co-cited scientific papers. Leveraging centrality metrics from network analysis offers a fresh perspective on how information flow and knowledge sharing within the network affect patent impact. The study initially obtained patent numbers for 446,890 granted US AI patents from the United States Patent and Trademark Office’s artificial intelligence patent database for the years 2002-2020. Subsequently, specific information regarding these patents was acquired using the Lens patent retrieval platform. Additionally, a search and deduplication process was performed on scientific non-patent references (SNPRs) using the Web of Science database, resulting in the selection of 184,603 patents that cited 37,467 unique SNPRs. From these data, the study constructs a coupled network comprising 59,379 artificial intelligence patents by utilizing scientific papers co-cited in patent backward citations. In this network, nodes represent patents; if two patents reference the same scientific papers, a connection is established between them, serving as an edge. Nodes and edges collectively constitute the patent coupling network. Structural characteristics such as node degree centrality, betweenness centrality, and closeness centrality are employed to assess the scientific connections between patents, while citation count is utilized as a quantitative metric for patent influence. Finally, a negative binomial model is employed to test the nonlinear relationship between these network structural features and patent influence. The research findings indicate that node degree centrality, betweenness centrality, and closeness centrality all exhibit inverted U-shaped relationships with patent influence: as these centrality metrics increase, patent influence initially shows an upward trend, but once they reach a certain threshold, patent influence starts to decline. This discovery suggests that moderate network centrality is beneficial for enhancing patent influence, while excessively high centrality may have a detrimental effect. This finding offers crucial insights for policymakers, emphasizing the importance of encouraging moderate knowledge flow and sharing to promote innovation when formulating technology policies. It suggests that in certain situations, data sharing and integration can contribute to innovation. Consequently, policymakers can take measures to promote data-sharing policies, such as open data initiatives, to facilitate the flow of knowledge and the generation of innovation. Additionally, governments and relevant agencies can achieve broader knowledge dissemination by supporting collaborative research projects, adjusting intellectual property policies to enhance flexibility, or nurturing technology entrepreneurship ecosystems.
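
A minimal sketch of this pipeline, assuming hypothetical input files and column names (patent_id, snpr_id, forward_citations are illustrative, not the authors' actual schema), with networkx for the coupling network and statsmodels for the negative binomial model:

```python
# Sketch: build a patent coupling network from backward citations and test
# for an inverted-U relation between centrality and citation counts.
import itertools
import networkx as nx
import pandas as pd
import statsmodels.api as sm

cites = pd.read_csv("patent_snpr_citations.csv")   # columns: patent_id, snpr_id
impact = pd.read_csv("patent_citations.csv")       # columns: patent_id, forward_citations

# Two patents are coupled when they cite at least one common scientific paper.
G = nx.Graph()
for _, group in cites.groupby("snpr_id"):  # quadratic per paper; fine for a sketch
    for a, b in itertools.combinations(sorted(group["patent_id"].unique()), 2):
        G.add_edge(a, b)

feat = pd.DataFrame({
    "degree": pd.Series(nx.degree_centrality(G)),
    "betweenness": pd.Series(nx.betweenness_centrality(G)),  # slow on large graphs
    "closeness": pd.Series(nx.closeness_centrality(G)),
}).join(impact.set_index("patent_id"), how="inner")

# Negative binomial regression with quadratic terms: an inverted U shows up as
# a positive linear coefficient paired with a negative squared coefficient.
X = feat[["degree", "betweenness", "closeness"]]
X = sm.add_constant(pd.concat([X, X.pow(2).add_suffix("_sq")], axis=1))
model = sm.GLM(feat["forward_citations"], X,
               family=sm.families.NegativeBinomial()).fit()
print(model.summary())
```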

Keywords: centrality, patent coupling network, patent influence, social network analysis

Procedia PDF Downloads 54
90 Mathematics Professional Development: Uptake and Impacts on Classroom Practice

Authors: Karen Koellner, Nanette Seago, Jennifer Jacobs, Helen Garnier

Abstract:

Although studies of teacher professional development (PD) are prevalent, surprisingly most have produced only incremental shifts in teachers’ learning and their impact on students. There is a critical need to understand what teachers take up and use in their classroom practice after attending PD and why we often do not see greater changes in learning and practice. This paper is based on a mixed methods efficacy study of the Learning and Teaching Geometry (LTG) video-based mathematics professional development materials. The extent to which the materials produce a beneficial impact on teachers’ mathematics knowledge, classroom practices, and their students’ knowledge in the domain of geometry is considered through a group-randomized experimental design. Included is a close-up examination of a small group of teachers to better understand their interpretations of the workshops and their classroom uptake. The participants included 103 secondary mathematics teachers serving grades 6-12 from two US states in different regions. Randomization was conducted at the school level, with 23 schools and 49 teachers assigned to the treatment group and 18 schools and 54 teachers assigned to the comparison group. The case study examination included twelve treatment teachers. PD workshops for treatment teachers began in Summer 2016. Nine full days of professional development were offered, beginning with a one-week institute (Summer 2016) followed by four days of PD throughout the academic year. The same facilitator led all of the workshops, after completing a facilitator preparation process that included a multi-faceted assessment of fidelity. The overall impact of the LTG PD program was assessed from multiple sources: two teacher content assessments, two PD-embedded assessments, pre-post-post videotaped classroom observations, and student assessments. Additional data were collected from the case study teachers, including additional videotaped classroom observations and interviews. Repeated measures ANOVA analyses were used to detect patterns of change in the treatment teachers’ content knowledge before and after completion of the LTG PD, relative to the comparison group. No significant effects were found across the two groups of teachers on the two teacher content assessments. Teachers were rated on the quality of their mathematics instruction captured in videotaped classroom observations using the Math in Common Observation Protocol. On average, teachers who attended the LTG PD intervention improved their ability to engage students in mathematical reasoning and to provide accurate, coherent, and well-justified mathematical content. In addition, the LTG PD intervention and instruction that engaged students in mathematical practices both positively and significantly predicted greater student knowledge gains; teacher knowledge was not a significant predictor. Twelve treatment teachers self-selected to serve as case study teachers and provided additional videotapes in which they felt they were using something they had learned and experienced in the PD. Project staff analyzed the videos, compared them to previous videos, and interviewed the teachers regarding their uptake of the PD related to content knowledge, pedagogical knowledge, and resources used. The full paper will include the case study of Ana to illustrate the factors involved in what teachers take up and use from participating in the LTG PD.

Keywords: geometry, mathematics professional development, pedagogical content knowledge, teacher learning

Procedia PDF Downloads 125
89 EEG and DC-Potential Level Changes in the Elderly

Authors: Irina Deputat, Anatoly Gribanov, Yuliya Dzhos, Alexandra Nekhoroshkova, Tatyana Yemelianova, Irina Bolshevidtseva, Irina Deryabina, Yana Kereush, Larisa Startseva, Tatyana Bagretsova, Irina Ikonnikova

Abstract:

In the modern world, the number of elderly people is increasing, and preserving the functional capacity of the organism in old age has become very important. During aging, higher cortical functions such as sensation, perception, attention, memory, and ideation gradually decline. This decline is expressed in a reduced rate of information processing, a loss of working memory capacity, and a decreased ability to learn and retain new information. Promising directions in the study of neurophysiological parameters of aging are brain imaging methods: computer electroencephalography, neuroenergy mapping of the brain, and methods for studying neurodynamic brain processes. The aim of the research was to study features of brain aging in elderly people using the electroencephalogram (EEG) and the DC-potential level. We examined 130 people aged 55-74 years who had no psychiatric disorders or chronic conditions in a decompensation stage. The EEG was recorded with a 128-channel GES-300 system (USA) while participants sat at rest with their eyes closed for 3 minutes. For quantitative assessment of the EEG we used spectral analysis, examining the delta (0.5–3.5 Hz), theta (3.5–7.0 Hz), alpha-1 (7.0–11.0 Hz), alpha-2 (11.0–13.0 Hz), beta-1 (13.0–16.5 Hz), and beta-2 (16.5–20.0 Hz) ranges and estimating spectral power in each frequency range. The 12-channel hardware-software diagnostic complex ‘Neuroenergometr-KM’ was applied for registration, processing, and analysis of the brain's DC-potential level, which was registered in monopolar leads. It was revealed that with aging the EEG of elderly people shows higher spectral power in the delta (p < 0.01) and theta (p < 0.05) ranges, especially in frontal areas. The comparative analysis showed that elderly people aged 60-64 have higher spectral power in the alpha-2 range in the left frontal and central areas (p < 0.05), as well as higher beta-1 values in frontal and parieto-occipital areas (p < 0.05). Study of the distribution of the brain's DC-potential level revealed an increase in total energy consumption across the main areas of the brain. In frontal leads we registered the lowest values of the DC-potential level; this may indicate a decreased energy metabolism in this area and difficulties with executive functions. The comparative analysis of the potential difference across the main leads points to an uneven lateralization of brain functions in elderly people, and the potential difference between the right and left hemispheres indicates a prevalence of left-hemisphere activity. Thus, higher functional activity of the cerebral cortex is characteristic of people of early advanced age (60-64 years), which points to greater reserve capacity of the central nervous system. By age 70, changes occur in cerebral energy exchange and in the level of brain electrogenesis that reflect a deterioration of homeostatic self-regulation mechanisms and of the ongoing processing of perceptual data.
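
A minimal sketch of the band-power computation described above, in Python; the sampling rate and the synthetic signal are illustrative assumptions, not details from the study:

```python
# Sketch: spectral power per EEG frequency band via Welch's method.
import numpy as np
from scipy.signal import welch

FS = 250  # Hz, assumed sampling rate of the recording
BANDS = {"delta": (0.5, 3.5), "theta": (3.5, 7.0), "alpha1": (7.0, 11.0),
         "alpha2": (11.0, 13.0), "beta1": (13.0, 16.5), "beta2": (16.5, 20.0)}

def band_powers(signal, fs=FS):
    """Integrate the power spectral density over each frequency band."""
    freqs, psd = welch(signal, fs=fs, nperseg=4 * fs)
    df = freqs[1] - freqs[0]  # frequency resolution of the PSD estimate
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
            for name, (lo, hi) in BANDS.items()}

eeg = np.random.randn(3 * 60 * FS)  # 3 minutes of synthetic single-channel EEG
print(band_powers(eeg))
```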

Keywords: brain, DC-potential level, EEG, elderly people

Procedia PDF Downloads 486
88 Rigorous Photogrammetric Push-Broom Sensor Modeling for Lunar and Planetary Image Processing

Authors: Ahmed Elaksher, Islam Omar

Abstract:

Accurate geometric relation algorithms are imperative in Earth and planetary satellite and aerial image processing, particularly for high-resolution images used for topographic mapping. Most of these satellites carry push-broom sensors: optical scanners equipped with linear arrays of CCDs that have been deployed on most Earth observation satellites (EOSs). In addition, the Lunar Reconnaissance Orbiter Camera (LROC) is equipped with two push-broom Narrow Angle Cameras (NACs) that provide 0.5 meter-scale panchromatic images over a 5 km swath of the Moon. The HiRISE camera carried by the Mars Reconnaissance Orbiter (MRO) and the HRSC carried by Mars Express (MEX) are examples of push-broom sensors that produce images of the surface of Mars. Sensor models developed in photogrammetry relate image space coordinates in two or more images with the 3D coordinates of ground features. Rigorous sensor models use the actual interior and exterior orientation parameters of the camera, unlike approximate models. In this research, we develop a generic push-broom sensor model to process imagery acquired through linear array cameras and investigate its performance, advantages, and disadvantages in generating topographic models for the Earth, Mars, and the Moon. We also compare and contrast the utilization, effectiveness, and applicability of available photogrammetric techniques and softcopy packages with the developed model. We start by defining an image reference coordinate system to unify image coordinates from all three arrays. The transformation from an image coordinate system to the reference coordinate system involves a translation and three rotations. For any point within the linear array, its image reference coordinates, the coordinates of the exposure center of the array in the ground coordinate system at the imaging epoch (t), and the corresponding ground point coordinates are related through the collinearity condition, which states that all three points must lie on the same line. The rotation angles for each CCD array at the epoch t are defined and included in the transformation model. The exterior orientation parameters of an image line, i.e., the coordinates of the exposure station and the rotation angles, are computed by a polynomial interpolation function in time (t), where t is the time elapsed since a reference epoch at a given orbit position. Depending on the types of observations, coordinates and parameters may be treated as knowns or unknowns, and the unknown coefficients are determined in a bundle adjustment. The orientation process starts by extracting the sensor position, orientation, and raw images from the Planetary Data System (PDS). The parameters of each image line are then estimated and imported into the push-broom sensor model. We also define tie points between image pairs to aid the bundle adjustment, determine the refined camera parameters, and generate highly accurate topographic maps. The model was tested on different satellite images such as IKONOS, QuickBird, WorldView-2, and HiRISE. It was found that the accuracy of our model is comparable to that of commercial and open-source software, the computational efficiency of the developed model is high, the model can be used in different environments with various sensors, and the implementation process requires considerably less cost and effort.
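
As a sketch of the geometry described above (standard push-broom form; the polynomial order of the exterior orientation model is an assumption, not the authors' stated choice), the collinearity condition for a line imaged at epoch t can be written as:

```latex
% Collinearity condition for a push-broom image line acquired at epoch t.
% (X, Y, Z): ground point; (X_S(t), Y_S(t), Z_S(t)): exposure centre of the
% line; r_{ij}(t): rotation matrix elements from the epoch-t attitude angles;
% f: focal length. The along-track image coordinate is ~0 at the imaging line.
\begin{align}
  0 &= -f \, \frac{r_{11}(t)\,\Delta X + r_{12}(t)\,\Delta Y + r_{13}(t)\,\Delta Z}
                  {r_{31}(t)\,\Delta X + r_{32}(t)\,\Delta Y + r_{33}(t)\,\Delta Z}, &
  y &= -f \, \frac{r_{21}(t)\,\Delta X + r_{22}(t)\,\Delta Y + r_{23}(t)\,\Delta Z}
                  {r_{31}(t)\,\Delta X + r_{32}(t)\,\Delta Y + r_{33}(t)\,\Delta Z},
\end{align}
% with \Delta X = X - X_S(t), etc., and the exterior orientation interpolated
% as low-order polynomials in time, e.g.
%   X_S(t) = a_0 + a_1 t + a_2 t^2,   \omega(t) = c_0 + c_1 t + c_2 t^2, ...
% whose coefficients are the unknowns solved for in the bundle adjustment.
```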

Keywords: photogrammetry, push-broom sensors, IKONOS, HiRISE, collinearity condition

Procedia PDF Downloads 63
87 Analysis of Elastic-Plastic Deformation of Reinforced Concrete Shear-Wall Structures under Earthquake Excitations

Authors: Oleg Kabantsev, Karomatullo Umarov

Abstract:

The engineering analysis of earthquake consequences demonstrates significantly different levels of damage to load-bearing systems of different types. Buildings with reinforced concrete columns and separate shear walls receive the highest level of damage. Traditional methods for predicting damage under earthquake excitations do not answer the question of why reinforced concrete frames with shear-wall bearing systems are so vulnerable. Thus, studying the formation and accumulation of damage in reinforced concrete frame structures with shear walls requires new methods for assessing the stress-strain state, as well as new approaches to calculating the distribution of forces and stresses in the load-bearing system that account for the various mechanisms of elastic-plastic deformation of reinforced concrete columns and walls. The results of research into the processes of nonlinear deformation of structures up to destruction (collapse) make it possible to substantiate the limit-state characteristics of the various structures forming an earthquake-resistant load-bearing system. The research into elastic-plastic deformation processes of reinforced concrete frames with shear walls is carried out on the basis of experimentally established parameters of the limit deformations of concrete and reinforcement under dynamic excitation. Limit values of deformations are defined for the conditions under which local damage of the maximum permissible level forms in structures. The research is performed by numerical methods using ETABS software. The results indicate that under earthquake excitations, plastic deformations of various levels form in various groups of elements of the frame with the shear-wall load-bearing system. During the main period of seismic excitation, the shear-wall elements of the load-bearing system develop insignificant plastic deformations, significantly below the permissible level. At the same time, plastic deformations form in the columns but do not exceed the permissible value. At the final stage of seismic excitation, the level of plastic deformation in the shear walls reaches values corresponding to the plasticity coefficient of concrete, which is less than the maximum permissible value. This volume of plastic deformation leads to an increase in the overall deformation of the bearing system. With the shear walls deformed to this extent, plastic deformations exceeding the limit values develop in the concrete columns, which leads to their collapse. Based on the results presented in this study, it can be concluded that applying a seismic-force-reduction factor common to the whole load-bearing system does not correspond to the real conditions of damage formation and accumulation in the elements of the load-bearing system. Using a single seismic-force-reduction factor leads to errors in predicting the seismic resistance of reinforced concrete load-bearing systems. To provide the required level of seismic resistance in buildings with reinforced concrete columns and separate shear walls, it is necessary to use seismic-force-reduction factors differentiated by type of structural group.

Keywords: reinforced concrete structures, earthquake excitation, plasticity coefficients, seismic-force-reduction factor, nonlinear dynamic analysis

Procedia PDF Downloads 207
86 Expanding Access and Deepening Engagement: Building an Open Source Digital Platform for Restoration-Based STEM Education in the Largest Public-School System in the United States

Authors: Lauren B. Birney

Abstract:

This project focuses upon the expansion of the existing "Curriculum and Community Enterprise for the Restoration of New York Harbor in New York City Public Schools" (NSF EHR DRL 1440869, NSF EHR DRL 1839656, and NSF EHR DRL 1759006), recognized locally as "Curriculum and Community Enterprise for Restoration Science," or CCERS. CCERS is a comprehensive model of ecological restoration-based STEM education for urban public-school students. Following an accelerated rollout, CCERS is now being implemented in 120+ Title 1 funded NYC Department of Education middle schools, led by two cohorts of 250 teachers and serving more than 11,000 students in total. Initial results and baseline data suggest that the CCERS model, with the Billion Oyster Project (BOP) as its local restoration ecology-based STEM curriculum, is having profound impacts on students, teachers, school leaders, and the broader community of CCERS participants and stakeholders. Students and teachers report being receptive to the CCERS model and deeply engaged in the initial phase of curriculum development, citizen science data collection, and student-centered, problem-based STEM learning. The BOP CCERS Digital Platform will serve as the central technology hub for all research, data, data analysis, resources, materials, and student data, promoting global interactions between communities. The research conducted included qualitative and quantitative data analysis. We continue to work internally on making edits and changes to accommodate a dynamic society. The STEM Collaboratory NYC® at Pace University New York City continues to act as the prime institution for the BOP CCERS project, as it has since the project's inception in 2014. The project continues to strive to provide opportunities in STEM for underrepresented and underserved populations in New York City. The replicable model serves as an opportunity for other entities to create this type of collaboration within their own communities and to ignite a community to come together and address a notable issue. Providing opportunities for young students to engage in community initiatives allows for a more cohesive set of stakeholders, the ability for young people to network, and additional resources for students in need of additional support, resources, and structure. The project has planted more than 47 million oysters across 12 acres and 15 reef sites, with the help of more than 8,000 students and 10,000 volunteers. Additional enhancements and features of the BOP CCERS Digital Platform will continue over the next three years through funding provided by the National Science Foundation (NSF DRL EHR 1759006/1839656, Principal Investigator Dr. Lauren Birney, Professor, Pace University). Early results from the data indicate that the new version of the Platform is gaining traction both nationally and internationally among community stakeholders and constituents. This project continues to focus on new collaborative partners that will support underrepresented students in STEM education. The advanced Digital Platform will allow us to connect with other countries and networks on a larger global scale.

Keywords: STEM education, environmental restoration science, technology, citizen science

Procedia PDF Downloads 88
85 Automated Adaptations of Semantic User and Service Profile Representations by Learning the User Context

Authors: Nicole Merkle, Stefan Zander

Abstract:

Ambient Assisted Living (AAL) describes a technological and methodological stack (e.g., formal model-theoretic semantics, rule-based reasoning, and machine learning) for capturing different aspects of the behavior, activities, and characteristics of humans. Hence, a semantic representation of the user environment and its relevant elements is required in order to allow assistive agents to recognize situations and deduce appropriate actions. Furthermore, the user and his/her characteristics (e.g., physical and cognitive abilities, preferences) need to be represented with a high degree of expressiveness in order to allow software agents a precise evaluation of the user's context models. The correct interpretation of these context models highly depends on temporal and spatial circumstances as well as individual user preferences. In most AAL approaches, model representations of real-world situations represent the current state of a universe of discourse at a given point in time and neglect transitions between states. The AAL domain currently lacks approaches that address the dynamic adaptation of context-related representations. Semantic representations of relevant real-world excerpts (e.g., user activities) help cognitive, rule-based agents to reason and make decisions in order to assist users in appropriate tasks and situations. However, rules and reasoning on semantic models are not sufficient for handling uncertainty and fuzzy situations. A given situation can require different (re-)actions in order to achieve the best result with respect to the user and his/her needs. But what is the best result? To answer this question, we need to consider that every smart agent is required to achieve an objective, but this objective is mostly defined by domain experts, who can also fail in their estimation of what is desired by the user and what is not. Hence, a smart agent has to be able to learn from context history data and estimate or predict what is most likely in certain contexts. Furthermore, different agents with contrary objectives can cause collisions, as their actions influence the user's context and its constituting conditions in unintended or uncontrolled ways. We present an approach for dynamically updating a semantic model with respect to the current user context that allows flexibility of the software agents and enhances their conformance in order to improve the user experience. The presented approach adapts rules by learning from sensor evidence and user actions using probabilistic reasoning approaches, based on given expert knowledge. The semantic domain model consists basically of device, service, and user profile representations. In this paper, we present how this semantic domain model can be used to compute the probability of matching rules and actions. We apply this probability estimation to compare the current domain model representation with the computed one in order to adapt the formal semantic representation. Our approach aims at minimizing the likelihood of unintended interferences in order to eliminate conflicts and unpredictable side effects by updating pre-defined expert knowledge according to the most probable context representation. This enables agents to adapt to dynamic changes in the environment, which enhances the provision of adequate assistance and positively affects user satisfaction.
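
A minimal sketch of the probabilistic rule-matching idea, assuming illustrative rule names, priors, and likelihoods; the paper's actual profile ontology and learning machinery are not reproduced here:

```python
# Sketch: Bayesian update of the probability that an adaptation rule matches
# the current user context, given accumulated sensor evidence and feedback.
# Rule names, priors, and likelihoods below are illustrative assumptions.
from collections import defaultdict

class RuleBelief:
    """Tracks P(rule applies | evidence) with incremental Bayes updates."""
    def __init__(self, prior=0.5):
        self.p = prior

    def update(self, likelihood_if_applies, likelihood_otherwise):
        num = likelihood_if_applies * self.p
        den = num + likelihood_otherwise * (1.0 - self.p)
        self.p = num / den if den > 0 else self.p
        return self.p

beliefs = defaultdict(RuleBelief)
# Expert prior: "dim lights when user is asleep" usually applies at night.
beliefs["dim_lights_when_asleep"] = RuleBelief(prior=0.7)

# Sensor evidence: motion sensor quiet -> likely asleep (assumed likelihoods).
beliefs["dim_lights_when_asleep"].update(0.9, 0.2)
# User feedback: lights switched back on -> evidence against the rule.
beliefs["dim_lights_when_asleep"].update(0.1, 0.6)

# Only fire rules whose posterior stays above a confidence threshold.
for name, belief in beliefs.items():
    print(name, round(belief.p, 3), "fire" if belief.p > 0.6 else "hold")
```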

Keywords: ambient intelligence, machine learning, semantic web, software agents

Procedia PDF Downloads 282
84 Early Melt Season Variability of Fast Ice Degradation Due to Small Arctic Riverine Heat Fluxes

Authors: Grace E. Santella, Shawn G. Gallaher, Joseph P. Smith

Abstract:

In order to determine the importance of small-system riverine heat flux on regional landfast sea ice breakup, our study explores the annual spring freshet of the Sagavanirktok River from 2014-2019. Seasonal heat cycling ultimately serves as the driving mechanism behind the freshet; however, as an emerging area of study, the extent to which inland thermodynamics influence coastal tundra geomorphology and connected landfast sea ice has not been extensively investigated in relation to small-scale Arctic river systems. The Sagavanirktok River is a small-to-midsized river system that flows south-to-north on the Alaskan North Slope from the Brooks mountain range to the Beaufort Sea at Prudhoe Bay. Seasonal warming in the spring rapidly melts snow and ice in a northwards progression from the Brooks Range and transitional tundra highlands towards the coast and when coupled with seasonal precipitation, results in a pulsed freshet that propagates through the Sagavanirktok River. The concentrated presence of newly exposed vegetation in the transitional tundra region due to spring melting results in higher absorption of solar radiation due to a lower albedo relative to snow-covered tundra and/or landfast sea ice. This results in spring flood runoff that advances over impermeable early-season permafrost soils with elevated temperatures relative to landfast sea ice and sub-ice flow. We examine the extent to which interannual temporal variability influences the onset and magnitude of river discharge by analyzing field measurements from the United States Geological Survey (USGS) river and meteorological observation sites. Rapid influx of heat to the Arctic Ocean via riverine systems results in a noticeable decay of landfast sea ice independent of ice breakup seaward of the shear zone. Utilizing MODIS imagery from NASA’s Terra satellite, interannual variability of river discharge is visualized, allowing for optical validation that the discharge flow is interacting with landfast sea ice. Thermal erosion experienced by sediment fast ice at the arrival of warm overflow preconditions the ice regime for rapid thawing. We investigate the extent to which interannual heat flux from the Sagavanirktok River’s freshet significantly influences the onset of local landfast sea ice breakup. The early-season warming of atmospheric temperatures is evidenced by the presence of storms which introduce liquid, rather than frozen, precipitation into the system. The resultant decreased albedo of the transitional tundra supports the positive relationship between early-season precipitation events, inland thermodynamic cycling, and degradation of landfast sea ice. Early removal of landfast sea ice increases coastal erosion in these regions and has implications for coastline geomorphology which stress industrial, ecological, and humanitarian infrastructure.

Keywords: albedo, freshet, landfast sea ice, riverine heat flux, seasonal heat cycling

Procedia PDF Downloads 131
83 Forming-Free Resistive Switching Effect in ZnₓTiᵧHfzOᵢ Nanocomposite Thin Films for Neuromorphic Systems Manufacturing

Authors: Vladimir Smirnov, Roman Tominov, Vadim Avilov, Oleg Ageev

Abstract:

The creation of a new generation of micro- and nanoelectronic elements opens up broad possibilities for improving the parameters of electronic devices, as well as for developing neuromorphic computing systems. Interest in the latter grows every year, which is explained by the need to solve problems related to the unstructured classification of data, the construction of self-adaptive systems, and pattern recognition. However, its technical implementation requires fulfilling a number of conditions on the basic parameters of the electronic memory, such as non-volatility, multi-bit capability, high integration density, and low power consumption. Several types of memory are available in the electronics industry (MRAM, FeRAM, PRAM, ReRAM), among which non-volatile resistive memory (ReRAM) stands out due to its multi-bit capability, which is necessary for manufacturing neuromorphic systems. ReRAM is based on the resistive switching effect: a change in the resistance of an oxide film between a low-resistance state (LRS) and a high-resistance state (HRS) under an applied electric field. One method for the technical implementation of neuromorphic systems is cross-bar structures, in which ReRAM cells are interconnected by crossed data buses. Such a structure imitates the architecture of the biological brain, which contains low-power computing elements (neurons) connected by special channels (synapses). The choice of oxide film material for the ReRAM is an important task that determines the characteristics of the future neuromorphic system. An analysis of the literature showed that many metal oxides (TiO₂, ZnO, NiO, ZrO₂, HfO₂) exhibit a resistive switching effect. It is worth noting that manufacturing nanocomposites based on these materials makes it possible to combine the advantages and mitigate the disadvantages of each material. Therefore, the ZnₓTiᵧHfzOᵢ nanocomposite was chosen as the basis for manufacturing the neuromorphic structures. It is also worth noting that the ZnₓTiᵧHfzOᵢ nanocomposite does not require electroforming, a process that degrades the parameters of formed ReRAM elements. Currently, this material is not well studied; therefore, the study of the resistive switching effect in the forming-free ZnₓTiᵧHfzOᵢ nanocomposite is an important task and the goal of this work. A forming-free ZnₓTiᵧHfzOᵢ nanocomposite thin film was grown by pulsed laser deposition (Pioneer 180, Neocera Co., USA) on a SiO₂/TiN (40 nm) substrate. Electrical measurements were carried out using a semiconductor characterization system (Keithley 4200-SCS, USA) with W probes. During the measurements, the TiN film was grounded. Analysis of the obtained current-voltage characteristics showed resistive switching from HRS to LRS at +1.87±0.12 V and from LRS to HRS at -2.71±0.28 V. Endurance testing over 100 measurements showed an HRS of 283.21±32.12 kΩ and an LRS of 1.32±0.21 kΩ, giving an HRS/LRS ratio of about 214.55 at a reading voltage of 0.6 V. The results can be useful for applying forming-free ZnₓTiᵧHfzOᵢ nanocomposite films in the manufacture of neuromorphic systems. This work was supported by RFBR, research project № 19-29-03041 mk. The results were obtained using the equipment of the Research and Education Center «Nanotechnologies» of Southern Federal University.

Keywords: nanotechnology, nanocomposites, neuromorphic systems, RRAM, pulsed laser deposition, resistive switching effect

Procedia PDF Downloads 132
82 Wind Resource Classification and Feasibility of Distributed Generation for Rural Community Utilization in North Central Nigeria

Authors: O. D. Ohijeagbon, Oluseyi O. Ajayi, M. Ogbonnaya, Ahmeh Attabo

Abstract:

This study analyzed the electricity generation potential from wind at seven sites spread across seven states of the North-Central region of Nigeria. Twenty-one years (1987 to 2007) of wind speed data at a height of 10 m were obtained from the Nigeria Meteorological Department, Oshodi. The data were subjected to different statistical tests and also compared with the two-parameter Weibull probability density function. The outcome shows that monthly average wind speeds ranged between 2.2 m/s in November for Bida and 10.1 m/s in December for Jos. Yearly averages ranged between 2.1 m/s in 1987 for Bida and 11.8 m/s in 2002 for Jos. The power density for each site was determined to range between 29.66 W/m² for Bida and 864.96 W/m² for Jos. The two parameters (k and c) of the Weibull distribution were found to range between 2.3 in Lokoja and 6.5 in Jos for k, while c ranged between 2.9 m/s in Bida and 9.9 m/s in Jos. These outcomes point to the fact that wind speeds at Jos, Minna, Ilorin, Makurdi, and Abuja are compatible with the cut-in speeds of modern wind turbines and hence may be economically feasible for wind-to-electricity generation at and above a height of 10 m. The study further assessed the potential and economic viability of standalone wind generation systems for off-grid rural communities located at each of the studied sites. A specific electric load profile was developed to suit hypothetical communities, each consisting of 200 homes, a school, and a community health center. An assessment of the design that will optimally meet the daily load demand with a loss of load probability (LOLP) of 0.01 was performed, considering two stand-alone applications: wind and diesel. The diesel standalone system (DSS) was taken as the basis of comparison, since the experimental locations have no connection to a distribution network. The HOMER® optimization software was utilized to determine the combination of system components that yields the lowest life cycle cost. Following the analysis for rural community utilization, a distributed generation (DG) analysis was carried out for each site, considering the possibility of generating wind power in the MW range in order to take advantage of Nigeria's tariff regime for embedded generation. The DG design incorporated each community of 200 homes, catered for free of charge from the excess electrical energy generated above the minimum requirement, with the remainder available for sale to a nearby distribution grid. Wind DG systems were found suitable and viable for producing environmentally friendly energy, in terms of life cycle cost and the levelised cost of producing energy, at Jos ($0.14/kWh), Minna ($0.12/kWh), Ilorin ($0.09/kWh), Makurdi ($0.09/kWh), and Abuja ($0.04/kWh) at a particular turbine hub height. These outputs reveal the value retrievable from the project after the breakeven point as a function of energy consumed. Based on the results, the study demonstrated that including renewable energy in the rural development plan will enhance the rapid upgrading of rural communities.
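
A minimal sketch of the Weibull fitting and power density calculation used in such assessments; the wind speed series below is synthetic, standing in for the station records used in the study:

```python
# Sketch: two-parameter Weibull fit and mean wind power density for one site.
import numpy as np
from scipy import stats
from scipy.special import gamma

rho = 1.225  # air density in kg/m^3 (standard sea-level value, assumed)
v = np.random.weibull(2.3, 10_000) * 6.0  # synthetic 10 m wind speeds, m/s

# Fix the location parameter at zero to recover the standard two-parameter form.
k, _, c = stats.weibull_min.fit(v, floc=0)
print(f"shape k = {k:.2f}, scale c = {c:.2f} m/s")

# Mean power density from the Weibull moments: P/A = 0.5 * rho * E[v^3],
# where E[v^3] = c^3 * Gamma(1 + 3/k).
power_density = 0.5 * rho * c**3 * gamma(1 + 3.0 / k)
print(f"mean power density = {power_density:.1f} W/m^2")
```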

Keywords: wind speed, wind power, distributed generation, cost per kilowatt-hour, clean energy, North-Central Nigeria

Procedia PDF Downloads 514
81 Decreased Tricarboxylic Acid (TCA) Cycle Activity in Staphylococcus aureus Increases Survival against Innate Immunity

Authors: Trenten Theis, Trevor Daubert, Kennedy Kluthe, Austin Nuxoll

Abstract:

Staphylococcus aureus is a gram-positive bacterium responsible for an estimated 23,000 deaths in the United States and 25,000 deaths in the European Union annually. Recurring S. aureus bacteremia is associated with biofilm-mediated infections and can occur in 5-20% of cases, even with the use of antibiotics. Despite being caused by drug-susceptible pathogens, these infections are surprisingly difficult to eradicate. One potential explanation is the presence of persister cells, a dormant cell type that shows high tolerance to antibiotic treatment. Recent studies have shown a connection between low intracellular ATP and persister cell formation; specifically, this decrease in ATP, and therefore increase in persister cell formation, is due to an interrupted tricarboxylic acid (TCA) cycle. However, the role of S. aureus persister cells in pathogenesis remains unclear. Initial studies have shown that a fumC (TCA cycle gene) knockout survives challenge from aspects of the innate immune system better than wild-type S. aureus. Specifically, challenges with two antimicrobial peptides, LL-37 and hBD-3, show a log increase in survival of the fumC::N∑ strain compared to wild-type S. aureus after 18 hours. Furthermore, preliminary studies show that the fumC knockout has a log more survival within a macrophage. These data lead us to hypothesize that the fumC knockout withstands other aspects of the innate immune system better than wild-type S. aureus. To further investigate the mechanism for the increased survival of fumC::N∑ within a macrophage, we tested bacterial growth in the presence of reactive oxygen species (ROS), reactive nitrogen species (RNS), and low pH. Preliminary results suggest that the fumC knockout has increased growth compared to wild-type S. aureus in the presence of all three antimicrobial factors combined; however, no difference was observed with any single factor alone. To investigate survival within a host, a nine-day biofilm-associated catheter infection was performed on 6-8-week-old male and female C57BL/6 mice. Although both sexes struggled to clear the infection, female mice trended toward more frequently clearing the HG003 wild-type infection compared to the fumC::N∑ infection. One possible reason for the inability to reduce the bacterial burden is that biofilms are largely composed of persister cells. To test this hypothesis further, flow cytometry in conjunction with a persister cell marker was used to measure persister cells within a biofilm. Expression of Cap5A (a known persister cell marker) was found to increase in a maturing biofilm, with the lowest expression seen in immature biofilms and the highest exhibited by the 48-hour biofilm. Additionally, bacterial cells in a biofilm state closely resemble persister cells and exhibit reduced membrane potential compared to cells in planktonic culture, further suggesting that biofilms are largely made up of persister cells. These data may provide an explanation as to why infections caused by antibiotic-susceptible strains remain difficult to treat.

Keywords: antibiotic tolerance, Staphylococcus aureus, host-pathogen interactions, microbial pathogenesis

Procedia PDF Downloads 180
80 The Analysis of Noise Harmfulness in Public Utility Facilities

Authors: Monika Sobolewska, Aleksandra Majchrzak, Bartlomiej Chojnacki, Katarzyna Baruch, Adam Pilch

Abstract:

The main purpose of the study is to perform measurements and analysis of noise harmfulness in public utility facilities. The World Health Organization reports that the number of people suffering from hearing impairment is constantly increasing; most alarming is the number of young people appearing in these statistics. The majority of scientific research in the field of hearing protection and noise prevention concerns industrial and road traffic noise as the source of health problems, and as a result, corresponding standards and regulations defining noise level limits are enforced. However, another field remains uncovered by thorough research: leisure time. Public utility facilities such as clubs, shopping malls, sports facilities, and concert halls all generate high-level noise yet remain outside proper legal control. Among European Union Member States, the highest legislative act concerning noise prevention is the Environmental Noise Directive 2002/49/EC. However, it omits the problem discussed above, and even for traffic, railway, and aircraft noise it does not set limits or target values, leaving these issues to the discretion of the Member State authorities. Without explicit and uniform regulations, noise level control at places designed for relaxation and entertainment is often the responsibility of people with little knowledge of hearing protection, unaware of the risk that noise pollution poses. Exposure to high sound levels in clubs, cinemas, and at concerts and sports events may result in progressive hearing loss, especially among young people, the main target group of such facilities and events. The first step toward changing this situation and raising general awareness is to perform reliable measurements whose results will emphasize the significance of the problem. This project presents the results of more than a hundred measurements performed in most types of public utility facilities in Poland. As the most suitable measuring instruments for such research, personal noise dosimeters were used to collect the data. Each measurement is presented in the form of numerical results, including equivalent and peak sound pressure levels, and a detailed description covering the type of sound source, the size and furnishing of the room, and a subjective evaluation of the sound level. In the absence of a direct reference point for interpreting the data, the limits specified in EU Directive 2003/10/EC, which set maximum sound level values for workers in relation to the length of their working time, were used for comparison. The analysis of the examined problem leads to the conclusion that during leisure time, people are exposed to noise levels significantly exceeding safe values. As hearing problems progress gradually, most people underplay the problem, ignoring the first symptoms. Therefore, an effort has to be made to specify noise regulations for public utility facilities. Without action, in the foreseeable future the majority of Europeans will be dealing with serious hearing damage, which will have a negative impact on whole societies.
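
For reference, the equivalent continuous sound level that such dosimeter measurements yield, and against which the Directive's limits are compared, is the energy average of the sampled levels; a minimal sketch follows, with illustrative sample values:

```python
# Sketch: equivalent continuous sound level (LAeq) from dosimeter samples.
import numpy as np

levels_db = np.array([78.0, 85.5, 92.0, 101.3, 96.7])  # assumed 1 s LAeq samples
# Energy-average the levels: LAeq = 10 * log10(mean of 10^(L/10)).
laeq = 10 * np.log10(np.mean(10 ** (levels_db / 10)))
print(f"LAeq = {laeq:.1f} dB(A)")
```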

Keywords: hearing protection, noise level limits, noise prevention, noise regulations, public utility facilities

Procedia PDF Downloads 225
79 On the Bias and Predictability of Asylum Cases

Authors: Panagiota Katsikouli, William Hamilton Byrne, Thomas Gammeltoft-Hansen, Tijs Slaats

Abstract:

An individual who demonstrates a well-founded fear of persecution or faces a real risk of being subjected to torture is eligible for asylum. In Danish law, the exact legal thresholds reflect those established by international conventions, notably the 1951 Refugee Convention and the 1950 European Convention on Human Rights. These international treaties, however, remain largely silent when it comes to how states should assess asylum claims. As a result, national authorities are typically left to determine an individual’s legal eligibility on a narrow basis consisting of an oral testimony, which may itself be hampered by several factors, including imprecise language interpretation, insecurity, or applicants' lack of trust in the authorities. This shaky ground, on which authorities must base their subjective perceptions of asylum applicants' credibility, raises the question of whether adjudicators make the correct decision in all cases. Moreover, the subjective element in these assessments raises the question of whether individual asylum cases could be afflicted by implicit biases or stereotyping among adjudicators. In fact, recent studies have uncovered significant correlations between decision outcomes and the experience and gender of the assigned judge, as well as correlations between asylum outcomes and entirely external events such as weather and political elections. In this study, we analyze a publicly available dataset containing approximately 8,000 summaries of asylum cases that were initially rejected and then re-tried by the Refugee Appeals Board (RAB) in Denmark. First, we look for variations in the recognition rates with regard to a number of applicant features: country of origin/nationality, identified gender, identified religion, ethnicity, whether torture was mentioned in the case and, if so, whether the claim was supported or not, and the year the applicant entered Denmark. In order to extract those features from the text summaries, as well as the final decision of the RAB, we applied natural language processing and regular expressions, adjusted for the Danish language. We observed interesting variations in recognition rates related to the applicants' country of origin, ethnicity, year of entry, and the support or not of torture claims, whenever such claims were made. The appearance (or not) of significant variations in the recognition rates does not necessarily imply (or rule out) bias in the decision-making process. None of the considered features, with the possible exception of the torture claims, should be decisive factors for an asylum seeker's fate. We therefore investigate whether the decision can be predicted on the basis of these features and, consequently, whether biases are likely to exist in the decision-making process. We employed a number of machine learning classifiers and found that when using the applicant's country of origin, religion, ethnicity, and year of entry with a random forest classifier or a decision tree, the prediction accuracy is as high as 82% and 85%, respectively, suggesting that these features have potentially predictive properties with regard to the outcome of an asylum case. Our analysis and findings call for further investigation of the predictability of the outcome on a larger dataset of 17,000 cases, which is under way.
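
A minimal sketch of the feature extraction and prediction step; the regex patterns, file name, and column names are simplified stand-ins for the Danish-language rules used in the study, and the dataset itself is not reproduced here:

```python
# Sketch: regex feature extraction from case summaries + outcome prediction.
import re
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

cases = pd.read_csv("rab_case_summaries.csv")  # columns: summary, outcome

def extract_features(text):
    """Toy stand-ins for the study's Danish-language extraction rules."""
    return {
        "torture_mentioned": int(bool(re.search(r"\btortur", text, re.I))),
        "entry_year": int(m.group()) if (m := re.search(r"\b(19|20)\d{2}\b", text)) else 0,
    }

X = pd.DataFrame([extract_features(t) for t in cases["summary"]])
y = (cases["outcome"] == "granted").astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # mean accuracy across folds
```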

Keywords: asylum adjudications, automated decision-making, machine learning, text mining

Procedia PDF Downloads 96
78 Quantifying Firm-Level Environmental Innovation Performance: Determining the Sustainability Value of Patent Portfolios

Authors: Maximilian Elsen, Frank Tietze

Abstract:

The development and diffusion of green technologies are crucial for achieving our ambitious climate targets. The Paris Agreement commits its members to develop strategies for achieving net zero greenhouse gas emissions by the second half of the century. Governments, executives, and academics are working on net-zero strategies, and the business of rating organisations on their environmental, social, and governance (ESG) performance has grown tremendously, as has public interest in it. ESG data is now commonly integrated into traditional investment analysis and is an important factor in investment decisions. Creating these metrics, however, is inherently challenging, as environmental and social impacts are hard to measure and uniform requirements for ESG reporting are lacking. ESG metrics are often incomplete and inconsistent, as they lack fully accepted reporting standards and are often of a qualitative nature. This study explores the use of patent data for assessing the environmental performance of companies by focusing on their patented inventions in the space of climate change mitigation and adaptation technologies (CCMAT). The present study builds on the successful identification of CCMAT patents. In this context, the study adopts the Y02 patent classification, a fully cross-sectional tagging scheme incorporated in the Cooperative Patent Classification (CPC), to identify CCMAT patents. The Y02 classification was jointly developed by the European Patent Office (EPO) and the United States Patent and Trademark Office (USPTO) and provides the means to examine technologies in the field of mitigation and adaptation to climate change across relevant technologies. This paper develops sustainability-related metrics for firm-level patent portfolios. We do so by adopting a three-step approach. First, we identify relevant CCMAT patents based on their classification as Y02 CPC patents. Second, we examine the technological strength of the identified CCMAT patents by including more traditional metrics from the field of patent analytics while considering their relevance in the space of CCMAT. Such metrics include, among others, the number of forward citations a patent receives, the backward citations, and the size of the focal patent family. Third, we conduct our analysis at the firm level, by sector, for a sample of companies from different industries and compare the derived sustainability performance metrics with the firms’ environmental and financial performance based on carbon emissions and revenue data. The main outcome of this research is the development of sustainability-related metrics for firm-level environmental performance based on patent data. This research has the potential to complement existing ESG metrics from an innovation perspective by focusing on the environmental performance of companies and putting it into perspective against conventional financial performance metrics. We further provide insights into the environmental performance of companies at the sector level. This study has implications of both an academic and a practical nature. Academically, it contributes to the research on eco-innovation and to the literature on innovation and intellectual property (IP). Practically, the study has implications for policymakers by deriving meaningful insights into environmental performance from an innovation and IP perspective. Such metrics are also relevant for investors and can potentially complement existing ESG data.
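
A minimal sketch of the three-step approach on a flat patent table; the column names (cpc_codes, forward_citations, backward_citations, family_size, assignee) are assumptions about the data layout, not the authors' actual schema:

```python
# Sketch: firm-level CCMAT patent metrics from patent data.
import pandas as pd

patents = pd.read_csv("patents.csv")

# Step 1: keep patents tagged with the Y02 CPC scheme (CCMAT patents).
ccmat = patents[patents["cpc_codes"].str.contains(r"\bY02", na=False)].copy()

# Step 2: simple technological-strength score from citation-based metrics
# (z-scores averaged; the weighting is an illustrative choice).
for col in ["forward_citations", "backward_citations", "family_size"]:
    ccmat[col + "_z"] = (ccmat[col] - ccmat[col].mean()) / ccmat[col].std()
ccmat["strength"] = ccmat[[c for c in ccmat if c.endswith("_z")]].mean(axis=1)

# Step 3: aggregate to the firm level for comparison with ESG/financial data.
firm_scores = ccmat.groupby("assignee").agg(
    ccmat_patents=("strength", "size"),
    mean_strength=("strength", "mean"),
).sort_values("mean_strength", ascending=False)
print(firm_scores.head())
```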

Keywords: climate change mitigation, innovation, patent portfolios, sustainability

Procedia PDF Downloads 85
77 Exploring Empathy Through Patients’ Eyes: A Thematic Narrative Analysis of Patient Narratives in the UK

Authors: Qudsiya Baig

Abstract:

Empathy yields an unparalleled therapeutic value within patient-physician interactions. Medical research is inundated with evidence that a physician’s ability to empathise with patients leads to a greater willingness to report symptoms, an improvement in diagnostic accuracy and safety, and better adherence and satisfaction with treatment plans. Furthermore, the Institute of Medicine states that empathy leads to more patient-centred care, which is one of the six main goals of a 21st-century health system. However, there is a paradox between the theoretical significance of empathy and its presence, or lack thereof, in clinical practice. Recent studies have reported that empathy declines among students and physicians over time. The three most impactful contributors to this decline are: (1) disagreements over the definition of empathy, making it difficult to implement in practice; (2) poor consideration or regulation of empathy, leading to burnout and thus abandonment altogether; and (3) the lack of diversity in the curriculum and the influence of medical culture, which prioritises science over patient experience and leads some physicians to limit their use of 'too much' empathy for fear of losing clinical objectivity. These issues were investigated by conducting a fully inductive thematic narrative analysis of patient narratives in the UK to evaluate the behaviours and attitudes that patients associate with empathy. The principal enquiries underpinning this study included uncovering the factors that affected the experience of empathy within provider-patient interactions and analysing their effects on patient care. This research contributes uniquely to this discourse by examining the phenomenon of empathy directly from patients' experiences, which were systematically extracted from 'CareOpinion UK', an online repository of patient narratives of care. Narrative analysis was specifically chosen as the methodology in order to examine narratives through a phenomenological lens, focusing on the particularity and context of each story. By enquiring beyond the superficial who-what-where, the study of narratives ascribed meaning to illness by highlighting the everyday reality of patients who face the exigent life circumstances created by suffering, disability, and the threat to life. The following six themes were found to be the most impactful in influencing the experience of empathy: dismissive behaviours, judgmental attitudes, undermining of patients' pain or concerns, holistic care, and failures and successes of communication or language. For each theme there were overarching themes relating either to a failure to understand the patient's perspective or to a success in taking a person-centred approach. An in-depth analysis revealed that a lack of empathy was strongly associated with an emotive-cognitive imbalance that disengaged physicians from their patients' emotions. This study therefore concludes that competent providers require a combination of knowledge, skills, and, more importantly, empathic attitudes to help create a context for effective care. The crucial elements of that context involve (a) identifying empathy cues within interactions in order to engage with patients' situations, (b) attributing a perspective to the patient through perspective-taking, and (c) adapting behaviour and communication to the patient's individual needs. Empathy underpins that context, as does an appreciation of narrative, and the two are interrelated.

Keywords: empathy, narratives, person-centred, perspective, perspective-taking

Procedia PDF Downloads 138
76 Quantum Chemical Prediction of Standard Formation Enthalpies of Uranyl Nitrates and Its Degradation Products

Authors: Mohamad Saab, Florent Real, Francois Virot, Laurent Cantrel, Valerie Vallet

Abstract:

All spent nuclear fuel reprocessing plants use the PUREX process (Plutonium Uranium Refining by Extraction), a liquid-liquid extraction method. The organic extracting solvent is a mixture of tri-n-butyl phosphate (TBP) and a hydrocarbon diluent such as hydrogenated tetra-propylene (TPH). By chemical complexation, uranium and plutonium (from spent fuel dissolved in nitric acid solution) are separated from fission products and minor actinides. During normal extraction operation, uranium is extracted into the organic phase as the UO₂(NO₃)₂(TBP)₂ complex. The TBP solvent can form an explosive mixture, called red oil, when it comes into contact with nitric acid. This unstable organic phase originates from the reaction between TBP and its degradation products on the one hand, and nitric acid, its derivatives, and heavy metal nitrate complexes on the other. The decomposition of red oil can lead to violent explosive thermal runaway. These hazards are at the origin of several accidents, such as the two in the United States in 1953 and 1975 (Savannah River) and, more recently, the one in Russia in 1993 (Tomsk). This raises the question of the exothermicity of reactions that involve TBP and all other degradation products and calls for better knowledge of the underlying chemical phenomena. A simulation tool (Alambic) is currently being developed at IRSN that integrates thermal and kinetic functions related to the degradation of uranyl nitrates in organic and aqueous phases, but not of the n-butyl phosphates. To include them in the modeling scheme, there is an urgent need for the thermodynamic and kinetic functions governing the degradation processes in the liquid phase. However, little is known about the thermodynamic properties, such as standard enthalpies of formation, of the n-butyl phosphate molecules and of the UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP), and UO₂(NO₃)₂(HDBP)₂ complexes. In this work, we propose to estimate these thermodynamic properties with quantum chemical methods (QM). In the first part of our project, we focused on the mono-, di-, and tri-butyl complexes. Quantum chemical calculations have been performed to study several reactions leading to the formation of mono- (H₂MBP), di- (HDBP), and tri-butyl phosphate (TBP) in the gas and liquid phases. In the gas phase, the structures of all species were optimized using the B3LYP density functional with triple-ζ def2-TZVP basis sets for all atoms, and the corresponding harmonic frequencies were used without scaling to compute the vibrational partition functions at 298.15 K and 0.1 MPa. Accurate single-point energies were calculated using the efficient localized LCCSD(T) method extrapolated to the complete basis set limit. Whenever species in the liquid phase are considered, solvent effects are included with the COSMO-RS continuum model. The standard enthalpies of formation of TBP, HDBP, and H₂MBP are finally predicted with an uncertainty of about 15 kJ mol⁻¹. In the second part of the project, we investigated the fundamental properties of the three organic species that contribute most to the thermal runaway: UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP), and UO₂(NO₃)₂(HDBP)₂, using the same quantum chemical methods that were used for TBP and its derivatives in both the gas and the liquid phase. We will discuss the structures and thermodynamic properties of all these species.
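
As a sketch of the working equation behind such predictions (the choice of reference reaction is an illustrative assumption; the abstract does not specify the reaction scheme), the standard enthalpy of formation follows from a computed reaction enthalpy and tabulated reference values:

```latex
% Sketch: standard enthalpy of formation of a target species (here TBP)
% obtained from a computed reaction enthalpy and tabulated reference values.
% Stoichiometric coefficients \nu_i are positive for products, negative for
% reactants; the target is taken as a product with \nu = +1.
\begin{align}
  \Delta_r H^\circ_{298}
    &= \sum_i \nu_i \, \Delta_f H^\circ_{298}(i) \\
  \Rightarrow\; \Delta_f H^\circ_{298}(\mathrm{TBP})
    &= \Delta_r H^\circ_{298}\bigl[\text{LCCSD(T)/CBS} + \text{thermal corrections}\bigr]
     - \sum_{i \neq \mathrm{TBP}} \nu_i \, \Delta_f H^\circ_{298}(i)
\end{align}
```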

Keywords: PUREX process, red oils, quantum chemical methods, hydrolysis

Procedia PDF Downloads 189
75 Real and Symbolic in Poetics of Multiplied Screens and Images

Authors: Kristina Horvat Blazinovic

Abstract:

In the context of a work of art, one can talk about the idea-concept-term-intention expressed by the artist by using various forms of repetition (external, material, visible repetition). Such repetitions of elements (images in space, or moving visual and sound images in time) suggest a "covert", "latent" ("dressed") repetition, i.e., a "hidden", "latent" term-intention-idea. Repeating in this way reveals a "deeper truth" that the viewer needs to decode and which is hidden "under" the technical manifestation of the multiplied images. It is not only images, sounds, and screens that are repeated; something else is repeated through them as well, even if, in some cases, it is the very idea of repetition that is repeated. This paper examines serial images and single-channel or multi-channel artwork in the field of video/film art and video installations, which in a way imply the concept of repetition and multiplication. Moving or static images and screens (as multi-screens) are repeated in time and space. The categories of the real and the symbolic partly refer to the Lacanian registers of reality, i.e., the Imaginary-Symbolic-Real trinity that represents the orders within which human subjectivity is established. Authors such as Bruce Nauman, VALIE EXPORT, Ragnar Kjartansson, Wolf Vostell, Shirin Neshat, Paul Sharits, Harun Farocki, Dalibor Martinis, Andy Warhol, Douglas Gordon, Bill Viola, Frank Gillette and Ira Schneider, and Marina Abramovic problematize, in different ways, the concept and procedures of multiplication-repetition, not in the sense of "copying" or "repeating" reality or the original, but of repeated repetitions of the simulacrum. The referenced works of art are often connected by the theme of the traumatic. Repetitions of images and situations are a response to the traumatic (experience); repetition itself is a symptom of trauma. On the other hand, repeating and multiplying traumatic images either produces a new traumatic effect or cancels it. Reflections on repetition as a temporal and spatial phenomenon are in line with the chapters that link philosophical considerations of space, time and experienced temporality with their manifestation in works of art. The observations about time and the relation between perception and memory follow Henri Bergson and his conception of duration (durée) as "quality of quantity." Video works intended to be displayed as a loop express the idea of infinite duration ("pure time," according to Bergson). The loop wants to be always present, to fixate itself in time; wholeness is unrecognizable because the intention is to make the effect infinitely cyclic. Reflections on time and space end with considerations about the occurrence and effects of time and space intervals as places and moments "between", the points of connection and separation, of continuity and stopping, with reference to the "interval theory" of the Soviet filmmaker Dziga Vertov. The scale of opportunities that can be explored in interval mode is wide. Intervals represent the perception of time and space in the form of pauses, interruptions and breaks (e.g., emotional, dramatic, or rhythmic); they denote emptiness or silence, distance, proximity, interstitial space, or a gap between various states.

Keywords: video installation, performance, repetition, multi-screen, real and symbolic, loop, video art, interval, video time

Procedia PDF Downloads 174
74 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks

Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez

Abstract:

Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients that represents the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. Hence, in these low-resource scenarios, building an ASR model is considered a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches arise as a necessity given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems using DNNs. This semi-supervised learning approach consists of: (a) Training a seed ASR model with a DNN using a set of audios and their respective transcriptions. The DNN was initialized with one hidden layer, and the number of hidden layers was increased during training to five. A refinement step, consisting of updating the weight matrices plus bias terms, and Stochastic Gradient Descent (SGD) training were also performed; the objective function was the cross-entropy criterion. (b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three confidence scores or metrics based on the lattice concept (the graph cost, the acoustic cost, and a combination of both) were used as the selection technique. The performance of the ASR system is measured by means of the Word Error Rate (WER). The test dataset was renewed in order to exclude the new transcriptions added to the training dataset. Several experiments were carried out in order to select the best ASR results. A comparison between a GMM-based model without retraining and the proposed DNN system was also made under the same conditions. Results showed that the semi-supervised DNN-based ASR model outperformed the GMM model, in terms of WER, in all tested cases. The best result was a 6% relative WER improvement. These promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
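
As a rough illustration of steps (a)-(c), the following sketch outlines the self-training loop in Python; the helper functions train, decode and confidence stand in for the ASR toolkit's actual routines and are assumptions, not part of the authors' implementation.

```python
# Minimal sketch of the semi-supervised self-training loop described above.
# train(), decode() and confidence() are hypothetical stand-ins for the
# toolkit's real routines; the threshold and round count are illustrative.

def self_train(seed_model, labeled, unlabeled, train, decode, confidence,
               threshold=0.9, rounds=3):
    model = train(seed_model, labeled)            # (a) train the seed model
    for _ in range(rounds):
        selected = []
        for audio in unlabeled:
            hypothesis, lattice = decode(model, audio)  # (b) decode unlabeled audio
            # (c) keep hypotheses whose lattice-based score (graph cost,
            # acoustic cost, or their combination) clears the threshold
            if confidence(lattice) >= threshold:
                selected.append((audio, hypothesis))
        model = train(model, labeled + selected)  # retrain on the enlarged set
    return model
```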

Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning

Procedia PDF Downloads 341
73 Basic Education Curriculum in South-South Nigeria: Challenges and Opportunities of Quality Contents in Second Language Learning

Authors: Catherine Alex Agbor

Abstract:

Modern Nigerian society is dynamic, divided into zones based on economic, political and educational resources often shared across the zones. The six geopolitical zones of Nigeria, created during the regime of President Ibrahim Badamasi Babangida, are a major division of modern Nigeria: North Central, North East, North West, South East, South South and South West. The zone used in this study, the South-South, comprises Akwa Ibom and Cross River States (the former South-Eastern State), Bayelsa and Rivers States (the former Rivers State), and Delta and Edo States (the former Mid-Western Region). Many reforms have taken place over time, particularly in the education sector. Education constantly presents new ideas and innovative approaches which facilitate the rapid exchange of knowledge and provide quality basic education for learners. The Federal Government of Nigeria, in accordance with its National Council on Education, directed the Nigerian Educational Research and Development Council to restructure its basic education curriculum in the hope of enabling the nation to meet national and global developmental goals. One of the goals of the 9-year Basic Education Programme is developing in the entire citizenry a strong consciousness for education and a strong commitment to its vigorous promotion. Another is ensuring the acquisition of appropriate levels of literacy, numeracy, manipulative, communicative and life skills, as well as the ethical, moral and civic values that lay a solid foundation for lifelong learning. Therefore, this article begins by describing some key issues in Nigeria's experience with the basic education curriculum. Particular attention is paid to the very recent educational policy of the Nigerian government known as Universal Basic Education, its challenges, and what can be done to make the policy achieve its desired objectives. The article progresses to analyze modern requirements for second language teaching and presents the challenges of second language teaching in Nigeria. Finally, it reports a study which investigated special efforts toward the appropriate achievement of quality education in the language classroom in the South-South zone of Nigeria. One fundamental research question was posed: what educational practices can contribute to the current understanding of the structure of the language curriculum? More explicitly, the study was designed to analyze the extent to which quality content contributes to the current understanding of the structure of the school curriculum in the zone. Stated otherwise, it investigated how student-centred educational practices impact students' learning of the French language. One hundred and eighty (180) participants (teachers) were purposively sampled for the study. A qualitative technique, Focus Group Discussion (FGD), was used to elicit information from participants, who were divided into six groups of 30 teachers each. Group discussions were based mainly on curriculum contents and practices. Information from participants revealed that the curriculum content, among other things, is inadequate and should be re-examined. Recommendations were proffered as a path toward the concrete implementation of basic education in Nigeria.

Keywords: basic education, quality contents, second language, south-south states

Procedia PDF Downloads 243
72 Generative Syntaxes: Macro-Heterophony and the Form of ‘Synchrony’

Authors: Luminiţa Duţică, Gheorghe Duţică

Abstract:

One of the most powerful language innovations in twentieth-century music was heterophony, a hypostasis of vertical syntax that entered the sphere of interest of many composers, such as George Enescu, Pierre Boulez, Mauricio Kagel, György Ligeti and others. The heterophonic syntax has its own history of growth, that is, a succession of different concepts and writing techniques. The trajectory along which this phenomenon settled does not necessarily follow chronology: there are highly complex primary stages and advanced stages that return to simple forms of writing. In folklore, plurimelodic simultaneities are free or random and originate from the (unintentional) differences/'deviations' from the state of unison, through a variety of ornaments, melismas, imitations, elongations and abbreviations, all in a flexible, non-periodic/immeasurable rhythmic framework proper to parlando-rubato rhythmics. Within the general framework of multivocal organization, heterophonic syntax in its elaborate (academic) version imposed itself relatively late compared with polyphony and homophony. The explanation is simple if we consider the causal relationship between the elements of the sound vocabulary, in this case modalism, and the typologies of vertical organization appropriate to it. Therefore, extending the 'classic' pathway of writing typologies (monody, polyphony, homophony), heterophony, applied equally to structures of modal, serial or synthesis vocabulary, necessarily claims a macrotemporal form of its own, in the sense of the analogies enshrined by the evolution of musical styles and languages: polyphony→fugue, homophony→sonata. Concerned with the prospect of edifying a new musical ontology, the composer Ştefan Niculescu explored, along with the mathematical organization of heterophony according to his own original methods, the possibility of extrapolating this phenomenon to the macrostructural plane, arriving in this way at the unique form of 'synchrony'. Founded on the coincidentia oppositorum principle (involving the 'one-multiple' binomial), the sound architecture imagined by Ştefan Niculescu consists of one (temporal) model/algorithm articulating two sound states: 1. the monovocality state (principle of identity) and 2. the multivocality state (principle of difference). In this context, heterophony becomes an (auto)generative mechanism of macrotemporal amplitude, a strategy the composer would cultivate practically throughout his creative life (see the works: Ison I, Ison II, Unisonos I, Unisonos II, Duplum, Triplum, Psalmus, Héterophonies pour Montreux (Homages to Enescu and Bartók), etc.). For the present demonstration, we selected one of the most edifying works of Ştefan Niculescu, Symphony II, Opus dacicum, where the form of (heterophony-)synchrony acquires monumental-symphonic features, representing an emblematic case of the level of complexity achieved by this type of vertical syntax in twentieth-century music.

Keywords: heterophony, modalism, serialism, synchrony, syntax

Procedia PDF Downloads 345
71 A Geospatial Approach to Coastal Vulnerability Using Satellite Imagery and Coastal Vulnerability Index: A Case Study of Mauritius

Authors: Manta Nowbuth, Marie Anais Kimberley Therese

Abstract:

The vulnerability of coastal areas to storm surges is a critical global concern. The increasing frequency and intensity of extreme weather events have increased the risks faced by communities living along coastlines worldwide. Small island developing states (SIDS) stand out as exceptionally vulnerable: their coastal regions, where human habitation meets natural forces, are on the frontlines of climate-induced challenges, and the intensification of storm surges underscores the urgent need for a comprehensive understanding of coastal vulnerability. With limited landmass, low-lying terrain, and reliance on coastal resources, SIDS face an amplified vulnerability to the consequences of storm surges, and the delicate balance between human activities and environmental dynamics in these island nations increases the urgency of tailored strategies for assessing and mitigating coastal vulnerability. This research uses a geospatial approach to evaluate the vulnerability of coastal communities in Mauritius. The satellite imagery analysis makes use of Sentinel satellite imagery, the modified normalised difference water index (MNDWI), classification techniques and the DSAS add-on to quantify the extent of shoreline erosion or accretion, providing a spatial perspective on coastal vulnerability. The Coastal Vulnerability Index (CVI) is applied using the formula of Gornitz et al.; this index considers factors such as coastal slope, sea level rise, mean significant wave height, and tidal range. Weighted assessments identify regions with varying levels of vulnerability, ranging from low to high. The study was carried out in a village located in the south of Mauritius, Rivière des Galets, with a population of about 500 people over an area of 60,000 m². The village of Rivière des Galets, located on the southern coast of Mauritius and exposed to the open Indian Ocean, is vulnerable to swells. The swells generated by the southeast trade winds can lead to large waves and rough sea conditions along the southern coastline, which affect coastal activities, including fishing, tourism and coastal infrastructure. On the one hand, the results highlighted that, over a 123 km stretch of coastline, the linear regression rate for the five-year span varies from -24.1 m/yr to 8.2 m/yr; the maximum rate of change in terms of eroded land is -24 m/yr and the maximum rate of accretion is 8.2 m/yr. On the other hand, the coastal vulnerability index varies from 9.1 to 45.6 and was categorised into low, moderate, high and very high risk zones. It was observed that regions which lack protective barriers and consist of sandy beaches are categorised as high-risk zones; hence it is imperative to flag high-risk regions for immediate attention and intervention, as they will most likely be exposed to coastal hazards and impacts from climate change, which demand proactive measures for enhanced resilience and sustainable adaptation strategies.
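
For readers unfamiliar with the index, the Gornitz-style CVI combines the ranked variables as the square root of their product divided by their number; the sketch below assumes, for illustration only, five factors each ranked from 1 (very low risk) to 5 (very high risk).

```python
# Minimal sketch of a Gornitz-style Coastal Vulnerability Index.
# The variable set and ranks below are illustrative assumptions,
# not the study's actual inputs.
import math

def cvi(ranks):
    """CVI = sqrt(product of ranked variables / number of variables)."""
    return math.sqrt(math.prod(ranks) / len(ranks))

# Hypothetical ranks for: coastal slope, sea level rise,
# mean significant wave height, tidal range, shoreline change rate
print(round(cvi([4, 3, 5, 2, 4]), 1))  # -> 9.8
```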

Keywords: climate change, coastal vulnerability, disaster management, remote sensing, satellite imagery, storm surge

Procedia PDF Downloads 13
70 Implementation of Smart Card Automatic Fare Collection Technology in Small Transit Agencies for Standards Development

Authors: Walter E. Allen, Robert D. Murray

Abstract:

Many large transit agencies have adopted RFID technology and electronic automatic fare collection (AFC) or smart card systems, but small and rural agencies remain tied to obsolete manual, cash-based fare collection. Small countries or transit agencies can benefit from the implementation of smart card AFC technology, with the promise of increased passenger convenience, added passenger satisfaction and improved agency efficiency. For transit agencies, it reduces revenue loss and improves passenger flow and bus stop data. For countries, further extension into security, distribution of social services or currency transactions can provide greater benefits. However, small countries or transit agencies cannot afford the expensive proprietary smart card solutions typically offered by the major system suppliers. Deployment of the Contactless Fare Media System (CFMS) Standard eliminates the proprietary solution, ultimately lowering the cost of implementation. Acumen Building Enterprise, Inc. chose the Yuma County Intergovernmental Public Transportation Authority's (YCIPTA) existing proprietary YCAT smart card system to implement CFMS. The revised system enables the purchase of fare product online with prepaid debit or credit cards using the Payment Gateway Processor. Open and interoperable smart card standards for transit have been developed. During the 90-day pilot operation conducted, the transit agency gathered the data from the bus AcuFare 200 Card Reader, loaded (copied) the data to a USB thumb drive and uploaded the data to the Acumen Host Processing Center for consolidation into the transit agency master data file. The transition from the existing proprietary smart card data format to the new CFMS smart card data format was transparent to the transit agency cardholders. It was proven that open standards and interoperable design can work and reduce both implementation and operational costs for small transit agencies or countries looking to expand smart card technology. Acumen was able to avoid implementing the Payment Card Industry (PCI) Data Security Standards (DSS), which are expensive to develop and costly to operate on a continuing basis. Due to the substantial additional complexities of implementation and the variety of options presented to the transit agency cardholder, Acumen chose to implement only the Directed Autoload. To improve the implementation efficiency and the results of a similar undertaking, it should be considered that some passengers lack credit cards and are averse to technology. There are more than 1,300 small and rural agencies in the United States, a number that grows tenfold when considering small countries and rural locations throughout Latin America and the world. Acumen is evaluating additional countries, sites and transit agencies that can benefit from smart card systems. Frequently, payment card systems require extensive security procedures for implementation. The project demonstrated the ability to purchase fare value, rides and passes with credit cards on the internet at a reasonable cost without highly complex security requirements.
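
The pilot's batch data flow (card reader, to thumb drive, to host consolidation) can be pictured with a small sketch; the file layout and field names here are assumptions for illustration, not the CFMS record format.

```python
# Minimal sketch of the consolidation step: merge per-bus transaction files
# copied off the card readers into a single master data file.
# File naming and fields are hypothetical, not the actual CFMS format.
import csv
import glob

def consolidate(pattern="bus_*.csv", master="master_fares.csv"):
    fields = ["card_id", "timestamp", "route", "fare"]  # assumed schema
    rows = []
    for path in sorted(glob.glob(pattern)):
        with open(path, newline="") as f:
            rows.extend(csv.DictReader(f))
    with open(master, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(rows)

consolidate()
```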

Keywords: automatic fare collection, near field communication, small transit agencies, smart cards

Procedia PDF Downloads 284
69 The Shared Breath Project: Inhabiting Each Other’s Words and Being

Authors: Beverly Redman

Abstract:

With the 2020-2021 theatre season cancelled due to COVID-19 at Purdue University Fort Wayne, IN, USA, faculty directors found themselves scrambling to create theatre production opportunities for their students in the Department of Theatre. Redman, Chair of the Department, found her community suffering from anxieties brought on by a confluence of issues: the global COVID-19 pandemic, the Black Lives Matter protests erupting in cities all across the United States, and the coming presidential election, arguably the most important and most contentious in the country's history. Redman wanted to give her students the opportunity not only to speak on these issues but also to record who they were at this time in their personal lives, as well as in this broad socio-political context. She also wanted to invite them into an experience of feeling empathy, at a time when empathy in this world seems to be sorely lacking. Returning to a mode of devising theatre she had used with community groups in the past, in which storytelling and re-enactment of participants' life events combined with oral history documentation practices, Redman planned The Shared Breath Project. The process involved three months of workshops, in which participants alternated between theatre exercises and oral history collection and documentation activities as a way of generating original material for a theatre production. The goal of the first half of the project was for each participant to produce a solo piece in the form of a monologue, after many generations of potential material born out of games, improvisations, interviews and the like. Along the way, many film and audio clips recorded the process of each person's written documentation, prepared by the subjects themselves but also by others in the group assigned to listen, watch and record. Then, in the second half of the project, once each participant had taken their own contribution from raw improvisatory self-presentation through the stages of composition and performative polish, participants exchanged their pieces. The second half of the project involved taking on each other's words, mannerisms, gestures, and melodic and rhythmic speech patterns and inhabiting them through the rehearsal process as one's own; thus the title, The Shared Breath Project. Here, in stage two, the acting challenges evolved into those of capturing the other and becoming the other through accurate mimicry that embraces Denis Diderot's concept of the Paradox of Acting, in that the actor is both seeming and being simultaneously. This paper shares the carefully documented process of making the live-streamed theatre production that resulted from these workshops, writing processes and rehearsals, forming The Shared Breath Project, which ultimately took the students' realist, life-based pieces and edited them into a single unified theatre production. The paper also draws on research on the Paradox of Acting, putting a post-structuralist spin on Diderot's theory. Here, the paper suggests the limitations of inhabiting the other by allowing that the other is always already a thing impenetrable but nevertheless worthy of unceasing empathetic striving and delving, in an epoch in which slow, careful attention to our fellows is in short supply.

Keywords: otherness, paradox of acting, oral history theatre, devised theatre, political theatre, community-based theatre, peoples’ theatre

Procedia PDF Downloads 185
68 Transformers in Gene Expression-Based Classification

Authors: Babak Forouraghi

Abstract:

A genetic circuit is a collection of interacting genes and proteins that enables individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that have not evolved in nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and the production of genetically modified plants and livestock. Constructing computational models to realize genetic circuits is an especially challenging task, since it requires discovering the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expression in genetic circuit designs. The main reason for using transformers is their innate ability (the attention mechanism) to account for the semantic context present in long DNA chains, which depends heavily on the spatial arrangement of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, suffer greatly from vanishing gradients and low efficiency when they sequentially process past states and compress contextual information into a bottleneck with long input sequences. In other words, these architectures are not equipped with the attention mechanisms necessary to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations of previous approaches, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with its attention mechanism. In previous work on genetic circuit design, traditional approaches to classification and regression, such as Random Forest, Support Vector Machines, and Artificial Neural Networks, were able to achieve reasonably high R² accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier does not depend on the presence of large amounts of training examples, which may be difficult to compile in many real-world gene circuit designs.
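
A minimal sketch of the general approach, fine-tuning a DNABERT-style encoder for sequence classification, is given below; the checkpoint name, k-mer size, toy sequences and label set are assumptions for illustration, not the authors' configuration or data.

```python
# Minimal sketch: fine-tune a DNABERT-style encoder for binary classification
# of gene expression (e.g., high vs. low). Checkpoint name, k-mer size and
# toy data are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CHECKPOINT = "zhihan1996/DNA_bert_6"  # assumed DNABERT checkpoint (6-mers)

def to_kmers(seq, k=6):
    """DNABERT tokenizes DNA as overlapping k-mer 'words'."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=2)

sequences = ["ATGCGTACGTTAGCATGCGTACGT", "TTGACGGCTAGCTAACTCGAGGTA"]  # toy circuits
labels = torch.tensor([1, 0])  # toy expression classes

batch = tokenizer([to_kmers(s) for s in sequences], padding=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few passes over the toy batch
    out = model(**batch, labels=labels)  # cross-entropy loss over the two classes
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```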

Keywords: transformers, generative AI, gene expression design, classification

Procedia PDF Downloads 60
67 Digital Holographic Interferometric Microscopy for the Testing of Micro-Optics

Authors: Varun Kumar, Chandra Shakher

Abstract:

Micro-optical components such as microlenses and microlens arrays have numerous engineering and industrial applications: collimation of laser diodes, imaging devices for sensor systems (CCD/CMOS, document copier machines, etc.), beam homogenization for high-power lasers, critical components in Shack-Hartmann sensors, and fiber-optic coupling and optical switching in communication technology. Micro-optical components have also become an alternative for applications where miniaturization and the reduction of alignment and packaging costs are necessary. Compliance with high quality standards in the manufacturing of micro-optical components is a precondition for competitiveness in worldwide markets, so high demands are put on quality assurance. For the quality assurance of these lenses, an economical measurement technique is needed. For cost and time reasons, the technique should be fast, simple (for production reasons), and robust, with high resolution. It should provide non-contact, non-invasive, full-field information about the shape of the micro-optical component under test. Interferometric techniques are non-contact and non-invasive and provide full-field information about the shape of optical components. Conventional interferometric techniques such as holographic interferometry or Mach-Zehnder interferometry are available for the characterization of microlenses; however, these techniques require more experimental effort and are also time-consuming. Digital holography (DH) overcomes the above-described problems. Digital holographic microscopy (DHM) allows one to extract both the amplitude and phase information of a wavefront transmitted through a transparent object (microlens or microlens array) from a single recorded digital hologram by using numerical methods, and the complex object wavefront can be reconstructed at different depths thanks to numerical reconstruction. Digital holography provides axial resolution in the nanometer range, while lateral resolution is limited by diffraction and the size of the sensor. In this paper, a Mach-Zehnder-based digital holographic interferometric microscope (DHIM) system is used for the testing of transparent microlenses. The advantage of using the DHIM is that distortions due to aberrations in the optical system are avoided by the interferometric comparison of the reconstructed phase with and without the object (microlens array). In the experiment, a digital hologram is first recorded in the absence of the sample (microlens array) as a reference hologram, and a second hologram is recorded in the presence of the microlens array. The presence of the transparent microlens array induces a phase change in the transmitted laser light. The complex amplitude of the object wavefront in the presence and absence of the microlens array is reconstructed using the Fresnel reconstruction method. From the reconstructed complex amplitude, one can evaluate the phase of the object wave in the presence and absence of the microlens array. The phase difference between the two states of the object wave provides information about the optical path length change due to the shape of the microlens. With knowledge of the refractive indices of the microlens array material and air, the surface profile of the microlens array is evaluated. The sag and radius of curvature of the microlenses are evaluated and reported; the sag agrees well, within experimental limits, with the manufacturer's specification.
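
The phase-to-profile step lends itself to a compact numerical sketch; the wavelength and refractive indices below are assumptions for illustration, and the spherical-cap relation used for the radius of curvature is a standard geometric formula, not taken from the paper.

```python
# Minimal sketch: convert the reconstructed phase difference (with vs. without
# the microlens array) into a surface height map, then estimate sag and radius
# of curvature. Wavelength and indices are illustrative assumptions.
import numpy as np
from skimage.restoration import unwrap_phase  # assumes scikit-image is available

WAVELENGTH = 632.8e-9      # assumed He-Ne source, meters
N_LENS, N_AIR = 1.46, 1.0  # assumed refractive indices (e.g., fused silica, air)

def surface_profile(phase_with, phase_without):
    """Phase difference -> optical path change -> physical height (m)."""
    dphi = unwrap_phase(phase_with - phase_without)
    opd = WAVELENGTH * dphi / (2 * np.pi)   # optical path length change
    return opd / (N_LENS - N_AIR)           # height of the lens surface

def sag_and_radius(height, aperture_radius):
    """Sag and radius of curvature, assuming a spherical cap."""
    sag = float(height.max() - height.min())
    roc = (aperture_radius**2 + sag**2) / (2 * sag)  # spherical-cap relation
    return sag, roc
```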

Keywords: micro-optics, microlens array, phase map, digital holographic interferometric microscopy

Procedia PDF Downloads 499
66 Urban Sprawl: A Case Study of Suryapet Town in Nalgonda District of Telangana State, A Geoinformatic Approach

Authors: Ashok Kumar Lonavath, V. Sathish Kumar

Abstract:

Urban sprawl is the uncontrolled and uncoordinated outgrowth of towns and cities. The process of urban sprawl can be described by changes in pattern over time, such as an increase in built-up surface relative to population, leading to rapid urban spatial expansion. Significant economic and livelihood opportunities in urban areas result in a lack of basic amenities due to unplanned growth. The patterns, processes, dynamic causes and consequences of sprawl can be explored and designed with the help of a spatial planning support system. In the Indian context, an urban area is defined as one with a population of more than 5,000, a density of more than 400 persons per sq. km, and 75% of the population involved in non-agricultural occupations. India's urban population is increasing at the rate of 2.35% per annum. According to the 2011 census, the population of India's Class I towns is 18.8%, accounting for 60.4% of the total urban population; similarly, in erstwhile Andhra Pradesh it is 22.9%, accounting for 68.8% of the total urban population. Suryapet town has historical recognition as the 'Gateway of Telangana' in the Indian state of Andhra Pradesh. The municipality was constituted in 1952 as Grade-III, upgraded to Grade-II in 1984 and to Grade-I in 1998. The area is 35 sq. km. Three major tanks are located in three different directions, and the Musi River flows at a distance of 8 km. The average groundwater table is about 50 m below ground. It is a fast-growing town with a population of 106,805 and 25,448 households. The density is 3,051 persons per sq. km, and it is a Class I city as per the population census. It secured the ISO 14001-2004 certificate for establishing and maintaining an environment-friendly system for solid waste disposal, the first municipality in the country to receive such a certificate. It won a HUDCO award under environment management; an award of appreciation and cash from the Ministry of Housing and Poverty Alleviation, Government of India, and undivided Andhra Pradesh under the UN Human Settlement Programme; a Greentech Excellence award; and the Supreme Court's appreciation for solid waste management. Foreign delegates from different countries, and from various other states of India, have visited Suryapet municipality for study tours and training programs as part of their official visits. Suryapet is located at 17°5' North latitude and 79°37' East longitude. The average elevation is 266 m, the annual mean temperature is 36°C, and the average rainfall is 821.0 mm. The people of this town are engaged in commercial and agricultural activities; hence the town has become a centre for marketing and stocking agricultural produce. It is also an educational centre for this region. The present paper on urban sprawl provides a theoretical framework to analyze the interaction of planning and governance with the extent of outgrowth and level of services. GIS techniques, SOI toposheets, satellite imageries and image analysis techniques are used extensively to explore the sprawl and measure urban land use. This paper concludes by outlining the challenges of addressing urban sprawl while ensuring the adequate level of services that planning and governance must provide to achieve sustainable urbanization.

Keywords: remote sensing, GIS, urban sprawl, urbanization

Procedia PDF Downloads 229
65 Management of Experts in a University Research Evaluation System: Based on the Example of the National Research University Higher School of Economics

Authors: Alena Nesterenko, Svetlana Petrikova

Abstract:

Research evaluation is one of the most important elements of self-regulation and development for researchers, as it is an impartial and independent process of assessment. The method of expert evaluations, as a scientific instrument for solving complicated non-formalized problems, is, firstly, a scientifically sound way to conduct an assessment with maximum effectiveness at every step and, secondly, the use of quantitative methods for evaluation, the assessment of expert opinion and the collective processing of results. These two features distinguish the method of expert evaluations from the long-known expertise widespread in many areas of knowledge. Different typical problems require different types of expert evaluation methods. Several issues that arise with these methods are the selection of experts, the management of the assessment procedure, the processing of the results and the remuneration of the experts. To address these issues, an online system was created with the primary purpose of developing a versatile application for many workgroups with matching approaches to scientific work management. The online documentation assessment and statistics system allows one: - to realize within one platform the independent activities of different workgroups (e.g., expert officers, managers); - to establish different workspaces for the corresponding workgroups, where custom user databases can be created according to particular needs; - to generate the required output documents for each workgroup; - to configure information gathering for each workgroup (forms of assessment, tests, inventories); - to create and operate personal databases of remote users; - to set up automatic notification through e-mail. The next stage is the development of quantitative and qualitative criteria to form a database of experts. The inventory was designed so that experts may submit not only their personal data, place of work and scientific degree but also keywords describing their expertise, academic interests, ORCID, ResearcherID, SPIN-code RSCI, Scopus AuthorID, knowledge of languages, and primary scientific publications. For each project, competition assessments are processed in accordance with the ordering party's demands, in the form of appraised inventories, commentaries (50-250 characters) and an overall review (1,500 characters) in which the expert states the absence of any conflict of interest. Evaluation is conducted as follows: as applications are added to the database, the expert officer selects experts, generally two per application. Experts are selected according to keywords; this method proved effective, unlike the OECD classifier. In the last stage, the choice of experts is approved by the supervisor, and e-mails are sent to the experts with an invitation to assess the project. An expert supervisor monitors the experts' report-writing to ensure all formalities are in place (time frame, propriety, correspondence). If the difference between assessments exceeds four points, a third evaluation is appointed. As an expert finishes work on the expert opinion, the system shows a contract marked 'new'; managers process the contract, and the expert receives an e-mail that the contract is formed and ready to be signed. All formalities are then concluded, and the expert receives remuneration for the work. The specifics of the interaction between the examination officer and the experts will be presented in the report.
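
Two of the workflow rules, keyword-based selection of two experts per application and the third evaluation triggered when scores diverge by more than four points, can be sketched as follows; all names are illustrative, not the system's actual code.

```python
# Minimal sketch of two rules from the workflow above. Class and function
# names are hypothetical illustrations, not the HSE system's API.
from dataclasses import dataclass

@dataclass
class Expert:
    name: str
    keywords: set

def select_experts(application_keywords, experts, n=2):
    """Rank experts by keyword overlap with the application; take the top n."""
    ranked = sorted(experts,
                    key=lambda e: len(e.keywords & application_keywords),
                    reverse=True)
    return ranked[:n]

def needs_third_evaluation(score_a, score_b, threshold=4):
    """A third evaluation is appointed if assessments differ by more than four points."""
    return abs(score_a - score_b) > threshold

pool = [Expert("A", {"nlp", "ml"}), Expert("B", {"ml", "cv"}), Expert("C", {"economics"})]
print([e.name for e in select_experts({"ml", "nlp"}, pool)])  # -> ['A', 'B']
print(needs_third_evaluation(3, 8))                           # -> True
```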

Keywords: expertise, management of research evaluation, method of expert evaluations, research evaluation

Procedia PDF Downloads 208
64 Social Vulnerability Mapping in New York City to Discuss Current Adaptation Practice

Authors: Diana Reckien

Abstract:

Vulnerability assessments are increasingly used to support policy-making in complex environments like urban areas. Usually, vulnerability studies include the construction of aggregate (sub-)indices and the subsequent mapping of those indices across an area of interest. Vulnerability studies have several advantages: they are great communication tools, can inform a wider general debate about environmental issues, and can help allocate and efficiently target scarce resources for adaptation policy and planning. However, they also face a number of challenges: vulnerability assessments are constructed on the basis of a wide range of methodologies, and no single framework or methodology has proven to serve best in certain environments; indicators vary highly according to the spatial scale used; different variables and metrics produce different results; and aggregate or composite vulnerability indicators that are mapped can easily distort or bias the picture of vulnerability, as they hide the underlying causes of vulnerability and level out conflicting reasons for vulnerability in space. So there is an urgent need to further develop the methodology of vulnerability studies towards a common framework, which is one motivation for this paper. We introduce a social vulnerability approach which, compared with other approaches such as bio-physical or sectoral vulnerability studies, is relatively developed in terms of a common methodology for index construction, guidelines for mapping, assessment of sensitivity, and verification of variables. Two approaches are commonly pursued in the literature. The first is an additive approach, in which all potentially influential variables are weighted according to their importance for the vulnerability aspect and then added to form a composite vulnerability index per unit area. The second approach involves variable reduction, mostly Principal Component Analysis (PCA), which reduces a number of interrelated variables to a smaller number of less-correlated components that are likewise added to form a composite index. We test these two approaches to index construction on the area of New York City, as well as two different metrics of input variables, and compare the outcomes for the five boroughs of NY. Our analysis shows that the mapping exercise yields particularly different results in the outer regions and parts of the boroughs, such as outer Queens and Staten Island. However, some of these parts, particularly the coastal areas, receive the highest attention in current adaptation policy. We infer from this that current adaptation policy and practice in NY may need to be discussed, as these outer urban areas show relatively low social vulnerability compared with the more central parts, i.e., the high-density areas of Manhattan, central Brooklyn, central Queens and the southern Bronx. The inner urban parts receive less adaptation attention but bear a higher risk of damage in case of hazards in those areas. This is conceivable, for example, during large heatwaves, which would affect the inner and poorer parts of the city more than the outer urban areas. In light of recent planning practice in NY, one needs to question and discuss who in NY makes adaptation policy for whom; the presented analyses point towards an under-representation of the needs of the socially vulnerable population, such as the poor, the elderly, and ethnic minorities, in current adaptation practice in New York City.
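
The two index-construction approaches can be summarized in a short sketch; the data matrix, weights and number of components below are placeholders for illustration, not the study's variables.

```python
# Minimal sketch of the two composite-index approaches described above:
# (1) weighted addition of standardized variables, (2) PCA-based reduction.
# The data, weights and component count are illustrative placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((100, 6))       # rows: areal units, cols: vulnerability variables
weights = np.full(6, 1 / 6)    # assumed importance weights

Xz = StandardScaler().fit_transform(X)

additive_index = Xz @ weights                  # approach 1: weighted sum

pca = PCA(n_components=3)                      # approach 2: reduce, then add
pca_index = pca.fit_transform(Xz).sum(axis=1)  # sum the retained components

print(additive_index[:3], pca_index[:3])
```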

Keywords: vulnerability mapping, social vulnerability, additive approach, Principal Component Analysis (PCA), New York City, United States, adaptation, social sensitivity

Procedia PDF Downloads 395
63 Start with the Art: Early Results from a Study of Arts-Integrated Instruction for Young Children

Authors: Juliane Toce, Steven Holochwost

Abstract:

A substantial and growing literature has demonstrated that arts education benefits young children's socioemotional and cognitive development. Less is known about the capacity of arts-integrated instruction to yield benefits in similar domains, particularly among demographically and socioeconomically diverse groups of young children. However, the small literature on this topic suggests that arts-integrated instruction may foster young children's socioemotional and cognitive development by presenting opportunities to 1) engage with instructional content in diverse ways, 2) experience and regulate strong emotions, 3) experience growth-oriented feedback, and 4) engage in collaborative work with peers. Start with the Art is a new program of arts-integrated instruction currently being implemented in four schools in a school district that serves students from a diverse range of backgrounds. The program employs a co-teaching model in which teaching artists and classroom teachers engage in collaborative lesson planning and instruction over the course of the academic year, and it is currently the focus of an impact study featuring a randomized control design, as well as an implementation study, both funded through an Educational Innovation and Research grant from the United States Department of Education. The paper will present the early results from the Start with the Art implementation study. These results will provide an overview of the extent to which the program was implemented in accordance with its design, with particular emphasis on the degree to which the four opportunities enumerated above (e.g., opportunities to engage with instructional content in diverse ways) were presented to students. There will be a review of key factors that may influence the fidelity of implementation, including classroom teachers' reception of the program and the extent to which extant conditions in the classroom (e.g., the overall level of classroom organization) may have impacted implementation fidelity. With the explicit purpose of creating a program that values and meets the needs of teachers and students, Start with the Art incorporates feedback from individuals participating in the intervention. Tracing its trajectory from inception to ongoing development, and examining the adaptive changes made in response to teachers' transformative experiences in the post-pandemic classroom, Start with the Art continues to solicit input from experts in integrating artistic content into core curricula within educational settings catering to students from backgrounds under-represented in the arts. Leveraging the input of this rich consortium of experts has allowed for a comprehensive evaluation of the program's implementation. The early findings derived from the implementation study emphasize the potential of arts-integrated instruction to incorporate restorative practices. Such practices serve as a crucial support system for both students and educators, providing avenues for children to express themselves, heal emotionally, and foster social development, while empowering teachers to create more empathetic, inclusive, and supportive learning environments. This comprehensive analysis spotlights Start with the Art's adaptability to any learning environment through the program's effectiveness, resilience, and capacity to transform, through art, the classroom experience within the ever-evolving landscape of education.

Keywords: arts-integration, social emotional learning, diverse learners, co-teaching, teaching artists, post-pandemic teaching

Procedia PDF Downloads 62
62 Remote Sensing of Urban Land Cover Change: Trends, Driving Forces, and Indicators

Authors: Wei Ji

Abstract:

This study was conducted in the Kansas City metropolitan area of the United States, which has experienced significant urban sprawl in recent decades. The remote sensing of land cover changes in this area spanned four decades, from 1972 through 2010. The project was implemented in two stages: the first stage focused on the detection of long-term trends in urban land cover change, while the second examined how to detect the coupled effects of human impact and climate change on urban landscapes. For the first-stage study, six Landsat images were used, with a time interval of about five years, for the period from 1972 through 2001. Four major land cover types, built-up land, forestland, non-forest vegetation land, and surface water, were mapped using supervised image classification techniques. The study found that over the three decades the built-up lands in the study area more than doubled, mainly at the expense of non-forest vegetation lands. Surprisingly and interestingly, the area also saw a significant gain in surface water coverage. This observation raised questions: how have human activities and precipitation variation jointly impacted surface water cover during recent decades, and how can we detect such coupled impacts through remote sensing analysis? These questions led to the second stage of the study, in which we designed and developed approaches to detecting fine-scale surface waters and analyzing the coupled effects of human impact and precipitation variation on those waters. To effectively detect urban landscape changes that might be jointly shaped by precipitation variation, our study proposed "urban wetscapes" (loosely defined urban wetlands) as a new indicator for remote sensing detection, and examined whether urban wetscape dynamics were a sensitive indicator of the coupled effects of the two driving forces. To better detect this indicator, a rule-based classification algorithm was developed to identify fine-scale, hidden wetlands that could not be appropriately detected by a traditional image classification based on their spectral differentiability. Three SPOT images, for the years 1992, 2008, and 2010, were classified with this technique to generate the four types of land cover described above. The spatial analyses of remotely sensed wetscape changes were implemented at the metropolitan, watershed, and sub-watershed scales, as well as by the size of surface water bodies, in order to accurately reveal urban wetscape change trends in relation to the driving forces. The study identified that urban wetscape dynamics varied in trend and magnitude from the metropolitan scale through watersheds to sub-watersheds in response to human impacts at different scales. The study also found that increased precipitation in the region over the past decades swelled larger wetlands in particular, while smaller wetlands generally decreased, mainly due to human development activities. These results confirm that wetscape dynamics can effectively reveal the coupled effects of human impact and climate change on urban landscapes. As such, remote sensing of this indicator provides new insights into the relationships between urban land cover changes and their driving forces.
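
A rule-based classifier of the kind described can be sketched in a few lines; the band combinations and thresholds below are illustrative assumptions, not the rules actually developed in the study.

```python
# Minimal sketch of a rule-based per-pixel classifier for open water and
# "hidden" wetland pixels. Indices and thresholds are illustrative, not
# the study's actual rules.
import numpy as np

def classify(green, red, nir):
    """Return a label raster: 0 = other, 1 = open water, 2 = hidden wetland."""
    ndwi = (green - nir) / (green + nir + 1e-9)   # water index
    ndvi = (nir - red) / (nir + red + 1e-9)       # vegetation index
    labels = np.zeros(green.shape, dtype=np.uint8)
    labels[ndwi > 0.3] = 1                                  # clearly open water
    hidden = (ndwi > 0.0) & (ndwi <= 0.3) & (ndvi < 0.4)    # wet but mixed pixels
    labels[hidden] = 2
    return labels
```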

Keywords: urban land cover, human impact, climate change, rule-based classification, across-scale analysis

Procedia PDF Downloads 309