Search results for: intentional bias
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 810

30 Modeling Discrimination against Gay People: Predictors of Homophobic Behavior against Gay Men among High School Students in Switzerland

Authors: Patrick Weber, Daniel Gredig

Abstract:

Background and Purpose: Research has well documented the impact of discrimination and micro-aggressions on the wellbeing of gay men and, especially, adolescents. For the prevention of homophobic behavior against gay adolescents, however, the focus has to shift to those who discriminate: for the design and tailoring of prevention and intervention, it is important to understand the factors responsible for homophobic behavior such as, for example, verbal abuse. Against this background, the present study aimed to assess homophobic – in terms of verbally abusive – behavior against gay people among high school students. Furthermore, it aimed to establish the predictors of the reported behavior by testing an explanatory model. This model posits that homophobic behavior is determined by negative attitudes and knowledge. These variables are supposed to be predicted by the acceptance of traditional gender roles, religiosity, orientation toward social dominance, contact with gay men, and by the perceived expectations of parents, friends and teachers. These social-cognitive variables, in turn, are assumed to be determined by students’ gender, age, immigration background, formal school level, and the discussion of gay issues in class. Method: From August to October 2016, we visited 58 high school classes in 22 public schools in a county in Switzerland and asked the 8th and 9th year students on three formal school levels to participate in a survey about gender and gay issues. For data collection, we used an anonymous self-administered questionnaire filled in during class. Data were analyzed using descriptive statistics and structural equation modelling (generalized least squares estimation). The sample included 897 students, 334 in the 8th and 563 in the 9th year, aged 12–17, 51.2% female, 48.8% male, and 50.3% with an immigration background. Results: 85.4% of participants reported having made homophobic statements in the 12 months before the survey, 4.7% often or very often. Analysis showed that respondents’ homophobic behavior was predicted directly by negative attitudes (β=0.20), as well as by the acceptance of traditional gender roles (β=0.06), religiosity (β=–0.07), contact with gay people (β=0.10), expectations of parents (β=–0.14) and friends (β=–0.19), gender (β=–0.22), and having a South-East-European or Western- and Middle-Asian immigration background (β=0.09). These variables were predicted, in turn, by gender, age, immigration background, formal school level, and discussion of gay issues in class (GFI=0.995, AGFI=0.979, SRMR=0.0169, CMIN/df=1.199, p>0.213, adj. R²=0.384). Conclusion: Findings evidence a high prevalence of homophobic behavior among the responding high school students. The tested explanatory model explained 38.4% of the assessed homophobic behavior. However, the data did not fully support the model. Knowledge did not turn out to be a predictor of behavior. Except for the perceived expectations of teachers and orientation toward social dominance, the social-cognitive variables were not fully mediated by attitudes. Equally, gender and immigration background predicted homophobic behavior directly. These findings demonstrate the importance of prevention and also provide leverage points for interventions against anti-gay bias in adolescents – also in social work settings such as, for example, school social work, open youth work or foster care.
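
As a rough illustration of the path structure tested here, the sketch below fits the two stages of the model (social-cognitive predictors → negative attitudes → behavior) as ordinary least-squares regressions on synthetic data. All variable names and coefficients are hypothetical; this simplification does not reproduce the generalized least squares SEM or the fit indices (GFI, AGFI, SRMR) reported in the abstract.

```python
# Minimal sketch: the two path-model stages fitted as OLS regressions on synthetic data.
# Variable names are hypothetical; the paper used full SEM with GLS estimation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 900  # roughly the size of the surveyed sample
df = pd.DataFrame({
    "gender_roles": rng.normal(size=n),
    "religiosity": rng.normal(size=n),
    "contact_gay_men": rng.normal(size=n),
    "expect_parents": rng.normal(size=n),
    "expect_friends": rng.normal(size=n),
    "gender": rng.integers(0, 2, n),
    "immig_background": rng.integers(0, 2, n),
})
df["negative_attitudes"] = 0.3 * df.gender_roles - 0.2 * df.contact_gay_men + rng.normal(size=n)
df["homophobic_behaviour"] = (0.2 * df.negative_attitudes - 0.2 * df.expect_friends
                              - 0.2 * df.gender + rng.normal(size=n))

# Stage 1: social-cognitive predictors of negative attitudes
attitudes = smf.ols("negative_attitudes ~ gender_roles + religiosity + contact_gay_men"
                    " + expect_parents + expect_friends", data=df).fit()
# Stage 2: behaviour predicted by attitudes plus the direct paths retained in the model
behaviour = smf.ols("homophobic_behaviour ~ negative_attitudes + gender_roles + religiosity"
                    " + contact_gay_men + expect_parents + expect_friends + gender"
                    " + immig_background", data=df).fit()
print(behaviour.params.round(2), "adj. R2:", round(behaviour.rsquared_adj, 3))
```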

Keywords: discrimination, high school students, gay men, predictors, Switzerland

Procedia PDF Downloads 329
29 Bridging Minds, Building Success Beyond Metrics: Uncovering Human Influence on Project Performance: Case Study of University of Salford

Authors: David Oyewumi Oyekunle, David Preston, Florence Ibeh

Abstract:

The paper provides an overview of the impact of the human dimension in project and team management, an influence that increasingly affects the performance of organizations. Recognizing its crucial significance, the research focuses on analyzing the psychological and interpersonal dynamics within project teams. This research is highly significant in the dynamic field of project management, as it addresses important gaps and offers vital insights that align with the constantly changing demands of the profession. A case study was conducted at the University of Salford to examine how human activity affects project management and performance. The study employed a mixed methodology to gain a deeper understanding of the real-world experiences of the subjects and project teams. Data analysis procedures used to address the research objectives included a deductive approach, which involves testing a clear hypothesis or theory, as well as descriptive analysis and visualization. The survey comprised a sample of 40 participants out of 110 project management professionals, including staff and final-year students in the Salford Business School, selected using a purposeful sampling method. To mitigate bias, the study ensured diversity in the sample by including both staff and final-year students. The smaller sample size allowed for more in-depth analysis and a focused exploration of the research objective. Conflicts, for example, are intricate occurrences shaped by a multitude of psychological stimuli and social interactions and may have either a detrimental or a positive effect on project performance and project management productivity. The study identified conflict elements, including culture, environment, personality, attitude, individual project knowledge, team relationships, leadership, and team dynamics among team members, as crucial human factors to manage in order to minimize conflict. The findings provide project professionals with valuable insights that can help them create a collaborative and high-performing project environment. Uncovering human influence on project performance, effective communication, optimal team synergy, and a keen understanding of project scope are necessary for projects to attain exceptional performance and efficiency. In achieving the aims of this study, it was acknowledged that productive team dynamics and strong group cohesiveness are crucial for managing conflicts in a beneficial and forward-thinking manner. Addressing the identified human influences will contribute to a more sustainable project management approach and offers opportunities for further exploration and potential contributions to both academia and practical project management.

Keywords: human dimension, project management, team dynamics, conflict resolution

Procedia PDF Downloads 105
28 Rapid, Automated Characterization of Microplastics Using Laser Direct Infrared Imaging and Spectroscopy

Authors: Andreas Kerstan, Darren Robey, Wesam Alvan, David Troiani

Abstract:

Over the last 3.5 years, quantum cascade laser (QCL) technology has become increasingly important in infrared (IR) microscopy. The advantages over Fourier transform infrared (FTIR) spectroscopy are that large areas of a few square centimeters can be measured in minutes and that the light-intensive QCL makes it possible to obtain spectra with excellent S/N, even with just one scan. A firmly established application of the laser direct infrared imaging (LDIR) 8700 system is the analysis of microplastics. The presence of microplastics in the environment, drinking water, and food chains is gaining significant public interest. To study their presence, rapid and reliable characterization of microplastic particles is essential. Significant technical hurdles in microplastic analysis stem from the sheer number of particles to be analyzed in each sample. Total particle counts of several thousand are common in environmental samples, while well-treated bottled drinking water may contain relatively few. While visual microscopy has been used extensively, it is prone to operator error and bias and is limited to particles larger than 300 µm. As a result, vibrational spectroscopic techniques such as Raman and FTIR microscopy have become more popular; however, they are time-consuming. There is a demand for rapid and highly automated techniques to measure particle count and size and provide high-quality polymer identification. Analysis directly on the filter that often forms the last stage in sample preparation is highly desirable as, by removing a sample preparation step, it can both improve laboratory efficiency and decrease opportunities for error. Recent advances in infrared micro-spectroscopy combining a QCL with scanning optics have created a new paradigm, LDIR. It offers improved speed of analysis as well as high levels of automation. Its mode of operation, however, requires an IR-reflective background, and this has, to date, limited the ability to perform direct “on-filter” analysis. This study explores the potential to combine the filter with an infrared-reflective surface. By combining an IR-reflective material or coating on a filter membrane with advanced image analysis and detection algorithms, it is demonstrated that such filters can indeed be used in this way. Vibrational spectroscopic techniques play a vital role in the investigation and understanding of microplastics in the environment and food chain. While vibrational spectroscopy is widely deployed, improvements and novel innovations in these techniques that can increase the speed of analysis and ease of use can provide pathways to higher testing rates and, hence, improved understanding of the impacts of microplastics in the environment. Due to its capability to measure large areas in minutes, its speed, degree of automation and excellent S/N, LDIR could also be applied to various other samples such as food adulteration, coatings, laminates, fabrics, textiles and tissues. This presentation will highlight a few of them and focus on the benefits of LDIR versus classical techniques.

Keywords: QCL, automation, microplastics, tissues, infrared, speed

Procedia PDF Downloads 66
27 Philippine Site Suitability Analysis for Biomass, Hydro, Solar, and Wind Renewable Energy Development Using Geographic Information System Tools

Authors: Jara Kaye S. Villanueva, M. Rosario Concepcion O. Ang

Abstract:

For the past few years, the Philippines has depended on oil, coal, and other fossil fuels for most of its energy. According to the Department of Energy (DOE), the dominance of coal in the energy mix will continue until the year 2020. The expanding energy needs of the country have led to increasing efforts to promote and develop renewable energy. This research is part of the government initiative in preparation for renewable energy development and expansion in the country. The Philippine Renewable Energy Resource Mapping from Light Detection and Ranging (LiDAR) Surveys is a three-year government project which aims to assess and quantify the renewable energy potential of the country and to put it into usable maps. This study focuses on the site suitability analysis of four renewable energy sources – biomass (coconut, corn, rice, and sugarcane), hydro, solar, and wind energy. Site assessment is a key component in determining and assessing the most suitable locations for the construction of renewable energy power plants. The method maximizes the use of technical methods in resource assessment while also taking into account the environmental, social, and accessibility aspects of identifying potential sites, by utilizing and integrating two different methods: Multi-Criteria Decision Analysis (MCDA) and Geographic Information System (GIS) tools. For the MCDA, the Analytic Hierarchy Process (AHP) is employed to determine the parameters needed for the suitability analysis. To structure these site suitability parameters, various experts from different fields were consulted – scientists, policy makers, environmentalists, and industrialists. Consulting a well-represented group of people is important to avoid bias in the resulting hierarchy levels and weight matrices. AHP pairwise matrix computation is used to derive weights per level from the experts’ feedback, as sketched below. Threshold values derived from related literature, international studies, and government laws were then reviewed with energy specialists from the DOE. Geospatial analysis using GIS tools translates these decision-support outputs into visual maps. In particular, this study uses Euclidean distance to compute the distance values of each parameter, a fuzzy membership algorithm to normalize the Euclidean distance output, and the Weighted Overlay tool to aggregate the layers. Using the Natural Breaks algorithm, the suitability ratings of each map are classified into 5 discrete categories of suitability index: (1) not suitable, (2) least suitable, (3) suitable, (4) moderately suitable, and (5) highly suitable. In this method, classes are grouped so that similar values fall together, with each subdivision set apart from the rest where the differences in boundary values are largest. Results show that in the entire Philippine area of responsibility, biomass has the highest suitability rating, with rice as the most suitable at a 75.76% suitability percentage, whereas wind has the lowest suitability percentage with a score of 10.28%. Solar and hydro fall in the middle of the two, with suitability values of 28.77% and 21.27%, respectively.
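
By way of illustration, the principal-eigenvector step of the AHP weight derivation mentioned above can be sketched as follows; the pairwise comparison matrix and criterion names are hypothetical, not the experts’ actual judgements from the study.

```python
# Sketch: derive AHP criterion weights from a pairwise comparison matrix
# via the principal eigenvector, and check consistency (CR < 0.1 is acceptable).
import numpy as np

# Hypothetical expert judgements for three criteria (e.g. resource, slope, distance to grid)
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                      # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                         # normalised criterion weights

n = A.shape[0]
lam_max = eigvals.real[k]
ci = (lam_max - n) / (n - 1)                     # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]              # Saaty's random index
cr = ci / ri                                     # consistency ratio

print("weights:", weights.round(3), "CR:", round(cr, 3))
```

A weighted overlay then multiplies each normalised (e.g. fuzzy-membership) raster layer by its criterion weight and sums the layers cell by cell before the Natural Breaks classification.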

Keywords: site suitability, biomass energy, hydro energy, solar energy, wind energy, GIS

Procedia PDF Downloads 149
26 Electrical Transport through a Large-Area Self-Assembled Monolayer of Molecules Coupled with Graphene for Scalable Electronic Applications

Authors: Chunyang Miao, Bingxin Li, Shanglong Ning, Christopher J. B. Ford

Abstract:

While it is challenging to fabricate electronic devices close to atomic dimensions with conventional top-down lithography, molecular electronics promises to help maintain the exponential increase in component densities by using molecular building blocks to fabricate electronic components from the bottom up. It offers smaller, faster, and more energy-efficient electronic and photonic systems. A self-assembled monolayer (SAM) of molecules is a layer of molecules that self-assembles on a substrate. SAMs are mechanically flexible, optically transparent, low-cost, and easy to fabricate. A large-area multi-layer structure has been designed and investigated by the team, where a SAM of designed molecules is sandwiched between graphene and gold electrodes. Each molecule can act as a quantum dot, with all molecules conducting in parallel. When a source-drain bias is applied, significant current flows only if a molecular orbital (HOMO or LUMO) lies within the source-drain energy window. If electrons tunnel sequentially on and off the molecule, the charge on the molecule is well defined, and the finite charging energy causes Coulomb blockade of transport until the molecular orbital comes within the energy window. This produces ‘Coulomb diamonds’ in the conductance vs source-drain and gate voltages. For different tunnel barriers at either end of the molecule, it is harder for electrons to tunnel out of the dot than in (or vice versa), resulting in the accumulation of two or more charges and a ‘Coulomb staircase’ in the current vs voltage. This nanostructure exhibits highly reproducible Coulomb-staircase patterns, together with additional oscillations, which are attributed to molecular vibrations. Molecules are more isolated than semiconductor dots and so have a discrete phonon spectrum. When tunnelling into or out of a molecule, one or more vibronic states can be excited in the molecule, providing additional transport channels and resulting in additional peaks in the conductance. For useful molecular electronic devices, achieving the optimum alignment of molecular orbitals to the Fermi energy in the leads is essential. To explore this, a drop of ionic liquid is placed on top of the graphene to establish an electric field at the graphene, which screens poorly, gating the molecules underneath. Results for various molecules with different alignments of the Fermi energy to the HOMO have shown highly reproducible Coulomb-diamond patterns, which agree reasonably with DFT calculations. In summary, this large-area SAM molecular junction is a promising candidate for future electronic circuits. (1) The small size (1–10 nm) of the molecules and the good flexibility of the SAM allow scalable assembly of ultra-high densities of functional molecules, with advantages in cost, efficiency, and power dissipation. (2) The contacting technique using graphene enables mass fabrication. (3) Its well-observed Coulomb blockade behaviour, narrow molecular resonances, and well-resolved vibronic states offer good tuneability for various functionalities, such as switches, thermoelectric generators, and memristors.
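
For orientation, the single-electron transport condition behind the Coulomb diamonds described above can be written in the standard constant-interaction picture (a textbook relation quoted here for context, not a result of this work; C_Σ is the total capacitance of the molecular dot and ΔE its level spacing):

```latex
E_C = \frac{e^{2}}{2C_{\Sigma}}, \qquad
E_{\mathrm{add}} = \frac{e^{2}}{C_{\Sigma}} + \Delta E, \qquad
\mu_{\mathrm{drain}} \le \mu_{\mathrm{mol}}(N{+}1) \le \mu_{\mathrm{source}}.
```

Outside this window transport is Coulomb-blocked; sweeping the gate shifts μ_mol and traces out the diamond pattern, while asymmetric tunnel barriers produce the staircase in current vs bias.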

Keywords: molecular electronics, Coulomb blockade, electron-phonon coupling, self-assembled monolayer

Procedia PDF Downloads 63
25 Assessing the Experiences of South African and Indian Legal Profession from the Perspective of Women Representation in Higher Judiciary: The Square Peg in a Round Hole Story

Authors: Sricheta Chowdhury

Abstract:

To require a woman to choose between her work and her personal life is the most acute form of discrimination that can be meted out against her. No woman should be made to choose between her motherhood and her career at the Bar, yet that is the most detrimental discrimination that has been happening at the Indian Bar, which no one has questioned so far. The falling number of women in practice is a reality that is not garnering much attention, given the sharp rise in women studying law who are nevertheless unable to continue in the profession. Moving from a colonial misogynist whim to a post-colonial “new-age construct of the Indian woman” façade, the policymakers of the Indian Judiciary have done nothing so far to decolonize it from its rudimentary understanding of ‘equality of gender’ when it comes to the legal profession. Therefore, while Indian jurisprudence was (and is) swooning to the sweeping effect of transformative constitutionalism in the understanding of equality as enshrined under the Indian Constitution, one cannot help but question why the legal profession remained outside the reach of substantive equality. The airline industry’s discriminatory policies were not spared from criticism, nor were policies restricting women’s involvement in establishments serving liquor (the Anuj Garg case), but judicial practice did not question stereotypical gender bias and unequal structural practices until recently. This necessitates examining the existing Bar policies and the steps taken by the regulatory bodies, and assessing whether they work for or against the purpose of furthering women’s issues in present-day India. From a comparative feminist point of view, South Africa’s pro-women Bar policies are worth assessing for their applicability and extent in promoting inclusivity at the Bar. This article intends to tap the potential of these two countries in carving a niche that gives women an equal platform to play a substantive role in designing governance policies through the Judiciary. The article analyses the current gender composition of the legal profession while endorsing the concept of substantive equality as a requisite in designing an appropriate appointment process for judges. It studies the theoretical framework on gender equality, examines the international and regional instruments, and analyses the scope of welfare policies that Indian legal and regulatory bodies can undertake towards a transformative initiative in re-modeling the Judiciary into a more diverse and inclusive institution. The methodology employs a comparative and analytical review of doctrinal resources. It makes quantitative use of secondary data and qualitative use of primary data collected to determine the present status of Indian women legal practitioners and judges. With respect to quantitative data, statistics on the representation of women as judges, chief justices, and senior advocates, taken from official websites from 2018 till the present, have been utilized. With respect to qualitative data, results of structured interviews conducted through open- and close-ended questions with retired women judges of the higher judiciary and senior advocates of the Supreme Court of India, contacted through snowball sampling, are utilized.

Keywords: gender, higher judiciary, legal profession, representation, substantive equality

Procedia PDF Downloads 83
24 Reproductive Biology and Lipid Content of Albacore Tuna (Thunnus alalunga) in the Western Indian Ocean

Authors: Zahirah Dhurmeea, Iker Zudaire, Heidi Pethybridge, Emmanuel Chassot, Maria Cedras, Natacha Nikolic, Jerome Bourjea, Wendy West, Chandani Appadoo, Nathalie Bodin

Abstract:

Scientific advice on the status of fish stocks relies on indicators that are based on strong assumptions about biological parameters such as condition, maturity and fecundity. Currently, information on the biology of albacore tuna, Thunnus alalunga, in the Indian Ocean is scarce. Consequently, many parameters used in stock assessment models for Indian Ocean albacore originate largely from other studied stocks or species of tuna. Inclusion of incorrect biological data in stock assessment models would lead to inappropriate estimates of stock status used by fisheries managers to establish future catch allowances. The reproductive biology of albacore tuna in the western Indian Ocean was examined through analysis of the sex ratio, spawning season, length-at-maturity (L50), spawning frequency, fecundity and fish condition. In addition, the total lipid content (TL) and lipid class composition in the gonads, liver and muscle tissues of female albacore during the reproductive cycle were investigated. A total of 923 female and 867 male albacore were sampled from 2013 to 2015. A bias in sex ratio was found in favour of females with fork length (LF) <100 cm. Using histological analyses and the gonadosomatic index, spawning was found to occur between 10°S and 30°S, mainly to the east of Madagascar, from October to January. Large females contributed more to reproduction through their longer spawning period compared to small individuals. The L50 (mean ± standard error) of female albacore was estimated at 85.3 ± 0.7 cm LF using the vitellogenic-3 oocyte stage as the maturity threshold. Albacore spawn on average every 2.2 days within the spawning region during the spawning months from November to January. Batch fecundity varied between 0.26 and 2.09 million eggs, and the relative batch fecundity (mean ± standard deviation) was estimated at 53.4 ± 23.2 oocytes g-1 of somatic-gutted weight. Depending on the maturity stage, TL in ovaries ranged from 7.5 to 577.8 mg g-1 of wet weight (ww) with different proportions of phospholipids (PL), wax esters (WE), triacylglycerols (TAG) and sterols (ST). The highest TL was observed in immature (mostly TAG and PL) and spawning-capable ovaries (mostly PL, WE and TAG). Liver TL varied from 21.1 to 294.8 mg g-1 (ww) and acted as an energy store (mainly TAG and PL) prior to reproduction, when the lowest TL was observed. Muscle TL varied from 2.0 to 71.7 mg g-1 (ww) in mature females without a clear pattern between maturity stages, although higher values of up to 117.3 mg g-1 (ww) were found in immature females. The TL results suggest that albacore could be viewed predominantly as a capital breeder relying mostly on lipids stored before the onset of reproduction, with little additional energy derived from feeding. This study is the first to provide new information on the reproductive development and classification of albacore in the western Indian Ocean. The reproductive parameters will reduce uncertainty in current stock assessment models, which will eventually promote sustainability of the fishery.
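
As a sketch of how a length-at-maturity (L50) estimate of this kind is typically obtained, the following fits a logistic maturity ogive to binary maturity data and solves for the length at 50% maturity. The data are synthetic and the exact ogive specification used in the study is not reproduced here.

```python
# Sketch: estimate L50 by fitting a logistic curve to binary maturity data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
fork_length = rng.uniform(70, 110, size=400)               # cm LF, synthetic
p_mature = 1 / (1 + np.exp(-(fork_length - 85.0) / 3.0))   # true L50 set to 85 cm
mature = rng.binomial(1, p_mature)                          # 0 = immature, 1 = mature

X = sm.add_constant(fork_length)
fit = sm.Logit(mature, X).fit(disp=0)
b0, b1 = fit.params
l50 = -b0 / b1                                              # length at 50% maturity
print(f"Estimated L50 = {l50:.1f} cm LF")
```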

Keywords: condition, size-at-maturity, spawning behaviour, temperate tuna, total lipid content

Procedia PDF Downloads 260
23 Comparing Perceived Restorativeness in Natural and Urban Environment: A Meta-Analysis

Authors: Elisa Menardo, Margherita Pasini, Margherita Brondino

Abstract:

A growing body of empirical research from different areas of inquiry suggests that brief contact with natural environments restores mental resources. Attention Restoration Theory (ART) is the most widely used and empirically founded theory developed to explain why exposure to nature helps people recover cognitive resources. It assumes that contact with nature allows people to free (and then recover) voluntary attention resources and thus allows them to recover from cognitive fatigue. However, it has been suggested that some people could gain more cognitive benefit after exposure to urban environments. The objective of this study is to report the results of a meta-analysis of studies (peer-reviewed articles) comparing the restorativeness (the quality of being restorative) perceived in natural environments with that perceived in urban environments. The meta-analysis intended to estimate how much more restorative natural environments (forests, parks, boulevards) are perceived to be than urban ones (i.e., the magnitude of the difference in perceived restorativeness). Moreover, given the methodological differences between studies, it examined the potential role of moderator variables such as participants (students or others), instrument used (Perceived Restorativeness Scale or other), and procedure (in the laboratory or in situ). The PsycINFO, PsycARTICLES, Scopus, SpringerLINK, and Web of Science online databases were used to identify all peer-reviewed articles on restorativeness published to date (k = 167). Reference sections of the obtained papers were examined for additional studies. Only 22 independent studies (with a total of 1371 participants) met the inclusion criteria (direct exposure to the environment, comparison between one outdoor environment with natural elements and one without, and restorativeness measured by a self-report scale) and were included in the meta-analysis. To estimate the average effect size, a random-effects model (restricted maximum-likelihood estimator) was used because the studies included in the meta-analysis were conducted independently, using different methods in different populations, so no common effect size was expected. The presence of publication bias was checked using the trim-and-fill approach. Univariate moderator analyses (mixed-effects models) were run to determine whether the coded variables moderated the difference in perceived restorativeness. Results show that natural environments are perceived to be more restorative than urban environments, confirming from an empirical point of view what is now considered established knowledge in environmental psychology. The relevant information emerging from this study is the magnitude of the estimated average effect size, which is particularly high (d = 1.99) compared to those commonly observed in psychology. Significant heterogeneity between studies was found (Q(19) = 503.16, p < 0.001), and variability between studies was very high (I² [C.I.] = 96.97% [94.61 - 98.62]). Subsequent univariate moderator analyses were not significant: methodological differences (participants, instrument, and procedure) did not explain the variability between studies. Other methodological differences (e.g., research design, environment characteristics, lighting conditions) could explain this variability; alternatively, it could be due not to methodological differences but to individual differences (age, gender, education level) and characteristics (connection to nature, environmental attitude). Further moderator analyses are in progress.
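
To make the pooling step concrete, the sketch below computes a random-effects summary effect, Q and I² from per-study standardized mean differences and sampling variances. It uses the simpler DerSimonian-Laird tau² estimator rather than the restricted maximum-likelihood estimator applied in the meta-analysis, and the input numbers are invented.

```python
# Sketch: random-effects meta-analysis (DerSimonian-Laird), with Q and I^2.
import numpy as np

d = np.array([1.5, 2.3, 0.8, 2.6, 1.9])        # invented per-study effect sizes (Cohen's d)
v = np.array([0.10, 0.08, 0.15, 0.12, 0.09])   # invented sampling variances

w = 1.0 / v
d_fixed = np.sum(w * d) / np.sum(w)            # fixed-effect pooled estimate
Q = np.sum(w * (d - d_fixed) ** 2)             # heterogeneity statistic
df = len(d) - 1
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)                  # between-study variance (DL estimator)

w_star = 1.0 / (v + tau2)
d_random = np.sum(w_star * d) / np.sum(w_star) # random-effects pooled estimate
se = np.sqrt(1.0 / np.sum(w_star))
i2 = max(0.0, (Q - df) / Q) * 100              # percentage of variance due to heterogeneity

print(f"pooled d = {d_random:.2f} +/- {1.96*se:.2f}, Q = {Q:.2f}, I2 = {i2:.1f}%")
```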

Keywords: meta-analysis, natural environments, perceived restorativeness, urban environments

Procedia PDF Downloads 169
22 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks

Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez

Abstract:

Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients representing the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs), which have many hidden layers and are trained using new methods, have been shown to outperform GMMs in a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. Hence, in these low-resource scenarios, building an ASR model is a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches arise as a necessary alternative given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems using DNNs. This semi-supervised learning approach consists of: (a) Training a seed ASR model with a DNN using a set of audio files and their respective transcriptions. A DNN with one hidden layer was initialized, and the number of hidden layers was increased during training to five. A refinement of the weight matrices and bias terms, with Stochastic Gradient Descent (SGD) training, was also performed; the objective function was the cross-entropy criterion. (b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three confidence scores or metrics based on the lattice concept (the graph cost, the acoustic cost, and a combination of both) were used as the selection technique. The performance of the ASR system was measured by means of the Word Error Rate (WER). The test dataset was renewed in order to exclude the transcriptions newly added to the training dataset. Several experiments were carried out in order to select the best ASR results. A comparison between a GMM-based model without retraining and the proposed DNN system was also made under the same conditions. Results showed that the semi-supervised DNN-based ASR model outperformed the GMM model in terms of WER in all tested cases. The best result obtained an improvement of 6% relative WER. Hence, these promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
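
The procedure in (a)-(c) is essentially a self-training loop: train a seed model, pseudo-label unlabeled data, keep only high-confidence labels, and retrain. The toy sketch below illustrates that loop with a small scikit-learn classifier and a probability threshold standing in for the lattice-based confidence scores; it does not reproduce the DNN acoustic model, the lattice decoding, or the WER evaluation of the actual system.

```python
# Toy self-training loop: seed model -> pseudo-label -> confidence filter -> retrain.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_lab, y_lab = X[:200], y[:200]          # small labelled ("transcribed") set
X_unlab = X[200:]                        # unlabelled ("untranscribed") set

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_lab, y_lab)                  # (a) train seed model

for _ in range(3):                       # a few self-training rounds
    proba = model.predict_proba(X_unlab) # (b) decode/"label" the unlabelled data
    conf = proba.max(axis=1)
    keep = conf > 0.95                   # (c) keep only confident pseudo-labels
    X_aug = np.vstack([X_lab, X_unlab[keep]])
    y_aug = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                          random_state=0).fit(X_aug, y_aug)

print("retrained on", int(keep.sum()), "pseudo-labelled examples in the last round")
```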

Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning

Procedia PDF Downloads 339
21 Social Vulnerability Mapping in New York City to Discuss Current Adaptation Practice

Authors: Diana Reckien

Abstract:

Vulnerability assessments are increasingly used to support policy-making in complex environments like urban areas. Usually, vulnerability studies include the construction of aggregate (sub-)indices and the subsequent mapping of indices across an area of interest. Vulnerability studies have a couple of advantages: they are great communication tools, can inform a wider general debate about environmental issues, and can help allocate and efficiently target scarce resources for adaptation policy and planning. However, they also face a number of challenges: vulnerability assessments are constructed on the basis of a wide range of methodologies, and there is no single framework or methodology that has proven to serve best in certain environments; indicators vary highly according to the spatial scale used; different variables and metrics produce different results; and aggregate or composite vulnerability indicators that are mapped can easily distort or bias the picture of vulnerability, as they hide the underlying causes of vulnerability and level out conflicting reasons for vulnerability in space. So there is an urgent need to further develop the methodology of vulnerability studies towards a common framework, which is one motivation for the paper. We introduce a social vulnerability approach that is relatively well developed, compared with bio-physical or sectoral vulnerability approaches, in terms of a common methodology for index construction, guidelines for mapping, assessment of sensitivity, and verification of variables. Two approaches are commonly pursued in the literature. The first is an additive approach, in which all potentially influential variables are weighted according to their importance for the vulnerability aspect and then added to form a composite vulnerability index per unit area. The second approach includes variable reduction, mostly Principal Component Analysis (PCA), which reduces the set of interrelated variables to a smaller number of less correlated components, which are also added to form a composite index. We test these two approaches to constructing indices on the area of New York City, as well as two different metrics of input variables, and compare the outcomes for the 5 boroughs of NY. Our analysis shows that the mapping exercise yields particularly different results in the outer regions and parts of the boroughs, such as Outer Queens and Staten Island. However, some of these parts, particularly the coastal areas, receive the highest attention in current adaptation policy. We infer from this that current adaptation policy and practice in NY might need to be discussed, as these outer urban areas show relatively low social vulnerability compared with the more central parts, i.e. the high-density areas of Manhattan, Central Brooklyn, Central Queens and the Southern Bronx. The inner urban parts receive less adaptation attention but bear a higher risk of damage in case of hazards in those areas. This is conceivable, e.g., during large heatwaves, which would affect the inner and poorer parts of the city more than the outer urban areas. In light of recent planning practice in NY, one needs to question and discuss who in NY makes adaptation policy for whom; the presented analyses point towards an under-representation of the needs of the socially vulnerable population, such as the poor, the elderly, and ethnic minorities, in current adaptation practice in New York City.
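
For concreteness, the two index-construction routes described above can be sketched as follows on a hypothetical indicator table (rows = spatial units such as census tracts, columns = standardised vulnerability indicators); the indicator names, weights and data are invented.

```python
# Sketch: additive vs PCA-based composite vulnerability index per spatial unit.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
tracts = pd.DataFrame(
    rng.normal(size=(100, 4)),
    columns=["pct_poverty", "pct_over65", "pct_minority", "pct_no_vehicle"],
)

Z = StandardScaler().fit_transform(tracts)

# Approach 1: weighted additive index (weights assumed, e.g. from expert judgement)
weights = np.array([0.4, 0.2, 0.2, 0.2])
tracts["svi_additive"] = Z @ weights

# Approach 2: PCA reduction, retained components summed into a composite index
pca = PCA(n_components=2)
scores = pca.fit_transform(Z)
tracts["svi_pca"] = scores.sum(axis=1)

print(tracts[["svi_additive", "svi_pca"]].describe())
print("explained variance:", pca.explained_variance_ratio_.round(2))
```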

Keywords: vulnerability mapping, social vulnerability, additive approach, Principal Component Analysis (PCA), New York City, United States, adaptation, social sensitivity

Procedia PDF Downloads 395
20 Nanoscale Photo-Orientation of Azo-Dyes in Glassy Environments Using Polarized Optical Near-Field

Authors: S. S. Kharintsev, E. A. Chernykh, S. K. Saikin, A. I. Fishman, S. G. Kazarian

Abstract:

Recent advances in improving information storage performance are inseparably linked with the circumvention of fundamental constraints such as the superparamagnetic limit in heat-assisted magnetic recording, charge loss tolerance in solid-state memory and the Abbe diffraction limit in optical storage. A substantial breakthrough in the development of nonvolatile storage devices with dimensional scaling has been achieved due to phase-change chalcogenide memory, which nowadays meets market needs to the greatest advantage. Further progress is aimed at the development of versatile nonvolatile high-speed memory combining the potential of random access memory and archival storage. The well-established properties of light at the nanoscale empower us to use them for recording optical information with ultrahigh density, scaled down to a single molecule as the size of a pit. Indeed, diffraction-limited optics is able to record as much information as ~1 Gb/in². Nonlinear optical effects, for example two-photon fluorescence recording, allow one to decrease the extent of the pit even more, which results in recording densities of up to ~100 Gb/in². Going beyond the diffraction limit, owing to the sub-wavelength confinement of light, pushes the pit size down to a single chromophore, which is, on average, ~1 nm in length. Thus, the memory capacity can be increased up to the theoretical limit of 1 Pb/in². Moreover, the field confinement provides faster recording and readout operations due to the enhanced light-matter interaction. This, in turn, leads to the miniaturization of optical devices and a decrease in energy supply down to ~1 μW/cm². Intrinsic features of light such as its multimode nature, mixed polarization and angular momentum, in addition to the underlying optical and holographic tools for writing/reading, enrich the storage and encryption of optical information. In particular, the finite extent of the near-field penetration, falling into the range of 50-100 nm, offers the possibility of performing 3D volume (layer-to-layer) recording/readout of optical information. In this study, we demonstrate comprehensive evidence of the isotropic-to-homeotropic phase transition of an azobenzene-functionalized polymer thin film exposed to light and a dc electric field, using near-field optical microscopy and scanning capacitance microscopy. We unravel the near-field Raman dichroism of sub-10 nm thick epoxy-based side-chain azo-polymer films with polarization-controlled tip-enhanced Raman scattering. In our study, the orientation of azo-chromophores is controlled with a biased gold tip rather than with light polarization. Isotropic in-plane and homeotropic out-of-plane arrangements of azo-chromophores in a glassy environment can be distinguished with transverse and longitudinal optical near-fields. We demonstrate that both phases are unambiguously visualized by 2D mapping of their local dielectric properties with scanning capacitance microscopy. The stability of the polar homeotropic phase is strongly sensitive to the thickness of the thin film. We analyze the α-transition of the azo-polymer by detecting a temperature-dependent phase jump of an AFM cantilever when passing through the glass transition temperature. Overall, we anticipate further improvements in optical storage performance, approaching the single-molecule level.

Keywords: optical memory, azo-dye, near-field, tip-enhanced Raman scattering

Procedia PDF Downloads 177
19 Subway Ridership Estimation at a Station-Level: Focus on the Impact of Bus Demand, Commercial Business Characteristics and Network Topology

Authors: Jungyeol Hong, Dongjoo Park

Abstract:

The primary purpose of this study is to develop a methodological framework to predict daily subway ridership at a station level and to examine the association between subway ridership and bus demand, incorporating the commercial business facilities in the vicinity of each subway station. Socio-economic characteristics, land use, and the built environment may all have an impact on subway ridership. However, one should consider not only the endogenous relationship between bus and subway demand but also the characteristics of commercial business within a subway station’s sphere of influence and the topology of the integrated transit network. Therefore, a statistical approach to estimating subway ridership at a station level should account for the endogeneity and heteroscedasticity issues that such a prediction model might have. This study focused both on discovering the impacts of bus demand, commercial business characteristics, and network topology on subway ridership and on developing a more precise subway ridership estimation that accounts for this statistical bias. The spatial scope of the study covers the entire city of Seoul in South Korea and includes 243 stations, with the temporal scope set at twenty-four hours in one-hour interval time panels. The subway and bus ridership data were collected from Seoul Smart Card data for 2015 and 2016. A Three-Stage Least Squares (3SLS) approach was applied to develop the daily subway ridership model, capturing the endogeneity and heteroscedasticity between bus and subway demand. Independent variables incorporated in the modeling process were commercial business characteristics, socio-economic characteristics, a safety index, transit facility attributes, and dummies for seasons and time zones. As a result, it was found that bus ridership and subway ridership are endogenous to each other and have significantly positive coefficients, which means that one transit mode can increase the other mode’s ridership. In other words, the two transit modes of subway and bus have a mutual rather than a competitive relationship. The commercial business characteristics are the most critical dimension among the independent variables. The commercial business facility rate variables in the paper comprise six types: medical, educational, recreational, financial, food service, and shopping. From the model results, a higher rate of medical, financial, shopping, and food service facilities leads to an increase in subway ridership at a station, while recreational and educational facilities are associated with lower subway ridership. Complex network theory was applied to estimate integrated network topology measures covering the entire Seoul transit network system and to provide a framework for assessing their impact on subway ridership. The centrality measures were found to be significant and showed a positive sign, indicating that higher centrality leads to more subway ridership at a station. Out-of-sample model accuracy tests showed that the 3SLS model has a lower mean square error than OLS, demonstrating that the 3SLS approach is plausible for estimating subway ridership more accurately. Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (2017R1C1B2010175).
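
As a simplified illustration of how the endogeneity between bus and subway demand can be handled, the sketch below runs a two-stage least squares estimate for the subway equation on synthetic station-hour data, instrumenting bus ridership with a bus-side exogenous variable. The paper's actual model is a full three-stage least squares system estimated jointly for both modes; all variable names and coefficients here are invented.

```python
# Sketch: 2SLS for one equation of the bus-subway system (synthetic data only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000                                   # synthetic station-hour observations
bus_stops = rng.normal(size=n)             # instrument: bus-side exogenous variable
commercial = rng.normal(size=n)            # commercial business facility rate
u = rng.normal(size=n)                     # common shock -> endogeneity
bus = 0.8 * bus_stops + 0.5 * u + rng.normal(size=n)
subway = 0.6 * bus + 0.7 * commercial + u + rng.normal(size=n)

# Stage 1: regress the endogenous regressor on the instrument + exogenous variables
X1 = sm.add_constant(np.column_stack([bus_stops, commercial]))
bus_hat = sm.OLS(bus, X1).fit().fittedvalues

# Stage 2: replace bus ridership by its fitted values in the subway equation
X2 = sm.add_constant(np.column_stack([bus_hat, commercial]))
print(sm.OLS(subway, X2).fit().params)     # coefficient on bus_hat recovers ~0.6
```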

Keywords: subway ridership, bus ridership, commercial business characteristic, endogeneity, network topology

Procedia PDF Downloads 144
18 The Dark History of American Psychiatry: Racism and Ethical Provider Responsibility

Authors: Mary Katherine Hoth

Abstract:

Despite racial and ethnic disparities in American psychiatry being well documented, there remains an apathetic attitude among nurses and providers within the field toward engaging in active antiracism and providing equitable, recovery-oriented care. It is insufficient to be a “colorblind” nurse or provider and state that all care provided is identical for every patient. Maintaining an attitude of “colorblindness” perpetuates the racism prevalent throughout healthcare and leads to negative patient outcomes. The purpose of this literature review is to highlight how the historical beginnings of psychiatry have evolved into the disparities seen in today’s practice, as well as to provide some insight into methods that providers and nurses can employ to actively participate in challenging these racial disparities. Background: The application of psychiatric medicine to White people versus Black, Indigenous, and other People of Color has been distinctly different as a direct result of chattel slavery and the development of pseudoscientific “diagnoses” in the 19th century. This weaponization of the mental health of Black people continues to this day. Population: The populations discussed are Black, Indigenous, and other People of Color, with a primary focus on Black people’s experiences with their mental health and the field of psychiatry. Methods: A literature review was conducted using the CINAHL, EBSCO, MEDLINE, and PubMed databases with the following terms: psychiatry, mental health, racism, substance use, suicide, trauma-informed care, disparities, and recovery-oriented care. Articles were further filtered based on meeting the criteria of peer review, full-text availability, being written in English, and publication between 2018 and 2023. Findings: Black patients are more likely to be diagnosed with psychotic disorders and prescribed antipsychotic medications compared to White patients, who are more often diagnosed with mood disorders and prescribed antidepressants. The same disparity is also seen in children and adolescents, where Black children are more likely to be diagnosed with behavior problems such as Oppositional Defiant Disorder (ODD) and White children with the same presentation are more likely to be diagnosed with Attention-Deficit/Hyperactivity Disorder. Medication advertisements for antipsychotics like Haldol as recently as 1974 portrayed a Black man, labeled as “agitated” and “aggressive”, a trope we still see today in police violence cases. The majority of nursing and medical school programs do not provide education on racism and how to actively combat it in practice, leaving many healthcare professionals acutely uneducated and unaware of their own biases and racism, as well as of structural and institutional racism. Conclusions: Racism will continue to grow wherever it is given time, space, and energy. Providers and nurses have an ethical obligation to educate themselves, actively deconstruct their personal racism and bias, and continuously engage in active antiracism by dismantling racism wherever it is encountered, be it structural, institutional, or scientific racism. Agents of change at the patient care level will not only improve the outcomes of Black patients but will also lead the way in ensuring that Black, Indigenous, and other People of Color are included in future research on psychiatric methods and medications.

Keywords: disparities, psychiatry, racism, recovery-oriented care, trauma-informed care

Procedia PDF Downloads 129
17 Bio-Inspired Information Complexity Management: From Ant Colony to Construction Firm

Authors: Hamza Saeed, Khurram Iqbal Ahmad Khan

Abstract:

Effective information management is crucial for any construction project and its success. The primary areas of information generation are the construction site and the design office. Different types of information are required at different stages of construction, involving various stakeholders and creating complexity. There is a need for effective management of information flows to reduce the uncertainty that creates this complexity. Nature provides a unique perspective on dealing with complexity, in particular information complexity. The system dynamics methodology provides tools and techniques to address complexity; it involves modeling and simulation techniques. Nature has been dealing with complex systems since its beginning 4.5 billion years ago. It has perfected its systems through evolution, resilience to sudden changes, and the extinction of unadaptable and outdated species that are no longer fit for their environment. Nature has been accommodating changing factors and handling complexity forever. Humans have started to look to their natural counterparts for inspiration and solutions to their problems. This brings forth the possibility of using a biomimetic approach to improve the management practices used in the construction sector. Ants inhabit diverse habitats: Cataglyphis and Pogonomyrmex live in deserts, leafcutter ants reside in rainforests, and pharaoh ants are native to urban developments in tropical areas. Detailed studies have been done on fifty species out of the fourteen thousand discovered. They provide the opportunity to study how interactions in diverse environments generate collective behavior. Animals evolve to better adapt to their environment. The collective behavior of ants emerges from feedback through interactions among individuals, based on a combination of three basic factors: the patchiness of resources in time and space, operating cost, and environmental stability together with the threat of rupture. If resources appear in patches through time and space, the response is accelerating and non-linear; if resources are scattered, the response follows a linear pattern. If the acquisition of energy through food is faster than the energy spent to get it, the default is to continue with an activity unless it is halted for some reason; if the energy spent is higher than the energy gained, the default changes to staying put unless activated. Finally, if the environment is stable and the threat of rupture is low, the activation and amplification rate is slow but steady; otherwise, it is fast and sporadic. To further study these effects and to eliminate environmental bias, the behavior of four different ant species was studied, namely red harvester ants (Pogonomyrmex barbatus), Argentine ants (Linepithema humile), turtle ants (Cephalotes goniodontus), and leafcutter ants (genus Atta). This study aims to improve the information system in the construction sector by providing a guideline inspired by nature with a systems-thinking approach, using system dynamics as a tool. Identified factors and their interdependencies were analyzed in the form of a causal loop diagram (CLD), and construction industry professionals were interviewed based on the developed CLD, which was validated with a significant response. These factors and interdependencies in the natural system correspond with those in man-made systems, providing a guideline for the effective use and flow of information.
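
Read literally, the three response rules above can be caricatured in a few lines of code; the thresholds and functional forms below are invented purely to make the logic explicit and are not taken from the ant literature or from the paper.

```python
# Toy encoding of the stated feedback rules (invented thresholds, illustrative only).
def recruitment_response(resource_density: float, patchy: bool) -> float:
    """Accelerating, non-linear response for patchy resources; linear otherwise."""
    return resource_density ** 2 if patchy else resource_density

def default_activity(energy_gained: float, energy_spent: float) -> str:
    """Continue the activity if intake exceeds cost, otherwise stay put until activated."""
    return "continue" if energy_gained > energy_spent else "stay_put"

def activation_rate(stable_environment: bool, rupture_risk: float) -> str:
    """Slow and steady activation in stable, low-risk settings; fast and sporadic otherwise."""
    return "slow_steady" if stable_environment and rupture_risk < 0.2 else "fast_sporadic"

print(recruitment_response(0.8, patchy=True),
      default_activity(5.0, 3.0),
      activation_rate(True, 0.1))
```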

Keywords: biomimetics, complex systems, construction management, information management, system dynamics

Procedia PDF Downloads 137
16 Childhood Sensory Sensitivity: A Potential Precursor to Borderline Personality Disorder

Authors: Valerie Porr, Sydney A. DeCaro

Abstract:

TARA for borderline personality disorder (BPD), an education and advocacy organization, helps families to compassionately and effectively deal with troubling BPD behaviors. Our psychoeducational programs focus on understanding the underlying neurobiological features of BPD and evidence-based methodology integrating dialectical behavior therapy (DBT) and mentalization-based therapy (MBT), clarifying the inherent misunderstanding of BPD behaviors and improving family communication. TARA4BPD conducts online surveys, workshops, and topical webinars. For over 25 years, we have collected data from BPD helpline callers. These data drew our attention to particular childhood idiosyncrasies that seem to characterize many of the children who later met the criteria for BPD. The idiosyncrasies we observed, heightened sensory sensitivity and hypervigilance, were included in Adolf Stern’s 1938 definition of “borderline”. This aspect of BPD has not been prioritized by personality disorder researchers, who are presently focused on emotion processing and social cognition in BPD. Parents described sleep reversal problems in infants who, early on, seem to exhibit dysregulation of circadian rhythm. Families describe children as supersensitive to sensory sensations, with reactions to specific sounds, a heightened sense of smell and taste, aversion to textures of foods, and an inability to tolerate various fabric textures (e.g., seams in socks). They also exhibit high sensitivity to particular words and voice tones. Many have alexithymia and dyslexia. These children are either hypo- or hypersensitive to sensory sensations, including pain. Many suffer from fibromyalgia. BPD reactions to pain have been studied (C. Schmahl), and the results confirm the existence of hyper- and hypo-reactions to pain stimuli in people with BPD. To date, there is little or no data regarding what comprises a normative range of sensitivity in infants and children. Many parents reported that their children were tested or treated for sensory processing disorder (SPD), learning disorders, and ADHD. SPD is not included in the DSM and is treated by occupational therapists. The overwhelming anecdotal data from thousands of parents of children who later met criteria for BPD led TARA4BPD to develop a sensitivity survey to gather evidence of the possible role of early sensory perception problems as a precursor to BPD, hopefully initiating new directions in BPD research. At present, the research community seems unaware of the role supersensory sensitivity might play as an early indicator of BPD. Parents’ observations of childhood sensitivity obtained through family interviews, and the results of an extensive online survey on sensory responses across various ages of development, will be presented. People with BPD suffer from a sense of isolation and otherness that often results in later interpersonal difficulties. Early identification of supersensitive children while brain circuits are developing might decrease the development of social interaction deficits such as rejection sensitivity, self-referential processing, and negative bias, hallmarks of BPD, ultimately minimizing the maladaptive methods of coping with distress that characterize BPD. Family experiences are an untapped resource for BPD research. It is hoped that these data will give family observations the critical credibility to inform future treatment and research directions.

Keywords: alexithymia, dyslexia, hypersensitivity, sensory processing disorder

Procedia PDF Downloads 201
15 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction

Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal

Abstract:

Traditionally, monsoon forecasts have encountered many difficulties that stem from numerous issues such as the lack of adequate upper-air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes and each carry a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, time, and across variables. This is the basic concept behind the multi-model superensemble, which comprises a training and a forecast phase. The training phase learns from the recent past performance of the models and is used to determine statistical weights from a least squares minimization via a simple multiple regression. These weights are then used in the forecast phase. The superensemble forecasts carry the highest skill compared to the simple ensemble mean, the bias-corrected ensemble mean, and the best of the participating member models. This approach is a powerful post-processing method for the estimation of weather forecast parameters, reducing direct model output errors. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed, mean sea level pressure, etc., in this paper the approach is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability. The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., the European Centre for Medium Range Weather Forecasts (Europe), the National Center for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada) and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), which is one of the most complete data sets available. The novel approaches include a dynamical model selection approach, in which the superior models are selected from the participating member models at each grid point and for each forecast step in the training period. A multi-model superensemble based on training using similar conditions is also discussed in the present study; it is based on the assumption that training with similar conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods available in the literature that incorporate a 'neighborhood' around each grid point, to allow for spatial error or uncertainty, have also been tested with the above-mentioned approaches. The comparison of these schemes with observations verifies that the newly developed approaches provide a more unified and skillful prediction of summer monsoon (viz. June to September) rainfall compared to the conventional multi-model approach and the member models.
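
A minimal sketch of the superensemble training and forecast phases follows: in the training window, member-model anomalies are regressed on observed anomalies by least squares, and the resulting weights are applied to a new forecast. The synthetic data and single-grid-point treatment are simplifications of the multi-model, gridded setup described above.

```python
# Sketch: multi-model superensemble at one grid point (synthetic daily rainfall).
import numpy as np

rng = np.random.default_rng(7)
T, M = 120, 5                                # training days, member models
obs = rng.gamma(2.0, 4.0, size=T)            # "observed" rainfall, synthetic
fcst = obs[:, None] + rng.normal(0, 3, size=(T, M)) + rng.normal(0, 2, size=M)

obs_mean = obs.mean()
fcst_mean = fcst.mean(axis=0)
A = fcst - fcst_mean                         # member anomalies in the training period
b = obs - obs_mean                           # observed anomalies
weights, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares training weights

# Forecast phase: S = obs_mean + sum_i a_i * (F_i - Fbar_i)
new_fcst = obs_mean + rng.normal(0, 3, size=M)    # one new day's member forecasts
superensemble = obs_mean + (new_fcst - fcst_mean) @ weights
ensemble_mean = new_fcst.mean()
print(f"superensemble = {superensemble:.1f} mm, simple ensemble mean = {ensemble_mean:.1f} mm")
```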

Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction

Procedia PDF Downloads 139
14 Association between Polygenic Risk of Alzheimer's Dementia, Brain MRI and Cognition in UK Biobank

Authors: Rachana Tank, Donald M. Lyall, Kristin Flegal, Joey Ward, Jonathan Cavanagh

Abstract:

Alzheimer's Research UK estimates that by 2050, 2 million individuals will be living with late-onset Alzheimer's disease (LOAD). However, individuals experience considerable cognitive deficits and brain pathology over the decades before reaching clinically diagnosable LOAD, and studies have utilised approaches such as genome-wide association studies (GWAS) and polygenic risk (PGR) scores to identify high-risk individuals and potential pathways. This investigation aims to determine whether high genetic risk of LOAD is associated with worse brain MRI measures and cognitive performance in healthy older adults within the UK Biobank cohort. Previous studies investigating associations of PGR for LOAD with measures of MRI or cognitive functioning have focused on specific aspects of hippocampal structure, in relatively small samples and with poor control for confounders such as smoking. To our knowledge, both the sample size of this study and the discovery GWAS sample are larger than in previous studies. Genetic interactions between the loci showing the largest effects in GWAS have not been extensively studied, and APOE e4 is known to pose the largest genetic risk of LOAD, with potential gene-gene and gene-environment interactions of e4; for this reason, we also analyse genetic interactions of PGR with the APOE e4 genotype. We hypothesise that high genetic loading, based on a 21-SNP polygenic risk score for LOAD, is associated with worse brain MRI and cognitive outcomes in healthy individuals within the UK Biobank cohort. Summary statistics from the Kunkle et al. GWAS meta-analysis (cases: n=30,344; controls: n=52,427) will be used to create polygenic risk scores based on 21 SNPs, and analyses will be carried out in N=37,000 UK Biobank participants. This will be the largest study to date investigating PGR of LOAD in relation to MRI. MRI outcome measures include white matter (WM) tract measures and structural volumes. Cognitive function measures include reaction time, pairs matching, trail making, digit symbol substitution and prospective memory. Interaction of the APOE e4 alleles and PGR will be analysed by including APOE status as an interaction term coded as 0, 1 or 2 e4 alleles. Models will be partially adjusted for age, BMI, sex, genotyping chip, smoking, depression and social deprivation. Preliminary results suggest that the PGR score for LOAD is associated with decreased hippocampal volumes, including hippocampal body (standardised beta = -0.04, P = 0.022) and tail (standardised beta = -0.037, P = 0.030), but not hippocampal head. There were also associations of genetic risk with decreased cognitive performance, including fluid intelligence (standardised beta = -0.08, P < 0.01) and reaction time (standardised beta = 2.04, P < 0.01). No genetic interactions were found between APOE e4 dose and PGR score for MRI or cognitive measures. The generalisability of these results is limited by selection bias within the UK Biobank, as participants are less likely to be obese, to smoke or to be socioeconomically deprived, and have fewer self-reported health conditions than the general population. The lack of a unified approach or standardised method for calculating genetic risk scores may also be a limitation of these analyses. Further discussion and results are pending.
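
As a rough illustration of the scoring step only (not the authors' pipeline), a polygenic risk score of the kind described is conventionally the sum of risk-allele dosages weighted by the discovery-GWAS effect sizes; the arrays below are synthetic placeholders rather than the Kunkle et al. summary statistics.

import numpy as np

# Minimal sketch of a 21-SNP polygenic risk score: each participant's score is
# the count of the risk allele at each SNP (0, 1 or 2) weighted by the effect
# size (log odds ratio) from the discovery GWAS. Values here are synthetic.
rng = np.random.default_rng(1)
n_participants, n_snps = 1000, 21
effect_sizes = rng.normal(0.0, 0.05, n_snps)            # per-SNP log odds ratios
dosages = rng.integers(0, 3, (n_participants, n_snps))  # risk-allele counts

prs = dosages @ effect_sizes                            # raw score per participant
prs_z = (prs - prs.mean()) / prs.std()                  # standardised score

# The standardised score can then enter a regression on an MRI or cognitive
# outcome alongside covariates (age, sex, BMI, chip, smoking, deprivation),
# optionally with an APOE e4 dose (0/1/2) interaction term, as in the abstract.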

Keywords: Alzheimer's dementia, cognition, polygenic risk, MRI

Procedia PDF Downloads 113
13 Efficacy and Safety of Sublingual Sufentanil for the Management of Acute Pain

Authors: Neil Singla, Derek Muse, Karen DiDonato, Pamela Palmer

Abstract:

Introduction: Pain is the most common reason people visit emergency rooms. Studies indicate, however, that emergency department (ED) physicians often do not provide adequate analgesia to their patients as a result of gender and age bias, opiophobia, and insufficient knowledge of, and formal training in, acute pain management. Novel classes of analgesics have recently been introduced, but many patients suffer from acute pain in settings where the availability of intravenous (IV) access may be limited, so there remains a clinical need for rapid-acting, potent analgesics that do not require an invasive route of delivery. A sublingual sufentanil tablet (SST), dispensed using a single-dose applicator, is in development for treatment of moderate-to-severe acute pain in a medically supervised setting. Objective: The primary objective of this study was to demonstrate the repeat-dose efficacy, safety and tolerability of sufentanil 20 mcg and 30 mcg sublingual tablets compared to placebo for the management of acute pain, as determined by the time-weighted sum of pain intensity differences (SPID) to baseline over the 12-hour study period (SPID12). Key secondary efficacy variables included SPID over the first hour (SPID1), total pain relief over the 12-hour study period (TOTPAR12), time to perceived pain relief (PR) and time to meaningful PR. Safety variables consisted of adverse events (AEs), vital signs, oxygen saturation and early termination. Methods: In this Phase 2, double-blind, dose-finding study, an equal number of male and female patients were randomly assigned in a 2:2:1 ratio to SST 20 mcg, SST 30 mcg or placebo, respectively, following bunionectomy. Study drug was dosed as needed, but not more frequently than hourly. Rescue medication was available as needed. The primary endpoint was the summed pain intensity difference to baseline over 12 hours (SPID12). Safety was assessed by continuous oxygen saturation monitoring and adverse event reporting. Results: 101 patients (51 male/50 female) were randomized, 100 received study treatment (intent-to-treat [ITT] population), and 91 completed the study. Reasons for early discontinuation were lack of efficacy (6), adverse events (2) and drug-dosing error (1). Mean age was 42.5 years. For the ITT population, SST 30 mcg was superior to placebo (p=0.003) for the SPID12. SPID12 scores in the active groups were superior for both male (ANOVA overall p-value=0.038) and female (ANOVA overall p-value=0.005) patients. Statistically significant differences in favour of sublingual sufentanil were also observed between the SST 30 mcg and placebo groups for SPID1 (p<0.001), TOTPAR12 (p=0.002), time to perceived PR (p=0.023) and time to meaningful PR (p=0.010). Nausea, vomiting and somnolence were more frequent in the sufentanil groups, but there were no significant differences between treatment arms in the proportion of patients who prematurely terminated due to AEs or inadequate analgesia. Conclusions: The sufentanil tablet, dispensed sublingually using a single-dose applicator, is in development for treatment of patients with moderate-to-severe acute pain in a medically supervised setting where immediate IV access is limited. When administered sublingually, sufentanil's pharmacokinetic profile and non-invasive delivery make it a useful alternative to intramuscular (IM) or IV dosing.
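
For readers unfamiliar with the primary endpoint, the time-weighted SPID described above can be sketched as follows; the assessment times and 0-10 pain scores below are invented for illustration and are not trial data.

# Time-weighted summed pain intensity difference (SPID) over 12 hours: each
# pain intensity difference from baseline is weighted by the time elapsed
# since the previous assessment and summed over the observation window.
baseline_pain = 8
assessments = [(0.5, 7), (1, 6), (2, 5), (4, 5), (6, 4), (8, 4), (10, 3), (12, 3)]

spid12 = 0.0
prev_time = 0.0
for time_h, pain in assessments:
    pid = baseline_pain - pain            # pain intensity difference from baseline
    spid12 += pid * (time_h - prev_time)  # weight by elapsed interval (hours)
    prev_time = time_h

print(f"SPID12 = {spid12:.1f}")           # larger values indicate greater cumulative relief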

Keywords: acute pain, pain management, sublingual, sufentanil

Procedia PDF Downloads 356
12 Use of Machine Learning Algorithms to Pediatric MR Images for Tumor Classification

Authors: I. Stathopoulos, V. Syrgiamiotis, E. Karavasilis, A. Ploussi, I. Nikas, C. Hatzigiorgi, K. Platoni, E. P. Efstathopoulos

Abstract:

Introduction: Brain and central nervous system (CNS) tumors form the second most common group of cancers in children, accounting for 30% of all childhood cancers. MRI is the key imaging technique used for the visualization and management of pediatric brain tumors. Initial characterization of tumors from MRI scans is usually performed via a radiologist's visual assessment. However, different brain tumor types do not always demonstrate clear differences in visual appearance. Using only conventional MRI to provide a definite diagnosis could potentially lead to inaccurate results, and so histopathological examination of biopsy samples is currently considered to be the gold standard for obtaining definite diagnoses. Machine learning is the study of computational algorithms that can learn mathematical relationships and patterns, simple or complex, from empirical and scientific data in order to make reliable decisions. Given the above, machine learning techniques could provide effective and accurate ways to automate and speed up the analysis and diagnosis of medical images. Machine learning applications in radiology are, or could potentially be, useful in practice for medical image segmentation and registration, computer-aided detection and diagnosis in CT, MR or radiography images, and analysis of functional MR (fMRI) images for brain activity studies and neurological disease diagnosis. Purpose: The objective of this study is to provide an automated tool, which may assist in the imaging evaluation and classification of brain neoplasms in pediatric patients by determining glioma type and grade and differentiating between different brain tissue types. Moreover, a future purpose is to present an alternative way of quick and accurate diagnosis in order to save time and resources in the daily medical workflow. Materials and Methods: A cohort of 80 pediatric subjects was used: 20 with ependymoma, 20 with astrocytoma, 20 with medulloblastoma (all posterior fossa tumors) and 20 healthy children. The MR sequences used for every patient were: axial T1-weighted (T1), axial T2-weighted (T2), Fluid-Attenuated Inversion Recovery (FLAIR), axial diffusion-weighted images (DWI) and axial contrast-enhanced T1-weighted (T1ce). From every sequence, only a principal slice was used, manually traced by two expert radiologists. Image acquisition was carried out on a GE HDxt 1.5-T scanner. The images were preprocessed following a number of steps, including noise reduction, bias-field correction, thresholding, coregistration of all sequences (T1, T2, T1ce, FLAIR, DWI), skull stripping, and histogram matching. A large number of candidate features were chosen, including age, tumor shape characteristics, image intensity characteristics and texture features. After selecting the features that achieve the highest accuracy with the fewest variables, four machine learning classification algorithms were used: k-Nearest Neighbour, Support Vector Machines, C4.5 Decision Tree and a Convolutional Neural Network. The machine learning schemes and the image analysis are implemented in the WEKA and MATLAB platforms, respectively. Results-Conclusions: The results and the image classification accuracy for each tumor type under the four different algorithms are still in progress.
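
The classifier-comparison step can be sketched on an already-extracted feature table. The authors implemented their schemes in WEKA and MATLAB, so the scikit-learn code below is purely illustrative: the feature matrix is random placeholder data, a generic decision tree stands in for C4.5, and the convolutional neural network is omitted because it operates on image data rather than tabular features.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# 80 subjects x 40 extracted features (age, shape, intensity, texture) and four
# classes: ependymoma, astrocytoma, medulloblastoma, healthy. Placeholder data.
rng = np.random.default_rng(2)
X = rng.normal(size=(80, 40))
y = np.repeat([0, 1, 2, 3], 20)

models = {
    "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "DecisionTree": DecisionTreeClassifier(max_depth=5, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name}: accuracy {scores.mean():.2f} +/- {scores.std():.2f}")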

Keywords: image classification, machine learning algorithms, pediatric MRI, pediatric oncology

Procedia PDF Downloads 149
11 Geographic Information System Based Multi-Criteria Subsea Pipeline Route Optimisation

Authors: James Brown, Stella Kortekaas, Ian Finnie, George Zhang, Christine Devine, Neil Healy

Abstract:

The use of GIS as an analysis tool for engineering decision making is now best practice in the offshore industry. GIS enables multidisciplinary data integration, analysis and visualisation, which allows the presentation of large and intricate datasets in a simple map interface accessible to all project stakeholders. Presenting integrated geoscience and geotechnical data in GIS enables decision makers to be well informed. This paper is a case study of how GIS spatial analysis techniques were successfully applied to help select the most favourable pipeline route. Routing a pipeline through any natural environment presents numerous obstacles, whether topographical, geological, engineering or financial. Where the pipeline is subjected to external hydrostatic water pressure and is carrying pressurised hydrocarbons, the requirement to safely route the pipeline through hazardous terrain becomes paramount. This study illustrates how the application of modern, GIS-based pipeline routing techniques enabled the identification of a single most favourable pipeline route across challenging seabed terrain. Conventional approaches to pipeline route determination focus on manual avoidance of primary constraints whilst endeavouring to minimise route length. Such an approach is qualitative, subjective and liable to bias towards the discipline and expertise involved in the routing process. For very short routes traversing benign seabed topography in shallow water this approach may be sufficient, but for deepwater geohazardous sites an automated, multi-criteria, quantitative approach is essential. This study combined multiple routing constraints using modern least-cost-routing algorithms deployed in GIS, hitherto unachievable with conventional approaches. The least-cost-routing procedure begins with the assignment of geocost across the study area. Geocost is defined as a numerical penalty score representing the hazard posed to the pipeline by each routing constraint (e.g. slope angle, rugosity, vulnerability to debris flows). All geocosted routing constraints are combined to generate a composite geocost map that is used to compute the least-geocost route between two defined terminals. The analyses were applied to select the most favourable pipeline route for a potential gas development in deep water. The study area is geologically complex, with a series of incised, potentially active canyons carved into a steep escarpment and evidence of extensive debris flows. A similar debris flow in the future could cause significant damage to a poorly placed pipeline. Protruding inter-canyon spurs offer lower-gradient options for ascending the escarpment, but the susceptibility of these spurs to periodic failure is not well understood. Close collaboration between geoscientists, pipeline engineers, geotechnical engineers and, of course, the gas export pipeline operator guided the analyses and the assignment of geocosts. Shorter route length, less severe slope angles, and geohazard avoidance were the primary drivers in identifying the most favourable route.
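
The least-cost-routing idea can be sketched as follows, assuming synthetic constraint rasters and illustrative weights (in the study the geocosts were assigned collaboratively by the project disciplines): the weighted rasters are summed into a composite geocost surface, and a shortest-path search then accumulates geocost between the two terminals.

import heapq
import numpy as np

# Combine normalised constraint rasters into a composite geocost surface, then
# find the route with the lowest accumulated geocost (Dijkstra on an
# 8-connected grid). Grids, weights and terminals are synthetic placeholders.
rng = np.random.default_rng(3)
shape = (60, 80)
slope = rng.random(shape)       # slope-angle penalty
rugosity = rng.random(shape)    # rugosity penalty
debris = rng.random(shape)      # debris-flow exposure penalty
geocost = 0.5 * slope + 0.2 * rugosity + 0.3 * debris + 0.01   # keep costs positive

def least_cost_path(cost, start, end):
    """Dijkstra over an 8-connected grid of per-cell geocost."""
    dist = np.full(cost.shape, np.inf)
    prev = {}
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < cost.shape[0] and 0 <= nc < cost.shape[1]:
                    nd = d + cost[nr, nc] * (1.414 if dr and dc else 1.0)
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[end]

route, total = least_cost_path(geocost, (5, 5), (55, 75))
print(f"route cells: {len(route)}, accumulated geocost: {total:.2f}")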

Keywords: geocost, geohazard, pipeline route determination, pipeline route optimisation, spatial analysis

Procedia PDF Downloads 406
10 The BETA Module in Action: An Empirical Study on Enhancing Entrepreneurial Skills through Kearney's and Bloom's Guiding Principles

Authors: Yen Yen Tan, Lynn Lam, Cynthia Lam, Angela Koh, Edwin Seng

Abstract:

Entrepreneurial education plays a crucial role in nurturing future innovators and change-makers. Over time, significant progress has been made in refining instructional approaches to develop the necessary skills among learners effectively. Two highly valuable frameworks, Kearney's "4 Principles of Entrepreneurial Pedagogy" and Bloom's "Three Domains of Learning," serve as guiding principles in entrepreneurial education. Kearney's principles align with experiential and student-centric learning, which are crucial for cultivating an entrepreneurial mindset. The potential synergies between these frameworks hold great promise for enhancing entrepreneurial acumen among students. However, despite this potential, their integration remains largely unexplored. This study aims to bridge this gap by building upon the Business Essentials through Action (BETA) module and investigating its contributions to nurturing the entrepreneurial mindset. This study employs a quasi-experimental mixed-methods approach, combining quantitative and qualitative elements to ensure comprehensive and insightful data. A cohort of 235 students participated, with 118 enrolled in the BETA module and 117 in a traditional curriculum. Their Personal Entrepreneurial Competencies (PECs) were assessed before admission (pre-Y1) and one year into the course (post-Y1) using a comprehensive 55-item PEC questionnaire, enabling measurement of critical traits such as opportunity-seeking, persistence, and risk-taking. Rigorous computations of individual entrepreneurial competencies and overall PEC scores were performed, including a correction factor to mitigate potential self-assessment bias. The integration of Kearney's principles and Bloom's domains within the BETA module was examined in detail through structured interviews aligned with contemporary research methodologies, which capture the transformative journey undertaken by students. The study also explores the BETA module's influence on students' entrepreneurial competencies from the perspective of faculty members: focus group discussions with six lecturers capture their perceptions, experiences, and reflections on the impact of the pedagogical practices embedded within the BETA module. Preliminary findings from ongoing data analysis indicate promising results, showing a substantial improvement in entrepreneurial skills among students participating in the BETA module. This study promises not only to elevate students' entrepreneurial competencies but also to illuminate the broader applicability of Kearney's principles and Bloom's domains. The combination of quantitative analyses, which provide precise competency metrics, and qualitative accounts of students' transformative journeys supports a holistic understanding of this educational endeavour. Through a rigorous quasi-experimental mixed-methods approach, this research aims to establish the BETA module's effectiveness in fostering entrepreneurial acumen among students at Singapore Polytechnic, thereby contributing valuable insights to the broader discourse on educational methodologies.

Keywords: entrepreneurial education, experiential learning, pedagogical frameworks, innovative competencies

Procedia PDF Downloads 64
9 In-Depth Investigations on the Sequences of Accidents of Powered Two Wheelers Based on Police Crash Reports of Medan, North Sumatera Province Indonesia, Using Decision Aiding Processes

Authors: Bangun F., Crevits B., Bellet T., Banet A., Boy G. A., Katili I.

Abstract:

This paper seeks to identify incoherencies in the cognitive processes at work during accidents involving Powered Two Wheelers (PTW) by understanding the factual sequences of events and causal relations in each accident case. The principle of this approach is to undertake in-depth, case-by-case investigations of PTW accidents based on detailed data acquisition at accident sites, as officially recorded in the 2012 Police Crash Reports (PCRs) of Medan, using the criteria that the accident involved at least one PTW and resulted in serious injury or fatality. The analysis takes into account four modules: accident chronologies; perpetrators and victims; injury surveillance; and vehicles and road infrastructure, comprising traffic facilities, road geometry, road alignments and weather. The proposed improvements could have a favorable influence on the chain of functional processes and events leading to collision. Decision Aiding Processes (DAP) assist in structuring the different entities at different decisional levels, as each of these entities has its own objectives and constraints. The entities (A) are classified into six groups of accidents: solo PTW accidents; PTW vs. PTW; PTW vs. pedestrian; PTW vs. motor-trishaw; PTW vs. other vehicles; and consecutive crashes. The entities are also distinguished into four decisional levels: the level of road users and street systems; the operational level (crash-attending police officers, or CAPO, and road engineers); the tactical level (Regional Traffic Police, Department of Transportation, and Department of Public Work); and the strategic level (Traffic Police Headquarters (TCPHI), parliament, Ministry of Transportation and Ministry of Public Work). These classifications lead to the conceptualization of Problem Situations (P) and Problem Formulations (I) in the DAP context. The DAP concerns the sequence of incidents up to the moment the accident occurs, which can be modelled in terms of five activities of procedural rationality: identification of initial human features (IHF); investigation of proponent attributes (PrAT); investigation of Injury Surveillance (IS); investigation of the interaction between IHF, PrAT and IS (intercorrelation); and unravelling of the sequences of incidents, filtering and disclosure, which includes what needs to be activated, modified, changed or removed, what is new and what is a priority. These can relate to the activation, modification or new establishment of law. PrAT encompasses problems of the environment, road infrastructure, road and traffic facilities, and road geometry. The evaluation model (MP) is generated to bridge P and I, since MP is produced by the intercorrelations among IHF, PrAT and IS extracted from the 2012 PCRs of Medan. There are seven findings of incoherence: lack of knowledge and awareness of the traffic regulations and the risks of accidents, especially when riding within 0-10 km of home or between 22:00 and 05:30; lack of engagement in the procurement of IHF data by CAPO; lack of competency of CAPO in data procurement at accident sites; no intercorrelation among IHF, PrAT and IS in the database systems of the PCRs; lack of maintenance and supervision of the availability and capacity of traffic facilities and road infrastructure; instrumental bias with wash-back impacts on the TCPHI; and technical robustness issues with wash-back impacts on the CAPO and TCPHI.

Keywords: decision aiding processes, evaluation model, PTW accidents, police crash reports

Procedia PDF Downloads 158
8 Reviving Customs: Examining the Vernacular Habitus in Modern Marathi Film via the Tamasha Genre

Authors: Amar Ramesh Wayal

Abstract:

Marathi cinema, an integral part of India’s diverse film industry, has significantly evolved in its storytelling and aesthetics, with the Tamasha genre being central to this evolution. Tamasha, a traditional form of Marathi theatre, features vibrant dance and music, especially the rhythmic and often suggestive musical genre, lavani. It gained cinematic prominence in the 1960s with Anant Mane’s Sangtye Aika (1959), which brought Tamasha to the silver screen and popularized it, and V. Shantaram’s Pinjra (1972), an iconic Tamasha drama. Despite early success, Tamasha films declined in popularity until Natarang (2010) revitalized interest in this traditional form. This study examines the relevance and evolution of the Tamasha genre in Marathi cinema through contemporary films like Ek Hota Vidushak by Jabbar Patel (1992), Natarang (2010) by Ravi Jadhav, and Tamasha Live (2022) by Sanjay Jadhav. The selection of the films is based on their significant roles in the evolution of the Tamasha in Marathi cinema. Ek Hota Vidushak explores socio-political themes through Tamasha, Natarang depicts the struggles and emotional depth of Tamasha performers, and Tamasha Live integrates traditional Tamasha into modern cinema. By analysing films from different periods, this study highlights the genre’s reinterpretation and adaptation over time. The study employs a qualitative approach, utilizing textual analysis and cultural critique to examine the portrayal and evolution of Tamasha in selected films. It aims to illuminate the complex relationship between tradition and modernity in Marathi cinema through Foucauldian discourse analysis and Pierre Bourdieu’s concept of “vernacular habitus,” which refers to local, indigenous cultural spaces that shape people’s perceptions and expressions. By analyzing these films, the study seeks to understand how traditional cultural forms are integrated into contemporary cinematic narratives. However, this method has limitations, such as subjectivity in interpretation and the need for extensive contextual knowledge. Qualitative research can be subject to researcher bias, affecting analysis and conclusions. To mitigate this, this study maintains rigorous reflexivity and transparency regarding the researcher’s positionality. Furthermore, findings from specific film analyses may not be universally applicable to all Tamasha films or broader Marathi cinema. To enhance the study’s robustness, future research could incorporate comparative or quantitative data to complement qualitative insights. Despite these challenges, qualitative research is crucial for exploring cultural artifacts and their significance within specific contexts. By triangulating qualitative findings with diverse perspectives and acknowledging limitations, this study aims to provide a nuanced understanding of how Tamasha cinema preserves and revitalizes Maharashtra’s folk traditions while adapting them to contemporary contexts. Analyzing films by Jabbar Patel, Ravi Jadhav, and Sanjay Jadhav shows how these filmmakers balance traditional aesthetics with modern storytelling, bridging historical continuity with contemporary relevance. This study offers insights into how indigenous traditions like Tamasha continue to shape and define cinematic narratives in Maharashtra.

Keywords: Marathi cinema, Tamasha genre, vernacular habitus, discourse analysis, cultural evolution

Procedia PDF Downloads 32
7 Delivering Safer Clinical Trials; Using Electronic Healthcare Records (EHR) to Monitor, Detect and Report Adverse Events in Clinical Trials

Authors: Claire Williams

Abstract:

Randomised controlled trials (RCTs) of efficacy are still perceived as the gold standard for the generation of evidence, and whilst advances in data collection methods are well developed, this progress has not been matched in the reporting of adverse events (AEs). Assessment and reporting of AEs in clinical trials are fraught with human error and inefficiency and are extremely time and resource intensive. Recent research into the quality of AE reporting during clinical trials concluded that it is substandard and inconsistent. Investigators commonly send sponsors reports that are incorrectly categorised and lacking critical information, which can complicate the detection of valid safety signals. In our presentation, we will describe an electronic data capture system, which has been designed to support clinical trial processes by reducing the resource burden on investigators, improving overall trial efficiencies, and making trials safer for patients. This proprietary technology was developed using expertise proven in the delivery of the world's first prospective, phase 3b real-world trial, 'The Salford Lung Study,' which enabled robust safety monitoring and reporting processes to be accomplished by the remote monitoring of patients' EHRs. This technology enables safety alerts that are pre-defined by the protocol to be detected from the data extracted directly from the patient's EHR. Based on study-specific criteria, which are created from the standard definition of a serious adverse event (SAE) and the safety profile of the medicinal product, the system notifies the investigator or study team of the safety alert. Each safety alert will require a clinical review by the investigator or delegate; examples of the types of alerts include hospital admission, death, hepatotoxicity, neutropenia, and acute renal failure. This is achieved in near real-time; safety alerts can be reviewed along with any additional information available to determine whether they meet the protocol-defined criteria for reporting or withdrawal. This active surveillance technology helps reduce the resource burden of the more traditional methods of AE detection for the investigators and study teams and can help eliminate reporting bias. Integration of multiple healthcare data sources enables much more complete and accurate safety data to be collected as part of a trial and can also provide an opportunity to evaluate a drug's safety profile long-term, in post-trial follow-up. By utilising this robust and proven method for safety monitoring and reporting, much higher-risk patient cohorts can be enrolled into trials, thus promoting inclusivity and diversity. Broadening eligibility criteria and adopting more inclusive recruitment practices in the later stages of drug development will increase the ability to understand the medicinal product's risk-benefit profile across the patient population that is likely to use the product in clinical practice. Furthermore, this ground-breaking approach to AE detection not only provides sponsors with better-quality safety data for their products, but it also reduces the resource burden on the investigator and study teams. With the data taken directly from the source, trial costs are reduced and minimal data validation is required, while near real-time reporting enables safety concerns and signals to be detected more quickly than in a traditional RCT.
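
A minimal sketch of the protocol-defined alert detection described above might look as follows; the rules, thresholds and record fields are invented placeholders, not the product's actual criteria.

# Screen an EHR-derived patient record against protocol-defined safety-alert
# criteria and flag anything that needs clinical review. Illustrative only.
ALERT_RULES = {
    "hospital admission":  lambda r: r.get("admission_date") is not None,
    "neutropenia":         lambda r: r.get("neutrophils_10e9_per_L", 10.0) < 1.0,
    "acute renal failure": lambda r: r.get("creatinine_umol_per_L", 0.0) > 300.0,
    "hepatotoxicity":      lambda r: r.get("alt_iu_per_L", 0.0) > 3 * r.get("alt_uln", 40.0),
}

def screen_record(record):
    """Return the safety alerts a patient's latest EHR extract triggers."""
    return [name for name, rule in ALERT_RULES.items() if rule(record)]

record = {"patient_id": "P-0042", "neutrophils_10e9_per_L": 0.7,
          "alt_iu_per_L": 150.0, "alt_uln": 40.0}
for alert in screen_record(record):
    print(f"{record['patient_id']}: flag '{alert}' for clinical review")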

Keywords: more comprehensive and accurate safety data, near real-time safety alerts, reduced resource burden, safer trials

Procedia PDF Downloads 84
6 The Usefulness of Medical Scribes in the Emergency Department

Authors: Victor Kang, Sirene Bellahnid, Amy Al-Simaani

Abstract:

Efficient documentation and completion of clerical tasks are pillars of efficient patient-centered care in acute settings such as the emergency department (ED). Medical scribes aid physicians with documentation, navigation of electronic health records, results gathering, and communication coordination with other healthcare teams. However, the use of medical scribes is not widespread, with some hospitals even discontinuing their programs. One reason for this could be the lack of studies that have outlined concrete improvements in efficiency and patient and provider satisfaction in emergency departments before and after incorporating scribes. Methods: We conducted a review of the literature concerning the implementation of a medical scribe program and emergency department performance. For this review, a narrative synthesis accompanied by textual commentaries was chosen to present the selected papers. PubMed was searched exclusively. Initially, no date limits were set, but seeing as the electronic medical record was officially implemented in Canada in 2013, studies published after this date were preferred, as they provided insight into the interplay between its implementation, scribes, and quality improvement. Results: Throughput, efficiency, and cost-effectiveness were the most commonly used parameters in evaluating scribes in the emergency department. Important throughput metrics, specifically door-to-doctor and disposition time, were significantly decreased in emergency departments that utilized scribes. Of note, this was shown to be the case in community hospitals, where the burden of documentation and clerical tasks would fall directly upon the attending physician. Academic centers differ in that they rely heavily on residents and students, so the implementation of scribes has been shown to have limited effect on these metrics. However, unique to academic centers was the providers' perception of increased time for teaching. Consequently, providers express increased work satisfaction in relation to time spent with patients and in teaching. Patients, on the other hand, did not demonstrate a decrease in satisfaction in regard to the care provided, but there was no significant increase observed either. Of the studies we reviewed, one of the biggest limitations was the lack of significance in the data. While many individual studies reported that medical scribes in emergency rooms improved relative value units, patient satisfaction, provider satisfaction, and the number of patients seen, there was no statistically significant improvement in the above criteria when the data were compiled in a systematic review. There is also a clear publication bias; very few studies with negative results were published. To prove significance, data from more emergency rooms with scribe programs would need to be compiled, including emergency rooms that did not report noticeable benefits. Furthermore, most data sets focused only on scribes in academic centers. Conclusion: Ultimately, the literature suggests that while emergency room physicians who have access to medical scribes report higher satisfaction due to lower clerical burdens and can see more patients per shift, there is still variability in terms of patient and provider satisfaction. Whether this variability is due to differences in training (in-house trainees versus contractors), population profile (adult versus pediatric), setting (academic versus community), or which shifts scribes work, cannot be determined from the existing studies. More scribe programs need to be evaluated to determine whether these variables affect outcomes and whether scribes significantly improve emergency room efficiency.

Keywords: emergency medicine, medical scribe, scribe, documentation

Procedia PDF Downloads 90
5 Artificial Intelligence Impact on the Australian Government Public Sector

Authors: Jessica Ho

Abstract:

AI has helped governments, businesses and industries transform the way they do things. It is used to automate tasks to improve decision-making and efficiency, and it is embedded in sensors and automation to save time and eliminate human error in repetitive tasks. Today, we are seeing AI grow through the collection of vast amounts of data to forecast with greater accuracy, inform decision-making, adapt to changing market conditions and offer more personalised services based on consumer habits and preferences. Governments around the world share the opportunity to leverage these disruptive technologies to improve productivity while reducing costs. In addition, these intelligent solutions can also help streamline government processes to deliver more seamless and intuitive user experiences for employees and citizens. This is a critical challenge for the NSW Government, as we are unable to determine the risk posed by the unprecedented pace of adoption of AI solutions in government. Government agencies must ensure that their use of AI complies with relevant laws and regulatory requirements, including those related to data privacy and security. Furthermore, there will always be ethical concerns surrounding the use of AI, such as the potential for bias, intellectual property rights and its impact on job security. Within NSW's public sector, agencies are already testing AI for crowd control, infrastructure management, fraud compliance, public safety, transport, and police surveillance. Citizens are also attracted to the ease of use and accessibility of AI solutions, which do not require specialised technical skills. This increased accessibility, however, must be balanced against higher risk and exposure to the health and safety of citizens. Public agencies struggle to keep up with this pace while minimising risks, and the low entry cost and open-source nature of generative AI has led to a rapid, organic increase in the development of AI-powered apps – "There is an AI for That" in government. Other challenges include the fact that there appear to be no legislative provisions that expressly authorise the NSW Government to use AI to make decisions. On the global stage, there are too many actors in the regulatory space, and a sovereign response is needed to minimise multiplicity and regulatory burden. Therefore, traditional corporate risk and governance frameworks, as well as regulatory and legislative frameworks, will need to be evaluated against the unique challenges of AI, given its rapidly evolving nature, ethical considerations, and heightened regulatory scrutiny affecting consumer safety and increasing risks for government. Creating an effective, efficient NSW Government governance regime, adapted to the range of different approaches to the application of AI, is not merely a matter of overcoming technical challenges. Technologies have a wide range of social effects on our surroundings and behaviours. There is compelling evidence to show that Australia's sustained social and economic advancement depends on AI's ability to spur economic growth, boost productivity, and address a wide range of societal and political issues. AI may also inflict significant damage. If such harm is not addressed, the public's confidence in this kind of innovation will be weakened. This paper suggests for consideration several AI regulatory approaches that are forward-looking and agile while simultaneously fostering innovation and human rights. The anticipated outcome is to ensure that the NSW Government matches the rising level of innovation in AI technologies with appropriate and balanced innovation in AI governance.

Keywords: artificial intelligence, machine learning, rules, governance, government

Procedia PDF Downloads 70
4 Beyond Bindis, Bhajis, Bangles, and Bhangra: Exploring Multiculturalism in Southwest England Primary Schools, Early Research Findings

Authors: Suparna Bagchi

Abstract:

Education as a discipline will probably be shaped by the importance it places on a conceptual, curricular, and pedagogical need to shift the emphasis toward transformative classrooms working for positive change through cultural diversity. Awareness of cultural diversity and race equality has heightened following George Floyd's killing in the USA in 2020. This increasing awareness is particularly relevant in areas of historically low ethnic diversity that have lately experienced a rise in ethnic minority populations and where inclusive growth is a challenge. This research study aims to explore the perspectives of practitioners, students, and parents towards multiculturalism in four South West England primary schools. A qualitative case study methodology framed by sociocultural theory has been adopted. Data were collected through virtually conducted semi-structured interviews with school practitioners and parents, observation of students' classroom activities, and documentary analysis of classroom displays. Although one-third of the school population comprises ethnically diverse children, BAME (Black, Asian, and Minority Ethnic) characters were almost invisible in children's books published in Britain in 2019, let alone as main characters. The Office for Standards in Education, Children's Services and Skills (Ofsted) is vocal about extending the Curriculum beyond the academic and technical arenas for pupils' broader development and the creation of an understanding and appreciation of cultural diversity. However, race equality and community cohesion, which could help in students' broader development, are not among Ofsted's school inspection criteria. The absence of culturally diverse content in the school curriculum, highlighted by the 1985 Swann Report and the 2007 Ajegbo Report, makes England's National Curriculum look like a Brexit policy three decades before Brexit. A revised National Curriculum may be the starting point, with teachers as curriculum framers playing a significant part. Task design is crucial, where teachers can place equal importance on the interwoven elements of "how", "what" and "why" a task is taught. Teachers need to build confidence in encouraging difficult conversations around racism, fear, indifference, and ignorance, breaking stereotypical barriers and thus helping to shape students' conception of a multicultural Britain. Research has shown that trainee teachers in predominantly White areas often exhibit confined perspectives while educating children. Irrespective of geographical location, school teachers can be equipped with the culturally responsive initial and continuous professional development necessary to impart multicultural education. This may aid in the reduction of employees' unconscious bias. This becomes distinctly pertinent in avoiding future cases like the recent one in Hackney, where a Black teenager was strip-searched while menstruating after being wrongly suspected of cannabis possession. Early research findings show participants' eagerness for more ethnic diversity content to be incorporated in teaching and learning. However, schools are considerably dependent on the knowledge-focused Primary National Curriculum in England. Moreover, they handle issues around the intersectionality of disability, poverty, and gender. Teachers were trained at a time when foregrounding ethnicity was not a priority. Therefore, preoccupied with Curriculum requirements, intersectionality issues, and teacher preparation, schools exhibit an incapacity that somewhat endangers the momentum on ethnic diversity.

Keywords: case study, curriculum decolonisation, inclusive education, multiculturalism, qualitative research in Covid19 times

Procedia PDF Downloads 118
3 Settlement Prediction in Cape Flats Sands Using Shear Wave Velocity – Penetration Resistance Correlations

Authors: Nanine Fouche

Abstract:

The Cape Flats is a low-lying, sand-covered expanse of approximately 460 square kilometres, situated to the southeast of the central business district of Cape Town in the Western Cape of South Africa. The aeolian sands masking this area are often loose and compressible in the upper 1 m to 1.5 m of the surface, and there is a general exceedance of the maximum allowable settlement in these sands. The settlement of shallow foundations on Cape Flats sands is commonly predicted using the results of in-situ tests such as the SPT or DPSH due to the difficulty of retrieving undisturbed samples for laboratory testing. Varying degrees of accuracy and reliability are associated with these methods. More recently, shear wave velocity (Vs) profiles obtained from seismic testing, such as continuous surface wave tests (CSW), are being used for settlement prediction. Such predictions have the advantage of considering the non-linear stress-strain behaviour of soil and the degradation of stiffness with increasing strain. CSW tests are rarely executed in the Cape Flats, whereas SPTs are commonly performed. For this reason, and to facilitate better settlement predictions in Cape Flats sand, equations representing shear wave velocity (Vs) as a function of SPT blow count (N60) and vertical effective stress (σv') were generated by statistical regression of site investigation data. To reveal the most appropriate method of overburden correction, analyses were performed with a separate overburden term (Pa/σv') as well as using stress-corrected shear wave velocity and SPT blow counts (correcting Vs and N60 to Vs1 and (N1)60, respectively). Shear wave velocity profiles and SPT blow count data from three sites masked by Cape Flats sands were utilised to generate 80 Vs-SPT N data pairs for analysis. Investigated terrains included sites in the suburbs of Athlone, Muizenberg, and Atlantis, all underlain by windblown deposits comprising fine and medium sand with varying fines contents. Elastic settlement analysis was also undertaken for the Cape Flats sands, using a non-linear stepwise method based on small-strain stiffness estimates obtained from the best Vs-N60 model, and compared to settlement estimates using the general elastic solution with stiffness profiles determined using Stroud's (1989) and Webb's (1969) SPT N60-E transformation models. Stroud's method considers strain level indirectly, whereas Webb's method does not take account of the variation in elastic modulus with strain. The expression of Vs in terms of N60 and Pa/σv' derived from the Atlantis data set revealed the best fit, with R2 = 0.83 and a standard error of 83.5 m/s. The less accurate Vs-SPT N relations associated with the combined data set are presumably the result of the inversion routines used in the analysis of the CSW results, which show significant variation in relative density and stiffness with depth. The regression analyses revealed that including a separate overburden term in the regression of Vs and N60 produces improved fits, as opposed to the stress-corrected equations, in which the R2 of the regression is notably lower. It is the correction of Vs and N60 to Vs1 and (N1)60 with empirical constants 'n' and 'm' prior to regression that introduces bias with respect to overburden pressure. When comparing settlement prediction methods, both Stroud's method (which considers strain level indirectly) and the small-strain stiffness method predict higher stiffnesses for medium-dense and dense profiles than Webb's method, which takes no account of strain level in determining soil stiffness. Webb's method appears to be suitable for loose sands only. The Versak software appears to underestimate differences in settlement between square and strip footings of similar width. In conclusion, settlement analysis using small-strain stiffness data from the proposed Vs-N60 model for Cape Flats sands provides a way to take account of the non-linear stress-strain behaviour of the sands when calculating settlement.
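
The regression with a separate overburden term can be sketched as follows, assuming a power-law form Vs = a * N60^b * (Pa/σv')^c fitted by ordinary least squares in log space; the data pairs below are synthetic, and the functional form is a common choice in the literature, assumed here rather than taken from the study.

import numpy as np

# Fit Vs = a * N60**b * (Pa/sigma_v')**c by linear regression on log-transformed
# variables. N60, sigma_v' and Vs values are synthetic placeholders, not site data.
rng = np.random.default_rng(4)
n = 80
N60 = rng.uniform(5, 50, n)                      # SPT blow counts
sigma_v = rng.uniform(30, 200, n)                # vertical effective stress (kPa)
Pa = 100.0                                       # atmospheric pressure (kPa)
Vs = 90 * N60**0.30 * (Pa / sigma_v)**-0.15 * np.exp(rng.normal(0, 0.08, n))

X = np.column_stack([np.ones(n), np.log(N60), np.log(Pa / sigma_v)])
coef, *_ = np.linalg.lstsq(X, np.log(Vs), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]

pred = a * N60**b * (Pa / sigma_v)**c
r2 = 1 - np.sum((Vs - pred)**2) / np.sum((Vs - Vs.mean())**2)
print(f"Vs = {a:.1f} * N60^{b:.2f} * (Pa/sigma_v')^{c:.2f},  R^2 = {r2:.2f}")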

Keywords: sands, settlement prediction, continuous surface wave test, small-strain stiffness, shear wave velocity, penetration resistance

Procedia PDF Downloads 175
2 Translation of Self-Inject Contraception Training Objectives Into Service Performance Outcomes

Authors: Oluwaseun Adeleke, Samuel O. Ikani, Simeon Christian Chukwu, Fidelis Edet, Anthony Nwala, Mopelola Raji

Abstract:

Background: Health service providers are offered in-service training periodically to strengthen their ability to deliver services that are ethical, high-quality, timely and safe. Not all capacity-building courses have successfully resulted in the intended service delivery outcomes because of poor training content, design, approach, and ambiance. The Delivering Innovations in Selfcare (DISC) project developed a Moment of Truth innovation, which is a proven training model focused on improving consumer/provider interaction that leads to an increase in the voluntary uptake of subcutaneous depot medroxyprogesterone acetate (DMPA-SC) self-injection among women who opt for injectable contraception. Methodology: Six months after training on the Moment of Truth (MoT) training manual, the project conducted two intensive rounds of qualitative data collection and triangulation that included provider, client, and community mobilizer interviews, facility observations, and routine program data collection. Respondents were sampled according to a convenience sampling approach, and the data collected were analyzed using a codebook and ATLAS.ti. Providers and clients were interviewed to understand their experiences, perspectives, attitudes, and awareness of DMPA-SC self-injection. Data were collected from 12 health facilities in three states – eight directly trained and four cascade-trained. The research team members came together for a participatory analysis workshop to explore and interpret emergent themes. Findings: Quality of service delivery and performance outcomes were observed to be significantly better in facilities whose providers were directly trained by the DISC project than in sites that received indirect training through master trainers. Facilities that were directly trained recorded self-injection (SI) proportions twice as high as those in cascade-trained sites. Direct training comprised full-day, standalone didactic and interactive sessions constructed to evoke commitment, passion and conviction, as well as to eliminate provider bias and misconceptions, by utilizing human interest stories and values clarification exercises. Sessions also created compelling arguments using evidence and national guidelines. The training also prioritized demonstration sessions, utilized job aids (particularly videos), strengthened empathetic counseling – allaying client fears and concerns about SI – and covered positioning self-injection first and managing side effects. Role plays and practicums were particularly useful in enabling providers to retain and internalize new knowledge. These sessions provided experiential learning and the opportunity to apply one's expertise in a supervised environment where supportive feedback is provided in real time. Cascade training was often a shorter, abridged form of MoT training that leveraged existing training already planned by master trainers. It was held over a four-hour period and was less emotive, focusing more on foundational DMPA-SC knowledge such as a reorientation to DMPA-SC, comparison of DMPA-SC variants, counseling framework and skills, and data reporting and commodity tracking/requisition – with no facility practicums. Training on self-injection was not as robust, presumably because these sessions were not directed at methods in the contraceptive mix that align with state/organization-sponsored objectives – in this instance, fostering LARC services. Conclusion: To achieve better performance outcomes, consideration should be given to providing training that prioritizes practice-based and emotive content. Furthermore, a firm understanding of, and conviction about, the value the training offers improves motivation and commitment to accomplish and surpass service-related performance outcomes.

Keywords: training, performance outcomes, innovation, family planning, contraception, DMPA-SC, self-care, self-injection

Procedia PDF Downloads 85
1 Reassembling a Fragmented Border Landscape at Crossroads: Indigenous Rights, Rural Sustainability, Regional Integration and Post-Colonial Justice in Hong Kong

Authors: Chiu-Yin Leung

Abstract:

This research investigates a complex assemblage among indigenous identities, socio-political organization and the national apparatus in the border landscape of post-colonial Hong Kong. This former British colony applied a transient mode of governance to its New Territories, and particularly to the northernmost borderland, from 1951 to 2012. Under a discriminatory system of land provisions for indigenous villagers, the area retained a distinctive village-based culture, historic monuments and agrarian practices until the return of sovereignty to the People's Republic of China. In the latest development imperatives of national strategic planning, the frontier area of Hong Kong has been identified as a strategic site for regional economic integration in South China, with cross-border projects including innovation and technology zones, mega-transport infrastructure and inter-jurisdictional arrangements. Contemporary literature theorizes borders as the material and discursive production of territoriality, which manifests in the state apparatus and the daily lives of citizens and condenses in contested articulations of power, security and citizenship. Drawing on the concept of assemblage, this paper attempts to trace how the border regime and infrastructure of Hong Kong are deeply ingrained in the everyday lived spaces of local communities as well as in changing urban and regional strategies across different longitudinal moments. Through intensive ethnographic fieldwork in the borderland villages since 2008 and extensive analysis of colonial archives, new development plans and spatial planning frameworks, the author traces the genealogy of the border landscape in the Ta Kwu Ling frontier area and its implications as a milieu for new state space, covering heterogeneous fields, particularly indigenous rights, heritage preservation, rural sustainability and the regional economy. Empirical evidence suggests an apparent bias towards indigenous power and colonial representation in classifying landscape values and conserving historical monuments. Squatter and farm tenants are often deprived of property rights, statutory participation and livelihood options in the planning process. The postcolonial bureaucracies have great difficulties in mobilizing resources to catch up with the swift, political-first approach of their mainland counterparts. Meanwhile, the cultural heritage, lineage networks and memory landscape are not protected with any holistic view or collaborative effort across the border. The enactment of land resumption and compensation schemes is furthermore disturbed by lineage-based customary law, technocratic bureaucracy, intra-community conflicts and multi-scalar political mobilization. As many traces of colonial misfortune and tyranny have been whitewashed without proper management, the author argues that postcolonial justice has yet to be reconciled in this fragmented border landscape. The assemblage of the border in mainstream representation has tended to oversimplify local struggles into a collective mist and to set up a wider production of schizophrenic experiences in the discussion of further economic integration between Hong Kong and other cities in the Pearl River Delta Region. The research is expected to shed new light on the theorizing of border regions and postcolonialism beyond Eurocentric perspectives. In reassembling the borderland experiences with other arrays in state governance, village organization and indigenous identities, the author also suggests an alternative epistemology for reconciling socio-spatial differences and opening up imaginaries for positive interventions.

Keywords: heritage conservation, indigenous communities, post-colonial borderland, regional development, rural sustainability

Procedia PDF Downloads 207