Search results for: proxy means tests
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8620

130 Oncolytic Efficacy of Thymidine Kinase-Deleted Vaccinia Virus Strain Tiantan (oncoVV-TT) in Glioma

Authors: Seyedeh Nasim Mirbahari, Taha Azad, Mehdi Totonchi

Abstract:

Oncolytic viruses, which replicate only in tumor cells, are being extensively studied for use in cancer therapy. The vaccinia virus, a member of the poxvirus family, has demonstrated oncolytic activity against glioma. Treating glioma with traditional methods such as chemotherapy and radiotherapy is quite challenging, and even though oncolytic viruses have shown immense potential in cancer treatment, their effectiveness against glioblastoma is still low. There is therefore a need to improve and optimize these immunotherapies. In this study, we designed oncoVV-TT, which can target tumor cells more effectively while minimizing replication in normal cells, by replacing the thymidine kinase gene with a luc-p2a-GFP gene expression cassette. The human glioblastoma cell line U251 MG, the rat glioblastoma cell line C6, and the non-tumor cell line HFF were plated at 10⁵ cells per well in 12-well plates in 2 mL of DMEM/F12 medium with 10% FBS and incubated at 37°C. After 16 hours, the cells were treated with oncoVV-TT at an MOI of 0.01 or 0.1 and returned to the incubator for a further 24, 48, 72, or 96 hours. A viral replication assay, fluorescence imaging, and viability tests, including trypan blue and crystal violet staining, were conducted to evaluate the cytotoxic effect of oncoVV-TT. The findings show that oncoVV-TT had significantly higher cytotoxic activity and replication rates in tumor cells in a dose- and time-dependent manner, with the strongest effect observed in U251 MG. In conclusion, oncoVV-TT has the potential to be a promising oncolytic virus for cancer treatment, with a stronger cytotoxic effect in human glioblastoma cells than in rat glioma cells. To assess the effectiveness of vaccinia virus-mediated viral therapy, we tested the U251 MG and C6 tumor cell lines, derived from human and rat gliomas, respectively.
The study evaluated oncoVV-TT's ability to replicate in and lyse cells and analyzed the survival rates of the tested cell lines when treated with different doses of oncoVV-TT. Additionally, we compared the sensitivity of the human and rat glioma cell lines to the oncolytic vaccinia virus. All experiments involving viruses were conducted under biosafety level 2 conditions. We engineered a vaccinia-based oncolytic virus, oncoVV-TT, to replicate specifically in tumor cells. To propagate the oncoVV-TT virus, HeLa cells (5 × 10⁴/well) were plated in 24-well plates and incubated overnight to attach to the bottom of the wells; virus was then added at an MOI of 10. After 48 h, cells were harvested by scraping, and viruses were released by three sequential freeze-thaw cycles followed by removal of cell debris by centrifugation (1500 rpm, 5 min). The supernatant was stored at −80°C for the subsequent experiments. To measure viral replication in HeLa cells, cells (5 × 10⁴/well) were plated in 24-well plates and incubated overnight to attach to the bottom of the wells; virus at an MOI of 5 or an equal dilution of PBS was then added. At treatment times of 0 h, 24 h, 48 h, 72 h, and 96 h, viral titers were determined under a fluorescence microscope (BZ-X700; Keyence, Osaka, Japan), and fluorescence intensity was quantified using ImageJ software. For the isolation of single-virus clones, HeLa cells were seeded in six-well plates (5 × 10⁵ cells/well). After 24 h (100% confluent), the cells were infected with a 10-fold dilution series of the TianTan green fluorescent protein (GFP) virus and incubated for 4 h. To examine the cytotoxic effect of the oncoVV-TT virus on U251 MG and C6 cells, trypan blue and crystal violet assays were used.
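As a side note on the MOI arithmetic underlying treatments like these, a minimal sketch of converting a target MOI into a volume of virus stock (the helper name and the stock titer below are hypothetical, not values from the study):

```python
def virus_volume_ul(cell_count, moi, titer_pfu_per_ml):
    """Microliters of virus stock needed to infect cell_count cells
    at a given multiplicity of infection (MOI = PFU added per cell)."""
    pfu_needed = cell_count * moi            # total plaque-forming units
    ml_needed = pfu_needed / titer_pfu_per_ml
    return ml_needed * 1000.0                # convert mL to uL

# e.g., one well with 1e5 cells at MOI 0.1, assuming a hypothetical
# stock titer of 1e8 PFU/mL -> roughly 0.1 uL of stock per well
volume = virus_volume_ul(1e5, 0.1, 1e8)
print(volume)
```

The same helper covers the propagation step (MOI 10) and the replication assay (MOI 5) by changing the `moi` argument.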

Keywords: oncolytic virus, immune therapy, glioma, vaccinia virus

Procedia PDF Downloads 57
129 Unity in Diversity: Exploring the Psychological Processes and Mechanisms of the Sense of Community for the Chinese Nation in Ethnic Inter-embedded Communities

Authors: Jiamin Chen, Liping Yang

Abstract:

In 2007, sociologist Putnam offered a pessimistic forecast based on the United States' "Social Capital Community Benchmark Survey," suggesting that "ethnic diversity would challenge social unity and undermine social cohesion." If this pessimistic assumption were proven true, it would indicate a risk of division in diverse societies. China, with 56 ethnic groups, is a multi-ethnic country. On May 26, 2014, General Secretary Xi Jinping proposed "building ethnically inter-embedded communities to promote deeper development in interactions, exchanges, and integration among ethnic groups." Researchers unanimously agree that ethnic inter-embedded communities can serve as practical arenas and pathways for solidifying the sense of the Chinese national community. However, no research has yet provided evidence that ethnic inter-embedded communities can foster the sense of the Chinese national community, and the influencing factors remain unclear. This study adopts a constructivist grounded theory approach. Convenience sampling and snowball sampling were used, and data were collected in three communities in Kunming City. Twelve individuals were eventually interviewed, and the transcribed interviews totaled 187,000 words. The research obtained ethical approval from the Ethics Committee of Nanjing Normal University (NNU202310030). The data were analyzed and theory constructed using strategies such as coding, constant comparison, and theoretical sampling. The study found, firstly, that ethnic inter-embedded communities exhibit characteristics of diversity, including ethnic, cultural, and linguistic diversity. Diversity has positive functions, including increased opportunities for contact, promotion of self-expansion, and increased happiness; its negative functions include highlighting ethnic differences, causing ethnic conflicts, and reminding residents of ethnic boundaries.
Secondly, individuals typically engage in interactions within the community using active-embedding or passive-embedding strategies. Active-embedding strategies include maintaining openness, focusing on similarities, and holding pro-diversity beliefs, which can increase external group identification and intergroup relational identity and promote ethnic integration. Individuals using passive-embedding strategies tend to focus on ethnic stereotypes, perceive stigmatization of their own ethnic group, and adopt an authoritarian-oriented approach to interactions, leading them to perceive more identity threats and ultimately reject ethnic integration. Thirdly, the commonality of the Chinese nation is reflected in the 56 ethnic groups as an "identity community" and an "interest community," and both the active and passive embedding paths affect individual understanding of this commonality. Finally, community work and environment can influence the embedding process. The research constructed a social-psychological process and mechanism model for solidifying the sense of the Chinese national community in ethnic inter-embedded communities. Based on this theoretical model, future research can conduct more micro-level tests of psychological mechanisms and intervention studies to enhance Chinese national cohesion.

Keywords: diversity, sense of the chinese national community, ethnic inter-embedded communities, ethnic group

Procedia PDF Downloads 18
128 Modeling and Performance Evaluation of an Urban Corridor under Mixed Traffic Flow Condition

Authors: Kavitha Madhu, Karthik K. Srinivasan, R. Sivanandan

Abstract:

Indian traffic can be considered mixed and heterogeneous due to the presence of various types of vehicles that operate with weak lane discipline. Consequently, vehicles can position themselves anywhere in the traffic stream depending on the availability of gaps. The choice of lateral position is an important component in representing and characterizing mixed traffic. Field data provide evidence that the trajectories of vehicles on Indian urban roads have significantly varying longitudinal and lateral components. Further, the notion of headway, which is widely used in homogeneous traffic simulation, is not well defined in conditions lacking lane discipline. From field data it is clear that following is not as strict as in homogeneous, lane-disciplined conditions, and that vehicles ahead of a given vehicle, as well as those adjacent to it, can influence the subject vehicle's choice of position, speed, and acceleration. Given these empirical features, the suitability of using headway distributions to characterize mixed traffic in Indian cities is questionable, and they need to be modified appropriately. To address these issues, this paper analyzes the time gap distribution between consecutive vehicles (in a time sense) crossing a section of roadway. More specifically, to characterize the complex interactions noted above, the influence of composition, manoeuvre type, and lateral placement characteristics on the time gap distribution is quantified. The developed model is used for evaluating various performance measures such as link speed, midblock delay, and intersection delay, which in turn help to characterize vehicular fuel consumption and emissions on urban roads in India. Identifying and analyzing the exact interactions between various classes of vehicles in the traffic stream is essential for increasing the accuracy and realism of microscopic traffic flow modelling.
In this regard, this study aims to develop and analyze time gap distribution models, quantified by lead-lag pair, manoeuvre type, and lateral position characteristics, in heterogeneous non-lane-based traffic. Once the modelling scheme is developed, it can be used for estimating the vehicle kilometres travelled for the entire traffic system, which helps to determine vehicular fuel consumption and emissions. The approach involves: data collection; statistical modelling and parameter estimation; simulation using the calibrated time-gap distribution and its validation; empirical analysis of the simulation results and associated traffic flow parameters; and application to the analysis of illustrative traffic policies. In particular, videographic methods are used for data extraction from urban mid-block sections in Chennai, where the data comprise vehicle type, vehicle position (both longitudinal and lateral), speed, and time gap. Statistical tests are carried out to compare the simulated data with the actual data, and the model performance is evaluated. The effect of integrating the above-mentioned factors into vehicle generation is studied by comparing performance measures such as density, speed, flow, capacity, and area occupancy under various traffic conditions and policies. The implications of the quantified distributions and simulation model for estimating Passenger Car Units (PCU), capacity, and level of service of the system are also discussed.
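The abstract does not name the distributional family ultimately fitted to the time gaps; purely as an illustrative sketch, a lognormal (a common choice for headway-type data) can be fitted by maximum likelihood, since the MLEs are just the mean and standard deviation of the log-transformed gaps. The gap values below are invented, not the Chennai data:

```python
import math

def fit_lognormal(gaps):
    """Maximum-likelihood lognormal fit for positive time-gap data.
    Returns (mu, sigma) of the underlying normal; the median gap
    is exp(mu)."""
    logs = [math.log(g) for g in gaps]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / n)
    return mu, sigma

# Hypothetical time gaps (seconds) between consecutive vehicles
gaps = [1.2, 0.8, 2.5, 1.6, 0.9, 3.1, 1.1, 1.8, 0.7, 2.2]
mu, sigma = fit_lognormal(gaps)
print(f"median gap = {math.exp(mu):.2f} s, sigma = {sigma:.2f}")
```

In a full calibration, separate fits per lead-lag vehicle pair and manoeuvre type, followed by a goodness-of-fit test against the observed gaps, would follow the scheme described above.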

Keywords: lateral movement, mixed traffic condition, simulation modeling, vehicle following models

Procedia PDF Downloads 321
127 Harnessing the Benefits and Mitigating the Challenges of Neurosensitivity for Learners: A Mixed Methods Study

Authors: Kaaryn Cater

Abstract:

People vary in how they perceive, process, and react to internal, external, social, and emotional environmental factors; some are more sensitive than others. Highly sensitive people have a highly reactive nervous system and are more impacted by both positive and negative environmental conditions (differential susceptibility). Further, some sensitive individuals are disproportionately able to benefit from positive and supportive environments without necessarily suffering negative impacts in less supportive environments (vantage sensitivity). Environmental sensitivity is underpinned by physiological, genetic, and personality/temperamental factors, and the phenotypic expression of high sensitivity is sensory processing sensitivity. The hallmarks of sensory processing sensitivity are deep cognitive processing, emotional reactivity, high levels of empathy, noticing environmental subtleties, a tendency to pause and observe in new and novel situations, and a propensity to become overwhelmed when over-stimulated. Educational advantages associated with high sensitivity include creativity, enhanced memory, divergent thinking, giftedness, and metacognitive monitoring. High sensitivity can also lead to educational challenges, particularly managing multiple conflicting demands and negotiating low sensory thresholds. A mixed methods study was undertaken. In the first, quantitative, study, participants completed the Perceived Success in Study Survey (PSISS) and the Highly Sensitive Person Scale (HSPS-12). The inclusion criterion was current or previous postsecondary education experience. The survey was presented on social media, and snowball recruitment was employed (n=365). The spreadsheets were uploaded to the Statistical Package for the Social Sciences (SPSS) 26, and descriptive statistics found a normal distribution.
T-tests and analysis of variance (ANOVA) calculations found no differences in the responses of demographic groups, and principal components analysis with post hoc Tukey calculations identified positive associations between high sensitivity and three of the five PSISS factors. Further ANOVA calculations found positive associations between the PSISS and two of the three sensitivity subscales. The survey included a response field to register interest in further research, and respondents who scored in the 70th percentile on the HSPS-12 were invited to participate in a semi-structured interview. Thirteen interviews were conducted remotely (12 female). Reflexive inductive thematic analysis was employed to analyse the data, and a descriptive approach was employed to present data reflective of participant experience. The results of this study found that highly sensitive students prioritize work-life balance; employ a range of practical metacognitive study and self-care strategies; value independent learning; connect with learning that is meaningful; and are bothered by aspects of the physical learning environment, including lighting, noise, and indoor environmental pollutants. There is a dearth of research investigating sensitivity in the educational context, and these studies highlight the need to promote widespread education-sector awareness of environmental sensitivity and to include sensitivity in sector and institutional diversity and inclusion initiatives.
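The 70th-percentile recruitment cutoff described above can be sketched with a nearest-rank percentile; the scores and the helper name below are hypothetical, not the study's HSPS-12 data:

```python
import math

def percentile_cutoff(scores, pct):
    """Smallest score at or above the pct-th percentile
    (nearest-rank method)."""
    ranked = sorted(scores)
    k = math.ceil(pct / 100 * len(ranked)) - 1
    return ranked[max(k, 0)]

# Hypothetical HSPS-12 totals (the real survey had n=365)
scores = [31, 44, 52, 38, 60, 47, 55, 41, 49, 58]
cutoff = percentile_cutoff(scores, 70)
invited = [s for s in scores if s >= cutoff]
print(cutoff, len(invited))  # -> 52 4
```

Other percentile conventions (e.g., linear interpolation) would shift the cutoff slightly; the nearest-rank method is shown because it always returns an observed score.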

Keywords: differential susceptibility, highly sensitive person, learning, neurosensitivity, sensory processing sensitivity, vantage sensitivity

Procedia PDF Downloads 48
126 Ecological Planning Method of Reclamation Area Based on Ecological Management of Spartina Alterniflora: A Case Study of Xihu Harbor in Xiangshan County

Authors: Dong Yue, Hua Chen

Abstract:

The study region, Xihu Harbor in Xiangshan County, Ningbo City, is located on the central coast of Zhejiang Province. To address wave dissipation, the Ningbo government first introduced Spartina alterniflora in the 1980s. In the 1990s, S. alterniflora spread so rapidly that a 'grassland' in the sea has formed, and it has become the most important invasive plant of China's coastal tidal flats. Although S. alterniflora has some ecological and economic functions, it has also brought a series of hazards. Ecologically, it affects biomass and biodiversity, hydrodynamic forces and sedimentation processes, nutrient cycling of the tidal flat, and the succession sequence of soils and plants. In engineering terms, it causes problems of poor drainage and channel blocking. Economically, the hazard is mainly reflected in the threat to the aquaculture industry. The purpose of this study is to explore an ecological, feasible, and economical way to manage S. alterniflora and use the land formed by it, taking Xihu Harbor in Xiangshan County as a case. Comparison, mathematical modeling, and qualitative and quantitative analysis are utilized in the study. The main outcomes are as follows. A series of S. alterniflora management methods, including the combination of mechanical cutting and hydraulic reclamation, waterlogging, herbicide, and biological substitution, were compared from three standpoints: ecology, engineering, and economy. It is inferred that the combination of mechanical cutting and hydraulic reclamation ranks among the best of these methods. This combination means using large-scale mechanical equipment, such as a large screw seagoing dredger, to excavate the S. alterniflora together with its roots and the surrounding mud. The mix of mud and grass is then transported by pipeline and deposited on the nearby coastal tidal zone, where it cushions the silt of the tidal zone to form land.
However, as man-made land along the coast, the reclamation area has high ecological sensitivity and faces a high possibility of flooding. The reclamation area therefore has many requirements, including ones on location, specific scope, water surface ratio, direction of the main watercourse, site of the water gate, and the ratio of ecological land to urban construction land. These requirements all became an important basis when the plan was being made: the water system planning, green space system planning, road structure, and land use all need to accommodate the ecological requirements. Besides, the profits from the formed land are the management project's source of funding, so how to utilize the land efficiently is another consideration in the planning. It is concluded that, for managing a large area of S. alterniflora, the combination of mechanical cutting and hydraulic reclamation is an ecological, feasible, and economical method. The planning of the reclamation area should fully respect the natural environment and possible disasters; planning that makes land use efficient, reasonable, and ecological will then promote the development of the area's urban construction.

Keywords: ecological management, ecological planning method, reclamation area, Spartina alterniflora, Xihu harbor

Procedia PDF Downloads 293
125 EGF Serum Level in Diagnosis and Prediction of Mood Disorder in Adolescents and Young Adults

Authors: Monika Dmitrzak-Weglarz, Aleksandra Rajewska-Rager, Maria Skibinska, Natalia Lepczynska, Piotr Sibilski, Joanna Pawlak, Pawel Kapelski, Joanna Hauser

Abstract:

Epidermal growth factor (EGF) is a well-known neurotrophic factor involved in neuronal growth and synaptic plasticity. Proteomic research conducted to identify novel candidate biological markers for mood disorders has focused on elevated EGF serum levels in patients during depressive episodes. However, the association of EGF with the mood disorder spectrum among adolescents and young adults has not been studied extensively. In this study, we aimed to investigate serum levels of EGF in adolescents and young adults during hypo/manic and depressive episodes and in remission, compared to a healthy control group. The 2-year follow-up study involved 80 patients aged 12-24 years with a primary diagnosis within the mood disorder spectrum and 35 healthy volunteers matched by age and gender. Diagnoses were established according to DSM-IV-TR criteria using structured clinical interviews: K-SADS for children and adolescents and SCID for young adults. Clinical and biological evaluations were made at baseline and in euthymic mood (at the 3rd or 6th month of treatment and after 1 and 2 years). The Young Mania Rating Scale and the Hamilton Rating Scale for Depression were used for assessment. The study protocols were approved by the relevant ethics committee. Serum protein concentrations were determined by the enzyme-linked immunosorbent assay (ELISA) method, using the Human EGF DuoSet ELISA kit (cat. no. DY236; R&D Systems). Serum EGF levels were analysed against the following variables: age, age under vs. over 18 years, sex, family history of affective disorders, and drug-free vs. medicated status. The Shapiro-Wilk test was used to test the normality of the data, and the homogeneity of variance was assessed with Levene's test. EGF levels showed a non-normal distribution, and the homogeneity of variance was violated.
Non-parametric tests (Mann-Whitney U test, Kruskal-Wallis ANOVA, Friedman's ANOVA, Wilcoxon signed-rank test) and the Spearman correlation coefficient were therefore applied in the analyses. The statistical significance level was set at p<0.05. Elevated EGF levels were detected in study subjects compared with controls at baseline (p=0.001) and at month 24 (p=0.02). Increased EGF levels in women compared to men in the study group were observed at month 12 (p=0.02). Using the Wilcoxon signed-rank test, differences in EGF levels were detected: a decrease from baseline to month 3 (p=0.014) and increases between month 3 and month 24 (p=0.013), month 6 and month 12 (p=0.021), and month 6 and month 24 (p=0.008). The EGF level at baseline was negatively correlated with depression and mania occurrence at 24 months, while the EGF level at 24 months was positively correlated with depression and mania occurrence at 12 months. No other correlations of EGF levels with clinical and demographic variables were detected. The findings of the present study indicate that the EGF serum level is significantly elevated in the study group of patients compared to the controls, and we also observed fluctuations in EGF levels during the two years of observation. EGF may be useful as an early marker for predicting diagnosis, course of illness, and treatment response in young patients during a first episode of a mood disorder, which requires further investigation. The grant was funded by the National Science Center in Poland, no. 2011/03/D/NZ5/06146.
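As an illustration of the Mann-Whitney U statistic named above, it simply counts, over all patient/control pairs, how often one group's value exceeds the other's (ties count one half); the EGF values below are invented, not the study's data, and only the statistic (not its p-value) is sketched:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample a versus sample b:
    the number of (x, y) pairs with x > y, plus half the ties.
    The full test compares U to its null distribution."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical EGF serum levels (pg/mL): patients vs. controls
patients = [210, 340, 295, 410, 380]
controls = [150, 220, 180, 205, 190]
u = mann_whitney_u(patients, controls)
print(u)  # maximum possible here is 5 * 5 = 25
```

A U close to the maximum (or to zero) indicates that one group's values dominate the other's, which is the pattern the reported baseline comparison (p=0.001) reflects.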

Keywords: biological marker, epidermal growth factor, mood disorders, prediction

Procedia PDF Downloads 167
124 Colloid-Based Biodetection at Aqueous Electrical Interfaces Using Fluidic Dielectrophoresis

Authors: Francesca Crivellari, Nicholas Mavrogiannis, Zachary Gagnon

Abstract:

Portable diagnostic methods have become increasingly important for a number of different purposes: point-of-care screening in developing nations, environmental contamination studies, bio/chemical warfare agent detection, and end-user use for commercial health monitoring. The cheapest and most portable methods currently available are paper-based: lateral flow and dipstick methods are widely available in drug stores for use in pregnancy detection and blood glucose monitoring. These tests are successful because they are cheap to produce, easy to use, and require minimally invasive sampling. While adequate for their intended uses, for blood-borne pathogens and numerous cancers these paper-based methods become unreliable, as they lack the nM/pM sensitivity currently achieved by clinical diagnostic methods. Clinical diagnostics, however, utilize techniques such as surface plasmon resonance (SPR) and enzyme-linked immunosorbent assays (ELISAs), which are expensive and unfeasible in terms of portability. To develop a better, competitive biosensor, we must reduce the cost of the one or increase the sensitivity of the other. Electric fields are commonly utilized in microfluidic devices to manipulate particles, biomolecules, and cells. Applications in this area, however, are primarily limited to interfaces formed between immiscible fluids. Miscible, liquid-liquid interfaces are common in microfluidic devices and are easily reproduced with simple geometries. Here, we demonstrate the use of electric fields at liquid-liquid electrical interfaces, known as fluidic dielectrophoresis (fDEP), for biodetection in a microfluidic device. In this work, we apply an AC electric field across concurrent laminar streams with differing conductivities and permittivities to polarize the interface and induce a discernible, near-immediate, frequency-dependent interfacial tilt.
We design this aqueous electrical interface, which becomes the biosensing "substrate," to be intelligent: it "moves" only when a target of interest is present. This motion requires neither labels nor expensive electrical equipment, so the biosensor is inexpensive and portable, yet still capable of sensitive detection. Nanoparticles, due to their high surface-area-to-volume ratio, are often incorporated to enhance the detection capabilities of schemes like SPR and fluorimetric assays. Most studies currently investigate binding at an immobilized solid-liquid or solid-gas interface, where particles are adsorbed onto a planar surface, functionalized with a receptor to create a reactive substrate, and subsequently flushed with a fluid or gas containing the relevant analyte. These schemes typically involve many preparation and rinsing steps and are susceptible to surface fouling. Our microfluidic device continuously flows and renews the "substrate," and is thus not subject to fouling. In this work, we demonstrate the ability to electrokinetically detect biomolecules binding to functionalized nanoparticles at liquid-liquid interfaces using fDEP. In biotin-streptavidin experiments, we report binding detection limits on the order of 1-10 pM, without amplifying signals or concentrating samples. We also demonstrate the ability to detect this interfacial motion, and thus the presence of binding, using impedance spectroscopy, allowing the scheme to become non-optical in addition to label-free.

Keywords: biodetection, dielectrophoresis, microfluidics, nanoparticles

Procedia PDF Downloads 367
123 Kinematic Gait Analysis Is a Non-Invasive, More Objective and Earlier Measurement of Impairment in the Mdx Mouse Model of Duchenne Muscular Dystrophy

Authors: P. J. Sweeney, T. Ahtoniemi, J. Puoliväli, T. Laitinen, K. Lehtimäki, A. Nurmi, D. Wells

Abstract:

Duchenne muscular dystrophy (DMD) is caused by an X-linked mutation in the dystrophin gene; the lack of dystrophin causes progressive muscle necrosis, which leads to a progressive decrease in mobility in those suffering from the disease. The MDX mouse, a mutant mouse model which displays a frank dystrophinopathy, is currently widely employed in preclinical efficacy models for treatments and therapies aimed at DMD. In general, the endpoints examined within this model have been based on invasive histopathology of muscles and serum biochemical measures such as serum creatine kinase (sCK). It is established that a "critical period" exists between 4 and 6 weeks of age in the MDX mouse, when there is extensive muscle damage that is largely subclinical but evident in sCK measurements and histopathological staining. However, a full characterization of the MDX model remains largely incomplete, especially with respect to the ability to aggravate the muscle damage beyond the critical period. The purpose of this study was to aggravate the muscle damage in the MDX mouse and thereby create a wider, more readily translatable and discernible therapeutic window for the testing of potential therapies for DMD. The study consisted of subjecting 15 male mutant MDX mice and 15 male wild-type mice to an intense chronic exercise regime of twice-weekly treadmill sessions over a 12-month period. Each session was 30 minutes in duration, and the treadmill speed was gradually built up to 14 m/min for the entire session. Baseline plasma creatine kinase (pCK), treadmill training performance, and locomotor activity were measured after the "critical period" at around 10 weeks of age, and again at 14 weeks, 6 months, 9 months, and 12 months of age.
In addition, kinematic gait analysis was employed, using a novel analysis algorithm, to compare changes in gait and fine motor skills in diseased, exercised MDX mice relative to exercised wild-type mice and non-exercised MDX mice. A morphological and metabolic profile (including lipid profile) of the most severely affected muscles, the gastrocnemius and tibialis anterior, was also measured at the same intervals. The results indicate that by aggravating the underlying muscle damage in the MDX mouse through exercise, a more pronounced and severe phenotype comes to light, and this can be detected earlier by kinematic gait analysis. A reduction in mobility as measured by open field testing is not apparent at younger ages or during the critical period, but changes in gait are apparent in the mutant MDX mice. These gait changes coincide with pronounced morphological and metabolic changes, detected by non-invasive anatomical MRI and proton spectroscopy (1H-MRS), that we have reported elsewhere. Evidence of a progressive asymmetric pathology was found in the imaging parameters as well as in the kinematic gait analysis. Taken together, the data show that a chronic exercise regime exacerbates the muscle damage beyond the critical period, and that the ability to measure this damage through non-invasive means is an important factor to consider when performing preclinical efficacy studies in the MDX mouse.
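The study's own gait-analysis algorithm is not described in the abstract; purely as an illustration of how the left/right asymmetry it reports is often quantified in gait work, one common symmetry index (all input values below are hypothetical) is:

```python
def asymmetry_index(left, right):
    """Symmetry index used in gait studies: the absolute left/right
    difference divided by the limb-pair mean. It is 0 for perfect
    symmetry and approaches 2 for complete asymmetry. Inputs are
    per-limb means of any kinematic parameter (e.g., stride length)."""
    return abs(left - right) / ((left + right) / 2)

# Hypothetical mean stride lengths (cm) for left vs. right hindlimb
print(asymmetry_index(5.8, 5.8))  # 0.0 -> symmetric
print(asymmetry_index(6.0, 4.0))  # 0.4 -> clearly asymmetric
```

Tracking such an index over the 12-month regime would be one way to expose the progressive asymmetric pathology described above before open-field mobility deficits appear.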

Keywords: gait, muscular dystrophy, kinematic analysis, neuromuscular disease

Procedia PDF Downloads 261
122 Teachers' Engagement in Teaching: Exploring Australian Teachers' Attribute Constructs of Resilience, Adaptability, Commitment, and Self/Collective Efficacy Beliefs

Authors: Lynn Sheridan, Dennis Alonzo, Hoa Nguyen, Andy Gao, Tracy Durksen

Abstract:

Disruptions to teaching (e.g., COVID-related) have increased work demands for teachers, creating an opportunity for research to explore evidence-informed steps to support them. Collective evidence indicates that teachers' personal attributes in the workplace (e.g., self-efficacy beliefs) promote success in teaching and support teacher engagement. Teacher engagement plays a role in students' learning and teachers' effectiveness: engaged teachers are better at overcoming work-related stress and burnout and are more likely to take on active roles. Teachers' commitment is influenced by a host of personal factors (e.g., teacher well-being) and environmental factors (e.g., job stresses). The job demands-resources model provided a conceptual basis for examining how teachers' well-being is influenced by job demands and job resources. Job demands potentially evoke strain and exceed the employee's capability to adapt, while job resources entail what the job offers individual teachers (e.g., organisational support), helping to reduce job demands. Applying the job demands-resources model involves gathering an evidence base of, and connections among, personal attributes (job resources). The study explored the associations between the constructs of resilience, adaptability, commitment, and self/collective efficacy and a teacher's engagement with the job. The paper sought to elaborate on the model and determine the associations between key constructs of well-being (resilience, adaptability), commitment, and motivation (self- and collective-efficacy beliefs) and teachers' engagement in teaching. Data collection involved an online multi-dimensional instrument using validated items, distributed from 2020 to 2022 and designed to identify construct relationships. There were 170 participants. Data analysis: reliability coefficients, means, standard deviations, skewness, and kurtosis statistics were computed for the six variables.
All scales had good reliability coefficients (.72-.96). A confirmatory factor analysis (CFA) and a structural equation model (SEM) were performed to provide measurement support and to obtain latent correlations among factors, with the final analysis performed using structural equation modelling. Several fit indices were used to evaluate model fit, including the chi-square statistic and the root mean square error of approximation (RMSEA). The correlations among constructs were all positive, with the highest between teacher engagement and resilience (r=.80) and the lowest between teacher adaptability and collective teacher efficacy (r=.22). Given these associations, we proceeded with the CFA, which yielded adequate fit: χ²(270, 1019) = 1836.79, p < .001, RMSEA = .04, CFI = .94, TLI = .93, and SRMR = .04. All values were within the threshold values, indicating a good model fit. The results indicate that increasing teachers' self-efficacy beliefs will increase their level of engagement, and that teacher adaptability and resilience are positively associated with self-efficacy beliefs, as are collective teacher efficacy beliefs. Implications for school leaders and school systems are: 1) investing in teachers' sense of efficacy so they can manage work demands; 2) adopting leadership approaches that enhance teachers' adaptability and resilience; and 3) supporting a culture of collective efficacy. Preparing teachers for now and for the future offers an important reminder to policymakers and school leaders of the importance of supporting teachers' personal attributes when they face the challenging demands of the job.
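For readers unfamiliar with the fit indices reported above, RMSEA can be computed from a model chi-square, its degrees of freedom, and the sample size. The sketch below uses one common formula with made-up numbers, not the study's values (SEM software sometimes uses n rather than n−1 in the denominator):

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation:
    sqrt(max(chi2 - df, 0) / (df * (n - 1))).
    Values near or below .05-.06 are conventionally read
    as good fit."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical fit: chi-square of 300 on 250 df with n = 171
print(round(rmsea(300, 250, 171), 3))  # -> 0.034
```

Note how a chi-square only modestly above its degrees of freedom yields a small RMSEA, which is why a significant chi-square can still accompany the good fit reported above.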

Keywords: collective teacher efficacy, teacher self-efficacy, job demands, teacher engagement

Procedia PDF Downloads 65
121 Flood Early Warning and Management System

Authors: Yogesh Kumar Singh, T. S. Murugesh Prabhu, Upasana Dutta, Girishchandra Yendargaye, Rahul Yadav, Rohini Gopinath Kale, Binay Kumar, Manoj Khare

Abstract:

The Indian subcontinent is severely affected by floods that cause intense, irreversible devastation to crops and livelihoods. With increased incidences of floods and their related catastrophes, an Early Warning System for Flood Prediction (EWS-FP) and an efficient flood management system for the river basins of India are a must. Accurately modeled hydrological conditions and a web-based early warning system may significantly reduce economic losses incurred due to floods and enable end users to issue advisories with better lead time. This study describes the design and development of an EWS-FP using advanced computational tools and methods, viz. High-Performance Computing (HPC), remote sensing, GIS technologies, and open-source tools, for the Mahanadi River Basin of India. The flood prediction is based on a robust 2D hydrodynamic model, which solves the shallow water equations using the finite volume method. Considering the complexity of hydrological modeling and the size of the basins in India, there is always a tug of war between better forecast lead time and the optimal resolution at which the simulations are run. High-performance computing provides the computational means to overcome this issue and construct national-level or basin-level flash flood warning systems that offer high resolution for local-level warning analysis with better lead time. High-performance computers with capacities on the order of teraflops and petaflops prove useful when running simulations over such large areas at optimum resolutions. In this study, a free and open-source, HPC-based 2D hydrodynamic model, capable of simulating rainfall run-off, river routing, and tidal forcing, is used. The model was tested for a part of the Mahanadi River Basin (the Mahanadi Delta) with actual and predicted discharge, rainfall, and tide data. The simulation time was reduced from 8 hrs to 3 hrs by increasing CPU nodes from 45 to 135, which shows good scalability and performance enhancement.
The simulated flood inundation spread and stage were compared with SAR data and CWC observed gauge data, respectively. The system shows good accuracy and a lead time suitable for near-real-time flood forecasting. To disseminate warnings to end users, a network-enabled solution was developed using open-source software. The system has query-based flood damage assessment modules with outputs in the form of spatial maps and statistical databases. It effectively facilitates the management of post-disaster activities caused by floods, such as displaying spatial maps of the affected area and inundated roads, and maintains a steady flow of information at all levels, with different access rights depending upon the criticality of the information. It is designed to help users manage information related to flooding during critical flood seasons and analyze the extent of the damage.
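The strong-scaling result quoted above (8 hrs on 45 nodes reduced to 3 hrs on 135 nodes) can be checked with a short calculation: the speedup is the ratio of run times, and the parallel efficiency is that speedup divided by the factor by which the node count grew. A minimal sketch:

```python
def scaling(t_old: float, t_new: float, nodes_old: int, nodes_new: int):
    """Return (speedup, parallel efficiency) for a strong-scaling run."""
    speedup = t_old / t_new
    efficiency = speedup / (nodes_new / nodes_old)
    return speedup, efficiency

# Figures from the abstract: 8 h -> 3 h when going from 45 to 135 nodes
s, e = scaling(8.0, 3.0, 45, 135)
print(f"speedup = {s:.2f}x, efficiency = {e:.0%}")  # ~2.67x at ~89% efficiency
```

An efficiency near 89% on a 3x node increase is consistent with the "good scalability" the authors report, since perfect (100%) efficiency would have given a full 3x speedup.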

Keywords: flood, modeling, HPC, FOSS

Procedia PDF Downloads 73
120 Risk Factors Associated with Low Back Pain among Active Adults: Cross-Sectional Study among Workers in Tunisian Public Hospital

Authors: Lamia Bouzgarrou, Irtyah Merchaoui, Amira Omrane, Salma Kammoun, Amine Daafa, Neila Chaari

Abstract:

Background: Currently, low back pain (LBP) is one of the most prevalent public health problems and causes severe morbidity in a large portion of the adult population. It is also associated with heavy direct and indirect costs, in particular related to absenteeism and early retirement. Health care workers are one of the occupational groups most affected by LBP, especially because of biomechanical and psycho-organizational risk factors. Our study aims to investigate risk factors associated with chronic low back pain among Tunisian caregivers in university hospitals. Methods: A cross-sectional study was conducted over a period of 14 months with a representative sample of caregivers, matched according to age, sex, and work department, in two university hospitals in Tunisia. Data collection included items related to socio-professional characteristics, the Work Ability Index (WAI), occupational stress (Karasek job strain questionnaire), quality of life (SF-12), the Nordic musculoskeletal disorders questionnaire, and an examination of spine flexibility (finger-to-ground distance, sit-to-stand maneuver, and equilibrium test). Results: In total, 293 caregivers were included, with a mean age of 42.64 ± 11.65 years. A body mass index (BMI) exceeding 30 was noted in 20.82% of cases. Moreover, 51.9% of cases practiced no regular physical activity. In contrast, domestic activity equal to or exceeding 20 hours per week was reported by 38.22%. Job strain was noted in 19.79% of cases, and work capacity was 'low' to 'average' in 27.64% of subjects. During the 12 months preceding the investigation, 65% of caregivers complained of LBP, with pain rated as 'severe' or 'extremely severe' in 54.4% of cases and a frequency of discomfort exceeding one episode per week in 58.52% of cases. On physical examination, the mean finger-to-ground distance was 7.10 ± 7.5 cm.
Caregivers assigned to 'high workload' services had the highest prevalence of LBP (77.4%) compared to other categories of hospital services, although the relationship was not statistically significant (p = 0.125). LBP prevalence was statistically correlated with female gender (p = 0.01) and impaired work capacity (p < 10⁻³). Moreover, an increased finger-to-ground distance was statistically associated with LBP (p = 0.05), advanced age (p < 10⁻³), professional seniority (p < 10⁻³), and BMI ≥ 25 (p = 0.001). Furthermore, the other physical tests of spine flexibility were performed worse by workers suffering from LBP, with statistically significant differences (sit-to-stand maneuver, p = 0.03; equilibrium test, p = 0.01). In the multivariate analysis, only domestic activity exceeding 20 hours/week, degraded physical quality of life, and the presence of neck pain were significantly correlated with LBP. The final model explains 36.7% of the variability of this complaint. Conclusion: Our results highlight the elevated prevalence of LBP among caregivers in Tunisian public hospitals and identify both professional and individual predisposing factors. The preliminary analysis supports the necessity of a multidimensional approach to prevent this critical occupational and public health problem. The preventive strategy should be based on improving working conditions as well as on lifestyle modifications and the reinforcement of healthy behaviors in these active populations.

Keywords: health care workers, low back pain, prevention, risk factor

Procedia PDF Downloads 129
119 Impact of Lack of Testing on Patient Recovery in the Early Phase of COVID-19: Narratively Collected Perspectives from a Remote Monitoring Program

Authors: Nicki Mohammadi, Emma Reford, Natalia Romano Spica, Laura Tabacof, Jenna Tosto-Mancuso, David Putrino, Christopher P. Kellner

Abstract:

Introductory statement: The onset of the COVID-19 pandemic created an unprecedented need for the rapid development, distribution, and application of infection testing. However, despite the impressive mobilization of resources, individuals were extremely limited in their access to tests, particularly during the initial months of the pandemic (March-April 2020) in New York City (NYC). Access to COVID-19 testing is crucial to understanding patients’ illness experiences and integral to the development of COVID-19 standard-of-care protocols, especially in the context of overall access to healthcare resources. Methods: Eighteen patients in a COVID-19 remote patient monitoring program (Precision Recovery within the Mount Sinai Health System) were interviewed regarding their experience with COVID-19 during the first wave (March-May 2020) of the pandemic in New York City. Patients were asked about their experiences navigating COVID-19 diagnoses, the health care system, and their recovery process. Transcribed interviews were analyzed for thematic codes, using grounded theory to guide the identification of emergent themes and codebook development through an iterative process. Data coding was performed using NVivo 12. References for the domain 'testing' were then extracted and analyzed for themes and statistical patterns. Major findings: 100% of participants (18/18) referenced COVID-19 testing in their interviews, with a total of 79 references across the 18 transcripts (average: 4.4 references/interview; 2.7% interview coverage). 89% of participants (16/18) discussed the difficulty of access to testing, including denial of testing without high severity of symptoms, geographical distance to the testing site, and lack of testing resources at healthcare centers. Participants shared varying perspectives on how the lack of certainty regarding their COVID-19 status affected their course of recovery.
One participant shared that, because she never tested positive, she was shielded from anxiety and fear, given the death toll in NYC. Another group of participants shared that not having a concrete status to share with family, friends, and professionals affected how seriously onlookers took their symptoms. Furthermore, the absence of a positive test barred some individuals from access to treatment programs and employment support. Concluding statement: Lack of access to COVID-19 testing in the first wave of the pandemic in NYC was a prominent element of patients’ illness experience, particularly during their recovery phase. While for some the lack of concrete results was protective, most emphasized the invalidating effect it had on the perception of illness, by both self and others. COVID-19 testing is now widely accessible; however, those who cannot demonstrate a positive test result but who are still presumed to have had COVID-19 in the first wave must continue to adapt to and live with the effects of this gap in knowledge and care on their recovery. Future efforts are required to ensure that patients do not face barriers to care due to the lack of testing and are reassured regarding their access to healthcare. Affiliations: 1. Department of Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, NY; 2. Abilities Research Center, Department of Rehabilitation and Human Performance, Icahn School of Medicine at Mount Sinai, New York, NY.

Keywords: accessibility, COVID-19, recovery, testing

Procedia PDF Downloads 174
118 Microstructural Characterization of Bitumen/Montmorillonite/Isocyanate Composites by Atomic Force Microscopy

Authors: Francisco J. Ortega, Claudia Roman, Moisés García-Morales, Francisco J. Navarro

Abstract:

Asphaltic bitumen has been widely used in both industrial and civil engineering, mostly in pavement construction and roofing membrane manufacture. However, bitumen as such is highly susceptible to temperature variations and dramatically changes its in-service behavior from a viscoelastic liquid, at medium-high temperatures, to a brittle solid at low temperatures. Bitumen modification prevents these problems and imparts improved performance. Isocyanates like polymeric MDI (a mixture of 4,4′-diphenylmethane di-isocyanate, its 2,4’ and 2,2’ isomers, and higher homologues) have been shown to remarkably enhance bitumen properties at the highest expected in-service temperatures. This results from the reaction between the –NCO pendant groups of the oligomer and the most polar groups of the asphaltenes and resins in bitumen. In addition, oxygen diffusion and/or UV radiation may provoke bitumen hardening and ageing. To minimize these effects, nano-layered silicates (nanoclays) are increasingly being added to bitumen formulations. Montmorillonites, a type of naturally occurring mineral, may produce a nanometer-scale dispersion which improves bitumen's thermal, mechanical, and barrier properties. To increase their lipophilicity, these nanoclays are normally treated so that organic cations replace the inorganic cations located in their intergallery spacing. In the present work, the combined effect of polymeric MDI and the commercial montmorillonite Cloisite® 20A was evaluated. A selected bitumen with penetration within the range 160/220 was modified with 10 wt.% Cloisite® 20A and 2 wt.% polymeric MDI, and the resulting ternary composites were characterized by linear rheology, X-ray diffraction (XRD), and Atomic Force Microscopy (AFM). The rheological tests evidenced notable solid-like behavior at the highest temperatures studied when bitumen was loaded with just 10 wt.% Cloisite® 20A and high-shear blended for 20 minutes.
However, when polymeric MDI was involved, the sequence of addition exerted decisive control over the linear rheology of the final ternary composites. Hence, in bitumen/Cloisite® 20A/polymeric MDI formulations, the previous solid-like behavior disappeared, whereas an inversion of the order of addition (bitumen/polymeric MDI/Cloisite® 20A) further enhanced the solid-like behavior imparted by the nanoclay. To gain a better understanding of the factors that govern the linear rheology of these ternary composites, a morphological and microstructural characterization based on XRD and AFM was conducted. XRD demonstrated the existence of clay stacks intercalated by bitumen molecules to some degree. However, the XRD technique cannot provide detailed information on the extent of nanoclay delamination, unless the entire fraction has effectively been fully delaminated (a situation in which no peak is observed). Furthermore, XRD could provide precise knowledge neither of the spatial distribution of the intercalated/exfoliated platelets nor of the presence of other structures at larger length scales. In contrast, AFM proved its power at providing conclusive information on the morphology of the composites at the nanometer scale and at revealing the structural modification that yielded the rheological properties observed. It was concluded that high-shear blending brought about a nanoclay-reinforced network. As for the bitumen/Cloisite® 20A/polymeric MDI formulations, the solid-like behavior was destroyed as a result of the agglomeration of the nanoclay platelets promoted by chemical reactions.

Keywords: Atomic Force Microscopy, bitumen, composite, isocyanate, montmorillonite

Procedia PDF Downloads 241
117 Comparing Test Equating by Item Response Theory and Raw Score Methods with Small Sample Sizes on a Study of the ARTé: Mecenas Learning Game

Authors: Steven W. Carruthers

Abstract:

The purpose of the present research is to equate two test forms as part of a study evaluating the educational effectiveness of the ARTé: Mecenas art history learning game. The researcher applied Item Response Theory (IRT) procedures to calculate item, test, and mean-sigma equating parameters. With a sample size of n = 134, test parameters indicated 'good' model fit but low test information functions and more acute than expected equating parameters. Therefore, the researcher applied equipercentile equating and linear equating to raw scores and compared the equated form parameters and effect sizes from each method. Item scaling in IRT enables the researcher to select a subset of well-discriminating items. The mean-sigma step produces a mean-slope adjustment from the anchor items, which was used to scale scores on the new form (Form R) to the reference form (Form Q) scale. In equipercentile equating, scores are adjusted to align the proportion of scores in each quintile segment. Linear equating produces a mean-slope adjustment, which was applied to all core items on the new form. The study followed a quasi-experimental design with purposeful sampling of students enrolled in a college-level art history course (n = 134) and a counterbalancing design to distribute both forms across the pre- and post-tests. The experimental group (n = 82) was asked to play ARTé: Mecenas online and complete Level 4 of the game within a two-week period; 37 participants completed Level 4. Over the same period, the control group (n = 52) did not play the game. The researcher examined between-group differences in post-test scores on Form Q and Form R by full-factorial two-way ANOVA. The raw score analysis indicated a 1.29% direct effect of form, which was statistically non-significant but may be practically significant. The researcher repeated the between-group differences analysis with all three equating methods. For the IRT mean-sigma adjusted scores, form had a direct effect of 8.39%.
Mean-sigma equating with a small sample may have resulted in inaccurate equating parameters. Equipercentile equating aligned test means and standard deviations, but the resulting skewness and kurtosis worsened compared to the raw score parameters; form had a 3.18% direct effect. Linear equating produced the lowest form effect, approaching 0%. Using linearly equated scores, the researcher conducted an ANCOVA to examine the effect size in terms of prior knowledge. The between-group effect size for the control group versus the experimental group participants who completed the game was 14.39%, with a 4.77% effect size attributed to pre-test score. Playing and completing the game increased art history knowledge, and individuals with low prior knowledge tended to gain more from pre- to post-test. Ultimately, researchers should approach test equating based on their theoretical stance on Classical Test Theory and IRT and the respective assumptions. Regardless of the approach or method, test equating requires a representative sample of sufficient size. With small sample sizes, the application of a range of equating approaches can expose item and test features for review, inform interpretation, and identify paths for improving instruments for future study.
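Two of the score transformations compared above, IRT mean-sigma equating and linear equating, can be sketched in a few lines. This is an illustrative sketch only: the anchor-item difficulties and score moments below are invented, not the study's data.

```python
import numpy as np

def mean_sigma(b_new, b_ref):
    """IRT mean-sigma equating from anchor-item difficulties.

    Returns slope A and intercept B such that b* = A * b_new + B places
    the new form's difficulties on the reference form's scale.
    """
    A = np.std(b_ref, ddof=1) / np.std(b_new, ddof=1)
    B = np.mean(b_ref) - A * np.mean(b_new)
    return A, B

def linear_equate(x, mu_x, sd_x, mu_y, sd_y):
    """Linear equating: match the new form's mean and SD to the reference form."""
    return sd_y / sd_x * (x - mu_x) + mu_y

# Hypothetical anchor-item difficulties on Form R and Form Q
b_R = np.array([-1.2, -0.4, 0.1, 0.9, 1.6])
b_Q = np.array([-1.0, -0.3, 0.2, 1.0, 1.8])
A, B = mean_sigma(b_R, b_Q)

# Hypothetical raw-score moments: a Form R score of 20 on Form Q's scale
print(linear_equate(20.0, mu_x=18.0, sd_x=4.0, mu_y=19.0, sd_y=5.0))  # 21.5
```

By construction, both transformations map the new form's mean onto the reference form's mean; the slope term is what carries the standard-deviation adjustment.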

Keywords: effectiveness, equipercentile equating, IRT, learning games, linear equating, mean-sigma equating

Procedia PDF Downloads 173
116 The Impact of Spirituality on the Voluntary Simplicity Lifestyle Tendency: An Explanatory Study on Turkish Consumers

Authors: Esna B. Buğday, Niray Tunçel

Abstract:

Spirituality has a motivational influence on consumers' psychological states, lifestyles, and behavioral intentions. Spirituality refers to the feeling that there is a divine power greater than ourselves and a connection among oneself, others, nature, and the sacred. In addition, spirituality concerns the human soul and spirit as against the material and physical world, and consists of three dimensions: self-discovery, relationships, and belief in a higher power. Of these, self-discovery is exploring the meaning and purpose of life. Relationships refer to the awareness of the connection between human beings and nature, as well as respect for them. Higher power represents the transcendent aspect of spirituality: belief in a holy power that creates all the systems in the universe. Furthermore, a voluntary simplicity lifestyle means (1) adopting a simple lifestyle by minimizing attachment to and consumption of material things and possessions, (2) having an ecological awareness that respects all living creatures, and (3) expressing the desire to explore and develop the inner life. Voluntary simplicity is a multi-dimensional construct consisting of a desire for a voluntarily simple life (e.g., avoiding excessive consumption), cautious attitudes in shopping (e.g., not buying unnecessary products), acceptance of self-sufficiency (e.g., being a self-sufficient individual), and rejection of highly developed product functions (e.g., a preference for products with simple functions). One of the main reasons for living simply is to sustain a spiritual life, as voluntary simplicity provides the space for achieving psychological and spiritual growth and cultivates self-reliance, since voluntary simplifiers free themselves from overwhelming externals and take control of their daily lives. From this point of view, people with a strong sense of spirituality are expected to be likely to adopt a simple lifestyle.
In this respect, the study aims to examine the impact of spirituality on consumers' voluntary simplicity lifestyle tendencies. As consumers' consumption attitudes and behaviors depend on their lifestyles, exploring the factors that lead them to embrace voluntary simplicity helps to predict their purchase behavior. The study presents empirical research based on a data set collected from 478 Turkish consumers through an online survey. First, exploratory factor analysis was applied to the data to reveal the dimensions of the spirituality and voluntary simplicity scales. Second, confirmatory factor analysis was conducted to assess the measurement model. Last, the hypotheses were analyzed using partial least squares structural equation modeling (PLS-SEM). The results confirm that the self-discovery and relationships dimensions of spirituality positively impact both the cautious attitudes in shopping and the acceptance of self-sufficiency dimensions of voluntary simplicity. In contrast, belief in a higher power does not significantly influence consumers' voluntary simplicity tendencies. Even though there is theoretical support for a positive relationship between spirituality and voluntary simplicity, to the best of the authors' knowledge, it had not previously been tested empirically in the literature. Hence, this study contributes to current knowledge by analyzing the direct influence of spirituality on consumers' voluntary simplicity tendencies. Analyzing this impact among consumers in an emerging market is a further contribution to the literature.

Keywords: spirituality, voluntary simplicity, self-sufficiency, conscious shopping, Turkish consumers

Procedia PDF Downloads 137
115 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks

Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi

Abstract:

Brain-computer interfaces are a growing research field producing many implementations that find use in different fields, for both research and practical purposes. Despite the popularity of implementations using non-invasive neuroimaging methods, radical improvement of the channel bandwidth and, thus, decoding accuracy is only possible with invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, whose effective analysis requires machine learning methods able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that learn representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recordings of kinematic movement characteristics and the corresponding electrical brain activity, a series of experiments was carried out in which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from electrode strips implanted over the contralateral sensorimotor cortex. The multichannel ECoG signals were then used to track the finger movement trajectory characterized by the accelerometer signal. This was carried out both causally and non-causally, using different positions of the ECoG data segment with respect to the accelerometer data stream. The recorded data were split into training and testing sets containing continuous, non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained, using 1-second segments of ECoG data from the training dataset as input.
To assess decoding accuracy, the correlation coefficient r between the output of the model and the accelerometer readings was computed. After hyperparameter optimization and training, the deep learning model allowed reasonably accurate causal decoding of finger movement, with a correlation coefficient of r = 0.8. In contrast, the classical Wiener-filter-like approach achieved only r = 0.56 in the causal decoding mode. In the non-causal case, the traditional approach reached an accuracy of r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than proprioception. The sensitivity analysis shows physiologically plausible pictures of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study demonstrate that the combination of a minimally invasive neuroimaging technique such as ECoG with advanced machine learning approaches allows motion to be decoded with high accuracy. Such a setup provides a means of controlling devices with a large number of degrees of freedom, as well as for exploratory studies of the complex neural processes underlying movement execution.
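As a rough sketch of the classical baseline mentioned above, a linear (Wiener-filter-like) decoder can be fit from lagged multichannel features to the target trace and scored with the Pearson correlation r. The data below are synthetic stand-ins, not the study's recordings, and the ridge-regularized least squares used here is one simple way to fit such a decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 8 "ECoG" channels, 2000 samples, and a target
# trace that is a lagged linear mixture of the channels plus noise.
n, ch, lags = 2000, 8, 5
X = rng.standard_normal((n, ch))
w = rng.standard_normal(ch)
y = np.roll(X @ w, 2) + 0.3 * rng.standard_normal(n)

def lagged(X, lags):
    """Stack the previous `lags` samples of every channel as features."""
    return np.hstack([np.roll(X, k, axis=0) for k in range(lags)])[lags:]

F, t = lagged(X, lags), y[lags:]
split = int(0.8 * len(F))

# Ridge-regularized least squares: a linear decoder over lagged features
lam = 1e-2
W = np.linalg.solve(F[:split].T @ F[:split] + lam * np.eye(F.shape[1]),
                    F[:split].T @ t[:split])
pred = F[split:] @ W
r = np.corrcoef(pred, t[split:])[0, 1]
print(f"test correlation r = {r:.2f}")
```

Because the synthetic target really is a lagged linear function of the channels, the linear decoder scores near-perfectly here; on real ECoG the nonlinear structure of the signal is what gives the deep network its advantage.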

Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex

Procedia PDF Downloads 147
114 Post COVID-19 Multi-System Inflammatory Syndrome Masquerading as an Acute Abdomen

Authors: Ali Baker, Russel Krawitz

Abstract:

This paper describes a rare occurrence in which a potentially fatal complication of COVID-19 infection, multisystem inflammatory syndrome in adults (MIS-A), was misdiagnosed as an acute abdomen. As most patients with this syndrome present with fever and gastrointestinal symptoms, they may inadvertently fall under the care of the surgical unit. However, unusual imaging findings and a poor response to antimicrobial therapy should prompt clinicians to suspect a non-surgical etiology. More than half of MIS-A patients require ICU admission and vasopressor support. Prompt referral to a physician is key, as the cornerstone of treatment is IVIG and corticosteroid therapy. A 32-year-old woman presented with right-sided abdominal pain and fevers. She had contracted COVID-19 two months earlier. Abdominal examination revealed generalised right-sided tenderness. The patient had raised inflammatory markers, but other blood tests were unremarkable. A CT scan revealed extensive lymphadenopathy along the ileocolic chain. The patient proved to be a diagnostic dilemma: she was reviewed by several surgical consultants and discussed with several inpatient teams. Although IV antibiotics were commenced, the right-sided abdominal pain and fevers persisted. Pan-culture returned negative, and a mild cholestatic derangement developed. On day 5, the patient underwent preparation for colonoscopy to assess for a potential intraluminal etiology. The following day, she developed sinus tachycardia and hypotension refractory to fluid resuscitation, was transferred to ICU, and required vasopressor support. Repeat CT showed peri-portal edema and a thickened gallbladder wall. On re-examination, the patient was Murphy’s sign positive. Biliary ultrasound was equivocal for cholecystitis, and the patient was planned for diagnostic laparoscopy. The following morning, a marked rise in cardiac troponin was discovered, and a follow-up echocardiogram revealed moderate to severe global systolic dysfunction.
The impression was post-COVID MIS with myocardial involvement. IVIG and methylprednisolone infusions were commenced, and the patient responded well: vasopressor support was weaned, and she was discharged from ICU. She continued to improve clinically on oral prednisolone and was discharged on day 17. Although MIS following COVID-19 infection is a well-described syndrome in children, only recently has it come to light that it can also occur in adults. The exact incidence is unknown, but it is thought to be rare; a recent systematic review found only 221 cases of MIS-A that could be included for analysis. Symptoms vary, but the most frequent are fever and gastrointestinal and mucocutaneous involvement. Many patients progress to multi-organ failure and require vasopressor support, and 7% succumb to the illness. The pathophysiology of MIS is only partly understood. It shares similarities with Kawasaki disease, macrophage activation syndrome, and cytokine release syndrome. Importantly, by definition, the patient must have an absence of severe respiratory symptoms. It is thought to be due to a dysregulated immune response to the virus; potential mechanisms include reduced levels of neutralising antibodies and autoreactive antibodies that promote inflammation. Further research into MIS-A is needed. Although rare, this potentially fatal syndrome should be considered in the unwell surgical patient who has recently contracted COVID-19 and poses a diagnostic dilemma.

Keywords: acute-abdomen, MIS, COVID-19, ICU

Procedia PDF Downloads 101
113 Official Seals on the Russian-Qing Treaties: Material Manifestations and Visual Enunciations

Authors: Ning Chia

Abstract:

Each of the three language texts (Manchu, Russian, and Latin) of the 1689 Treaty of Nerchinsk bore official seals from Imperial Russia and Qing China. These seals have received no academic attention, yet they can reveal a layered and shared material, cultural, political, and diplomatic world of the time in Eastern Eurasia. The very different seal selections made by both empires while ratifying the Treaty of Beijing in 1860 have likewise received no scholarly attention; they too can explicate a tremendously changed relationship with visual and material manifestations. Exploring primary sources in the Manchu, Russian, and Chinese languages as well as images of the seals, this study investigates the reasons for and purposes of utilizing official seals in treaty agreements. A refreshed understanding of Russian-Qing diplomacy will be developed by pursuing the following aspects: (i) analyzing the iconographic meanings of each seal insignia and unearthing a competitive, yet symbol-delivered and seal-generated, 'dialogue' between the two empires; (ii) contextualizing treaty seals within historical seal cultures, and discovering how the domestic seal system in each empire’s political institutions developed into treaty-defined bilateral relations; (iii) expounding the reliance on seals in each empire’s daily governing routines, and interpreting the trust placed in the seal as a promise sought from the opposing negotiator to fulfill the treaty terms; (iv) contrasting the two seal traditions along two civilizational lines, Eastern vs.
Western, and dissecting how the two styles of seal emblems affected cross-cultural understanding or misunderstanding between the two empires; (v) comprehending history-making events through material sources such as the treaty seals, and grasping why the seals for the two treaties, so different in both visual design and symbolic value, were chosen in the two eras of the relationship; (vi) correlating the materialized seal 'expression' with the imperial worldviews grounded in each empire’s national or power identity, and probing the seal-represented 'rule under the Heaven' assumption of China and Russia's rising role in 'European-American imperialism … centered on East Asia' (Victor Shmagin, 2020). In conclusion, the impact of official seals on diplomatic treaties requires profound knowledge of seal history, insignia culture, and emblem belief to comprehend. The official seals of both Imperial Russia and Qing China belonged to a particular statecraft art in a specific material and visual form. Once utilized in diplomatic treaties, the meticulously decorated and politically institutionalized seals were transformed from determinant means of domestic administration and social control into markers of an empire’s sovereign authority. Overlooked in historical practice, the insignia seals created a 'visual contest' between the two rival powers. Through this material lens, scholarly knowledge of the Russian-Qing diplomatic relationship will be significantly upgraded. Connecting Russian studies, Qing/Chinese studies, and Eurasian studies, this study also ties together material culture, political culture, and diplomatic culture. It promotes the study of official seals and emblem symbols in worldwide diplomatic history.

Keywords: Russia-Qing diplomatic relation, Treaty of Beijing (1860), Treaty of Nerchinsk (1689), Treaty seals

Procedia PDF Downloads 190
112 Overlaps and Intersections: An Alternative Look at Choreography

Authors: Ashlie Latiolais

Abstract:

Architecture, as a discipline, is on a trajectory of extension beyond the boundaries of buildings and, increasingly, is coupled with research that connects to alternative and typically disjointed disciplines. A “both/and” approach and (expanded) definition of architecture, as depicted here, expands the margins that contain the profession. Figuratively, architecture is a series of edges, events, and occurrences that establishes a choreography or stage by which humanity exists. The way in which architecture controls and suggests movement through these spaces, whether within a landscape, city, or building, can be viewed as a datum by which the “dance” of everyday life occurs. This submission views the realm of architecture through the lens of movement and dance as a cross-fertilizer of collaboration, tectonics, and spatial geometry investigations. “Designing on digital programs puts architects at a distance from the spaces they imagine. While this has obvious advantages, it also means that they lose the lived, embodied experience of feeling what is needed in space, meaning that some design ideas that work in theory ultimately fail in practice.” By studying the body in motion through real-time performance, a more holistic understanding of architectural space surfaces, and new prospects for theoretical teaching pedagogies emerge. This atypical intersection rethinks how architecture is considered, created, and tested, similar to how “dance artists often do this by thinking through the body, opening pathways and possibilities that might not otherwise be accessible”; this is the essence of this submission as explained through unFOLDED, a creative performance work. A new language is materialized through unFOLDED, a dynamic occupiable installation by which architecture is investigated through dance, movement, and body analysis.
The entry documents a collaboration among an architect, a dance choreographer, musicians, a video artist, and lighting designers to re-create one of the first documented avant-garde performing arts collaborations (Matisse, Satie, Massine, Picasso) from the Ballets Russes in 1917, entitled Parade. Architecturally, this interdisciplinary project orients and suggests motion through structure, tectonics, lightness, darkness, and shadow as it questions the navigation of the dark space (stage) surrounding the installation. Artificial light via theatrical lighting and video graphics brought the blank canvas to life, where the sensitive mix of musicality coordinated with the structure’s movement sequencing was certainly a challenge. The upstage light from the video projections created both flickering contextual imagery and shadowed figures. When the dancers were either upstage or downstage of the structure, both silhouetted figures and revealed bodies were experienced as dancer-controlled manipulations of the installation occurred throughout the performance. The experimental performance, through structure, prompted moving (dancing) bodies in space, where the architecture served as a key component of the choreography itself. The tectonics of the delicate steel structure allowed the dancers to interact with the installation, which created a variety of spatial conditions, ranging from a contained box of three-dimensional space to a wall and various abstracted geometries in between. The development of this research unveils the new role of the Architect as a Choreographer of the built environment.

Keywords: dance, architecture, choreography, installation, architect, choreographer, space

Procedia PDF Downloads 72
111 Autonomous Strategic Aircraft Deconfliction in a Multi-Vehicle Low Altitude Urban Environment

Authors: Loyd R. Hook, Maryam Moharek

Abstract:

With the envisioned future growth of low-altitude urban aircraft operations for airborne delivery services and advanced air mobility, strategies to coordinate and deconflict aircraft flight paths must be prioritized. Autonomous coordination and planning of flight trajectories is the preferred approach to this future vision in order to increase safety, density, and efficiency over the manual methods employed today. Difficulties arise because any conflict resolution must be constrained by all other aircraft, all airspace restrictions, and all ground-based obstacles in the vicinity. These considerations make pair-wise tactical deconfliction difficult at best and unlikely to find a suitable solution for the entire system of vehicles. In addition, more traditional methods, which rely on long time scales and large protected zones, will artificially limit vehicle density and drastically decrease efficiency. Instead, strategic planning, which is able to respond to highly dynamic conditions while still accounting for high-density operations, will be required to coordinate multiple vehicles in the highly constrained low-altitude urban environment. This paper develops and evaluates such a planning algorithm, which can be implemented autonomously across multiple aircraft and situations. Data from this evaluation provide promising results, with simulations showing up to 10 aircraft deconflicted through a relatively narrow low-altitude urban canyon without any vehicle-to-vehicle or obstacle conflict. The algorithm achieves this level of coordination beginning with the assumption that each vehicle is controlled to follow an independently constructed flight path, which is itself free of obstacle conflicts and restricted airspace. Then, by preferring speed-change deconfliction maneuvers constrained by each vehicle's flight envelope, vehicles can remain as close as possible to the original planned path and prevent cascading vehicle-to-vehicle conflicts.
Performing the search for a set of commands that can simultaneously ensure separation for each pair-wise aircraft interaction and optimize the total velocities of all the aircraft is further complicated by the fact that each aircraft's flight plan could contain multiple segments. This means that relative velocities will change when any aircraft achieves a waypoint and changes course. Additionally, the timing of when that aircraft will achieve a waypoint (or, more directly, the order in which all of the aircraft will achieve their respective waypoints) will change with the commanded speed. Taken together, the continuous relative velocity of each vehicle pair and the discretized change in relative velocity at waypoints resemble a hybrid reachability problem, a form of control reachability. This paper proposes two methods for finding solutions to these multi-body problems. First, an analytical formulation of the continuous problem is developed with an exhaustive search of the combined state space. However, because of computational complexity, this technique is only tractable for pairwise interactions. For more complicated scenarios, including the proposed 10-vehicle example, a discretized search space is used, and a depth-first search with early stopping is employed to find the first solution that satisfies the constraints.
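The depth-first search with early stopping over a discretized command space can be sketched in a few lines. The function name, the toy separation predicate, and the flat list of candidate speeds below are illustrative assumptions, not the authors' implementation, which evaluates separation along full multi-segment trajectories and within each vehicle's flight envelope:

```python
def first_feasible_speeds(n_aircraft, candidate_speeds, separation_ok):
    """Depth-first search with early stopping: assign one speed command per
    aircraft, prune any partial assignment that already violates a pairwise
    separation constraint, and return the first complete assignment that
    satisfies all pairs (or None if no combination works)."""
    assignment = []

    def dfs(i):
        if i == n_aircraft:
            return list(assignment)  # first feasible solution found
        for v in candidate_speeds:
            # Early stopping: only the new pairs (j, i) need checking here,
            # since pairs among aircraft 0..i-1 passed at shallower depth.
            if all(separation_ok(j, assignment[j], i, v) for j in range(i)):
                assignment.append(v)
                result = dfs(i + 1)
                if result is not None:
                    return result
                assignment.pop()
        return None

    return dfs(0)

# Hypothetical separation predicate: two aircraft count as separated
# whenever their commanded speeds differ.
demo = first_feasible_speeds(3, [1, 2, 3],
                             lambda j, vj, i, vi: vj != vi)  # -> [1, 2, 3]
```

Because the search returns at the first feasible leaf rather than enumerating the full product space, the pruning of partial assignments is what keeps the 10-vehicle case computable.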

Keywords: strategic planning, autonomous, aircraft, deconfliction

Procedia PDF Downloads 75
110 Optimization and Coordination of Organic Product Supply Chains under Competition: An Analytical Modeling Perspective

Authors: Mohammadreza Nematollahi, Bahareh Mosadegh Sedghy, Alireza Tajbakhsh

Abstract:

The last two decades have witnessed substantial attention to organic and sustainable agricultural supply chains. Motivated by real-world practices, this paper aims to address two main challenges observed in organic product supply chains: the decentralized decision-making process between farmers and their retailers, and the competition between organic products and their conventional counterparts. To this end, an agricultural supply chain consisting of two farmers, a conventional farmer and an organic farmer who offers an organic version of the same product, is considered. Both farmers distribute their products through a single retailer, and there exists competition between the organic and the conventional product. The retailer, as the market leader, sets the wholesale price, and afterward the farmers set their production quantities. This paper first models the demand functions of the conventional and organic products by incorporating the effect of asymmetric brand equity, which captures the fact that consumers usually pay a premium for organic products due to positive perceptions of their health and environmental benefits. Then, profit functions are modeled with consideration of several characteristics of organic farming, including the crop yield gap and the organic cost factor. Our research also considers both economies and diseconomies of scale in farming production, as well as the effects of an organic subsidy paid by the government to support organic farming. This paper explores the investigated supply chain in three scenarios: decentralized, centralized, and coordinated decision-making structures. In the decentralized scenario, the conventional and organic farmers and the retailer maximize their own profits individually. In this case, the interaction between the farmers is modeled as Bertrand competition, while the interaction between the retailer and the farmers is analyzed under a Stackelberg game structure.
In the centralized model, the optimal production strategies are obtained from the perspective of the entire supply chain. Analytical models are developed to derive closed-form optimal solutions. Moreover, analytical sensitivity analyses are conducted to explore the effects of the main parameters, such as the crop yield gap, organic cost factor, organic subsidy, and percentage price premium of the organic product, on the farmers’ and retailer’s optimal strategies. Afterward, a coordination scenario is proposed to convince the three supply chain members to shift from the decentralized to the centralized decision-making structure. The results indicate that the proposed coordination scenario provides a win-win-win situation for all three members compared to the decentralized model. Moreover, our paper demonstrates that the coordinated model increases the production and decreases the price of organic produce, which in turn motivates the consumption of organic products in the market. Furthermore, the proposed coordination model helps the organic farmer better handle the challenges of organic farming, including the additional cost and crop yield gap. Last but not least, our results highlight the active role of the organic subsidy paid by the government as a means of promoting sustainable organic product supply chains. Our paper shows that although the amount of the organic subsidy plays a significant role in the production and sales price of organic products, the allocation method of the subsidy between the organic farmer and the retailer is not of great importance.
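The centralized scenario's closed-form solution can be illustrated with a minimal numerical sketch. The linear demand form, the parameter names (a_o, a_c as brand equity intercepts with a_o > a_c capturing the organic premium; b and g as own- and cross-quantity sensitivities; c_o, c_c as unit costs), and all numbers are illustrative assumptions only; the paper's actual model additionally incorporates the crop yield gap, organic cost factor, scale effects, and subsidy:

```python
def centralized_quantities(a_o, a_c, b, g, c_o, c_c):
    """Solve the first-order conditions of chain-wide profit maximization
    for an illustrative linear demand system with asymmetric brand equity:
        p_o = a_o - b*q_o - g*q_c   and   p_c = a_c - b*q_c - g*q_o
    The FOCs form the linear system
        2b*q_o + 2g*q_c = a_o - c_o
        2g*q_o + 2b*q_c = a_c - c_c
    solved here by Cramer's rule (assumes b > g > 0 so det != 0)."""
    det = 4.0 * (b * b - g * g)
    q_o = (2.0 * b * (a_o - c_o) - 2.0 * g * (a_c - c_c)) / det
    q_c = (2.0 * b * (a_c - c_c) - 2.0 * g * (a_o - c_o)) / det
    return q_o, q_c

def chain_profit(q_o, q_c, a_o, a_c, b, g, c_o, c_c):
    """Total (farmers plus retailer) profit at given quantities."""
    p_o = a_o - b * q_o - g * q_c
    p_c = a_c - b * q_c - g * q_o
    return (p_o - c_o) * q_o + (p_c - c_c) * q_c

# Hypothetical numbers: organic premium a_o=10 vs a_c=8, unit costs 2 vs 1.
q_o, q_c = centralized_quantities(10.0, 8.0, 1.0, 0.5, 2.0, 1.0)
# -> q_o = 3.0, q_c = 2.0, with chain profit 19.0
```

A decentralized benchmark would be computed the same way but with each player's own first-order condition, letting the win-win-win coordination gap be read off as the profit difference.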

Keywords: analytical game-theoretic model, product competition, supply chain coordination, sustainable organic supply chain

Procedia PDF Downloads 88
109 The Effects of Circadian Rhythms Change in High Latitudes

Authors: Ekaterina Zvorykina

Abstract:

Nowadays, the Arctic and Antarctic regions are distinguished as some of the most important strategic resources for global development. Nonetheless, living conditions in Arctic regions still demand certain improvements. Since the region is sparsely populated, one of the main points of interest is the health of the people who migrate to the Arctic region for permanent and shift work. At Arctic and Antarctic latitudes, personnel face polar day and polar night conditions depending on the season: they are deprived of natural sunlight in winter and have continuous daylight in summer. Firstly, the change in light intensity over the 24-hour period due to migration affects circadian rhythms. Moreover, controlled artificial light in winter is also an issue. The results of recent studies on night-shift medical professionals, who were exposed to permanent artificial light, have already demonstrated higher risks of cancer, depression, and Alzheimer's disease. Moreover, people exposed to frequent time-zone changes are also subject to higher risks of heart attack and cancer. Thus, our main goals are to understand how work and living conditions at high latitudes can affect human health and how adverse effects can be prevented. In our study, we analyze molecular and cellular factors which play an important role in circadian rhythm change and distinguish the main risk groups among people migrating to high latitudes. The main well-studied index of circadian timing is melatonin or its metabolite 6-sulfatoxymelatonin. At low light intensity, melatonin synthesis is disturbed, and as a result the human organism requires more time for sleep, which is still disregarded when it comes to working time organization. Lack of melatonin also causes a shortage in serotonin production, which leads to higher depression risk. Melatonin is also known to inhibit oncogenes, the main factors for tumor growth, to increase the apoptosis level in cells, and to regulate circadian clock genes (for example, Per2).
Thus, people who work at high latitudes can be distinguished as a risk group for cancer and demand more attention. Clock genes, known to be among the main circadian clock regulators, decrease the sensitivity of the hypothalamus to estrogen and decrease glucose sensitivity, which leads to premature aging and oestrous cycle disruption. Permanent light exposure also leads to the accumulation of superoxide dismutase and to oxidative stress, one of the main factors for early dementia and Alzheimer's disease. We propose a new screening system adjusted for people migrating from middle to high latitudes, together with accommodation therapy. Screening focuses on melatonin and estrogen levels, sleep deprivation and neural disorders, depression level, cancer risks, and heart and vascular disorders. Accommodation therapy includes different types of artificial light exposure, additional melatonin, and neuroprotectors. Preventive procedures can lead to increased migration to high latitudes and, as a result, the prosperity of the Arctic region.

Keywords: circadian rhythm, high latitudes, melatonin, neuroprotectors

Procedia PDF Downloads 130
108 South African Breast Cancer Mutation Spectrum: Pitfalls to Copy Number Variation Detection Using Internationally Designed Multiplex Ligation-Dependent Probe Amplification and Next Generation Sequencing Panels

Authors: Jaco Oosthuizen, Nerina C. Van Der Merwe

Abstract:

The National Health Laboratory Services in Bloemfontein has been the diagnostic testing facility for familial breast cancer since 1997, having tested 1,830 patients. From this cohort, 540 were comprehensively screened using high-resolution melting analysis or next generation sequencing for the presence of point mutations and/or indels. Approximately 90% of these patients still remain undiagnosed, as they are BRCA1/2 negative. Multiplex ligation-dependent probe amplification was initially added to screen for copy number variation, but with the introduction of next generation sequencing in 2017, it was substituted and is currently used as a confirmation assay. The aim was to investigate the viability of utilizing internationally designed copy number variation detection assays, based mostly on European/Caucasian genomic data, within a South African context. The multiplex ligation-dependent probe amplification technique is based on the hybridization and subsequent ligation of multiple probes to a targeted exon. The ligated probes are amplified using conventional polymerase chain reaction, followed by fragment analysis by means of capillary electrophoresis. The experimental design of the assay was performed according to the guidelines of MRC-Holland. For BRCA1 (P002-D1) and BRCA2 (P045-B3), both multiplex assays were validated, and results were confirmed using a secondary probe set for each gene. The next generation sequencing technique is based on target amplification via multiplex polymerase chain reaction, after which the amplicons are sequenced in parallel on a semiconductor chip. Amplified read counts are visualized as relative copy numbers to determine the median of the absolute values of all pairwise differences. Various experimental parameters, such as DNA quality, quantity, and signal intensity or read depth, were verified using positive and negative patients previously tested internationally.
DNA quality and quantity proved to be the critical factors during the verification of both assays. Quantity influenced the relative copy number frequency directly, whereas the quality of the DNA and its salt concentration influenced denaturation consistency in both assays. Multiplex ligation-dependent probe amplification produced false positives due to ligation failure when a variant present within the ligation site inhibited ligation. Next generation sequencing produced false positives due to read dropout when primer sequences did not meet optimal multiplex binding kinetics because of population variants in the primer binding site. The analytical sensitivity and specificity for the South African population have been proven. Verification resulted in repeatable reactions with regard to the detection of relative copy number differences. Both the multiplex ligation-dependent probe amplification and next generation sequencing multiplex panels need to be optimized to accommodate the polymorphisms present within South Africa's genetically diverse ethnic groups, in order to reduce the false-positive copy number variation rate and increase performance efficiency.
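The relative copy number idea underlying both assays can be sketched generically. This is the textbook dosage-quotient calculation (normalize each target's signal within its sample, then divide by the same normalized value in a normal reference), not the exact MRC-Holland Coffalyser or Ion semiconductor pipeline; the exon labels and counts are hypothetical:

```python
def relative_copy_number(sample_counts, reference_counts):
    """Illustrative dosage-quotient calculation for MLPA peak areas or NGS
    amplicon read counts. Each target's signal is normalized to the
    within-sample total, then divided by the corresponding normalized value
    from a two-copy reference. Ratios near 1.0 suggest two copies; values
    well below 1.0 suggest a deletion; well above, a duplication."""
    s_total = sum(sample_counts.values())
    r_total = sum(reference_counts.values())
    ratios = {}
    for exon, count in sample_counts.items():
        s_frac = count / s_total          # within-sample fraction
        r_frac = reference_counts[exon] / r_total  # reference fraction
        ratios[exon] = s_frac / r_frac
    return ratios

# Hypothetical example: ex1 shows roughly half the expected signal.
ratios = relative_copy_number(
    {"ex1": 50, "ex2": 100, "ex3": 100},
    {"ex1": 100, "ex2": 100, "ex3": 100},
)
```

Note that because normalization uses the within-sample total, a true single-exon deletion depresses that exon's ratio below 1.0 while slightly inflating the others (here ex1 comes out at 0.6 and ex2/ex3 at 1.2), which is one reason production pipelines normalize against a panel of reference probes rather than the raw total.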

Keywords: familial breast cancer, multiplex ligation-dependent probe amplification, next generation sequencing, South Africa

Procedia PDF Downloads 212
107 Effects of School Culture and Curriculum on Gifted Adolescent Moral, Social, and Emotional Development: A Longitudinal Study of Urban Charter Gifted and Talented Programs

Authors: Rebekah Granger Ellis, Pat J. Austin, Marc P. Bonis, Richard B. Speaker, Jr.

Abstract:

Using two psychometric instruments, this study examined the social and emotional intelligence and moral judgment levels of more than 300 gifted and talented high school students enrolled in arts-integrated, academic acceleration, and creative arts charter schools in an ethnically diverse large city in the southeastern United States. Gifted and talented individuals possess distinguishable characteristics; these frequently appear as strengths, but serious problems often accompany them. Although many gifted adolescents thrive in their environments, some struggle in their school and community due to emotional intensity, motivation and achievement issues, lack of peers and isolation, identification problems, sensitivity to expectations and feelings, perfectionism, and other difficulties. These gifted students endure and survive in school rather than flourish. Gifted adolescents face special intrapersonal, interpersonal, and environmental problems. Furthermore, they experience greater levels of stress, disaffection, and isolation than non-gifted individuals due to their advanced cognitive abilities. Therefore, it is important to examine the long-term effects of participation in various gifted and talented programs on the socio-affective development of these adolescents. Numerous studies have researched moral, social, and emotional development within the cognitive-developmental, psychoanalytic, and behavioral learning traditions; however, in almost all cases, these three facets have been studied separately, leading to many divergent theories. Additionally, various frameworks and models purporting to encourage the different socio-affective branches of development have been debated in curriculum theory, yet research is inconclusive on the effectiveness of these programs.
Most often studied is the socio-affective domain, which includes the development and regulation of emotions; empathy development; interpersonal relations and social behaviors; personal and gender identity construction; and moral development, thinking, and judgment. Examining development in these domains can provide insight into why some gifted and talented adolescents are not always successful in adulthood despite advanced IQ scores, particularly whether the emotional, social, and moral capabilities of gifted and talented individuals are as advanced as their intellectual abilities and how these capabilities relate to each other. This mixed-methods longitudinal study examined students in urban gifted and talented charter schools for (1) socio-affective development levels and (2) whether a particular environment encourages developmental growth. The research questions guiding the study were: (1) How do academically and artistically gifted 10th and 11th grade students perform on psychological scales of social and emotional intelligence and moral judgment? Do they differ from the normative sample? Do gender differences exist among gifted students? (2) Do adolescents who attend distinctive gifted charter schools differ in developmental profiles? Students’ performances on the psychometric instruments were compared over time and by program type. Assessing moral judgment (DIT-2) and socio-emotional intelligence (BarOn EQ-i:YV), participants took pre-, mid-, and post-tests during one academic school year. Quantitative differences in growth on these psychological scales (individual and school-wide) were examined. If a school showed change, qualitative artifacts (culture, curricula, instructional methodology, stakeholder interviews) provided insight into environmental correlates.

Keywords: gifted and talented programs, moral judgment, social and emotional intelligence, socio-affective education

Procedia PDF Downloads 166
106 Gas-Phase Noncovalent Functionalization of Pristine Single-Walled Carbon Nanotubes with 3D Metal(II) Phthalocyanines

Authors: Vladimir A. Basiuk, Laura J. Flores-Sanchez, Victor Meza-Laguna, Jose O. Flores-Flores, Lauro Bucio-Galindo, Elena V. Basiuk

Abstract:

Noncovalent nanohybrid materials combining carbon nanotubes (CNTs) with phthalocyanines (Pcs) are a subject of increasing research effort, with particular emphasis on the design of new heterogeneous catalysts, efficient organic photovoltaic cells, lithium batteries, gas sensors, and field-effect transistors, among other possible applications. The possibility of using unsubstituted Pcs for CNT functionalization is very attractive due to their moderate cost and easy commercial availability. Unfortunately, however, the deposition of unsubstituted Pcs onto nanotube sidewalls through traditional liquid-phase protocols turns out to be very problematic due to the extremely poor solubility of Pcs. On the other hand, the unsubstituted free-base H₂Pc phthalocyanine ligand, as well as many of its transition metal complexes, exhibits very high thermal stability and considerable volatility under reduced pressure, which opens the possibility of their physical vapor deposition onto solid surfaces, including nanotube sidewalls. In the present work, we show the possibility of simple, fast, and efficient noncovalent functionalization of single-walled carbon nanotubes (SWNTs) with a series of 3d metal(II) phthalocyanines Me(II)Pc, where Me = Co, Ni, Cu, and Zn. The functionalization can be performed in a temperature range of 400-500 °C under moderate vacuum and requires only about 2-3 h. The functionalized materials obtained were characterized by means of Fourier-transform infrared (FTIR), Raman, UV-visible, and energy-dispersive X-ray spectroscopy (EDS), scanning and transmission electron microscopy (SEM and TEM, respectively), and thermogravimetric analysis (TGA). TGA suggested that the Me(II)Pc weight content is 30%, 17%, and 35% for NiPc, CuPc, and ZnPc, respectively (CoPc exhibited anomalous thermal decomposition behavior). The above values are consistent with those estimated from EDS spectra, namely 24-39%, 27-36%, and 27-44% for CoPc, CuPc, and ZnPc, respectively.
A strong increase in the intensity of the D band in the Raman spectra of the SWNT‒Me(II)Pc hybrids, as compared to that of pristine nanotubes, implies very strong interactions between the Pc molecules and SWNT sidewalls. Very high absolute values of the binding energies (32.46-37.12 kcal/mol) and the highest occupied and lowest unoccupied molecular orbital (HOMO and LUMO, respectively) distribution patterns, calculated with density functional theory using the Perdew-Burke-Ernzerhof general gradient approximation correlation functional in combination with Grimme's empirical dispersion correction (PBE-D) and the double numerical basis set (DNP), also suggest that the interactions between the Me(II) phthalocyanines and nanotube sidewalls are very strong. The authors thank the National Autonomous University of Mexico (grant DGAPA-IN200516) and the National Council of Science and Technology of Mexico (CONACYT, grant 250655) for financial support. The authors are also grateful to Dr. Natalia Alzate-Carvajal (CCADET of UNAM), Eréndira Martínez (IF of UNAM), and Iván Puente-Lee (Faculty of Chemistry of UNAM) for technical assistance with FTIR and TGA measurements and with TEM imaging, respectively.

Keywords: carbon nanotubes, functionalization, gas-phase, metal(II) phthalocyanines

Procedia PDF Downloads 102
105 Cost-Conscious Treatment of Basal Cell Carcinoma

Authors: Palak V. Patel, Jessica Pixley, Steven R. Feldman

Abstract:

Introduction: Basal cell carcinoma (BCC) is the most common skin cancer worldwide and requires substantial resources to treat. When choosing between indicated therapies, providers consider the associated adverse effects, efficacy, cosmesis, and function preservation. The patient's tumor burden, infiltrative risk, and risk of tumor recurrence are also considered. Treatment cost is often left out of these discussions. This can lead to financial toxicity, which describes the harm and quality-of-life reductions inflicted by high care costs. Methods: We studied the guidelines set forth by the American Academy of Dermatology for the treatment of BCC. A PubMed literature search was conducted to identify the costs of each recommended therapy. We discuss costs alongside treatment efficacy and side-effect profile. Results: Surgical treatment for BCC can be cost-effective if the appropriate treatment is selected for the presenting tumor. Curettage and electrodesiccation can be used in low-grade, low-recurrence tumors in aesthetically unimportant areas. The benefits of cost-conscious care are not likely to be outweighed by the risks of poor cosmesis or tumor return ($471 for BCC of the cheek). When tumor burden is limited, Mohs micrographic surgery (MMS) offers better cure rates and lower recurrence rates than surgical excision (SE), at comparable costs (MMS $1,263; SE $949). Surgical excision with permanent sections may be indicated when tumor burden is more extensive or if molecular testing is necessary. The utility of surgical excision with frozen sections, which costs substantially more than MMS without comparable outcomes, is less clear (SE with frozen sections $2,334-$3,085). Less data exists on nonsurgical treatments for BCC. These techniques cost less, but recurrence risk is high. Side effects of nonsurgical treatment are limited to local skin reactions, and cosmesis is good.
Cryotherapy, 5-FU, and MAL-PDT are all more affordable than surgery, but high recurrence rates increase the risk of secondary financial and psychosocial burden (recurrence rates 21-39%; cost $100-$270). Radiation therapy offers better clearance rates than other nonsurgical treatments but is associated with similar recurrence rates and a significantly larger financial burden ($2,591-$3,460 for BCC of the cheek). Treatments for advanced or metastatic BCC are extremely costly, but few patients require their use, and the societal cost burden remains low. Vismodegib and sonidegib have good response rates but substantial side effects, and therapy should be combined with multidisciplinary care and palliative measures. Expert review has found sonidegib to be the less expensive and more efficacious option (vismodegib $128,358; sonidegib $122,579). Platinum therapy, while not FDA-approved, is also effective but expensive (~$91,435). Immunotherapy offers a new line of treatment in patients intolerant of hedgehog inhibitors ($683,061). Conclusion: Dermatologists working within resource-compressed practices and with resource-limited patients must prudently manage the healthcare dollar. Surgical therapies for BCC offer the lowest risk of recurrence at the most reasonable cost. Nonsurgical therapies are more affordable, but high recurrence rates increase the risk of secondary financial and psychosocial burdens. Treatments for advanced BCC are incredibly costly, but the low incidence means the overall cost to the system is low.

Keywords: nonmelanoma skin cancer, basal cell skin cancer, squamous cell skin cancer, cost of care

Procedia PDF Downloads 105
104 An Interdisciplinary Maturity Model for Accompanying Sustainable Digital Transformation Processes in a Smart Residential Quarter

Authors: Wesley Preßler, Lucie Schmidt

Abstract:

Digital transformation is playing an increasingly important role in the development of smart residential quarters. In order to accompany and steer this process and ultimately make the success of the transformation efforts measurable, it is helpful to use an appropriate maturity model. However, conventional maturity models for digital transformation focus primarily on the evaluation of processes and neglect the information and power imbalances between the stakeholders, which affects the validity of the results. The Multi-Generation Smart Community (mGeSCo) research project is developing an interdisciplinary maturity model that integrates the dimensions of digital literacy, interpretive patterns, and technology acceptance to address this gap. As part of the mGeSCo project, the technological development of selected dimensions in the Smart Quarter Jena-Lobeda (Germany) is being investigated. A specific maturity model, based on Cohen's Smart Cities Wheel, evaluates the central dimensions Working, Living, Housing and Caring. To improve the reliability and relevance of the maturity assessment, the factors Digital Literacy, Interpretive Patterns and Technology Acceptance are integrated into the developed model. The digital literacy dimension examines stakeholders' skills in using digital technologies, which influence their perception and assessment of technological maturity. Digital literacy is measured by means of surveys, interviews, and participant observation, using the European Commission's Digital Literacy Framework (DigComp) as a basis. Interpretations of digital technologies provide information about how individuals perceive technologies and ascribe meaning to them. However, these are not mere assessments, prejudices, or stereotyped perceptions but collective patterns, rules, attributions of meaning and the cultural repertoire that leads to these opinions and attitudes. 
Understanding these interpretive patterns helps in assessing the overarching readiness of stakeholders to digitally transform their neighborhood. This involves examining people's attitudes, beliefs, and values regarding technology adoption, as well as their perceptions of the benefits and risks associated with digital tools. These insights provide important data for a holistic view and inform the steps needed to prepare individuals in the neighborhood for a digital transformation. Technology acceptance, another crucial factor for successful digital transformation, captures the willingness of individuals to adopt and use new technologies. Surveys or questionnaires based on Davis' Technology Acceptance Model can be used to complement the interpretive patterns and measure neighborhood acceptance of digital technologies. Integrating the dimensions of digital literacy, interpretive patterns, and technology acceptance enables the development of a roadmap with clear prerequisites for initiating a digital transformation process in the neighborhood. During the process, maturity is measured at different points in time and compared with changes in the aforementioned dimensions to ensure sustainable transformation. Participation, co-creation, and co-production are essential concepts for a successful and inclusive digital transformation in the neighborhood context. This interdisciplinary maturity model helps to improve the assessment and monitoring of sustainable digital transformation processes in smart residential quarters. It enables a more comprehensive recording of the factors that influence the success of such processes and supports the development of targeted measures to promote digital transformation in the neighborhood context.

Keywords: digital transformation, interdisciplinary, maturity model, neighborhood

Procedia PDF Downloads 51
103 Online Factorial Experimental Study Testing the Effectiveness of Pictorial Waterpipe-specific Health Warning Labels Compared with Text-only Labels in the United States of America

Authors: Taghrid Asfar, Olusanya J. Oluwole, Michael Schmidt, Alejandra Casas, Zoran Bursac, Wasim Maziak

Abstract:

Waterpipe (WP) smoking (a.k.a. hookah) has increased dramatically in the US, mainly due to the misperception that it is safer than cigarette smoking. Mounting evidence shows that WP smoking is addictive and harmful. Health warning labels (HWLs) are effective in communicating smoking-related risks. Currently, the FDA requires that WP tobacco packages carry a textual HWL about nicotine. While this represents a good step, it is inadequate given the established harm of WP smoking beyond addiction and the superior performance of pictorial HWLs over text-only ones. We developed 24 WP pictorial HWLs in a Delphi study among an international expert panel. The HWLs were grouped into 6 themes: addiction, harm compared to cigarettes, harm to others, health effects, quitting, and specific harms. This study aims to compare 1) the effect of the pictorial HWLs with that of the FDA HWL, and 2) the effect of pictorial HWLs across the 6 themes. A 2x7 between/within-subjects online factorial experiment was conducted among a national convenience sample of 300 US adults (50% current WP smokers, 50% nonsmokers; 71.1% female; mean age 31.1±3.41 years) in March 2022. The first factor varied WP smoking status (smokers, nonsmokers). The second factor varied the HWL theme and type (text, pictorial). Participants were randomized to view and rate 7 HWLs in random order: 1 FDA text HWL (control) and 6 pictorial HWLs, one from each theme. HWLs were rated based on the message impact framework in five categories: attention, reaction (believability, relevance, fear), perceived effectiveness, intentions to quit WP among current smokers, and intention to not initiate WP among nonsmokers. Measures were assessed on a 5-point Likert scale (1=not at all to 5=very much) for attention and reaction, and on a 7-point Likert scale (1=not at all to 7=very much) for perceived effectiveness and intentions to quit or not initiate WP smoking.
Means and SDs of outcome measures for each HWL type and theme were calculated. Planned comparisons using the Friedman test, followed by pairwise Wilcoxon signed-rank tests for multiple comparisons, were used to examine distributional differences in outcomes between the HWL types and themes. Approximately 74.4% of participants were non-Hispanic Whites, 68.4% had college degrees, and 41.5% were under the poverty level. Participants reported starting WP smoking on average at 20.3±8.19 years of age. Compared with the FDA text HWL, pictorial HWLs elicited higher attention (p<0.0001), fear (p<0.0001), harm perception (p<0.0003), perceived effectiveness (p<0.0001), and intentions to quit (p=0.0014) and not initiate WP smoking (p<0.0003). HWLs in theme 3 (harm to others) achieved the highest ratings in attention (4.14±1), believability (4.15±0.95), overall perceived effectiveness (7.60±2.35), harm perception (7.53±2.43), and intentions to quit (7.35±2.57). HWLs in theme 2 (WP harm compared to cigarettes) achieved the highest rating in discouraging WP smoking initiation (7.32±2.54). Pictorial HWLs were superior to the FDA text-only HWL on several communication outcomes. Pictorial HWLs related to WP harm to others and WP harm compared to cigarettes are promising. These findings provide strong evidence for the potential implementation of WP-specific pictorial HWLs.
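The analysis pipeline described here, an omnibus Friedman test across the seven repeated measures followed by pairwise Wilcoxon signed-rank tests against the FDA control, can be sketched as follows. The ratings are simulated, and the Bonferroni correction is an illustrative choice: the abstract does not name the multiple-comparison adjustment actually used.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Hypothetical ratings: 300 participants x 7 HWLs (1 FDA text + 6 pictorial
# themes), attention scored on a 5-point Likert scale.
rng = np.random.default_rng(42)
ratings = np.clip(rng.normal(loc=[2.5, 3.8, 3.9, 4.1, 3.7, 3.6, 3.9],
                             scale=0.8, size=(300, 7)).round(), 1, 5)

# Omnibus Friedman test across the 7 repeated measures
stat, p = friedmanchisquare(*ratings.T)
print(f"Friedman chi2={stat:.1f}, p={p:.4g}")

# Pairwise Wilcoxon signed-rank tests: each pictorial theme vs. the FDA
# control, with a simple Bonferroni correction for the 6 comparisons
for theme in range(1, 7):
    w, p_pair = wilcoxon(ratings[:, theme], ratings[:, 0])
    print(f"theme {theme} vs FDA: corrected p={min(p_pair * 6, 1.0):.4g}")
```

The Friedman test is the nonparametric analogue of a repeated-measures ANOVA and is appropriate for ordinal Likert outcomes; the signed-rank follow-ups exploit the within-subject pairing of each pictorial rating with the control rating.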

Keywords: health communication, waterpipe smoking, factorial experiment, reaction, harm perception, tobacco regulations

Procedia PDF Downloads 89
102 Production Factor Coefficients Transition through the Lens of State Space Model

Authors: Kanokwan Chancharoenchai

Abstract:

Economic growth can be considered an important element of a country's development process. For a developing country like Thailand to ensure continuous economic growth, the government usually implements various policies to stimulate the economy. These may take the form of fiscal, monetary, trade, and other policies. Given these different aspects, understanding the factors relating to economic growth could allow the government to introduce a proper plan for future economic stimulus schemes. Consequently, this issue has caught the interest not only of policymakers but also of academics. This study, therefore, investigates explanatory variables for economic growth in Thailand from 2005 to 2017, a total of 52 quarters. The findings contribute to the field of economic growth and provide helpful information to policymakers. The investigation is estimated through a production function in the non-linear Cobb-Douglas form. The rate of growth is indicated by the change of GDP in natural logarithmic form. The relevant factors included in the estimation cover the three traditional means of production and implicit effects such as human capital, international activity, and technological transfer from developed countries. In addition, this investigation takes internal and external instabilities into account, proxied by an unobserved inflation estimate and uncertainty in the real effective exchange rate (REER) of the Thai baht, respectively. The unobserved inflation series is obtained from an AR(1)-ARCH(1) model, while the unobserved REER uncertainty of the Thai baht is obtained from a naive OLS-GARCH(1,1) model. According to the empirical results, the AR(|2|) equation, which includes seven significant variables, namely capital stock, labor, imports of capital goods, trade openness, REER uncertainty of the Thai baht, lagged GDP, and a dummy for the 2009 world financial crisis, presents the most suitable model.
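The AR(1)-ARCH(1) specification used to extract the unobserved inflation series is not written out in the abstract; a standard formulation, with illustrative symbols, is:

```latex
% AR(1) mean equation for inflation \pi_t with ARCH(1) conditional variance h_t
\pi_t = \phi_0 + \phi_1 \pi_{t-1} + \varepsilon_t,
\qquad \varepsilon_t \mid \mathcal{F}_{t-1} \sim N(0, h_t),
\qquad h_t = \omega + \alpha_1 \varepsilon_{t-1}^2
```

The fitted conditional variance $h_t$ then serves as the internal-instability proxy, analogous to the GARCH(1,1) conditional variance used as the uncertainty proxy for the REER of the Thai baht.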
The autoregressive model assumes constant coefficients, an assumption that could introduce bias. This is not the case for the recursive-coefficient model from the state space framework, which allows coefficients to change over time. The state space model thereby reveals the productivity, or effect, of each significant factor in more detail. The state coefficients are estimated based on the AR(|2|) equation, with the exception of the lagged GDP and the 2009 world financial crisis dummy. The findings shed light on the fact that these factors have remained stable over time since the occurrence of the world financial crisis and the political turmoil in Thailand; these two events could have lowered confidence in the Thai economy. Moreover, the state coefficients highlight a sluggish rate of machinery replacement and the relatively low technology of capital goods imported from abroad. The Thai government should apply proactive policies, for instance via taxation and targeted credit policies, to improve technological advancement. Another interesting piece of evidence is the issue of trade openness, which shows a negative transition effect over the sample period. This could be explained by a loss of price competitiveness to imported goods, especially under the widespread implementation of free trade agreements. The Thai government should carefully manage regulations and investment incentive policy, focusing on strengthening small and medium-sized enterprises.
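The recursive-coefficient idea can be sketched with a standard Kalman filter in which each regression coefficient follows a random walk, so its filtered path traces the "transition" of that factor's effect over time. The study's exact specification is not given in the abstract, so the noise variances, priors, and simulated data below are illustrative.

```python
import numpy as np

def tv_coeff_kalman(y, X, q=1e-4, r=1.0):
    """Kalman filter for a regression with time-varying coefficients:
        y_t = x_t' beta_t + eps_t,    eps_t ~ N(0, r)
        beta_t = beta_{t-1} + eta_t,  eta_t ~ N(0, q I)  (random-walk transition)
    Returns the filtered coefficient path (T x k)."""
    T, k = X.shape
    beta = np.zeros(k)              # initial state estimate
    P = np.eye(k) * 1e2             # diffuse initial state covariance
    Q = np.eye(k) * q               # transition noise covariance
    path = np.empty((T, k))
    for t in range(T):
        x = X[t]
        P = P + Q                           # predict step (random-walk state)
        S = x @ P @ x + r                   # innovation variance
        K = P @ x / S                       # Kalman gain
        beta = beta + K * (y[t] - x @ beta) # update state with the observation
        P = P - np.outer(K, x @ P)          # update state covariance
        path[t] = beta
    return path

# Demo on simulated data: the slope drifts from 1.0 to 2.0 over the sample,
# and the filtered path tracks that transition.
rng = np.random.default_rng(1)
T = 200
x1 = rng.normal(size=T)
slope = np.linspace(1.0, 2.0, T)
y = 0.5 + slope * x1 + 0.1 * rng.normal(size=T)
X = np.column_stack([np.ones(T), x1])
path = tv_coeff_kalman(y, X, q=1e-3, r=0.01)
```

A constant-coefficient OLS fit of the same data would report a single averaged slope; the filtered path instead shows when and how the effect changed, which is the diagnostic the abstract draws on for trade openness and capital-goods imports.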

Keywords: autoregressive model, economic growth, state space model, Thailand

Procedia PDF Downloads 129
101 A Longitudinal Exploration into Computer-Mediated Communication Use (CMC) and Relationship Change between 2005-2018

Authors: Laurie Dempsey

Abstract:

Relationships are considered to be beneficial for emotional wellbeing, happiness, and physical health. However, they are also complicated: individuals engage in a multitude of complex and volatile relationships during their lifetime, and the change to or ending of these dynamics can be deeply disruptive. As the internet is further integrated into everyday life and relationships are increasingly mediated, Media Studies' and Sociology's research interests intersect and converge. This study longitudinally explores how relationship change over time corresponds with the developing UK technological landscape between 2005-2018. Since the early 2000s, the use of computer-mediated communication (CMC) in the UK has dramatically reshaped interaction. Its use has compelled individuals to renegotiate how they consider their relationships: some argue it has allowed vast networks to be accumulated and strengthened; others contend that it has eroded the core values and norms associated with communication, damaging relationships. This research collaborated with the UK media regulator Ofcom, utilising the longitudinal dataset from its Adult Media Lives study to explore how relationships and CMC use developed over time. This is a unique qualitative dataset covering 2005-2018, in which the same 18 participants took part in annual filmed in-home depth interviews. The interviews' raw video footage was examined year-on-year to consider how the same people changed their reported behaviour and outlooks towards their relationships, and how this coincided with CMC featuring more prominently in their everyday lives. Each interview was transcribed, thematically analysed, and coded using NVivo 11 software. This study allowed for a comprehensive exploration of these individuals' changing relationships over time, as participants grew older, experienced marriages or divorces, conceived and raised children, or lost loved ones.
It found that as technology developed between 2005-2018, everyday CMC use was increasingly normalised and incorporated into relationship maintenance. It played a crucial role in altering relationship dynamics, even being a factor in the breakdown of several ties. Three key relationships were identified as being shaped by CMC use: parent-child, extended family, and friendships. Over the years there were substantial instances of relationship conflict: for parents renegotiating their dynamic with their child as they tried to both restrict and encourage the child's technology use; for estranged family members 'forced' together in the online sphere; and for friendships compelled to publicly display their relationship on social media for fear of social exclusion. However, it was also evident that CMC acted as a crucial lifeline for these participants, providing opportunities to strengthen and maintain their bonds by previously unachievable means, across both time and distance. A longitudinal study of this length and nature utilising the same participants does not currently exist, so this work provides crucial insight into how and why relationship dynamics alter over time. This unique and topical piece of research draws together Sociology and Media Studies, illustrating how the UK's changing technological landscape can reshape one of the most basic human compulsions. The collaboration with Ofcom allows for insight that can be utilised in both academia and policymaking alike, making this research relevant and impactful across a range of academic fields and industries.

Keywords: computer mediated communication, longitudinal research, personal relationships, qualitative data

Procedia PDF Downloads 104