Search results for: complex low-rise building
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8884

1534 Integration Process and Analytic Interface of different Environmental Open Data Sets with Java/Oracle and R

Authors: Pavel H. Llamocca, Victoria Lopez

Abstract:

The main objective of our work is the comparative analysis of environmental data from Open Data bases belonging to different governments, which requires integrating data from a variety of sources. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use, and the number of applications based on Open Data is growing accordingly. However, each government has its own publication procedures, and because there are no international standards specifying the formats of Open Data sets, the result is a wide variety of formats. Due to this variety, we must build a data integration process that can bring all kinds of formats together. Some software tools have been developed to support the integration process, e.g., Data Tamer and Data Wrangler. The problem with these tools is that they require a data scientist to take part in the integration process as a final step. In our case, we do not want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to each government's data sources in order to achieve an automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, and wind speed. For the past two years, the government of Madrid has been publishing its Open Data bases on environmental indicators in real time. Other governments (such as Andalucia or Bilbao) have published similar environmental Open Data sets, but all of these data sets have different formats. Our solution is able to integrate all of them and, furthermore, allows the user to run and visualize analyses over the real-time data.
Once the integration task is done, all the data from any government share the same format, and the analysis process can be initiated far more efficiently. The tool presented in this work therefore has two goals: 1. an integration process; and 2. a graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle, and the graphic and analytic interface with Java (JSP). However, in order to open up our software tool, we also developed a second implementation in R as a mature open-source technology. R is a powerful open-source programming language that allows us to process and analyze large amounts of data with high performance. There are also R libraries, such as Shiny, for building graphical interfaces. A performance comparison between the two implementations was made, and no significant differences were found. In addition, our work provides an official real-time integrated data set of environmental data in Spain to any developer, so that they can build their own applications.
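The integration step described above, normalizing each government's feed into one shared schema, can be sketched as follows. The field names (fecha, magnitud, valor, time, sensor, reading), the CSV/JSON layouts, and the common schema are all hypothetical illustrations, not the actual Madrid or Bilbao Open Data formats, and the sketch uses plain Python rather than the Java/Oracle, Hadoop, or R implementations described in the abstract.

```python
import csv
import io
import json

# Target schema shared by all sources after integration (hypothetical).
COMMON_FIELDS = {"city", "timestamp", "indicator", "value"}

def normalize_madrid_csv(text):
    """Hypothetical Madrid feed: semicolon-separated CSV."""
    rows = []
    for rec in csv.DictReader(io.StringIO(text), delimiter=";"):
        rows.append({
            "city": "Madrid",
            "timestamp": rec["fecha"],
            "indicator": rec["magnitud"],
            "value": float(rec["valor"]),
        })
    return rows

def normalize_bilbao_json(text):
    """Hypothetical Bilbao feed: JSON list of measurements."""
    return [
        {
            "city": "Bilbao",
            "timestamp": m["time"],
            "indicator": m["sensor"],
            "value": float(m["reading"]),
        }
        for m in json.loads(text)
    ]

def integrate(*batches):
    """Merge already-normalized batches into one uniform data set."""
    merged = [row for batch in batches for row in batch]
    # Every record must conform to the common schema.
    assert all(set(r) == COMMON_FIELDS for r in merged)
    return merged
```

Once every source has its own small normalizer, adding a new government means writing one more function against the shared schema; the analysis layer never sees the original formats.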

Keywords: open data, R language, data integration, environmental data

Procedia PDF Downloads 311
1533 Lean Implementation in a Nurse Practitioner Led Pediatric Primary Care Clinic: A Case Study

Authors: Lily Farris, Chantel E. Canessa, Rena Heathcote, Susan Shumay, Suzanna V. McRae, Alissa Collingridge, Minna K. Miller

Abstract:

Objective: To describe how the Lean approach can be applied to improve access, quality, and safety of care in an ambulatory pediatric primary care setting. Background: Lean was originally developed by Toyota manufacturing in Japan and subsequently adapted for use in the healthcare sector. Lean is a systematic approach focused on identifying and reducing waste within organizational processes, improving patient-centered care and efficiency. Limited literature is available on the implementation of Lean methodologies in a pediatric ambulatory care setting. Methods: A strategic continuous improvement event, or Rapid Process Improvement Workshop (RPIW), was launched with the aim of evaluating and structurally supporting clinic workflow, capacity building, and sustainability, and ultimately improving access to care and enhancing the patient experience. The Lean process consists of five specific activities: current state/process assessment (value stream map); development of a future state map (value stream map after waste reduction); identification, quantification, and prioritization of process improvement opportunities; implementation and evaluation of process changes; and audits to sustain the gains. Staff engagement is a critical component of the Lean process. Results: Through the implementation of the RPIW and the shifting of workload among the administrative team, four hours of wasted time spent moving between desks to do work were eliminated from the Administrative Clerk's role. To streamline clinic flow, the Nursing Assistants completed patient measurements and vitals for the Nurse Practitioners, reducing patient wait times and adding value to the patients' visits with the Nurse Practitioners. Additionally, through the Nurse Practitioners' engagement in the Lean processes, a need was recognized to articulate the clinic's vision and mission and to align the NP role and scope of practice with the agency's and the Ministry of Health's strategic plans.
Conclusions: Continuous improvement work in the Pediatric Primary Care NP Clinic has provided a unique opportunity to improve the quality of care delivered and has facilitated further alignment of the daily continuous improvement work with the strategic priorities of the Ministry of Health.

Keywords: ambulatory care, lean, pediatric primary care, system efficiency

Procedia PDF Downloads 297
1532 Lessons Learned from Push-Plus Implementation in Northern Nigeria

Authors: Aisha Giwa, Mohammed-Faosy Adeniran, Olufunke Femi-Ojo

Abstract:

Four decades ago, the World Health Organization (WHO) launched the Expanded Programme on Immunization (EPI). The EPI blueprint laid out the technical and managerial functions necessary to routinely vaccinate children with a limited number of vaccines, providing protection against diphtheria, tetanus, whooping cough, measles, polio, and tuberculosis, and to prevent maternal and neonatal tetanus by vaccinating women of childbearing age with tetanus toxoid. Despite global efforts, Routine Immunization (RI) coverage in two WHO regions, the African Region and the South-East Asia Region, remains short of its targets. As a result, the WHO Regional Director for Africa declared 2012 the year for intensifying RI in these regions, which coincided with the WHO Executive Board's declaration of polio as a programmatic emergency. In order to intensify routine immunization, the National Routine Immunization Strategic Plan (2013-2015) stated that its core priority is to ensure 100% adequacy and availability of vaccines for safe immunization. To achieve 100% availability, the "PUSH System" and then "Push-Plus" were adopted for vaccine distribution, replacing the inefficient "PULL" method. The NPHCDA plays the key role in coordinating activities in advocacy, capacity building, and the engagement of third-party logistics (3PL) providers for the states, as well as the monitoring and evaluation of the vaccine delivery process. eHealth Africa (eHA) is a 3PL service provider engaged by State Primary Health Care Boards (SPHCDB) to ensure vaccine availability through the Vaccine Direct Delivery (VDD) project, which is essential to successful routine immunization services. The VDD project ensures the availability and adequate supply of high-quality vaccines and immunization-related materials to last-mile facilities.
eHA's commitment to the VDD project prompted an assessment of the project's overall performance, an evaluation of its processes to suggest necessary improvements, and an appraisal of its general impact across Kano State (where eHA has transitioned delivery to the state), Bauchi State (where eHA currently manages delivery to all LGAs except three managed by the state), Sokoto State (where eHA currently covers all LGAs), and Zamfara State (currently in-sourced and managed solely by the state).

Keywords: cold chain logistics, health supply chain system strengthening, logistics management information system, vaccine delivery traceability and accountability

Procedia PDF Downloads 300
1531 Food Foam Characterization: Rheology, Texture and Microstructure Studies

Authors: Rutuja Upadhyay, Anurag Mehra

Abstract:

Solid food foams, or cellular foods, are colloidal systems that impart structure, texture, and mouthfeel to many food products such as bread, cakes, ice cream, and meringues. Their heterogeneous morphology makes quantifying structure/mechanical relationships complex. The porous structure of solid food foams is strongly influenced by processing conditions, ingredient composition, and their interactions. Sensory perception of food foams depends on bubble size, shape, orientation, quantity, and distribution, which together determine the texture of foamed foods. The state and structure of the solid matrix control the deformation behavior of the food, such as elasticity/plasticity or fracture, which in turn affects the force-deformation curves. The obvious step in relating the mechanical properties to the porous structure is to quantify them simultaneously. Here, we study food foams such as bread dough, baked bread, and steamed rice cakes to determine the link between ingredients and their respective effects on the rheology, microstructure, bubble size, and texture of the final product. Dynamic rheometry (small-amplitude oscillatory shear, SAOS), confocal laser scanning microscopy, flatbed scanning, image analysis, and texture profile analysis (TPA) have been used to characterize the foods studied. In all the above systems, a common observation was that a smaller mean bubble diameter makes the product harder, as evidenced by increases in the storage and loss moduli (G′, G″), whereas a larger mean bubble diameter makes the product softer, with lower moduli values. The bubble size distribution also affects the texture of foods: bread doughs with hydrocolloids (xanthan gum, alginate) were found to yield a more uniform bubble size distribution. Bread baking experiments were done to study the rheological changes and mechanisms involved in the structural transition of dough to crumb.
Steamed rice cakes with xanthan gum (XG) added at 0.1% concentration showed lower hardness, with a narrower pore size distribution and a larger mean pore diameter. Thus, control of bubble size could be an important parameter defining final food texture.

Keywords: food foams, rheology, microstructure, texture

Procedia PDF Downloads 330
1530 Examination of the South African Fire Legislative Framework

Authors: Mokgadi Julia Ngoepe-Ntsoane

Abstract:

The article aims to make a case for a legislative framework for the fire sector in South Africa. A robust legislative framework is essential for empowering those with an obligatory mandate within the sector. This article contributes to the body of knowledge on policy reviews, particularly with regard to the legal framework; it has been observed over time that scholarly contributions in this field are limited. Document analysis was the methodology selected for investigating the various legal frameworks existing in the country. It was established that national legislation on the fire industry does not, in fact, exist in South Africa. The documents analysed revealed that the sector is dominated by cartels who exploit new entrants to the market, particularly SMEs. These cartels are monopolising the system: having long operated within it, they have turned it into self-owned entities. Commitment to addressing the challenges faced by fire services, and to creating a framework for the evolving role that fire brigade services are expected to play in building safer and more sustainable communities, is vital. Legislation for the fire sector ought to be concluded with immediate effect. The outdated national fire legislation has enabled the monopolisation and manipulation of the system by dominant organisations, causing painful discrimination against, and exploitation of, the smaller service providers trying to enter the market. This barrier to entry has long-term negative effects on national priority areas such as employment creation and poverty reduction. The monopolisation and marginalisation practised by cartels in the sector call for urgent attention by government because, if left unattended, they will leave many people, particularly women and youth, disadvantaged and frustrated. This pattern of keeping newcomers down within the fire sector has wreaked havoc and is devastating.
It is driven by cartels that have been in the sector for some time and know the strengths and weaknesses of its processes, its shortcuts, and the advantages and consequences of various actions. They take advantage of new entrants, who in turn find it difficult to manoeuvre, find the market dissonant, and end up giving up their good ideas and intentions. Many pieces of industry-specific legislation exist in areas such as housing, forestry, agriculture, health, security, and the environment, regulating systems within the institutions involved. Other regulations exist as by-laws guiding management within the municipalities.

Keywords: sustainable job creation, growth and development, transformation, risk management

Procedia PDF Downloads 169
1529 Macroeconomic Implications of Artificial Intelligence on Unemployment in Europe

Authors: Ahmad Haidar

Abstract:

Modern economic systems are characterized by growing complexity, and addressing their challenges requires innovative approaches. This study examines the implications of artificial intelligence (AI) on unemployment in Europe from a macroeconomic perspective, employing data modeling techniques to understand the relationship between AI integration and labor market dynamics. To understand the AI-unemployment nexus comprehensively, this research considers factors such as sector-specific AI adoption, skill requirements, workforce demographics, and geographical disparities. The study utilizes a panel data model, incorporating data from European countries over the last two decades, to explore the potential short-term and long-term effects of AI implementation on unemployment rates. In addition to investigating the direct impact of AI on unemployment, the study also delves into the potential indirect effects and spillover consequences. It considers how AI-driven productivity improvements and cost reductions might influence economic growth and, in turn, labor market outcomes. Furthermore, it assesses the potential for AI-induced changes in industrial structures to affect job displacement and creation. The research also highlights the importance of policy responses in mitigating potential negative consequences of AI adoption on unemployment. It emphasizes the need for targeted interventions such as skill development programs, labor market regulations, and social safety nets to enable a smooth transition for workers affected by AI-related job displacement. Additionally, the study explores the potential role of AI in informing and transforming policy-making to ensure more effective and agile responses to labor market challenges. 
In conclusion, this study provides a comprehensive analysis of the macroeconomic implications of AI on unemployment in Europe, highlighting the importance of understanding the nuanced relationships between AI adoption, economic growth, and labor market outcomes. By shedding light on these relationships, the study contributes valuable insights for policymakers, educators, and researchers, enabling them to make informed decisions in navigating the complex landscape of AI-driven economic transformation.
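A minimal sketch of the kind of panel estimation the study describes is a one-regressor fixed-effects model estimated via the within transformation: demean the outcome and the regressor within each country, then regress through the origin. The variables here (an unemployment-like outcome against an AI-adoption-like regressor, with hypothetical country labels) are illustrative stand-ins, not the study's actual data or full specification.

```python
import numpy as np

def fixed_effects_slope(y, x, entities):
    """One-regressor fixed-effects estimator via the within transformation:
    demean y and x within each entity (country), then run OLS through
    the origin on the demeaned data."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    entities = np.asarray(entities)
    y_d = y.copy()
    x_d = x.copy()
    for e in np.unique(entities):
        mask = entities == e
        y_d[mask] -= y[mask].mean()  # remove country-specific level
        x_d[mask] -= x[mask].mean()
    return float(x_d @ y_d / (x_d @ x_d))
```

Demeaning absorbs each country's fixed level of unemployment, so the slope reflects only within-country variation, which is the point of a panel design over a simple cross-sectional regression.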

Keywords: artificial intelligence, unemployment, macroeconomic analysis, European labor market

Procedia PDF Downloads 70
1528 Critical Evaluation of the Transformative Potential of Artificial Intelligence in Law: A Focus on the Judicial System

Authors: Abisha Isaac Mohanlal

Abstract:

Amidst the suspicion and cynicism raised by the legal fraternity, Artificial Intelligence has found its way into the legal system and has revolutionized conventional forms of legal services delivery. Be it legal argumentation and research or the resolution of complex legal disputes, artificial intelligence has crept into all areas of modern-day legal services. Its impact has been felt largely by way of big data, legal expert systems, prediction tools, e-lawyering, automated mediation, etc., and lawyers around the world are forced to upgrade themselves and their firms to stay in line with the growth of technology in law. Researchers predict that the future of legal services will belong to artificial intelligence and that the age of human lawyers will soon fade. But as far as the Judiciary is concerned, even in developed countries, the system has not fully drifted away from the orthodoxy of preferring Natural Intelligence over Artificial Intelligence. Since judicial decision-making involves many unstructured and rather unprecedented situations that have no single correct answer, and since looming questions of legal interpretation arise in most cases, discretion and Emotional Intelligence play an unavoidable role. Added to that, several ethical, moral, and policy issues must be confronted before permitting the intrusion of Artificial Intelligence into the judicial system. As of today, the human judge is the unrivalled master of most judicial systems around the globe. Yet, scientists of Artificial Intelligence claim that robot judges can replace human judges, irrespective of how daunting the complexity of the issues or how sophisticated the cognitive competence required.
They go on to contend that even if the system is too rigid to allow robot judges to substitute for human judges in the near future, Artificial Intelligence may still aid in other judicial tasks such as drafting judicial documents, intelligent document assembly, case retrieval, etc., and may also promote overall flexibility, efficiency, and accuracy in the disposal of cases. By deconstructing the major challenges that Artificial Intelligence has to overcome in order to successfully enter the human-dominated judicial sphere, and by critically evaluating the potential differences it would make in the system of justice delivery, the author argues that the penetration of Artificial Intelligence into the Judiciary could be enhancive and reparative, if not fully transformative.

Keywords: artificial intelligence, judicial decision making, judicial systems, legal services delivery

Procedia PDF Downloads 221
1527 Unspoken Playground Rules Prompt Adolescents to Avoid Physical Activity: A Focus Group Study of Constructs in the Prototype Willingness Model

Authors: Catherine Wheatley, Emma L. Davies, Helen Dawes

Abstract:

The health benefits of exercise are widely recognised, but numerous interventions have failed to halt a sharp decline in physical activity during early adolescence. Many such projects are underpinned by the Theory of Planned Behaviour, yet this model of rational decision-making leaves variance in behaviour unexplained. This study investigated whether the Prototype Willingness Model, which proposes a second, reactive decision-making path to account for spontaneous responses to the social environment, has the potential to improve understanding of adolescent exercise behaviour in school, by exploring constructs in the model with young people. PE teachers in four Oxfordshire schools each nominated six pupils who were active in school and six who were inactive to participate in the study. Of these, 45 (22 male), aged 12-13, took part in eight focus group discussions. These were transcribed and subjected to deductive thematic analysis to search for themes relating to the Prototype Willingness Model. Participants appeared to make rational decisions about commuting to school or attending sports clubs, but spontaneous choices to be inactive during both break and PE. These reactive decisions seemed influenced by a social context described as more 'judgmental' than primary school, characterised by anxiety about physical competence, negative peer evaluation, and inactive playground norms. Participants described their images of typical active and inactive adolescents: active images included negative social characteristics such as 'show-off'. There was little concern about the long-term risks of inactivity, although participants seemed to recognise that physical activity is healthy. The Prototype Willingness Model might more fully explain young adolescents' physical activity in school than rational behavioural models, indicating potential for physical activity interventions that target social anxieties arising from the changing playground environment.
Images of active types could be more complex than earlier research has suggested, and their negative characteristics might influence willingness to be active.

Keywords: adolescence, physical activity, prototype willingness model, school

Procedia PDF Downloads 345
1526 Knowledge Co-Production on Future Climate-Change-Induced Mass-Movement Risks in Alpine Regions

Authors: Elisabeth Maidl

Abstract:

The interdependence of climate change and natural hazards brings large uncertainties regarding future risks. Regional stakeholders, experts in natural hazard management, and scientists each hold specific knowledge, and corresponding mental models, of such risks. This diversity of views makes it difficult to find common and broadly accepted prevention measures. If the specific knowledge of these types of actors is shared in an interactive knowledge production process, a broader and common understanding of complex risks becomes possible, allowing agreement on long-term solution strategies. Previous studies on mental models confirm that actors with specific vulnerabilities perceive different aspects of a topic and accordingly prefer different measures. Bringing these perspectives together has the potential to reduce uncertainty and to close blind spots in solution finding. However, studies examining the mental models of regional actors on concrete future mass movement risks have so far been lacking. The project tests and evaluates the feasibility of knowledge co-creation for the anticipatory prevention of climate-change-induced mass movement risks in the Alps. As a key element, the mental models of the three included groups of actors are compared. Integrated into the research program Climate Change Impacts on Alpine Mass Movements (CCAMM2), this project is carried out in two Swiss mountain regions. The project is structured in four phases: 1) the preparatory phase, in which the participants are identified; 2) the baseline phase, in which qualitative interviews and a quantitative pre-survey are conducted with the actors; 3) the knowledge co-creation phase, in which the actors take part in a moderated exchange meeting and a participatory modelling workshop on specific risks in the region; and 4) finally, a public information event.
Results show that participants' mental models are shaped by place of origin, profession, beliefs, and values, which result in distinct narratives on climate change and hazard risks. Further, the more intensively participants interact with each other, the more likely they are to change their views. This provides empirical evidence on how changes in opinions and mindsets can be induced and fostered.

Keywords: climate change, knowledge-co-creation, participatory process, natural hazard risks

Procedia PDF Downloads 66
1525 Time to Second Line Treatment Initiation Among Drug-Resistant Tuberculosis Patients in Nepal

Authors: Shraddha Acharya, Sharad Kumar Sharma, Ratna Bhattarai, Bhagwan Maharjan, Deepak Dahal, Serpahine Kaminsa

Abstract:

Background: Drug-resistant (DR) tuberculosis (TB) continues to be a threat in Nepal, with an estimated 2,800 new cases every year. The treatment of DR-TB with second-line TB drugs is complex, takes longer, and has a comparatively lower treatment success rate than treatment for drug-susceptible TB. Delays in treatment initiation for DR-TB patients may further result in unfavorable treatment outcomes and increased transmission. This study therefore aims to determine the median time taken to initiate second-line treatment among patients diagnosed with Rifampicin-Resistant (RR) TB and to assess the proportion of treatment delays among the various types of DR-TB cases. Method: A retrospective cohort study was conducted using national routine electronic data (the DRTB and TB Laboratory Patient Tracking System, DHIS2) on drug-resistant tuberculosis patients between January 2020 and December 2022. The time to treatment initiation was computed as the number of days from first diagnosis as RR-TB through the Xpert MTB/RIF test to enrollment on second-line treatment. Treatment delay was defined as initiation more than 7 days after diagnosis. Results: Among the RR-TB cases (N=954) diagnosed via Xpert nationwide, 61.4% were enrolled under the shorter treatment regimen (STR), 33.0% under the longer treatment regimen (LTR), 5.1% under pre-extensively drug-resistant TB (Pre-XDR) treatment, and 0.4% under extensively drug-resistant TB (XDR) treatment. The median time from diagnosis to treatment initiation was 6 days (IQR: 2-15.8) overall: 5 days (IQR: 2.0-13.3) for STR, 6 days (IQR: 3.0-15.0) for LTR, 30 days (IQR: 5.5-66.8) for Pre-XDR, and 4 days (IQR: 2.5-9.0) for XDR TB cases. Overall treatment delay (>7 days after diagnosis) was observed in 42.4% of the patients; cases enrolled under Pre-XDR treatment contributed substantially to this delay (72.0%), followed by LTR (43.6%), STR (39.1%), and XDR (33.3%).
Conclusion: Timely diagnosis and prompt treatment initiation remain a fundamental focus of the National TB Program. The findings of this study, however, suggest gaps in the timeliness of treatment initiation for drug-resistant TB patients, which could lead to adverse treatment outcomes; the delay in second-line treatment initiation for Pre-XDR TB patients is especially alarming. This study therefore generates evidence of existing gaps in treatment initiation. It highlights the need for specific policies and interventions that create an effective linkage between RR-TB diagnosis and enrollment on second-line TB treatment, with intensified follow-up efforts from health providers and the expansion of decentralized, adequate, and accessible diagnostic and treatment services for DR-TB, especially for Pre-XDR TB cases, given the long treatment delays observed.
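The delay metrics reported in the study (median days from diagnosis to enrollment, and the share of patients delayed beyond 7 days) can be computed along these lines. The record fields (diagnosed, enrolled) are hypothetical names for illustration, not the DHIS2 schema, and the dates below are made up.

```python
from datetime import date
from statistics import median

def summarize_delays(records, threshold=7):
    """Median days from RR-TB diagnosis to second-line enrollment, and
    the percentage delayed beyond the threshold (>7 days in the study)."""
    delays = sorted((r["enrolled"] - r["diagnosed"]).days for r in records)
    delayed = sum(d > threshold for d in delays)
    return {
        "median_days": median(delays),
        "delayed_pct": round(100 * delayed / len(delays), 1),
    }
```

Per-regimen breakdowns like those in the abstract follow by grouping the records by regimen before calling the same summary.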

Keywords: drug-resistant, tuberculosis, treatment initiation, Nepal, treatment delay

Procedia PDF Downloads 78
1524 Different Data-Driven Bivariate Statistical Approaches to Landslide Susceptibility Mapping (Uzundere, Erzurum, Turkey)

Authors: Azimollah Aleshzadeh, Enver Vural Yavuz

Abstract:

The main goal of this study is to produce landslide susceptibility maps using three data-driven bivariate statistical approaches, namely the entropy weight method (EWM), the evidence belief function (EBF), and the information content model (ICM), for Uzundere county, Erzurum province, in the north-eastern part of Turkey. Past landslide occurrences were identified and mapped from an interpretation of high-resolution satellite images and earlier reports, as well as from field surveys. In total, 42 landslide incidence polygons were mapped using ArcGIS 10.4.1 software and randomly split into a construction dataset of 70% (30 landslide incidences) for building the EWM, EBF, and ICM models; the remaining 30% (12 landslide incidences) were used for verification. Twelve layers of landslide-predisposing parameters were prepared: total surface radiation, maximum relief, soil groups, standard curvature, distance to stream/river sites, distance to the road network, surface roughness, land use pattern, engineering geological rock group, topographical elevation, slope orientation, and terrain slope gradient. The relationships between the landslide-predisposing parameters and the landslide inventory map were determined using the three statistical models (EWM, EBF, and ICM). The model results were validated against the landslide incidences that were not used during model construction. In addition, receiver operating characteristic curves were applied, and the area under the curve (AUC) was determined for the different susceptibility maps using the success (construction data) and prediction (verification data) rate curves. The results revealed AUCs of 0.7055, 0.7221, and 0.7368 for the success rates and 0.6811, 0.6997, and 0.7105 for the prediction rates of the EWM, EBF, and ICM models, respectively.
Consequently, the landslide susceptibility maps were classified into five susceptibility classes: very low, low, moderate, high, and very high. Additionally, the portion of construction and verification landslide incidences falling in the high and very high susceptibility classes of each map was determined. The results showed that the EWM, EBF, and ICM models all produced satisfactory accuracy. The resulting landslide susceptibility maps may be useful for future natural hazard mitigation studies and for environmental protection planning.
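Of the three bivariate models, the information content model admits the simplest sketch: for each class of a predisposing parameter (e.g. a slope-gradient band), the weight is the log ratio of the class's share of landslides to its share of the study area, so positive weights mark classes where landslides are over-represented. The class names and cell counts below are illustrative, not the study's data.

```python
import math

def icm_weights(class_cells, class_landslides):
    """Information content weight per parameter class:
    ln( (landslides_in_class / total_landslides) /
        (cells_in_class / total_cells) ).
    Classes with no landslides get -inf (maximally unfavourable)."""
    total_cells = sum(class_cells.values())
    total_ls = sum(class_landslides.values())
    weights = {}
    for cls, cells in class_cells.items():
        ls_share = class_landslides.get(cls, 0) / total_ls
        cell_share = cells / total_cells
        weights[cls] = (
            math.log(ls_share / cell_share) if ls_share > 0 else float("-inf")
        )
    return weights
```

A susceptibility map is then obtained by summing, for every map cell, the weights of the classes it falls into across all twelve parameter layers.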

Keywords: entropy weight method, evidence belief function, information content model, landslide susceptibility mapping

Procedia PDF Downloads 130
1523 Multiscale Modeling of Damage in Textile Composites

Authors: Jaan-Willem Simon, Bertram Stier, Brett Bednarcyk, Evan Pineda, Stefanie Reese

Abstract:

Textile composites, in which the reinforcing fibers are woven or braided, have become very popular in numerous applications in aerospace, automotive, and maritime industry. These textile composites are advantageous due to their ease of manufacture, damage tolerance, and relatively low cost. However, physics-based modeling of the mechanical behavior of textile composites is challenging. Compared to their unidirectional counterparts, textile composites introduce additional geometric complexities, which cause significant local stress and strain concentrations. Since these internal concentrations are primary drivers of nonlinearity, damage, and failure within textile composites, they must be taken into account in order for the models to be predictive. The macro-scale approach to modeling textile-reinforced composites treats the whole composite as an effective, homogenized material. This approach is very computationally efficient, but it cannot be considered predictive beyond the elastic regime because the complex microstructural geometry is not considered. Further, this approach can, at best, offer a phenomenological treatment of nonlinear deformation and failure. In contrast, the mesoscale approach to modeling textile composites explicitly considers the internal geometry of the reinforcing tows, and thus, their interaction, and the effects of their curved paths can be modeled. The tows are treated as effective (homogenized) materials, requiring the use of anisotropic material models to capture their behavior. Finally, the micro-scale approach goes one level lower, modeling the individual filaments that constitute the tows. This paper will compare meso- and micro-scale approaches to modeling the deformation, damage, and failure of textile-reinforced polymer matrix composites. 
For the mesoscale approach, the woven composite architecture will be modeled using the finite element method, and an anisotropic damage model for the tows will be employed to capture the local nonlinear behavior. For the micro-scale, two different models will be used: one based on the finite element method, the other on an embedded semi-analytical approach. The goal will be the comparison and evaluation of these approaches to modeling textile-reinforced composites in terms of accuracy, efficiency, and utility.

Keywords: multiscale modeling, continuum damage model, damage interaction, textile composites

Procedia PDF Downloads 350
1522 Harnessing the Power of Mixed Ligand Complexes: Enhancing Antimicrobial Activities with Thiosemicarbazones

Authors: Sakshi Gupta, Seema Joshi

Abstract:

Thiosemicarbazones (TSCs) have garnered significant attention in coordination chemistry due to their versatile coordination modes and pharmacological properties. Mixed ligand complexes of TSCs represent a promising area of research, offering enhanced antimicrobial activities compared to their parent compounds. This review provides an overview of the synthesis, characterization, and antimicrobial properties of mixed ligand complexes incorporating thiosemicarbazones. The synthesis of mixed ligand complexes typically involves the reaction of a metal salt with TSC ligands and additional ligands, such as nitrogen- or oxygen-based ligands. Various transition metals, including copper, nickel, and cobalt, have been employed to form mixed ligand complexes with TSCs. Characterization techniques such as spectroscopy, X-ray crystallography, and elemental analysis are commonly utilized to confirm the structures of these complexes. One of the key advantages of mixed ligand complexes is their enhanced antimicrobial activity compared to pure TSC compounds. The synergistic effect between the TSC ligands and additional ligands contributes to increased efficacy, possibly through improved metal-ligand interactions or enhanced membrane permeability. Furthermore, mixed ligand complexes offer the potential for selective targeting of microbial species while minimizing toxicity to mammalian cells. This selectivity arises from the specific interactions between the metal center, TSC ligands, and biological targets within microbial cells. Such targeted antimicrobial activity is crucial for developing effective treatments with minimal side effects. Moreover, the versatility of mixed ligand complexes allows for the design of tailored antimicrobial agents with optimized properties. By varying the metal ion, TSC ligands, and additional ligands, researchers can fine-tune the physicochemical properties and biological activities of these complexes. 
This tunability opens avenues for the development of novel antimicrobial agents with improved efficacy and reduced resistance. In conclusion, mixed ligand complexes of thiosemicarbazones represent a promising class of compounds with potent antimicrobial activities. Further research in this field holds great potential for the development of novel therapeutic agents to combat microbial infections effectively.

Keywords: metal complex, thiosemicarbazones, mixed ligand, selective targeting, antimicrobial activity

Procedia PDF Downloads 55
1521 Petrogenesis and Tectonic Implication of the Oligocene Na-Rich Granites from the North Sulawesi Arc, Indonesia

Authors: Xianghong Lu, Yuejun Wang, Chengshi Gan, Xin Qian

Abstract:

The North Sulawesi Arc, located in eastern Indonesia to the south of the Celebes Sea, forms the northern part of the K-shape of Sulawesi Island and has had a complex tectonic history since the Cenozoic due to the convergence of three plates (the Eurasian, Indo-Australian, and Pacific plates). Published rock records offer imprecise chronology (mostly K-Ar dating) and scarce geochemical data, which limits understanding of the regional tectonic setting. This study presents detailed zircon U-Pb geochronological, Hf-O isotope, and whole-rock geochemical analyses of the Na-rich granites from the North Sulawesi Arc. Zircon U-Pb geochronological analyses of three representative samples yield weighted mean ages of 30.4 ± 0.4 Ma, 29.5 ± 0.2 Ma, and 27.3 ± 0.4 Ma, respectively, revealing Oligocene magmatism in the North Sulawesi Arc. The samples have high Na₂O and low K₂O contents with high Na₂O/K₂O ratios, and are thus low-K tholeiitic Na-rich granites. The Na-rich granites are characterized by high SiO₂ contents (75.05-79.38 wt.%) and low MgO contents (0.07-0.91 wt.%) and show arc-like trace elemental signatures. They have low (⁸⁷Sr/⁸⁶Sr)i ratios (0.7044-0.7046), high εNd(t) values (from +5.1 to +6.6), high zircon εHf(t) values (from +10.1 to +18.8), and low zircon δ¹⁸O values (3.65-5.02). They show an Indian-Ocean affinity in Pb isotopic composition, with ²⁰⁶Pb/²⁰⁴Pb ratios of 18.16-18.37, ²⁰⁷Pb/²⁰⁴Pb ratios of 15.56-15.62, and ²⁰⁸Pb/²⁰⁴Pb ratios of 38.20-38.66. These geochemical signatures suggest that the Oligocene Na-rich granites from the North Sulawesi Arc formed by partial melting of juvenile oceanic crust with sediment-derived fluid-related metasomatism in a subduction setting, and they support an intra-oceanic arc origin.
Combined with published studies, the emergence of extensive calc-alkaline felsic arc magmatism can be traced back to the Early Oligocene, subsequent to the Eocene back-arc basalts (BAB) that share similarities with the Celebes Sea basement. Since the opening of the Celebes Sea started in the Eocene (42-47 Ma) and had stopped by the Early Oligocene (~32 Ma), the formation of the Na-rich granites from the North Sulawesi Arc during the Oligocene might be geodynamically related to the subduction of the Indian Ocean.
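The weighted mean ages quoted above are conventionally inverse-variance weighted means of individual zircon spot ages. A minimal sketch of that calculation, using invented spot ages and uncertainties rather than the paper's data:

```python
# Illustrative inverse-variance weighted mean, as commonly used for
# pooling zircon U-Pb spot ages. The ages and 1-sigma uncertainties
# below are hypothetical, not the measurements from this study.
def weighted_mean_age(ages, sigmas):
    """Return the inverse-variance weighted mean and its 1-sigma error."""
    weights = [1.0 / s ** 2 for s in sigmas]
    total = sum(weights)
    mean = sum(w * a for w, a in zip(weights, ages)) / total
    error = (1.0 / total) ** 0.5
    return mean, error

ages = [30.1, 30.6, 30.4, 30.2]   # Ma, hypothetical spot ages
sigmas = [0.4, 0.5, 0.3, 0.4]     # Ma, 1-sigma uncertainties
mean, err = weighted_mean_age(ages, sigmas)
print(f"{mean:.1f} +/- {err:.1f} Ma")
```

More precise spots (smaller sigma) pull the mean toward themselves, which is why the pooled uncertainty is smaller than any single spot's.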

Keywords: North Sulawesi Arc, Oligocene, Na-rich granites, in-situ zircon Hf–O analysis, intra-oceanic origin

Procedia PDF Downloads 71
1520 Development of a Framework for Assessment of Market Penetration of Oil Sands Energy Technologies in Mining Sector

Authors: Saeidreza Radpour, Md. Ahiduzzaman, Amit Kumar

Abstract:

Alberta’s mining sector consumed 871.3 PJ in 2012, which is 67.1% of the energy consumed in the industry sector and about 40% of all the energy consumed in the province of Alberta. Natural gas, petroleum products, and electricity supplied 55.9%, 20.8%, and 7.7%, respectively, of the total energy use in this sector. Oil sands mining and upgrading to crude oil make up most of the mining energy sector activities in Alberta. Crude oil is produced from the oil sands either by in situ methods or by the mining and extraction of bitumen from oil sands ore. In this research, the factors affecting oil sands production have been assessed, and a framework has been developed for the market penetration of new efficient technologies in this sector. Oil sands production is a complex function of many different factors, broadly categorized into technical, economic, political, and global clusters. The statistical analysis developed and implemented in this research ranks the key factors affecting oil sands production in Alberta as follows: global energy consumption (94% consistency), global crude oil price (86% consistency), and crude oil exports (80% consistency). A framework for modeling oil sands energy technologies’ market penetration (OSETMP) has been developed to cover the related technical, economic, and environmental factors in this sector. It has been assumed that the impact of political and social constraints is reflected in the model by changes in the global oil price or the crude oil price in Canada.
The novel in situ mining technologies with low energy and water use whose market shares are assessed and calculated in the market penetration framework include: 1) partial upgrading, 2) liquid addition to steam to enhance recovery (LASER), 3) the solvent-assisted process (SAP), also called solvent-cyclic steam-assisted gravity drainage (SC-SAGD), 4) cyclic solvent, 5) heated solvent, 6) wedge well, 7) enhanced modified steam and gas push (eMSAGP), 8) the electro-thermal dynamic stripping process (ET-DSP), 9) Harris electro-magnetic heating applications (EMHA), and 10) paraffin froth separation. The results of the study will show the penetration profile of these technologies over a long-term planning horizon.
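The abstract does not define its "consistency" measure. As a loose, hedged stand-in for the idea of ranking candidate drivers by how closely they track production, one might rank them by the absolute Pearson correlation of their historical series with the production series; the annual figures below are invented for illustration:

```python
import math

# Hypothetical annual series. Ranking drivers by |Pearson r| is only a
# stand-in for the authors' own "consistency" statistic, whose exact
# definition is not given in the abstract.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

production = [150, 160, 175, 190, 210]          # illustrative units
factors = {
    "global energy consumption": [520, 535, 556, 570, 600],
    "global crude oil price":    [60, 75, 90, 80, 100],
    "crude oil export":          [100, 104, 112, 125, 130],
}
ranked = sorted(factors,
                key=lambda k: abs(pearson(production, factors[k])),
                reverse=True)
for name in ranked:
    print(name, round(pearson(production, factors[name]), 2))
```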

Keywords: appliances efficiency improvement, diffusion models, market penetration, residential sector

Procedia PDF Downloads 326
1519 Artificial Neural Network Based Parameter Prediction of Miniaturized Solid Rocket Motor

Authors: Hao Yan, Xiaobing Zhang

Abstract:

The working mechanism of miniaturized solid rocket motors (SRMs) is not yet fully understood. It is imperative to explore its unique features. However, there are many disadvantages to using common multi-objective evolutionary algorithms (MOEAs) in predicting the parameters of the miniaturized SRM during its conceptual design phase. Initially, the design variables and objectives are constrained in a lumped parameter model (LPM) of this SRM, which leads to local optima in MOEAs. In addition, MOEAs require a large number of calculations due to their population strategy. Although the calculation time for simulating an LPM just once is usually less than that of a CFD simulation, the number of function evaluations (NFEs) is usually large in MOEAs, which makes the total time cost unacceptably long. Moreover, the accuracy of the LPM is relatively low compared to that of a CFD model due to its assumptions. CFD simulations or experiments are required for comparison and verification of the optimal results obtained by MOEAs with an LPM. The conceptual design phase based on MOEAs is a lengthy process, and its results are not precise enough due to the above shortcomings. An artificial neural network (ANN) based parameter prediction is proposed as a way to reduce time costs and improve prediction accuracy. In this method, an ANN is used to build a surrogate model that is trained with a 3D numerical simulation. In design, the original LPM is replaced by a surrogate model. Each case uses the same MOEAs, in which the calculation time of the two models is compared, and their optimization results are compared with 3D simulation results. Using the surrogate model for the parameter prediction process of the miniaturized SRMs results in a significant increase in computational efficiency and an improvement in prediction accuracy. Thus, the ANN-based surrogate model does provide faster and more accurate parameter prediction for an initial design scheme. 
Moreover, even when the MOEAs converge to local optima, the time cost of the ANN-based surrogate model is much lower than that of the simplified physical model LPM. This means that designers can save a lot of time during code debugging and parameter tuning in a complex design process. Designers can reduce repeated calculation costs and obtain accurate optimal solutions by combining an ANN-based surrogate model with MOEAs.
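The core idea above, replacing an expensive simulation with a cheap fitted model inside the optimization loop, can be sketched as follows. The paper trains an ANN on 3D numerical simulations; in this sketch a quadratic least-squares fit to an invented one-variable "simulation" stands in for both, purely to show the surrogate workflow:

```python
# Minimal surrogate-model sketch: sample the expensive model offline,
# fit a cheap model, and let the optimizer query the cheap one.
# The "simulation" and the quadratic surrogate are illustrative only.

def expensive_simulation(x):
    # stand-in for a costly 3D simulation run
    return (x - 2.0) ** 2 + 1.0

def fit_quadratic(xs, ys):
    """Least-squares fit of y = c0 + c1*x + c2*x^2 via normal equations."""
    p = [sum(x ** k for x in xs) for k in range(5)]
    A = [[p[i + j] for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    c = [0.0] * 3
    for i in (2, 1, 0):
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return c

xs = [0.0, 1.0, 2.0, 3.0, 4.0]            # design-of-experiments samples
ys = [expensive_simulation(x) for x in xs]
c0, c1, c2 = fit_quadratic(xs, ys)
surrogate = lambda x: c0 + c1 * x + c2 * x * x
# the optimizer now evaluates `surrogate` instead of `expensive_simulation`
print(round(surrogate(2.5), 3), round(expensive_simulation(2.5), 3))
```

The time saving comes from paying the simulation cost only for the training samples; every one of the many function evaluations the MOEA makes afterwards hits the cheap surrogate.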

Keywords: artificial neural network, solid rocket motor, multi-objective evolutionary algorithm, surrogate model

Procedia PDF Downloads 87
1518 The Development of Documentary Filmmaking in Early Independent India

Authors: Camille Deprez

Abstract:

This paper presents research findings of an ongoing Hong Kong government-funded project on ‘The Documentary Film in India (1948-1975)’ (GRF 1240314), for which extensive fieldwork has been carried out in various archives in India. The project investigates the role and significance of the Indian documentary film sector from the inauguration of the state-sponsored Films Division, one year after independence, in 1948, until the declaration of a ‘State of Emergency’ in 1975. The documentary film production of this first period of national independence was characterised by increasing formal experimentation, analytical social and political enquiry, and a complex, mixed structure of state-sponsored monopoly and free-market operation. However, that production remains significantly under-researched. What were the main production, distribution, and exhibition strategies over this period? What were the recurrent themes and stylistic features of the films produced? In the new context of national independence (in which the State considered film a means of mass persuasion), consolidation of the commercial film, and the emergence of television and art cinema, what role did official, professional, and creative factors play in the development of the documentary film sector? What was the impact of such films, and what challenges did the documentary film face in India? Based upon the cross-analysis of primary written research documents, interviews, and relevant films, this study interweaves empirical study of the sector's financing, production, distribution, and exhibition strategies, as well as the films' content and form, with the larger historical context of India over the period from 1948 to 1975. Whilst most of the films made within the sector explored social issues, they were rarely able to do so from an overtly critical perspective.
However, this paper proposes to analyse the contribution of important filmmakers and producers, including Ezra Mir, Paul Zils, Jean Bhownagary, S. Sukhdev, S. N. S. Sastri, and P. Pati, to the development of the Indian documentary film sector and style within and outside the remits of Films Division. It will more specifically assess the extent to which they criticised the State, showed the inequalities in Indian society and explored film form.

Keywords: documentary film, film archives, film history, India

Procedia PDF Downloads 294
1517 Automatic Differential Diagnosis of Melanocytic Skin Tumours Using Ultrasound and Spectrophotometric Data

Authors: Kristina Sakalauskiene, Renaldas Raisutis, Gintare Linkeviciute, Skaidra Valiukeviciene

Abstract:

Cutaneous melanoma is a melanocytic skin tumour with a very poor prognosis, as it is highly resistant to treatment and tends to metastasize. Melanoma thickness is one of the most important biomarkers for disease stage, prognosis, and surgery planning. In this study, we hypothesized that the automatic analysis of spectrophotometric images and high-frequency ultrasonic 2D data can improve the differential diagnosis of cutaneous melanoma and provide additional information about tumour penetration depth. This paper presents a novel complex automatic system for non-invasive melanocytic skin tumour differential diagnosis and penetration depth evaluation. The system is composed of region-of-interest segmentation in spectrophotometric images and high-frequency ultrasound data, quantitative parameter evaluation, informative feature extraction, and classification with a linear regression classifier. The segmentation of the melanocytic skin tumour region in the ultrasound image is based on parametric integrated backscattering coefficient calculation; the segmentation of the optical image is based on Otsu thresholding. In total, 29 quantitative tissue characterization parameters were evaluated using ultrasound data (11 acoustical, 4 shape, and 15 textural parameters), along with 55 quantitative features of dermatoscopic and spectrophotometric images (using total melanin, dermal melanin, blood, and collagen SIAgraphs acquired with the spectrophotometric imaging device SIAscope). In total, 102 melanocytic skin lesions (including 43 cutaneous melanomas) were examined using the SIAscope and an ultrasound system with a 22 MHz center-frequency single-element transducer. The diagnosis and Breslow thickness (pT) of each melanocytic skin tumour were evaluated during routine histological examination after excision and used as a reference.
The results of this study show that automatic analysis of spectrophotometric and high-frequency ultrasound data can improve the non-invasive classification accuracy of early-stage cutaneous melanoma and provide supplementary information about tumour penetration depth.
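Otsu thresholding, used above to segment the optical image, picks the grey level that maximizes the between-class variance of the two resulting pixel classes. A minimal sketch on a synthetic 8-bit sample (not data from the study):

```python
# Minimal Otsu threshold on a flat list of 8-bit grey levels.
# The tiny sample "image" at the bottom is synthetic.
def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(levels))
    sum_b = 0.0   # cumulative intensity of the background class
    w_b = 0       # background pixel count
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                  # background mean
        m_f = (sum_all - sum_b) / w_f      # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# two clearly separated intensity clusters
pixels = [20, 22, 25, 24, 21] * 10 + [200, 210, 205, 198, 215] * 10
t = otsu_threshold(pixels)
print(t)  # a threshold separating the dark and bright clusters
```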

Keywords: cutaneous melanoma, differential diagnosis, high-frequency ultrasound, melanocytic skin tumours, spectrophotometric imaging

Procedia PDF Downloads 267
1516 Improving Functionality of Radiotherapy Department Through: Systemic Periodic Clinical Audits

Authors: Kamal Kaushik, Trisha, Dandapni, Sambit Nanda, A. Mukherjee, S. Pradhan

Abstract:

INTRODUCTION: As complexity in radiotherapy practices and processes increases, there is a need to assure quality control to a greater extent. At present, no international literature is available with regard to optimal quality control indicators for radiotherapy; moreover, few clinical audits have been conducted in the field. The primary aim is to improve the processes that directly impact clinical outcomes for patients in terms of patient safety and quality of care. PROCEDURE: A team of an oncologist, a medical physicist, and a radiation therapist was formed for weekly clinical audits of patients undergoing radiotherapy. The stages audited include pre-planning, simulation, planning, daily QA, and implementation and execution (with image guidance). Errors in all parts of the chain were evaluated and recorded to inform the development of departmental protocols for radiotherapy. EVALUATION: Errors at the various stages of the radiotherapy chain were recorded and compared before and after the start of the clinical audits, and the stage in which the most errors occurred was identified. The clinical audits were used to structure standard protocols (in the form of checklists) in the department of radiotherapy, which may further reduce the occurrence of clinical errors in the radiotherapy chain. RESULTS: The aim of this study is to compare the number of errors in different parts of the RT chain in two groups (A: before audit; B: after audit). Group A: 94 pts. (48 males, 46 females); total errors in the RT chain: 19 (9 needed re-simulation). Group B: 94 pts. (61 males, 33 females); total errors in the RT chain: 8 (4 needed re-simulation). CONCLUSION: After systematic periodic clinical audits, the percentage of errors in the radiotherapy process fell by more than 50% within 2 months.
There is a great need to improve quality control in radiotherapy, and the role of clinical audits can only grow. Although clinical audits are time-consuming and complex undertakings, the potential benefits in terms of identifying and rectifying errors in quality control procedures are enormous. Radiotherapy is a chain of processes, and an error in any part of the chain may propagate through to treatment execution. Structuring departmental protocols and policies helps in reducing, if not completely eradicating, the occurrence of such incidents.
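The headline reduction follows directly from the group totals reported above, since the two groups are the same size (94 patients each):

```python
# Reproducing the reported error reduction from the group totals:
# 19 chain errors before the audits vs. 8 after, equal group sizes.
before, after = 19, 8
reduction = (before - after) / before * 100
print(f"{reduction:.1f}% fewer errors")
```

(19 − 8) / 19 ≈ 57.9%, consistent with the stated "more than 50%".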

Keywords: audit, clinical, radiotherapy, improving functionality

Procedia PDF Downloads 84
1515 Saving the Decolonized Subject from Neglected Tropical Diseases: Public Health Campaign and Household-Centred Sanitation in Colonial West Africa, 1900-1960

Authors: Adebisi David Alade

Abstract:

In pre-colonial West Africa, the deadliness of the climate vis-à-vis malaria and other tropical diseases to Europeans turned the region into the “white man’s grave.” Thus, immediately after the partition of Africa in 1885, the mission civilisatrice (“civilizing mission”) and mise en valeur (“development”) not only became a pretext for the establishment of colonial rule; from a medical point of view, the control and possible eradication of disease in the continent also emerged as one of the first concerns of the European colonizers. Though geared toward making Africa exploitable, historical evidence suggests that some colonial Water, Sanitation and Hygiene (WASH) policies and projects reduced certain tropical diseases in some West African communities. Exploring some of these disease control interventions by way of historical revisionism, this paper challenges the orthodox interpretation of colonial sanitation and public health measures in West Africa. It critiques the deployment of race and class as analytical tools for the study of colonial WASH projects, an exercise which often reduces the complexity and ambiguity of colonialism to the binary of colonizer and colonized. Since West Africa presently ranks high among regions with neglected tropical diseases (NTDs), it is imperative to decentre colonial racism and economic exploitation in African history in order to give room for Africans to see themselves in other ways. Far from resolving the problem of NTDs by fiat in the region, this study seeks to highlight important blind spots in African colonial history in an attempt to prevent post-colonial African leaders from throwing the baby out with the bathwater.
As scholars researching colonial sanitation and public health in the continent rarely examine its complex meaning and content, this paper submits that the outright demonization of colonial rule across space and time continues to build an ideological wall between the present and the past, which not only inhibits fruitful borrowing from the colonial administration of West Africa but also prevents a wide understanding of the challenges of WASH policies and projects in most West African states.

Keywords: colonial rule, disease control, neglected tropical diseases, WASH

Procedia PDF Downloads 183
1514 Novel p22-Monoclonal Antibody Based Blocking ELISA for the Detection of African Swine Fever Virus Antibodies in Serum

Authors: Ghebremedhin Tsegay, Weldu Tesfagaber, Yuanmao Zhu, Xijun He, Wan Wang, Zhenjiang Zhang, Encheng Sun, Jinya Zhang, Yuntao Guan, Fang Li, Renqiang Liu, Zhigao Bu, Dongming Zhao*

Abstract:

African swine fever (ASF) is a highly infectious viral disease of pigs, resulting in significant economic losses worldwide. As there are no approved vaccines or treatments, the control of ASF depends entirely on early diagnosis and culling of infected pigs. Thus, highly specific and sensitive diagnostic assays are required for accurate and early diagnosis of ASF virus (ASFV). Currently, only a few recombinant proteins have been tested and validated for use as reagents in ASF diagnostic assays. The most promising ones for ASFV antibody detection are p72, p30, p54, and pp62. So far, three ELISA kits based on these recombinant proteins have been commercialized. Due to the complex nature of the virus and the varied forms of the disease, robust serodiagnostic assays are still required. The ASFV p22 protein, encoded by the KP177R gene, is located in the inner membrane of the viral particle and appears transiently in the plasma membrane early after virus infection. The p22 protein interacts with numerous cellular proteins involved in phagocytosis and endocytosis through different cellular pathways. However, p22 does not seem to be involved in virus replication or swine pathogenicity. In this study, E. coli-expressed recombinant p22 protein was used to generate a monoclonal antibody (mAb), and its potential use for the development of a blocking ELISA (bELISA) was evaluated. A total of 806 pig serum samples were tested to evaluate the bELISA. According to the receiver operating characteristic (ROC) analysis, a sensitivity of 100% and a specificity of 98.10% were recorded when the PI cut-off value was set at 47%. The novel assay was able to detect antibodies as early as 9 days post infection. Finally, a highly sensitive, specific, and rapid p22-mAb based bELISA was developed and optimized for detection of antibodies against genotype I and II ASFVs.
It is a promising candidate for early and accurate detection of ASFV antibodies and is expected to play a valuable role in the containment and prevention of ASF.
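The reported operating point can be illustrated by computing sensitivity and specificity at the chosen percentage-inhibition (PI) cut-off. The serum PI values below are synthetic, not the study's 806-sample panel:

```python
# Sensitivity/specificity at a PI cut-off, as in the ROC-based
# validation described above. Synthetic PI values for illustration.
def sens_spec(pos_pi, neg_pi, cutoff):
    """Samples with PI >= cutoff are called antibody-positive."""
    tp = sum(1 for v in pos_pi if v >= cutoff)   # true positives
    tn = sum(1 for v in neg_pi if v < cutoff)    # true negatives
    return tp / len(pos_pi), tn / len(neg_pi)

positives = [55, 62, 71, 80, 90, 48]   # known antibody-positive sera
negatives = [5, 12, 20, 33, 46, 50]    # known negative sera
sens, spec = sens_spec(positives, negatives, cutoff=47)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```

An ROC analysis repeats this calculation over all candidate cut-offs and picks the one with the best sensitivity/specificity trade-off.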

Keywords: ASFV, blocking ELISA, diagnosis, monoclonal antibodies, sensitivity, specificity

Procedia PDF Downloads 74
1513 Material Supply Mechanisms for Contemporary Assembly Systems

Authors: Rajiv Kumar Srivastava

Abstract:

Manufacturing of complex products such as automobiles and computers requires a very large number of parts and sub-assemblies. The design of mechanisms for delivery of these materials to the point of assembly is an important manufacturing system and supply chain challenge. Different approaches to this problem have been evolved for assembly lines designed to make large volumes of standardized products. However, contemporary assembly systems are required to concurrently produce a variety of products using approaches such as mixed model production, and at times even mass customization. In this paper we examine the material supply approaches for variety production in moderate to large volumes. The conventional approach for material delivery to high volume assembly lines is to supply and stock materials line-side. However for certain materials, especially when the same or similar items are used along the line, it is more convenient to supply materials in kits. Kitting becomes more preferable when lines concurrently produce multiple products in mixed model mode, since space requirements could increase as product/ part variety increases. At times such kits may travel along with the product, while in some situations it may be better to have delivery and station-specific kits rather than product-based kits. Further, in some mass customization situations it may even be better to have a single delivery and assembly station, to which an entire kit is delivered for fitment, rather than a normal assembly line. Finally, in low-moderate volume assembly such as in engineered machinery, it may be logistically more economical to gather materials in an order-specific kit prior to launching final assembly. We have studied material supply mechanisms to support assembly systems as observed in case studies of firms with different combinations of volume and variety/ customization. 
It is found that the appropriate approach tends to be a hybrid between direct line supply and different kitting modes, with the best mix being a function of the manufacturing and supply chain environment, as well as space and handling considerations. In our continuing work we are studying these scenarios further, through the use of descriptive models and progressing towards prescriptive models to help achieve the optimal approach, capturing the trade-offs between inventory, material handling, space, and efficient line supply.

Keywords: assembly systems, kitting, material supply, variety production

Procedia PDF Downloads 221
1512 Transportation and Urban Land-Use System for the Sustainability of Cities, a Case Study of Muscat

Authors: Bader Eddin Al Asali, N. Srinivasa Reddy

Abstract:

Cities are dynamic in nature and are characterized by concentrations of people, infrastructure, services, and markets, which offer opportunities for production and consumption. Often growth and development in urban areas is not systematic, being directed by a number of factors such as natural growth, land prices, housing availability, job locations (the central business district, CBD), transportation routes, distribution of resources, geographical boundaries, and administrative policies. One-sided spatial and geographical development in cities leads to an unequal spatial distribution of population and jobs, resulting in high transportation activity. City development can be measured by parameters such as urban size, urban form, urban shape, and urban structure. Urban size is defined by the population of the city, and urban form is the location and size of economic activity (CBD) over the geographical space. Urban shape is the geometrical shape of the city over which population and economic activity are distributed, and urban structure is the transport network within which the population and activity centers are connected by a hierarchy of roads. Among urban land-use systems, transportation plays a significant role and is one of the largest energy-consuming sectors. Transportation interaction among land uses is measured in passenger-km and mean trip length, which are often used as proxies for energy consumption in the transportation sector. Among the trips generated in cities, work trips constitute more than 70 percent; they originate from the place of residence and are destined for the place of employment. To understand the role of urban parameters in transportation interaction, theoretical cities of different sizes and urban specifications are generated through a building-block exercise using a specially developed interactive C++ programme, and land-use transportation modeling is carried out.
The land-use transportation modeling exercise helps in understanding the role of urban parameters and in classifying cities by their urban form, structure, and shape. Muscat, the capital city of Oman, which underwent rapid urbanization over the last four decades, is taken as a case study for classification. A pilot survey was also carried out to capture urban travel characteristics. Analysis of the land-use transportation modeling together with the field data classified Muscat as a linear city with a polycentric CBD. Conclusions are drawn and suggestions are given for policy making for the sustainability of Muscat city.
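The two interaction measures named above, passenger-km and mean trip length, follow from a simple trip table; the figures below are illustrative, not Muscat survey data:

```python
# Passenger-km and mean trip length from a trip table: the proxies
# for transport energy use mentioned above. Trips are illustrative.
trips = [
    (1200, 8.5),   # (passengers, trip length in km)
    (800, 14.0),
    (500, 22.5),
]
passenger_km = sum(p * d for p, d in trips)
mean_trip_length = passenger_km / sum(p for p, _ in trips)
print(passenger_km, round(mean_trip_length, 2))
```

Holding population fixed, a compact urban form that shortens work trips lowers both numbers, which is the mechanism linking urban parameters to transport energy in the modeling above.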

Keywords: land-use transportation, transportation modeling, urban form, urban structure, urban rule parameters

Procedia PDF Downloads 268
1511 Examining the Changes in Complexity, Accuracy, and Fluency in Japanese L2 Writing Over an Academic Semester

Authors: Robert Long

Abstract:

The results of a one-year study on the evolution of complexity, accuracy, and fluency (CAF) in the compositions of Japanese L2 university students throughout a semester are presented in this study. One goal was to determine whether writing abilities improved over the academic term; another was to examine methods of editing. Participants had 30 minutes to write each essay, with an additional 10 minutes allotted for editing. For editing, participants were divided into two groups, one of which utilized an online grammar checker, while the other half self-edited their initial manuscripts. There was a total of 159 students from the three institutions. Research questions focused on determining whether CAF had evolved over the year, identifying potential variations in editing techniques, and describing the connections between the CAF dimensions. According to the findings, there was some improvement in accuracy (fewer errors) in all three measures, whereas there was a marked decline in complexity and fluency. As for the second research aim, concerning the interaction among the three dimensions (CAF) and possible increases in fluency being offset by decreases in grammatical accuracy, results showed a high correlation between clause and word counts, between mean length of T-unit (MLT) and coordinate phrases per T-unit (CP/T), and between MLT and clauses per T-unit (C/T); furthermore, word counts and the error/100 ratio correlated highly with error-free clause totals (EFCT). Syntactical complexity had a negative correlation with EFCT, indicating that more syntactical complexity is associated with decreased accuracy.
Concerning differences in error correction between those who self-edited and those who used an online grammar correction tool, results indicated that the error-free clause ratio (EFCR) showed the greatest difference in accuracy, with fewer errors noted among writers using an online grammar checker. As for possible differences between the first and second (edited) drafts regarding CAF, results indicated positive changes in accuracy, with the most significant change seen in complexity (CP/T and MLT), while changes in fluency were relatively insignificant. Results also indicated significant differences among the three institutions, with Fujian University of Technology showing the most fluency and accuracy. These findings suggest that, to raise students' awareness of their overall writing development, teachers should support them in developing more complex syntactic structures, improving their fluency, and making more effective use of online grammar checkers.
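The CAF indices referred to above reduce to simple ratios of raw counts; a sketch with invented counts for one essay (not data from the study):

```python
# The complexity and accuracy indices named above, computed from raw
# counts for one hypothetical essay. Counts invented for illustration.
words, t_units, clauses = 240, 16, 28
coordinate_phrases, error_free_clauses = 10, 21

mlt = words / t_units                    # mean length of T-unit (MLT)
c_per_t = clauses / t_units              # clauses per T-unit (C/T)
cp_per_t = coordinate_phrases / t_units  # coordinate phrases per T-unit (CP/T)
efcr = error_free_clauses / clauses      # error-free clause ratio (EFCR)
print(mlt, c_per_t, cp_per_t, round(efcr, 2))
```

A rise in MLT or C/T signals growing syntactic complexity, while a falling EFCR signals declining accuracy; the trade-off between the two is exactly the interaction the study examines.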

Keywords: complexity, accuracy, fluency, writing

Procedia PDF Downloads 34
1510 Rethinking Confucianism and Democracy

Authors: He Li

Abstract:

Around the mid-1980s, Confucianism was reintroduced into China from Taiwan and Hong Kong as a result of China’s policies of reform and openness. Since then, the revival of neo-Confucianism in mainland China has accelerated and become a crucial component of the public intellectual sphere. The term xinrujia or xinruxue, loosely translated as “neo-Confucianism,” is increasingly understood as an intellectual and cultural phenomenon of the last four decades. Confucian scholarship is in the process of restoration. This paper examines the Chinese intellectual discourse on Confucianism and democracy and places it in comparative and theoretical perspective. With China’s rise and the surge of populism in the West, particularly in the US, the leading political values of Confucianism could increasingly shape both China and the world at large. This state of affairs points to the need for more systematic efforts to assess the discourse on neo-Confucianism and its implications for China’s transformation. A number of scholars in the neo-Confucian camp maintain that some elements of Confucianism are not only compatible with democratic values and institutions but actually promote liberal democracy; they refer to this as Confucian democracy. By contrast, others either view Confucianism as a roadblock to democracy or envision that a convergence of democracy with Confucian values could result in a new hybrid system. The paper traces the complex interplay between Confucianism and democracy. It explores ideological differences between neo-Confucianism and liberal democracy and ascertains whether certain features of neo-Confucianism possess an affinity for the authoritarian political system. In addition to printed materials such as books and journal articles, a selection of articles from the website Confucianism in China will be analyzed. This website was selected because it is the leading website run by Chinese scholars focusing on neo-Confucianism. 
Another reason for selecting this website is its accessibility and availability: in the past few years, quite a few websites, left or right, have been shut down by the authorities, but this one remains open. This paper explores the core components, dynamics, and implications of neo-Confucianism. The paper is divided into three parts. The first discusses the origins of neo-Confucianism. The second reviews the intellectual discourse among Chinese scholars on Confucian democracy. The third explores the implications of the Chinese intellectual discourse on neo-Confucianism. Recently, liberal democracy has come into greater conflict with official ideology. This paper, which is based on my extensive interviews conducted in China prior to the pandemic and on analysis of primary sources in Chinese, will lay the foundation for a chapter on neo-Confucianism and democracy in my next book-length manuscript, tentatively entitled Chinese Intellectual Discourse on Democracy.

Keywords: China, Confucius, Confucianism, neo-Confucianism, democracy

Procedia PDF Downloads 76
1509 The Requirements of Developing a Framework for Successful Adoption of Quality Management Systems in the Construction Industry

Authors: Mohammed Ali Ahmed, Vaughan Coffey, Bo Xia

Abstract:

Quality management systems (QMSs) in the construction industry are often implemented to ensure that companies make sufficient effort to achieve the levels of quality required by clients. Attaining these quality levels can result in greater customer satisfaction, which is fundamental to ensuring long-term competitiveness for construction companies. However, the construction sector still lags behind other industries in its successful adoption of QMSs, due to the relative lack of acceptance of the benefits of these systems among industry stakeholders, as well as other barriers to implementing them. Thus, there is a critical need for a detailed and comprehensive exploration of QMS adoption in the construction sector. This paper comprehensively investigates, in the construction sector setting, the impacts of the salient factors surrounding successful implementation of QMSs in building organizations, especially external factors. This study is part of an ongoing PhD project, which aims to develop a new framework that integrates both the internal and external factors affecting QMS implementation. To achieve the paper’s aim and objectives, interviews will be conducted to define the external factors influencing the adoption of QMSs and to obtain holistic critical success factors (CSFs) for implementing these systems. In the next stage of data collection, a questionnaire survey will be developed to investigate the prime barriers facing the adoption of QMSs, the CSFs for their implementation, and the external factors affecting the adoption of these systems. Following the survey, case studies will be undertaken to validate and explain in greater detail the real effects of these factors on QMS adoption. Specifically, this paper evaluates the effects of the external factors in terms of their impact on implementation success within the selected case studies. 
Using findings drawn from the analysis of data obtained through these various approaches, specific recommendations for the successful implementation of QMSs will be presented, and an operational framework will be developed. Finally, through a focus group, the findings of the study and the newly developed framework will be validated. Ultimately, this framework will be made available to the construction industry to facilitate the greater adoption and implementation of QMSs. In addition, the applicable recommendations suggested by the study will be shared with the construction industry to help construction companies implement QMSs more effectively and overcome the barriers experienced by businesses, thus promoting the achievement of higher levels of quality and customer satisfaction.

Keywords: barriers, critical success factors, external factors, internal factors, quality management systems

Procedia PDF Downloads 183
1508 Community Engagement Strategies to Assist with the Development of an RCT Among People Living with HIV

Authors: Joyce K. Anastasi, Bernadette Capili

Abstract:

Our research team focuses on developing and testing protocols to manage chronic symptoms. For many years, our team has designed and implemented symptom management studies for people living with HIV (PLWH). We identify symptoms that are not curable and are not adequately controlled by conventional therapies. As an exemplar, we describe how we successfully engaged PLWH in developing and refining our research feasibility protocol for distal sensory peripheral neuropathy (DSP) associated with HIV. With input from PLWH with DSP, our research received National Institutes of Health (NIH) funding support. Significance: DSP is one of the most common neurologic complications of HIV, estimated to affect 21% to 50% of PLWH. The pathogenesis of DSP in HIV is complex and unclear. Proposed mechanisms include cytokine dysregulation, neurotoxicity produced by viral proteins, and mitochondrial dysfunction associated with antiretroviral medications. There are no FDA-approved treatments for DSP in HIV. Purpose/Aims: 1) to explore the impact of DSP on the lives of PLWH; 2) to identify patients’ perspectives on successful treatments for DSP; 3) to identify interventions considered feasible and sensitive to the needs of PLWH with DSP; and 4) to obtain participant input on the protocol/study design. Description of Process: We conducted a needs assessment with PLWH with DSP. From this assessment, we obtained, from the patients’ perspective, detailed descriptions of their symptoms; their physical functioning with DSP; the self-care remedies they had tried; and their desired interventions. We also asked about protocol scheduling, instrument clarity, study compensation, study-related burdens, and willingness to participate in a randomized controlled trial (RCT) with a placebo and a waitlist group. Implications: We incorporated many of the suggestions learned from the needs assessment. 
We developed and completed a feasibility study that provided invaluable information, which informed subsequent NIH-funded studies. In addition to our extensive clinical and research experience working with PLWH, learning from the patient perspective helped in developing our protocol and in promoting a successful plan for the recruitment and retention of study participants.

Keywords: clinical trial development, peripheral neuropathy, traditional medicine, HIV, AIDS

Procedia PDF Downloads 82
1507 A Study of Relationship between Leadership Style and Organisational Culture in Private Organisations

Authors: Shreya Sirohi, Vineeta Sirohi

Abstract:

In the 21st century, the nature of work has become quite complex and dynamic, and in response, organizational culture continues to change and develop new perspectives. Organizational culture and leadership are important elements of any organization, and an organization’s performance and success depend, to a large extent, upon these two factors. The ability of a leader lies in confronting the challenge of evolving and adapting the culture of the organization to situational demands. Leadership and organizational culture are conceptually intertwined: leadership is a key ingredient for the successful transformation of any organization, and a favorable organizational culture helps to motivate employees in their work. Organizational culture and leadership style play a crucial role in achieving an organization’s specified objectives, and the harmony between culture and leader within an organization undoubtedly affects relationships, processes, and employee performance. The present investigation aimed to study the leadership style and organizational culture of private organizations and the relationship between the two. The study was carried out on a sample of 100 employees from five private organizations located in the cities of Gurgaon and Delhi in India. The data were collected using an organizational culture profile and a multifactor leadership questionnaire. The findings indicate that the selected organizations had a dominant transformational leadership style, whereas organizational culture varied from one organization to another; however, technocratic culture was found to be prominent, followed by entrepreneurial organizational culture. A low positive correlation was found between leadership style and organizational culture. Transformational leaders have a positive and significant relationship with employees’ satisfaction, productivity, and the organization’s culture. 
Leaders practicing a transformational leadership style inspire their followers, are innovative, and are aware of their own needs as well as those of their followers. Such a leadership style has a positive impact both on employees and on the working culture: employees of such organizations are able to come up with innovative ideas and are efficient in handling situations and making effective decisions. However, the low correlation indicates that a single leadership style or a single culture type alone cannot contribute solely to the growth of an organization; there is a need to blend culture types and leadership styles to suit the needs of the organization. Organizational culture represents the deeper values and beliefs of employees and influences organizational performance; hence, the leader has a crucial role to play in creating and managing organizational culture in alignment with the requirements of the present era of competitiveness, globalization, and technological advancement.

Keywords: leadership style, organizational culture, technocratic, transformational

Procedia PDF Downloads 136
1506 Performance of High Efficiency Video Codec over Wireless Channels

Authors: Mohd Ayyub Khan, Nadeem Akhtar

Abstract:

Due to recent advances in wireless communication technologies and hand-held devices, there is huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook, and interactive video games. However, raw video requires very high bandwidth, which makes compression a must before transmission over wireless channels. The High Efficiency Video Codec (HEVC, also called H.265) is the latest state-of-the-art video coding standard, developed jointly by ITU-T and ISO/IEC. HEVC targets high-resolution videos, such as 4K or 8K, that can fulfil recent demands for video services. The compression ratio achieved by HEVC is twice that of its predecessor, H.264/AVC, at the same quality level. Compression efficiency is generally increased by removing more correlation between frames and pixels using complex techniques such as extensive intra and inter prediction. As more correlation is removed, the interdependency among coded bits increases; thus, bit errors may have a large effect on the reconstructed video, and sometimes even a single bit error can lead to its catastrophic failure. In this paper, we study the performance of the HEVC bitstream over an additive white Gaussian noise (AWGN) channel. Moreover, HEVC combined with Quadrature Amplitude Modulation (QAM) and forward error correction (FEC) schemes is also explored over the noisy channel. The video is encoded using HEVC, and the coded bitstream is channel coded to provide some redundancy. The channel-coded bitstream is then modulated using QAM and transmitted over the AWGN channel. At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream, which is then used to reconstruct the video with the HEVC decoder. 
We observe that as the signal-to-noise ratio of the channel decreases, the quality of the reconstructed video degrades drastically. Using proper FEC codes, the quality of the video can be restored to a certain extent. Thus, the performance analysis of HEVC presented in this paper may assist in designing an optimized FEC code rate such that the quality of the reconstructed video is maximized over wireless channels.
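The encode → FEC → modulate → AWGN → demodulate → decode chain described above can be sketched end-to-end on an arbitrary bitstream. The snippet below is a simplified illustration, not the authors' experimental setup: it substitutes a rate-1/3 repetition code for a practical FEC scheme and uses QPSK (4-QAM) rather than higher-order QAM, and the function name and SNR values are hypothetical:

```python
import math
import random

def simulate_chain(bits, snr_db, seed=0):
    """Toy channel simulation: rate-1/3 repetition FEC + QPSK over AWGN.

    Returns the decoded bit list. A real system would use stronger FEC
    (e.g. convolutional or LDPC codes) and higher-order QAM.
    """
    rng = random.Random(seed)
    # FEC encode: repeat each bit three times
    coded = [b for b in bits for _ in range(3)]
    if len(coded) % 2:            # pad to an even length for QPSK pairing
        coded.append(0)
    # QPSK: map each bit pair to an (I, Q) symbol in {-1, +1}^2
    symbols = [(2 * coded[i] - 1, 2 * coded[i + 1] - 1)
               for i in range(0, len(coded), 2)]
    # AWGN: per-dimension noise std for the given Es/N0 (symbol energy = 2)
    sigma = math.sqrt(1.0 / (10 ** (snr_db / 10)))
    received = [(i_ + rng.gauss(0, sigma), q + rng.gauss(0, sigma))
                for i_, q in symbols]
    # Demodulate by sign, then majority-vote decode each bit triple
    demod = [1 if v > 0 else 0 for s in received for v in s]
    return [1 if sum(demod[i:i + 3]) >= 2 else 0
            for i in range(0, 3 * len(bits), 3)]

# At a reasonably high SNR the repetition code recovers nearly all bits
tx = [random.Random(1).randrange(2) for _ in range(600)]
rx = simulate_chain(tx, snr_db=8)
ber = sum(a != b for a, b in zip(tx, rx)) / len(tx)
print(ber)
```

Replacing the print with a video-quality metric (e.g. PSNR of the decoded frames) over a range of SNR values would reproduce the kind of quality-versus-SNR analysis the paper reports.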

Keywords: AWGN, forward error correction, HEVC, video coding, QAM

Procedia PDF Downloads 146
1505 Screening of Wheat Wild Relatives as a Gene Pool for Improved Photosynthesis in Wheat Breeding

Authors: Amanda J. Burridge, Keith J. Edwards, Paul A. Wilkinson, Tom Batstone, Erik H. Murchie, Lorna McAusland, Ana Elizabete Carmo-Silva, Ivan Jauregui, Tracy Lawson, Silvere R. M. Vialet-Chabrand

Abstract:

The rate of genetic progress in wheat production must be improved to meet global food security targets. However, past selection for domestication traits has reduced the genetic variation in modern wheat cultivars, a fact that could severely limit the future rate of genetic gain. The genetic variation in agronomically important traits of the wild relatives and progenitors of wheat is far greater than that of current domesticated cultivars, but transferring these traits into modern cultivars is not straightforward. Among elite wheat cultivars, photosynthetic capacity is a key trait for which there is limited variation. Early screening of wheat wild relatives and progenitors has shown differences in photosynthetic capacity and efficiency not only between wild relative species but also marked differences between accessions of each species. By identifying wild relative accessions with improved photosynthetic traits and characterising the genetic variation responsible, it is possible to incorporate these traits into advanced breeding programmes through wide crossing and introgression. To identify the variation in photosynthetic capacity and efficiency available in the secondary and tertiary gene pools, a wide-scale survey was carried out on over 600 accessions from 80 species, including those of the genera Aegilops, Triticum, Thinopyrum, Elymus, and Secale. Genotype data were generated for each accession using a ‘Wheat Wild Relative’ Single Nucleotide Polymorphism (SNP) genotyping array composed of 35,000 SNP markers polymorphic between wild relatives and elite hexaploid wheat. These genotype data were combined with phenotypic measurements such as gas exchange (CO₂, H₂O), chlorophyll fluorescence, growth, morphology, and RuBisCO activity to identify potential breeding material with enhanced photosynthetic capacity and efficiency. 
The data and associated analysis tools presented here will prove useful to anyone interested in increasing the genetic diversity in hexaploid wheat or the application of complex genotyping data to plant breeding.

Keywords: wheat, wild relatives, pre-breeding, genomics, photosynthesis

Procedia PDF Downloads 220