Search results for: business process model and notation
1348 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses
Authors: Neil Bar, Andrew Heweston
Abstract:
Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, the design acceptance criteria generally provide no guidance relating to how PF should be calculated for homogeneous and heterogeneous rock masses, or what qualifies as a ‘reasonable’ PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to automatically calculate PF. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, shear strength of geologic structure and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated ‘approximately’ or with allowances for some variability rather than ‘exactly’. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e., as exact values), while others have probabilistic inputs based on the user’s discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods and how markedly different results can be obtained. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
Keywords: probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability
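As a worked illustration of the Monte-Carlo approach mentioned in this abstract (not taken from the paper), the sketch below estimates PF as the proportion of sampled factor-of-safety values below 1.0 for a simplified planar-failure model with a vertical slope face. The geometry, strength distributions and parameter values are hypothetical placeholders, not the case-study inputs.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # Monte-Carlo samples

# Probabilistic inputs (hypothetical distributions, not from the case study)
cohesion = rng.normal(loc=50.0, scale=10.0, size=n)        # kPa
phi = np.radians(rng.normal(loc=32.0, scale=3.0, size=n))  # friction angle, rad

# Deterministic inputs (placeholder values)
unit_weight = 26.0          # kN/m^3
slope_height = 20.0         # m
plane_dip = np.radians(35.0)

# Simplified planar sliding block with a vertical face, dry slope, per metre of strike
plane_length = slope_height / np.sin(plane_dip)                          # m
block_weight = 0.5 * unit_weight * slope_height**2 / np.tan(plane_dip)   # kN
normal_force = block_weight * np.cos(plane_dip)
driving_force = block_weight * np.sin(plane_dip)

fs = (cohesion * plane_length + normal_force * np.tan(phi)) / driving_force
pf = np.mean(fs < 1.0)   # probability of failure = share of realizations with FS < 1
print(f"mean FS = {fs.mean():.2f}, PF = {pf:.2%}")
```

A Latin-Hypercube or point estimate scheme would replace the random sampling step, but the PF definition (the share of realizations with FS < 1) stays the same.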
Procedia PDF Downloads 209
1347 Urban Green Transitioning in the Face of Current Global Change: The Management Role of the Local Government and Residents
Authors: Titilope F. Onaolapo, Christiana A. Breed, Maya Pasgaard, Kristine E. Jensen, Peta Brom
Abstract:
In the face of fast-growing urbanization in most of the world's developing countries, there is a need to understand and address the risks and consequences involved in the indiscriminate use of urban green space. Tshwane city in South Africa has the potential to become one of the world's top biodiversity cities, as South Africa is ranked one of the mega-countries in biodiversity conservation, and Tshwane metropolitan municipality is the city with the richest biodiversity, with extensive grassland biomes. In this study, we focus on the potentials and challenges of urban green transitioning from a Global South perspective, with Tshwane city as the case study. We also address the issue of management conflicts that have resulted in informal and illegal activities in and around green spaces, with consequences such as land degradation, loss of livelihoods and biodiversity, and socio-ecological imbalances. A desk review of eight policy frameworks related to green urban planning and development was carried out based on four GI principles: multifunctionality, connectivity, interdisciplinarity and social inclusion. We interviewed 15 key informants in related departments in the city and administered 200 survey questionnaires among residents. We also held several workshops with other researchers and experts on biodiversity and ecosystems. We found that there is no specific document dedicated to green space management, and where green infrastructure was mentioned, it was framed as an approach to climate mitigation and adaptation. Also, residents perceive green and open spaces as extra land that could be developed at will. We demonstrated the use of collaborative learning approaches in ecological and development research and the tying of research to existing frameworks, programs, and strategies. Based on this understanding, we outlined the need to incorporate principles of green infrastructure in policy frameworks on spatial planning and environmental development. Furthermore, we developed a model for the co-management of green infrastructure by stakeholders, such as residents, developers, policymakers, and decision-makers, to maximize benefits. Our collaborative, interdisciplinary projects pursue the multifunctionality of SDGs 11 and 15 by simultaneously addressing issues around Sustainable Cities and Communities, Climate Action, Life on Land, and Strong Institutions, and by working to halt and reverse land degradation and biodiversity loss.
Keywords: governance, green infrastructure, South Africa, sustainable development, urban planning, Tshwane
Procedia PDF Downloads 126
1346 Unveiling Comorbidities in Irritable Bowel Syndrome: A UK BioBank Study Utilizing Supervised Machine Learning
Authors: Uswah Ahmad Khan, Muhammad Moazam Fraz, Humayoon Shafique Satti, Qasim Aziz
Abstract:
Approximately 10-14% of the global population experiences a functional disorder known as irritable bowel syndrome (IBS). The disorder is defined by persistent abdominal pain and an irregular bowel pattern. IBS significantly impairs work productivity and disrupts patients' daily lives and activities. Although IBS is widespread, there is still an incomplete understanding of its underlying pathophysiology. This study aims to help characterize the phenotype of IBS patients by differentiating the comorbidities found in IBS patients from those in non-IBS patients using machine learning algorithms. In this study, we extracted samples coding for IBS from the UK BioBank cohort and randomly selected patients without a code for IBS to create a total sample size of 18,000. We selected the codes for comorbidities of these cases from 2 years before and after their IBS diagnosis and compared them to the comorbidities in the non-IBS cohort. Machine learning models, including Decision Trees, Gradient Boosting, Support Vector Machine (SVM), AdaBoost, Logistic Regression, and XGBoost, were employed to assess their accuracy in predicting IBS. The most accurate model was then chosen to identify the features associated with IBS. In our case, we used XGBoost feature importance as the feature selection method. We applied different models to the top 10% of features, which numbered 50. Gradient Boosting, Logistic Regression and XGBoost algorithms predicted IBS with optimal accuracies of 71.08%, 71.427%, and 71.53%, respectively. The comorbidities most closely associated with IBS included gut diseases (haemorrhoids, diverticular diseases), atopic conditions (asthma), and psychiatric comorbidities (depressive episodes or disorder, anxiety). This finding emphasizes the need for a comprehensive approach when evaluating the phenotype of IBS, suggesting the possibility of identifying new subsets of IBS rather than relying solely on the conventional classification based on stool type. Additionally, our study demonstrates the potential of machine learning algorithms in predicting the development of IBS based on comorbidities, which may enhance diagnosis and facilitate better management of modifiable risk factors for IBS. Further research is necessary to confirm our findings and establish cause and effect. Alternative feature selection methods and even larger and more diverse datasets may lead to more accurate classification models. Despite these limitations, our findings highlight the effectiveness of Logistic Regression and XGBoost in predicting IBS diagnosis.
Keywords: comorbidities, disease association, irritable bowel syndrome (IBS), predictive analytics
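A minimal sketch (not the authors' code) of the workflow described above: rank binary comorbidity features with XGBoost feature importance, keep the top 10%, and compare classifiers on accuracy. The data loading and feature matrix are placeholders standing in for the UK BioBank comorbidity codes.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

# X: binary comorbidity-code matrix, y: 1 = IBS, 0 = non-IBS (synthetic placeholders)
X = np.random.randint(0, 2, size=(18_000, 500)).astype(float)
y = np.random.randint(0, 2, size=18_000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Step 1: XGBoost feature importance as the feature-selection method (top 10%)
ranker = XGBClassifier(n_estimators=200, eval_metric="logloss").fit(X_tr, y_tr)
top = np.argsort(ranker.feature_importances_)[::-1][: X.shape[1] // 10]

# Step 2: compare classifiers on the selected features
models = {
    "GradientBoosting": GradientBoostingClassifier(),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "XGBoost": XGBClassifier(n_estimators=200, eval_metric="logloss"),
}
for name, model in models.items():
    model.fit(X_tr[:, top], y_tr)
    acc = accuracy_score(y_te, model.predict(X_te[:, top]))
    print(f"{name}: accuracy = {acc:.3f}")
```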
Procedia PDF Downloads 120
1345 Computational and Experimental Determination of Acoustic Impedance of Internal Combustion Engine Exhaust
Authors: A. O. Glazkov, A. S. Krylova, G. G. Nadareishvili, A. S. Terenchenko, S. I. Yudin
Abstract:
The topic of the presented materials concerns the design of the exhaust system for a certain internal combustion engine. The exhaust system can be divided into two parts. The first is the engine exhaust manifold, turbocharger, and catalytic converters, which are called “hot part.” The second part is the gas exhaust system, which contains elements exclusively for reducing exhaust noise (mufflers, resonators), the accepted designation of which is the "cold part." The design of the exhaust system from the point of view of acoustics, that is, reducing the exhaust noise to a predetermined level, consists of working on the second part. Modern computer technology and software make it possible to design "cold part" with high accuracy in a given frequency range but with the condition of accurately specifying the input parameters, namely, the amplitude spectrum of the input noise and the acoustic impedance of the noise source in the form of an engine with a "hot part". Getting this data is a difficult problem: high temperatures, high exhaust gas velocities (turbulent flows), and high sound pressure levels (non-linearity mode) do not allow the calculated results to be applied with sufficient accuracy. The aim of this work is to obtain the most reliable acoustic output parameters of an engine with a "hot part" based on a complex of computational and experimental studies. The presented methodology includes several parts. The first part is a finite element simulation of the "cold part" of the exhaust system (taking into account the acoustic impedance of radiation of outlet pipe into open space) with the result in the form of the input impedance of "cold part". The second part is a finite element simulation of the "hot part" of the exhaust system (taking into account acoustic characteristics of catalytic units and geometry of turbocharger) with the result in the form of the input impedance of the "hot part". The next third part of the technique consists of the mathematical processing of the results according to the proposed formula for the convergence of the mathematical series of summation of multiple reflections of the acoustic signal "cold part" - "hot part". This is followed by conducting a set of tests on an engine stand with two high-temperature pressure sensors measuring pulsations in the nozzle between "hot part" and "cold part" of the exhaust system and subsequent processing of test results according to a well-known technique in order to separate the "incident" and "reflected" waves. The final stage consists of the mathematical processing of all calculated and experimental data to obtain a result in the form of a spectrum of the amplitude of the engine noise and its acoustic impedance.Keywords: acoustic impedance, engine exhaust system, FEM model, test stand
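The separation of "incident" and "reflected" waves from two pressure sensors, referred to above as a well-known technique, can be illustrated with the classical two-sensor wave decomposition. The sketch below (illustrative only, not the authors' implementation) recovers the complex incident and reflected amplitudes from two pressure spectra measured a known distance apart in the duct; the sensor spacing, gas sound speed and frequency grid are assumed placeholder values, and no mean-flow or damping correction is included.

```python
import numpy as np

# Assumed measurement setup (placeholders)
c = 520.0            # speed of sound in hot exhaust gas, m/s (assumed)
x1, x2 = 0.00, 0.05  # sensor positions along the duct, m
f = np.linspace(30, 1500, 500)   # analysis frequencies, Hz
k = 2 * np.pi * f / c            # wavenumber

# P1, P2: complex pressure spectra at the two sensors (synthetic data here)
A_true, B_true = 1.0 + 0.3j, 0.4 - 0.2j   # incident / reflected amplitudes
P1 = A_true * np.exp(-1j * k * x1) + B_true * np.exp(1j * k * x1)
P2 = A_true * np.exp(-1j * k * x2) + B_true * np.exp(1j * k * x2)

# Solve the 2x2 system for the incident (A) and reflected (B) waves per frequency
den = np.exp(-1j * k * x1) * np.exp(1j * k * x2) - np.exp(-1j * k * x2) * np.exp(1j * k * x1)
A = (P1 * np.exp(1j * k * x2) - P2 * np.exp(1j * k * x1)) / den
B = (P2 * np.exp(-1j * k * x1) - P1 * np.exp(-1j * k * x2)) / den

reflection = B / A                               # complex reflection coefficient
impedance = (1 + reflection) / (1 - reflection)  # normalized acoustic impedance at x = 0
```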
Procedia PDF Downloads 61
1344 Defining and Measuring the Success of the Hospitality-Based Social Enterprise Ringelblum Café
Authors: Nitzan Winograd, Nada Kakabadse
Abstract:
This study examines whether the hospitality-based social enterprise Ringelblum Café is achieving its stated social goals of developing a sense of self-efficacy among at-risk youth who work in this enterprise and raising levels of recruitment to the Israel Defence Forces (IDF) and National Service (NS) among these young adults. Ringelblum Café was founded in 2009 in Be'er-Sheva in order to provide employment solutions for at-risk youth in the southern district of Israel. Each year, 10 at-risk young adults aged 16–18 are referred to the programme by various welfare agencies. The training programme is approximately a year in duration and includes professional training in the art of cooking. Each young adult is also supported by a social worker. This study is based on the participation of 31 youths who graduated from the Ringelblum Café’s training programme. A convenience sampling model was used with the assistance of the programme's social worker. This study is quantitative in its approach. Data were collected by means of three separate self-report questionnaires: a personal information questionnaire that collected general demographic data; a self-efficacy questionnaire consisting of two parts, general self-efficacy and social self-efficacy; and an IDF/NS recruitment questionnaire. The study uses the theory of change in order to find out whether at-risk youth in the Ringelblum Café programme are taught a profession with future prospects, as well as whether they develop a sense of self-efficacy and raise their chances of recruitment into the IDF/NS. The study found that the sense of self-efficacy of the graduates is relatively high. In addition, there was a significant difference between the importance of recruitment to the IDF/NS among these youth prior to the beginning of the programme and after its completion, indicating that the training programme had a positive effect on motivation for recruitment to the IDF/NS. The study also found that the percentage of recruits to the IDF/NS among youth who graduated from the training programme was not significantly higher than the general recruitment figures in Israel. In conclusion, Ringelblum Café is making sound progress towards achieving its social goals regarding recruitment to the IDF/NS. Moreover, the sense of self-efficacy among the graduates is relatively high, and it can be assumed that the training programme has a positive effect on these young adults, although there is no clear connection between the two. This study is among a few that have been conducted in the field of hospitality-based social enterprises in Israel and can serve as a basis for further research. Moreover, the study results may help improve the perception of at-risk youth and their contribution to society and could increase awareness of the growing trend of social enterprises promoting social goals.
Keywords: at-risk youth, Israel Defence Forces (IDF), national service, recruitment, self-efficacy, social enterprise
Procedia PDF Downloads 221
1343 Acoustic Energy Harvesting Using Polyvinylidene Fluoride (PVDF) and PVDF-ZnO Piezoelectric Polymer
Authors: S. M. Giripunje, Mohit Kumar
Abstract:
Acoustic energy that exists in our everyday life and environment have been overlooked as a green energy that can be extracted, generated, and consumed without any significant negative impact to the environment. The harvested energy can be used to enable new technology like wireless sensor networks. Technological developments in the realization of truly autonomous MEMS devices and energy storage systems have made acoustic energy harvesting (AEH) an increasingly viable technology. AEH is the process of converting high and continuous acoustic waves from the environment into electrical energy by using an acoustic transducer or resonator. AEH is not popular as other types of energy harvesting methods since sound waves have lower energy density and such energy can only be harvested in very noisy environment. However, the energy requirements for certain applications are also correspondingly low and also there is a necessity to observe the noise to reduce noise pollution. So the ability to reclaim acoustic energy and store it in a usable electrical form enables a novel means of supplying power to relatively low power devices. A quarter-wavelength straight-tube acoustic resonator as an acoustic energy harvester is introduced with polyvinylidene fluoride (PVDF) and PVDF doped with ZnO nanoparticles, piezoelectric cantilever beams placed inside the resonator. When the resonator is excited by an incident acoustic wave at its first acoustic eigen frequency, an amplified acoustic resonant standing wave is developed inside the resonator. The acoustic pressure gradient of the amplified standing wave then drives the vibration motion of the PVDF piezoelectric beams, generating electricity due to the direct piezoelectric effect. In order to maximize the amount of the harvested energy, each PVDF and PVDF-ZnO piezoelectric beam has been designed to have the same structural eigen frequency as the acoustic eigen frequency of the resonator. With a single PVDF beam placed inside the resonator, the harvested voltage and power become the maximum near the resonator tube open inlet where the largest acoustic pressure gradient vibrates the PVDF beam. As the beam is moved to the resonator tube closed end, the voltage and power gradually decrease due to the decreased acoustic pressure gradient. Multiple piezoelectric beams PVDF and PVDF-ZnO have been placed inside the resonator with two different configurations: the aligned and zigzag configurations. With the zigzag configuration which has the more open path for acoustic air particle motions, the significant increases in the harvested voltage and power have been observed. Due to the interruption of acoustic air particle motion caused by the beams, it is found that placing PVDF beams near the closed tube end is not beneficial. The total output voltage of the piezoelectric beams increases linearly as the incident sound pressure increases. This study therefore reveals that the proposed technique used to harvest sound wave energy has great potential of converting free energy into useful energy.Keywords: acoustic energy, acoustic resonator, energy harvester, eigenfrequency, polyvinylidene fluoride (PVDF)
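A small worked check (illustrative only, not from the paper) of the design rule described above: the first acoustic eigenfrequency of a quarter-wavelength closed-open tube, which the PVDF beam's first structural eigenfrequency is tuned to match. The tube length and sound speed below are assumed values.

```python
c = 343.0   # speed of sound in air at ~20 °C, m/s (assumed)
L = 0.50    # resonator tube length, m (placeholder)

# First eigenfrequency of a quarter-wavelength (one end closed, one end open) straight tube
f_acoustic = c / (4.0 * L)
print(f"first acoustic eigenfrequency ≈ {f_acoustic:.0f} Hz")

# Design rule from the abstract: choose the PVDF cantilever so that its first bending
# eigenfrequency matches f_acoustic. For a uniform cantilever of length Lb,
#   f1 = (1.875**2 / (2 * pi * Lb**2)) * sqrt(E * I / (rho * A))
# with E*I the bending stiffness and rho*A the mass per unit length, so Lb or the
# cross-section is adjusted until f1 equals f_acoustic.
```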
Procedia PDF Downloads 389
1342 Nonlinear Interaction of Free Surface Sloshing of Gaussian Hump with Its Container
Authors: Mohammad R. Jalali
Abstract:
Movement of liquid with a free surface in a container is known as slosh. For instance, slosh occurs when water in a closed tank is set in motion by a free surface displacement, or when liquid natural gas in a container is vibrated by an external driving force, such as an earthquake or movement induced by transport. Slosh is also derived from resonant switching of a natural basin. During sloshing, different types of motion are produced by energy exchange between the liquid and its container. In present study, a numerical model is developed to simulate the nonlinear even harmonic oscillations of free surface sloshing of an initial disturbance to the free surface of a liquid in a closed square basin. The response of the liquid free surface is affected by amplitude and motion frequencies of its container; therefore, sloshing involves complex fluid-structure interactions. In the present study, nonlinear interaction of free surface sloshing of an initial Gaussian hump with its uneven container is predicted numerically. For this purpose, Green-Naghdi (GN) equations are applied as governing equation of fluid field to produce nonlinear second-order and higher-order wave interactions. These equations reduce the dimensions from three to two, yielding equations that can be solved efficiently. The GN approach assumes a particular flow kinematic structure in the vertical direction for shallow and deep-water problems. The fluid velocity profile is finite sum of coefficients depending on space and time multiplied by a weighting function. It should be noted that in GN theory, the flow is rotational. In this study, GN numerical simulations of initial Gaussian hump are compared with Fourier series semi-analytical solutions of the linearized shallow water equations. The comparison reveals that satisfactory agreement exists between the numerical simulation and the analytical solution of the overall free surface sloshing patterns. The resonant free surface motions driven by an initial Gaussian disturbance are obtained by Fast Fourier Transform (FFT) of the free surface elevation time history components. Numerically predicted velocity vectors and magnitude contours for the free surface patterns indicate that interaction of Gaussian hump with its container has localized effect. The result of this sloshing is applicable to the design of stable liquefied oil containers in tankers and offshore platforms.Keywords: fluid-structure interactions, free surface sloshing, Gaussian hump, Green-Naghdi equations, numerical predictions
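A brief sketch (not from the study) of the post-processing step described above: taking the FFT of a free-surface elevation time history at a probe point to pick out the resonant sloshing frequencies. The sampling interval and the two-mode signal are synthetic placeholders.

```python
import numpy as np

dt = 0.01                      # time step of the simulation output, s (assumed)
t = np.arange(0.0, 60.0, dt)

# eta: free-surface elevation at a probe; here a synthetic signal with two sloshing modes
eta = 0.02 * np.cos(2 * np.pi * 0.74 * t) + 0.005 * np.cos(2 * np.pi * 1.48 * t)

spectrum = np.fft.rfft(eta - eta.mean())       # remove the mean before transforming
freqs = np.fft.rfftfreq(len(eta), d=dt)
amps = 2.0 * np.abs(spectrum) / len(eta)       # single-sided amplitude spectrum

peak = freqs[np.argmax(amps)]
print(f"dominant sloshing frequency ≈ {peak:.2f} Hz")
```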
Procedia PDF Downloads 401
1341 Study of the Impact of Quality Management System on Chinese Baby Dairy Product Industries
Authors: Qingxin Chen, Liben Jiang, Andrew Smith, Karim Hadjri
Abstract:
Since 2007, the Chinese food industry has experienced serious food contamination incidents in the baby dairy sector, especially milk powder contamination. One of the milk powder products was found to contain melamine, and a significant number (294,000) of babies were affected by kidney stones. Due to growing concerns among consumers about food safety and protection, and high pressure from central government, companies must take radical action to ensure food quality protection through the use of an appropriate quality management system. Although researchers have previously investigated the health and safety aspects of food industries and products, quality issues concerning food products in China have been largely overlooked. Issues associated with the quality of baby dairy products have not been discussed in depth. This paper investigates the impact of quality management systems on the Chinese baby dairy product industry. A literature review was carried out to analyse the use of quality management systems within the Chinese milk powder market. Moreover, quality concepts, relevant standards, laws, regulations and special issues (such as melamine and aflatoxin M1 contamination) have been analysed in detail. A qualitative research approach is employed, whereby preliminary analysis was conducted by interview, and data analysis based on interview responses from four selected Chinese baby dairy product companies was carried out. The analysis of the literature review and the data findings revealed that many theories, models, conceptualisations, and systems for quality management have been designed by practitioners. These standards and procedures should be followed in order to provide quality products to consumers, but their implementation is lacking in the Chinese baby dairy industry. Quality management systems have been applied by the selected companies, but the implementation still needs improvement. For instance, the companies have to take measures to align their processes and procedures with relevant standards. The government needs to intervene more and take a greater supervisory role in the production process. In general, this research presents implications for the regulatory bodies, the Chinese Government and dairy food companies. Food safety laws exist in China, but they have not been widely complied with by companies. Regulatory bodies must take a greater role in ensuring compliance with laws and regulations. The Chinese government must also play a special role in urging companies to implement relevant quality control processes. The baby dairy companies not only have to accept interventions from the regulatory bodies and the government but also need to ensure that production, storage, distribution and other processes follow the relevant rules and standards.
Keywords: baby dairy product, food quality, milk powder contamination, quality management system
Procedia PDF Downloads 473
1340 A Holistic View of Microbial Community Dynamics during a Toxic Harmful Algal Bloom
Authors: Shi-Bo Feng, Sheng-Jie Zhang, Jin Zhou
Abstract:
The relationship between microbial diversity and algal blooms has received considerable attention for decades. Microbes undoubtedly affect annual bloom events and impact the physiology of both partners, as well as shape ecosystem diversity. However, knowledge about the interactions and network correlations among a broader spectrum of microbes that drive the dynamics of a complete bloom cycle is limited. In this study, pyrosequencing and network approaches were used to simultaneously assess the association patterns among bacteria, archaea, and microeukaryotes in surface water and sediments in response to a natural dinoflagellate (Alexandrium sp.) bloom. In surface water, within the bacterial community, Gamma-Proteobacteria and Bacteroidetes dominated in the initial bloom stage, while Alpha-Proteobacteria, Cyanobacteria, and Actinobacteria became the most abundant taxa during the post-bloom stage. The archaeal community clustered predominantly with methanogenic members in the early pre-bloom period, while the majority of species identified in the late-bloom stage were ammonia-oxidizing archaea and Halobacteriales. Among the eukaryotes, the dinoflagellate (Alexandrium sp.) dominated in the onset stage, whereas multiple species (such as microzooplankton, diatoms, green algae, and rotifers) coexisted in the bloom collapse stage. In sediments, microbial biomass and species richness were much higher than in the water body. Only Flavobacteriales and Rhodobacterales showed a slight response to the bloom stages. Unlike the bacteria, the archaeal and eukaryotic community structures in the sediment showed only small fluctuations. The network analyses of the inter-specific associations show that bacteria (Alteromonadaceae, Oceanospirillaceae, Cryomorphaceae, and Piscirickettsiaceae) and some zooplankton (Mediophyceae, Mamiellophyceae, Dictyochophyceae and Trebouxiophyceae) have a stronger impact on the structuring of phytoplankton communities than archaeal effects. The changes in populations were also significantly shaped by water temperature and substrate availability (N & P resources). The results suggest that clades are specialized at different time periods, and that the pre-bloom succession was mainly bottom-up controlled, while the late-bloom period was controlled top-down. Additionally, the phytoplankton and prokaryotic communities correlated better with each other, which indicates that interactions among microorganisms are critical in controlling plankton dynamics and fates. Our results supply a wider view (across temporal and spatial scales) for understanding microbial ecological responses and their network associations during algal blooms. They offer a potential multidisciplinary explanation for algal-microbe interactions and help us move beyond the traditional view of the patterns of algal bloom initiation, development, decline, and biogeochemistry.
Keywords: microbial community, harmful algal bloom, ecological process, network
Procedia PDF Downloads 117
1339 Discrete PID and Discrete State Feedback Control of a Brushed DC Motor
Authors: I. Valdez, J. Perdomo, M. Colindres, N. Castro
Abstract:
Today, digital servo systems are extensively used in industrial manufacturing processes, robotic applications, vehicles and other areas. In such control systems, control action is provided by digital controllers with different compensation algorithms, which are designed to meet specific requirements for a given application. Due to the constant search for optimization in industrial processes, it is of interest to design digital controllers that offer ease of realization, improved computational efficiency, affordable return rates, and ease of tuning that ultimately improve the performance of the controlled actuators. There is a vast range of options of compensation algorithms that could be used, although in the industry, most controllers used are based on a PID structure. This research article compares different types of digital compensators implemented in a servo system for DC motor position control. PID compensation is evaluated on its two most common architectures: PID position form (1 DOF), and PID speed form (2 DOF). State feedback algorithms are also evaluated, testing two modern control theory techniques: discrete state observer for non-measurable variables tracking, and a linear quadratic method which allows a compromise between the theoretical optimal control and the realization that most closely matches it. The compared control systems’ performance is evaluated through simulations in the Simulink platform, in which it is attempted to model accurately each of the system’s hardware components. The criteria by which the control systems are compared are reference tracking and disturbance rejection. In this investigation, it is considered that the accurate tracking of the reference signal for a position control system is particularly important because of the frequency and the suddenness in which the control signal could change in position control applications, while disturbance rejection is considered essential because the torque applied to the motor shaft due to sudden load changes can be modeled as a disturbance that must be rejected, ensuring reference tracking. Results show that 2 DOF PID controllers exhibit high performance in terms of the benchmarks mentioned, as long as they are properly tuned. As for controllers based on state feedback, due to the nature and the advantage which state space provides for modelling MIMO, it is expected that such controllers evince ease of tuning for disturbance rejection, assuming that the designer of such controllers is experienced. An in-depth multi-dimensional analysis of preliminary research results indicate that state feedback control method is more satisfactory, but PID control method exhibits easier implementation in most control applications.Keywords: control, DC motor, discrete PID, discrete state feedback
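To make the contrast between PID architectures concrete, the sketch below (not the authors' implementation) compares the classical position form, which computes the absolute control signal each sample, with the velocity (incremental) form, which computes the change in control signal. It does not reproduce the paper's specific 1-DOF/2-DOF structures; the gains, sample time and toy motor model are assumed placeholders.

```python
class DiscretePID:
    """Discrete PID in position form and velocity (incremental) form.

    Gains and sample time are placeholder assumptions. Use only one form per
    controller instance, since both methods share the stored error history.
    """

    def __init__(self, kp, ki, kd, ts):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.integral = 0.0
        self.e_prev = 0.0
        self.e_prev2 = 0.0
        self.u_prev = 0.0

    def position_form(self, e):
        # u[k] = Kp*e[k] + Ki*Ts*sum(e) + Kd*(e[k] - e[k-1])/Ts
        self.integral += e * self.ts
        u = self.kp * e + self.ki * self.integral + self.kd * (e - self.e_prev) / self.ts
        self.e_prev = e
        return u

    def velocity_form(self, e):
        # u[k] = u[k-1] + Kp*(e[k]-e[k-1]) + Ki*Ts*e[k] + Kd*(e[k]-2*e[k-1]+e[k-2])/Ts
        du = (self.kp * (e - self.e_prev)
              + self.ki * self.ts * e
              + self.kd * (e - 2 * self.e_prev + self.e_prev2) / self.ts)
        u = self.u_prev + du
        self.e_prev2, self.e_prev, self.u_prev = self.e_prev, e, u
        return u


# Example: step position reference on a crude first-order motor model (assumed dynamics)
pid = DiscretePID(kp=4.0, ki=2.0, kd=0.05, ts=0.001)
theta, omega = 0.0, 0.0
for _ in range(5000):
    u = pid.position_form(1.0 - theta)           # reference = 1 rad
    omega += (0.5 * u - 2.0 * omega) * pid.ts    # placeholder motor dynamics
    theta += omega * pid.ts
print(f"position after 5 s ≈ {theta:.3f} rad")
```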
Procedia PDF Downloads 268
1338 A Post-Colonial Reading of Maria Edgeworth's Anglo-Irish Novels: Castle Rackrent and The Absentee
Authors: Al. Harshan, Hazamah Ali Mahdi
Abstract:
The Big House literature embodies Irish history. It requires a special dimension of moral and social significance in relation to its owners. The Big House is a metaphor for the decline of the Protestant Ascendancy that ruled in a Catholic country and oppressed a native people. In the tradition of Big House fiction, Maria Edgeworth's Castle Rackrent and The Absentee explore the effect of the Anglo-Irish Protestant Ascendancy as it governed and misgoverned Ireland. Edgeworth illustrates the tradition of the Big House as a symbol of both a personal and a historical theme. This paper provides a reading of Castle Rackrent and The Absentee from a post-colonial perspective. The paper maintains that Edgeworth's novels contain elements of a radical critique of the colonialist enterprise. In our postcolonial reading of Maria Edgeworth's novels, one that goes beyond considering the works as those of Sir Walter Scott, evidence has been found of Edgeworth's colonial ideology. The significance of Castle Rackrent lies mainly in the fact that it is the first English novel to speak in the voice of the colonized Irish. What is more important is that the irony and the comic aspect of the novel come from its Irish narrator (Thady Quirk) and its Irish setting. Edgeworth reveals the geographical 'other' to her English reader by placing her colonized Irish narrator and his son, Jason Quirk, in a position of inferiority to emphasize the gap between Englishness and Irishness. Furthermore, this satirical aspect is a political one. It works to create and protect the superiority of the domestic English reader over the Irish subject. In other words, the implication of the colonial system of the novel and of its structure of dominance and subordination is overlooked because of its comic dimension. The matrimonial plot in The Absentee functions as an imperial plot, constructing Ireland as a complementary but ever unequal partner in the family of Great Britain. This imperial marriage works hegemonically to produce the domestic stability considered so crucial to national and colonial stability. Moreover, in order to achieve her proper imperial plot, Edgeworth's reconciliation of England and Ireland is seen in the marriage of the Anglo-Irish (hero/Colambre) with the Irish (heroine/Grace Nugent) and the happy bourgeois family; consequently, it becomes the model for colonizer-colonized relationships. Edgeworth must establish modes of legitimate behavior for women and men. The Absentee explains more purposefully how familial reorganization is dependent on the restitution of masculine authority and advantage, particularly for the Irish community.
Keywords: Maria Edgeworth, post-colonial, reading, Irish
Procedia PDF Downloads 547
1337 Risk Assessment of Lead Element in Red Peppers Collected from Marketplaces in Antalya, Southern Turkey
Authors: Serpil Kilic, Ihsan Burak Cam, Murat Kilic, Timur Tongur
Abstract:
Interest in lead (Pb) has increased considerably in recent years due to knowledge about the potential toxic effects of this element. Exposure to heavy metals above the acceptable limit affects human health. Indeed, Pb accumulates through food chains up to toxic concentrations; therefore, it can pose a potential threat to human health. A sensitive and reliable method for the determination of Pb in red pepper was developed in the present study. Samples (33 red pepper products of different brands) were purchased from different markets in Turkey. The selected method validation criteria (linearity, limit of detection, limit of quantification, recovery, and trueness) were demonstrated. Recovery values close to 100% showed adequate precision and accuracy for the analysis. According to the results of the red pepper analysis, Pb was determined in all of the tested samples at various concentrations. A Perkin-Elmer ELAN DRC-e ICP-MS system was used for the detection of Pb. Organic red pepper was used to obtain a matrix for all method validation studies. The certified reference material, Fapas chili powder, was digested and analyzed together with the different sample batches. Three replicates from each sample were digested and analyzed. The results of the exposure levels of the elements were discussed considering the scientific opinions of the European Food Safety Authority (EFSA), which is the European Union’s (EU) risk assessment source associated with food safety. The Target Hazard Quotient (THQ) was described by the United States Environmental Protection Agency (USEPA) for the calculation of potential health risks associated with long-term exposure to chemical pollutants. The THQ value incorporates the intake of the element, exposure frequency and duration, body weight and the oral reference dose (RfD). If the THQ value is lower than one, the exposed population is assumed to be safe, while 1 < THQ < 5 means that the exposed population is in a level-of-concern interval. In this study, the THQ of Pb was obtained as < 1. The results of the THQ calculations showed that the values were below one for all the tested samples, meaning the samples did not pose a health risk to the local population. This work was supported by The Scientific Research Projects Coordination Unit of Akdeniz University, Project Number: FBA-2017-2494.
Keywords: lead analyses, red pepper, risk assessment, daily exposure
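As an illustration of the USEPA-style screening calculation referred to above (not the authors' exact computation), the sketch below evaluates a target hazard quotient from assumed exposure parameters. All numbers are placeholders rather than values from the study, and the lead reference dose in particular should be checked against current guidance.

```python
# Commonly cited USEPA-style THQ form:
#   THQ = (EF * ED * FIR * C) / (RfD * BW * AT) * 1e-3
# with C in mg/kg, FIR in g/day, RfD in mg/kg/day; 1e-3 converts grams to kilograms.

EF = 365          # exposure frequency, days/year (assumed)
ED = 70           # exposure duration, years (assumed)
FIR = 5.0         # red pepper ingestion rate, g/day (placeholder)
C = 0.10          # measured Pb concentration, mg/kg (placeholder)
RfD = 3.5e-3      # oral reference dose for Pb, mg/kg/day (assumed; check current guidance)
BW = 70.0         # adult body weight, kg (assumed)
AT = 365 * ED     # averaging time for non-carcinogens, days

THQ = (EF * ED * FIR * C) / (RfD * BW * AT) * 1e-3
status = "below the level of concern" if THQ < 1 else "within the level-of-concern range"
print(f"THQ = {THQ:.4f} ({status})")
```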
Procedia PDF Downloads 171
1336 Cognitive Control Moderates the Concurrent Effect of Autistic and Schizotypal Traits on Divergent Thinking
Authors: Julie Ramain, Christine Mohr, Ahmad Abu-Akel
Abstract:
Divergent thinking—a cognitive component of creativity—and particularly the ability to generate unique and novel ideas, has been linked to both autistic and schizotypal traits. However, to our knowledge, the concurrent effect of these trait dimensions on divergent thinking has not been investigated. Moreover, it has been suggested that creativity is associated with different types of attention and cognitive control, and consequently with how information is processed in a given context. Intriguingly, consistent with the diametric model, autistic and schizotypal traits have been associated with contrasting attentional and cognitive control styles. Positive schizotypal traits have been associated with reactive cognitive control and attentional flexibility, while autistic traits have been associated with proactive cognitive control and an increased focus of attention. The current study investigated the relationship between divergent thinking, autistic and schizotypal traits, and cognitive control in a non-clinical sample of 83 individuals (males = 42%; mean age = 22.37, SD = 2.93), sufficient to detect a medium effect size. Divergent thinking was evaluated with an adapted version of the Figural Torrance Test of Creative Thinking. Crucially, since we were interested in testing divergent thinking productivity across contexts, participants were asked to generate items from basic shapes in four different contexts. The variance of the proportion of unique to total responses across contexts represented a measure of context adaptability, with lower variance indicating increased context adaptability. Cognitive control was estimated with the Behavioral Proactive Index of the AX-CPT task, with higher scores representing the ability to actively maintain goal-relevant information in a sustained/anticipatory manner. Autistic and schizotypal traits were assessed with the Autism Quotient (AQ) and the Community Assessment of Psychic Experiences (CAPE-42). Generalized linear models revealed a three-way interaction between autistic traits, positive schizotypal traits, and proactive cognitive control associated with increased context adaptability. Specifically, the concurrent effect of autistic and positive schizotypal traits on increased context adaptability was moderated by the level of proactive control and was only significant when proactive cognitive control was high. Our study reveals that autistic and positive schizotypal traits interactively facilitate the capacity to generate unique ideas across various contexts. However, this effect depends on cognitive control mechanisms indicative of the ability to proactively maintain attention when needed. The current results point to a unique profile of divergent thinkers who have the ability to tap both systematic and flexible processing modes within and across contexts. This is particularly intriguing, as such a combination of phenotypes has been proposed to explain the genius of Beethoven, Nash, and Newton.
Keywords: autism, schizotypy, creativity, cognitive control
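A minimal sketch (not the authors' analysis script) of how a three-way interaction like the one reported above can be specified in a generalized linear model. The variable names and the simulated data frame are placeholders standing in for the study measures.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 83  # sample size matching the abstract; values below are simulated placeholders
df = pd.DataFrame({
    "autistic": rng.normal(size=n),         # AQ score (standardized)
    "schizotypy_pos": rng.normal(size=n),   # CAPE-42 positive dimension
    "proactive": rng.normal(size=n),        # AX-CPT Behavioral Proactive Index
})
df["context_adaptability"] = rng.normal(size=n)

# 'a * b * c' expands to all main effects plus every two-way and the three-way interaction
model = smf.glm("context_adaptability ~ autistic * schizotypy_pos * proactive", data=df).fit()
print(model.summary().tables[1])  # inspect the autistic:schizotypy_pos:proactive term
```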
Procedia PDF Downloads 140
1335 Godalisation: A Revisionist Conceptual Framework for Singapore’s Artistic Identity
Authors: Bernard Tan
Abstract:
The paper presents a conceptual framework which serves as an art model of Singapore artistic identity. Specifically, the study examines Singapore's artistic identity through the artworks of the country’s significant artists covering the period 1950s to the present. Literature review will discuss the challenges of favouring or choosing one artist over the other. Methodology provides an overview of the perspectives of local artists and surveys Singapore’s artistic histories through qualitative interviews and case studies. Analysis from qualitative data reveals that producing works of accrued visual significance for the country which captures it zeitgeist further strengthens artist’s artistic identity, and consequently, their works remembered by future generations. The paper presents a conceptual framework for Singapore’s artistic identity by categorising it into distinctive categories or Periods: Colonial Period (pre-1965); Nation Building Period (1965-1988); Globalisation Period (1989-2000); Paternal Production Period (2001-2015); and A New Era (2015-present). Godalisation, coined from God and Globalisation – by artist and art collector, Teng Jee Hum – is a direct reference to the godlike influence on Singapore by its founding Father, Mr Lee Kuan Yew, the country’s first Prime Minister who steered the city state “from Third World to First” for close to half a century, from 1965 to his passing in 2015. A detailed schema showing important factors in different art categories: key global geopolitics, key local social-politics, and significant events will be analysed in depth. Main artist groups or artist initiatives which evolved in Singapore during the different Periods from pre-1965 to the present will be categorized and discussed. Taken as a whole, all these periods collectively add up to the Godalisation Era; impacted by the social-political events and historical period of the nation, and captured through the visual representation of the country’s significant artists in their attempt at either visualizing or mythologizing the Singapore Story. The author posits a co-relation between a nation’s economic success and the value or price appreciation of the country’s artist of significance artworks. The paper posed a rhetorical question: “Which Singapore’s artist will historian of the future – and by extension, the people of the country from future generations – remember? Who will remain popular? Whilst which artists will be forgotten.” The searching question: “Who will survive, be remembered in the annals of history and, above all, how to ensure the survival of one’s nation artistic identity? The art that last will probably be determined by the future, in the future, where art historians pontificate from a later vantage point.Keywords: artistic identity, art collection, godalisation, singapore
Procedia PDF Downloads 41
1334 Hospice-Shared Care for a Child Patient Supported with Extracorporeal Membrane Oxygenation
Authors: Hsiao-Lin Fang
Abstract:
Every life is precious, and comprehensive care should be provided to individuals who are in the final stages of their lives. Hospice-shared care aims to provide optimal symptom control and palliative care to terminal (cancer) patients through the implementation of shared care, and to support patients and their families in making various physical and psychological adjustments in the face of death. This report examines a 10-year-boy diagnosed with Out-of-Hospital Cardiac Arrest (OHCA). The individual fainted when swimming at school and underwent 31 minutes of cardiopulmonary resuscitation (CPR). While receiving treatment at the hospital, the individual received extracorporeal membrane oxygenation(ECMO) due to unstable hemodynamics. Urgent cardiac catheterization found: Suspect acute fulminant myocarditis or underlying cardiomyopathy with acute decompensation, After the active rescue by the medical team, hemodynamics still showed only mean pressure value. With respect to the patient, interdepartmental hospice-shared care was implemented and a do-not-resuscitate (DNR) order was signed after family discussions were conducted. Assistance and instructions were provided as part of the comfort care process. A farewell gathering attended by the patient’s relatives, friends, teachers, and classmates was organized in an intensive care unit (ICU) in order to look back on the patient’s life and the beautiful memories that were created, as well as to alleviate the sorrow felt by family members, including the patient’s father and sister. For example, the patient was presented with drawings and accompanied to a garden to pick flowers. In this manner, the patient was able to say goodbye before death. Finally, the patient’s grandmother and father participated in the clinical hospice care and post-mortem care processes. A hospice-shared care clinician conducted regular follow-ups and provided care to the family of the deceased, supporting family members through the sorrowful period. Birth, old age, sickness, and death are the natural phases of human life. In recent years, growing attention has been paid to human-centered hospice care. Hospice care is individual holistic care provided by a professional team and it involves the provision of comprehensive care to a terminal patient. Hospice care aims to satisfy the physical, psychological, mental, and social needs of patients and their families. It does not involve the cessation of treatment but rather avoids the exacerbation or extension of the suffering endured by patients, thereby preserving the dignity and quality of life during the end-of-life period. Patients enjoy the company of others as they complete the last phase of their lives, and their families also receive guidance on how they can move on with their own lives after the patient’s death.Keywords: hospice-shared care, extracorporeal membrane oxygenation (ECMO), hospice-shared care, child patient
Procedia PDF Downloads 141
1333 Additive Manufacturing of Microstructured Optical Waveguides Using Two-Photon Polymerization
Authors: Leonnel Mhuka
Abstract:
Background: The field of photonics has witnessed substantial growth, with an increasing demand for miniaturized and high-performance optical components. Microstructured optical waveguides have gained significant attention due to their ability to confine and manipulate light at the subwavelength scale. Conventional fabrication methods, however, face limitations in achieving intricate and customizable waveguide structures. Two-photon polymerization (TPP) emerges as a promising additive manufacturing technique, enabling the fabrication of complex 3D microstructures with submicron resolution. Objectives: This experiment aimed to utilize two-photon polymerization to fabricate microstructured optical waveguides with precise control over geometry and dimensions. The objective was to demonstrate the feasibility of TPP as an additive manufacturing method for producing functional waveguide devices with enhanced performance. Methods: A femtosecond laser system operating at a wavelength of 800 nm was employed for two-photon polymerization. A custom-designed CAD model of the microstructured waveguide was converted into G-code, which guided the laser focus through a photosensitive polymer material. The waveguide structures were fabricated using a layer-by-layer approach, with each layer formed by localized polymerization induced by non-linear absorption of the laser light. Characterization of the fabricated waveguides included optical microscopy, scanning electron microscopy, and optical transmission measurements. The optical properties, such as mode confinement and propagation losses, were evaluated to assess the performance of the additively manufactured waveguides. Results: The experiment successfully demonstrated the additive manufacturing of microstructured optical waveguides using two-photon polymerization. Optical microscopy and scanning electron microscopy revealed the intricate 3D structures with submicron resolution. The measured optical transmission indicated efficient light propagation through the fabricated waveguides. The waveguides exhibited well-defined mode confinement and relatively low propagation losses, showcasing the potential of TPP-based additive manufacturing for photonics applications. The experiment highlighted the advantages of TPP in achieving high-resolution, customized, and functional microstructured optical waveguides. Conclusion: This experiment substantiates the viability of two-photon polymerization as an innovative additive manufacturing technique for producing complex microstructured optical waveguides. The successful fabrication and characterization of these waveguides open doors to further advancements in the field of photonics, enabling the development of high-performance integrated optical devices for various applications.
Keywords: Additive Manufacturing, Microstructured Optical Waveguides, Two-Photon Polymerization, Photonics Applications
Procedia PDF Downloads 104
1332 Determining the Thermal Performance and Comfort Indices of a Naturally Ventilated Room with Reduced Density Reinforced Concrete Wall Construction over Conventional M-25 Grade Concrete
Authors: P. Crosby, Shiva Krishna Pavuluri, S. Rajkumar
Abstract:
Purpose: Occupied built-up space can be broadly classified as air-conditioned and naturally ventilated. Regardless of the building type, the objective of all occupied built-up space is to provide a thermally acceptable environment for human occupancy. Considering this aspect, air-conditioned spaces allow a greater degree of flexibility to control and modulate the comfort parameters during the operation phase. However, in the case of naturally ventilated space, a number of design features favoring indoor thermal comfort should be mandatorily conceptualized starting from the design phase. One such primary design feature that requires to be prioritized is, selection of building envelope material, as it decides the flow of energy from outside environment to occupied spaces. Research Methodology: In India and many countries across globe, the standardized material used for building envelope is re-enforced concrete (i.e. M-25 grade concrete). The comfort inside the RC built environment for warm & humid climate (i.e. mid-day temp of 30-35˚C, diurnal variation of 5-8˚C & RH of 70-90%) is unsatisfying to say the least. This study is mainly focused on reviewing the impact of mix design of conventional M25 grade concrete on inside thermal comfort. In this mix design, air entrainment in the range of 2000 to 2100 kg/m3 is introduced to reduce the density of M-25 grade concrete. Thermal performance parameters & indoor comfort indices are analyzed for the proposed mix and compared in relation to the conventional M-25 grade. There are diverse methodologies which govern indoor comfort calculation. In this study, three varied approaches specifically a) Indian Adaptive Thermal comfort model, b) Tropical Summer Index (TSI) c) Air temperature less than 33˚C & RH less than 70% to calculate comfort is adopted. The data required for the thermal comfort study is acquired by field measurement approach (i.e. for the new mix design) and simulation approach by using design builder (i.e. for the conventional concrete grade). Findings: The analysis points that the Tropical Summer Index has a higher degree of stringency in determining the occupant comfort band whereas also providing a leverage in thermally tolerable band over & above other methodologies in the context of the study. Another important finding is the new mix design ensures a 10% reduction in indoor air temperature (IAT) over the outdoor dry bulb temperature (ODBT) during the day. This translates to a significant temperature difference of 6 ˚C IAT and ODBT.Keywords: Indian adaptive thermal comfort, indoor air temperature, thermal comfort, tropical summer index
Procedia PDF Downloads 325
1331 Study of the Kinetics of Formation of Carboxylic Acids Using Ion Chromatography during Oxidation Induced by Rancimat of the Oleic Acid, Linoleic Acid, Linolenic Acid, and Biodiesel
Authors: Patrícia T. Souza, Marina Ansolin, Eduardo A. C. Batista, Antonio J. A. Meirelles, Matthieu Tubino
Abstract:
Lipid oxidation is a major cause of the deterioration of the quality of the biodiesel, because the waste generated damages the engines. Among the main undesirable effects are the increase of viscosity and acidity, leading to the formation of insoluble gums and sediments which cause the blockage of fuel filters. The auto-oxidation is defined as the spontaneous reaction of atmospheric oxygen with lipids. Unsaturated fatty acids are usually the components affected by such reactions. They are present as free fatty acids, fatty esters and glycerides. To determine the oxidative stability of biodiesels, through the induction period, IP, the Rancimat method is used, which allows continuous monitoring of the induced oxidation process of the samples. During the oxidation of the lipids, volatile organic acids are produced as byproducts, in addition, other byproducts, including alcohols and carbonyl compounds, may be further oxidized to carboxylic acids. By the methodology developed in this work using ion chromatography, IC, analyzing the water contained in the conductimetric vessel, were quantified organic anions of carboxylic acids in samples subjected to oxidation induced by Rancimat. The optimized chromatographic conditions were: eluent water:acetone (80:20 v/v) with 0.5 mM sulfuric acid; flow rate 0.4 mL min-1; injection volume 20 µL; eluent suppressor 20 mM LiCl; analytical curve from 1 to 400 ppm. The samples studied were methyl biodiesel from soybean oil and unsaturated fatty acids standards: oleic, linoleic and linolenic. The induced oxidation kinetics curves were constructed by analyzing the water contained in the conductimetric vessels which were removed, each one, from the Rancimat apparatus at prefixed intervals of time. About 3 g of sample were used under the conditions of 110 °C and air flow rate of 10 L h-1. The water of each conductimetric Rancimat measuring vessel, where the volatile compounds were collected, was filtered through a 0.45 µm filter and analyzed by IC. Through the kinetic data of the formation of the organic anions of carboxylic acids, the formation rates of the same were calculated. The observed order of the rates of formation of the anions was: formate >>> acetate > hexanoate > valerate for the oleic acid; formate > hexanoate > acetate > valerate for the linoleic acid; formate >>> valerate > acetate > propionate > butyrate for the linolenic acid. It is possible to suppose that propionate and butyrate are obtained mainly from linolenic acid and that hexanoate is originated from oleic and linoleic acid. For the methyl biodiesel the order of formation of anions was: formate >>> acetate > valerate > hexanoate > propionate. According to the total rate of formation these anions produced during the induced degradation of the fatty acids can be assigned the order of reactivity: linolenic acid > linoleic acid >>> oleic acid.Keywords: anions of carboxylic acids, biodiesel, ion chromatography, oxidation
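A tiny illustrative sketch (not the authors' processing) of how a formation rate can be estimated from the kinetic curves described above: fit a line to anion concentration versus oxidation time and take the slope. The concentrations below are hypothetical, not measured values from the study.

```python
import numpy as np

# Hypothetical kinetic data: anion concentration in the Rancimat conductimetric-vessel
# water, measured by IC at the pre-fixed induced-oxidation times.
time_h = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])        # oxidation time, h
formate_ppm = np.array([0.0, 35.0, 72.0, 110.0, 148.0, 182.0])  # placeholder values

# Approximate the formation rate as the slope of a linear fit (ppm per hour)
rate, intercept = np.polyfit(time_h, formate_ppm, 1)
print(f"formate formation rate ≈ {rate:.1f} ppm/h")
```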
Procedia PDF Downloads 476
1330 3D Nanostructured Assembly of 2D Transition Metal Chalcogenide/Graphene as High Performance Electrocatalysts
Authors: Sunil P. Lonkar, Vishnu V. Pillai, Saeed Alhassan
Abstract:
Design and development of highly efficient, inexpensive, long-term stable, earth-abundant electrocatalysts holds tremendous promise for the hydrogen evolution reaction (HER) in water electrolysis. 2D transition metal dichalcogenides, especially molybdenum disulfide, have attracted a great deal of interest due to their high electrocatalytic activity. However, due to poor electrical conductivity and limited exposed active sites, the performance of these catalysts is limited. In this context, a facile and scalable synthesis method for fabricating nanostructured electrocatalysts composed of 3D porous graphene aerogels supporting MoS₂ and WS₂ is highly desired. Here we developed a highly active and stable electrocatalyst for the HER by growing it into a 3D porous architecture on conducting graphene. The resulting nanohybrids were thoroughly investigated by means of several characterization techniques to understand their structure and properties. Moreover, the HER performance of these 3D catalysts is expected to improve greatly compared to other well-known catalysts, mainly benefiting from the improved electrical conductivity provided by graphene and the porous structure of the support. This technologically scalable process can afford efficient electrocatalysts for hydrogen evolution reactions (HER) and hydrodesulfurization catalysts for sulfur-rich petroleum fuels. Owing to their lower cost and higher performance, the resulting materials hold high potential for various energy and catalysis applications. In a typical hydrothermal method, a sonicated aqueous GO dispersion (5 mg mL⁻¹) was mixed with ammonium tetrathiomolybdate (ATTM) and tungsten molybdate and treated in a sealed Teflon autoclave at 200 °C for 4 h. After cooling, a black solid macroporous hydrogel was recovered and washed under running de-ionized water to remove any by-products and metal ions. The obtained hydrogels were then freeze-dried for 24 h and further subjected to thermal-annealing-driven crystallization at 600 °C for 2 h to ensure complete thermal reduction of RGO into graphene and the formation of highly crystalline MoS₂ and WS₂ phases. The resulting 3D nanohybrids were characterized to understand their structure and properties. SEM-EDS clearly reveals the formation of a highly porous material with a uniform distribution of MoS₂ and WS₂ phases. In conclusion, a novel strategy for the fabrication of 3D nanostructured MoS₂-WS₂/graphene is presented. The characterizations revealed that the in-situ formed promoters are uniformly dispersed onto few-layered MoS₂-WS₂ nanosheets that are well supported on the graphene surface. The resulting 3D hybrids hold high promise as potential electrocatalysts and hydrodesulfurization catalysts.
Keywords: electrocatalysts, graphene, transition metal chalcogenide, 3D assembly
Procedia PDF Downloads 1401329 Robust Processing of Antenna Array Signals under Local Scattering Environments
Authors: Ju-Hong Lee, Ching-Wei Liao
Abstract:
An adaptive array beamformer is designed to automatically preserve the desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environment changes calls for robust adaptive beamforming techniques. This design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, the knowledge of the desired steering vector can be imprecise, which often occurs due to estimation errors in the DOA of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior results in a reduction of the array output signal-to-interference-plus-noise ratio (SINR). Therefore, it is worth developing robust techniques to deal with the problem caused by local scattering environments. As to the implementation of adaptive beamforming, the required computational complexity is enormous when the array beamformer is equipped with a massive number of antenna sensors. To alleviate this difficulty, a generalized sidelobe canceller (GSC) with partial adaptivity, offering fewer adaptive degrees of freedom and a faster adaptive response, has been proposed in the literature. Unfortunately, it has been shown that conventional GSC-based adaptive beamformers are usually very sensitive to the mismatch problems caused by local scattering situations. In this paper, we present an effective GSC-based beamformer against the mismatch problems mentioned above. The proposed GSC-based array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to carry out the estimation required for obtaining an appropriate steering vector. A matrix associated with the direction vectors of the signal sources is first created. Then projection matrices related to this matrix are generated and utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal blocking matrix required for performing adaptive beamforming can be easily found. By utilizing the proposed GSC-based beamformer, we find that the performance degradation due to the considered local scattering environments can be effectively mitigated. To further enhance the beamforming performance, a signal subspace projection matrix is also introduced into the proposed GSC-based beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms the existing robust techniques.Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch
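Below is a minimal numpy sketch of the standard GSC decomposition into a quiescent weight vector and a signal blocking matrix. The array geometry, presumed DOA, and closed-form adaptive weight are illustrative assumptions, not the authors' proposed estimator, which additionally refines the steering vector with projection matrices.

# Hedged sketch of a standard GSC beamformer; all parameters are illustrative.
import numpy as np

M = 8                      # number of array sensors (assumed)
theta = np.deg2rad(10.0)   # presumed DOA of the desired signal (assumed)
d = 0.5                    # element spacing in wavelengths (assumed)

# Presumed steering vector of a uniform linear array
a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

# Quiescent weight vector (distortionless response toward the presumed DOA)
w_q = a / (a.conj() @ a)

# Signal blocking matrix: orthonormal basis of the subspace orthogonal to a,
# obtained here from the projection onto a's orthogonal complement
P_perp = np.eye(M) - np.outer(a, a.conj()) / (a.conj() @ a)
U, s, _ = np.linalg.svd(P_perp)
B = U[:, :M - 1]           # M x (M-1) blocking matrix, B^H a is approximately 0

def gsc_weights(X):
    """Given array snapshots X (M x N), return the overall GSC weight vector.
    The adaptive part w_a minimizes the output power of w_q^H x - w_a^H (B^H x)."""
    d_out = w_q.conj() @ X             # quiescent branch output (N,)
    Z = B.conj().T @ X                 # blocked, interference-only branch (M-1, N)
    R_b = Z @ Z.conj().T / X.shape[1]  # sample covariance of the blocked branch
    p = Z @ d_out.conj() / X.shape[1]  # cross-correlation with the quiescent output
    w_a = np.linalg.solve(R_b, p)
    return w_q - B @ w_a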
Procedia PDF Downloads 1181328 The Administration of Infectious Diseases During the COVID-19 Pandemic and the Role of Differential Diagnosis with the Biomarker VB10
Authors: Sofia Papadimitriou
Abstract:
INTRODUCTION: The differential diagnosis between acute viral and bacterial infections is an important cost-effectiveness parameter at the treatment stage, in order to achieve the maximum therapeutic benefit at minimum cost and ensure the proper use of antibiotics. The discovery of sensitive and robust molecular diagnostic tests based on the host response to infection has enhanced the accurate diagnosis and differentiation of infections. METHOD: The study used six independent blood sample datasets (total = 756) associated with human protein-protein interactions, each of which, at the transcription stage, expresses a different host network response to viral and bacterial infections. The individual blood samples are subjected to a sequence of computational filters that identify a gene panel corresponding to an autonomous diagnostic score. The dataset and the correspondence of the gene panel to the diagnostic score define a new Bangalore-Viral Bacterial (BL-VB) classifier. FINDINGS: We use a blood-based biomarker of 10 genes (Panel-VB) that has important prognostic value for distinguishing viral from bacterial infections, with a weighted average AUROC of 0.97 (95% CI: 0.96-0.99) in eleven independent sample sets (n=898). We derived a patient-level score (VB10) based on the panel, which has significant diagnostic value, with a weighted average AUROC of 0.94 (95% CI: 0.91-0.98) in 2,996 patient samples from 56 public datasets from 19 different countries. We also studied VB10 in a new South Indian cohort (BL-VB, n=56) and found 97% accuracy in confirmed cases of viral and bacterial infections. We found that VB10 (a) accurately identifies the type of infection even in culture-negative, unspecified cases, (b) reflects the patient's clinical recovery, and (c) applies to all age groups, covering a wide range of acute bacterial and viral infections, including non-specific pathogens. We applied the VB10 score to publicly available COVID-19 data and found that it diagnosed viral infection in patient samples. RESULTS: The results of the study showed the diagnostic power of the VB10 biomarker as a test for the accurate diagnosis of acute infections and the monitoring of recovery. We expect it to help clinicians make decisions about prescribing antibiotics and to be integrated into antibiotic stewardship policies. CONCLUSIONS: Overall, we have developed a new RNA-based biomarker and a new blood test to differentiate between viral and bacterial infections, assisting physicians in designing the optimal treatment regimen, contributing to the proper use of antibiotics, and reducing the burden of antimicrobial resistance (AMR).Keywords: acute infections, antimicrobial resistance, biomarker, blood transcriptome, systems biology, classifier diagnostic score
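A minimal Python sketch of a sample-size-weighted average AUROC across independent datasets, the summary statistic quoted above for Panel-VB and VB10; the per-dataset AUROCs and sample sizes are illustrative placeholders, not the study's data:

# Hedged sketch: sample-size-weighted average AUROC across independent datasets.
# The per-dataset AUROCs and sample sizes below are hypothetical placeholders.
import numpy as np

aurocs = np.array([0.98, 0.96, 0.95, 0.99, 0.97])   # AUROC per dataset (hypothetical)
n_samples = np.array([120, 250, 90, 300, 138])      # samples per dataset (hypothetical)

weighted_auroc = np.average(aurocs, weights=n_samples)
print(f"Weighted average AUROC: {weighted_auroc:.2f}")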
Procedia PDF Downloads 1571327 The Efficacy of Preoperative Thermal Pulsation Treatment in Reducing Post Cataract Surgery Dry Eye Disease: A Systematic Review and Meta-analysis
Authors: Lugean K. Alomari, Rahaf K. Sharif, Basil K. Alomari, Hind M. Aljabri, Faisal F. Aljahdali, Amal A. Alomari, Saeed A. Alghamdi
Abstract:
Background: The thermal pulsation system is a therapy that uses heat and massage to treat dry eye disease, and several trials have been published comparing it with conventional treatment. The aim of this study is to conduct a systematic review and meta-analysis comparing the efficacy of thermal pulsation systems with conventional treatment in patients undergoing cataract surgery. Methods: The Medline, Embase, and Cochrane Central Register of Controlled Trials (CENTRAL) databases were searched for eligible trials. We included three randomized controlled trials (RCTs) that compared the thermal pulsation system with conventional treatment in patients undergoing cataract surgery. A table of study characteristics was compiled, and the quality of the studies was assessed using the Cochrane risk-of-bias tool for randomized trials (RoB 2). Forest plots were generated using the random-effects inverse variance method. The χ2 test and the Higgins I-squared (I2) statistic were used to assess heterogeneity. A total of 201 cataract surgery patients were included, with 105 undergoing preoperative pulsation therapy and 96 receiving conventional treatment. Demographic analysis revealed comparable distributions across groups. Results: All the studies in our analysis are of good quality with a low risk of bias. A total of 201 patients were included in the analysis, of which 105 underwent pulsation therapy and 95 were in the control group. Tear break-up time (TBUT) analysis revealed no significant baseline differences, except that pulsation therapy was better at 1 month (SMD 0.42 [95% CI 0.14-0.70], p=0.004). This positive trend continued at three months (SMD 0.52 [95% CI 0.20-0.84], p=0.002). Corneal fluorescein staining scores and Meibomian gland-yielding secretion scores showed no significant differences at baseline. However, at one month, pulsation therapy significantly improved Meibomian gland function (SMD -0.86 [95% CI -1.20 to -0.53], p<0.00001), indicating a reduced risk of dry eye syndrome. Conclusion: Preoperative pulsation therapy appears to enhance post-cataract surgery outcomes, particularly in terms of tear film stability and Meibomian gland secretory function. The sustained positive effects observed at one and three months post-surgery suggest the potential for long-term benefits.Keywords: lipiflow, cataract, thermal pulsation, dry eye
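A minimal Python sketch of random-effects inverse variance pooling of standardized mean differences with Cochran's Q and the I2 statistic, the general approach described above; the effect sizes and variances are hypothetical, not the included trials' data, and the DerSimonian-Laird estimator is assumed for the between-study variance:

# Hedged sketch: DerSimonian-Laird random-effects pooling of SMDs.
# Effect sizes and variances below are illustrative, not the included trials' data.
import numpy as np

smd = np.array([0.35, 0.50, 0.44])        # per-study standardized mean differences (hypothetical)
var = np.array([0.040, 0.055, 0.048])     # per-study variances of the SMD (hypothetical)

w_fixed = 1.0 / var                        # fixed-effect (inverse-variance) weights
mu_fixed = np.sum(w_fixed * smd) / np.sum(w_fixed)

# Cochran's Q and the I2 heterogeneity statistic
Q = np.sum(w_fixed * (smd - mu_fixed) ** 2)
df = len(smd) - 1
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# Between-study variance tau2 (DerSimonian-Laird) and random-effects weights
C = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / C)
w_re = 1.0 / (var + tau2)

mu_re = np.sum(w_re * smd) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
ci = (mu_re - 1.96 * se_re, mu_re + 1.96 * se_re)
print(f"Pooled SMD = {mu_re:.2f} [95% CI {ci[0]:.2f}, {ci[1]:.2f}], I2 = {I2:.0f}%")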
Procedia PDF Downloads 251326 Creation and Evaluation of an Academic Blog of Tools for the Self-Correction of Written Production in English
Authors: Imelda Katherine Brady, Iria Da Cunha Fanego
Abstract:
Today's university students are considered digital natives, and the use of Information Technologies (ITs) forms a large part of their study and learning. In the context of language studies, applications that help with revisions of grammar or vocabulary are particularly useful, especially if they are open access. There are studies that show the effectiveness of this type of application in the learning of English as a foreign language and that using IT can help learners become more autonomous in foreign language acquisition, given that these applications can enhance awareness of the learning process; this means that learners are less dependent on the teacher for corrective feedback. We also propose that the exploitation of these technologies enhances the work of the language instructor wishing to incorporate IT into his/her practice. In this context, the aim of this paper is to present the creation of a repository of tools that provide support in the writing and correction of texts in English and the assessment of their usefulness by university students enrolled in the English Studies Degree. The project seeks to encourage the development of autonomous learning through the acquisition of skills linked to the self-correction of written work in English. To achieve the above, our methodology follows five phases. First of all, a selection of the main open-access online applications available for the correction of written texts in English is made: AutoCrit, Hemingway, Grammarly, LanguageTool, OutWrite, PaperRater, ProWritingAid, Reverso, Slick Write, Spell Check Plus and Virtual Writing Tutor. Secondly, the functionalities of each of these tools (spelling, grammar, style correction, etc.) are analyzed. Thirdly, explanatory materials (texts and video tutorials) are prepared on each tool. Fourth, these materials are uploaded to a university repository in the form of an institutional blog, which is made available to students and the general public. Finally, a survey was designed to collect students’ feedback. The survey aimed to analyse the usefulness of the blog and the quality of the explanatory materials, as well as the degree of usefulness that students assigned to each of the tools offered. In this paper, we present the results of the analysis of data received from 33 students in the 1st semester of the 21-22 academic year. One result we highlight in our paper is that the students rated this resource very highly and offered very valuable information on the perceived usefulness of the applications provided for their review. Our work, carried out within the framework of a teaching innovation project funded by our university, emphasizes that teachers need to design methodological strategies that help their students improve the quality of their written production in English and, by extension, their linguistic competence.Keywords: academic blog, open access tools, online self-correction, written production in English, university learning
Procedia PDF Downloads 1031325 Brand Positioning in Iran: A Case Study of the Professional Soccer League
Authors: Homeira Asadi Kavan, Seyed Nasrollah Sajjadi, Mehrzade Hamidi, Hossein Rajabi, Mahdi Bigdely
Abstract:
Positioning strategies of a sports brand can create a unique impression in the minds of fans, sponsors, and other stakeholders. In order to influence potential customers' perceptions in an effective and positive way, a brand's positioning strategy must be unique, credible, and relevant. Many sports clubs in Iran have been struggling to implement and achieve brand positioning, due to reasons such as lack of experience, scarcity of experts in sports branding, and lack of related research in this field. This study provides a comprehensive theoretical framework and action plan for sport managers and marketers to design and implement effective brand positioning, enabling them to be distinguishable from competing brands and sports clubs. The study instrument was interviews with sports marketing and brand experts who have been working in this industry for a minimum of 20 years. Qualitative data analysis was performed using Atlas.ti text mining software version 7, and open, axial and selective coding were employed to uncover and systematically analyze important and complex phenomena and elements. The findings show 199 effective elements in positioning strategies in the Iran Professional Soccer League. These elements are categorized into 23 concepts and sub-categories as follows: Structural prerequisites, Strategic management prerequisites, Commercial prerequisites, Major external prerequisites, Brand personality, Club symbols, Emotional aspects, Event aspects, Fans’ strategies, Marketing information strategies, Marketing management strategies, Empowerment strategies, Executive management strategies, League context, Fans’ background, Market context, Club’s organizational context, Support context, Major contexts, Political-Legal elements, Economic factors, Social factors, and Technological factors. Eventually, the study model was developed around six main dimensions: Causal prerequisites, Axial Phenomenon (brand position), Strategies, Context Factors, Interfering Factors, and Consequences. Based on the findings, practical recommendations and strategies are suggested that can help club managers and marketers in developing and improving their respective clubs' brand positioning and activities.Keywords: brand positioning, soccer club, sport marketing, Iran professional soccer league, brand strategy
Procedia PDF Downloads 1381324 Educational Debriefing in Prehospital Medicine: A Qualitative Study Exploring Educational Debrief Facilitation and the Effects of Debriefing
Authors: Maria Ahmad, Michael Page, Danë Goodsman
Abstract:
‘Educational’ debriefing – a construct distinct from clinical debriefing – is used following simulated scenarios and is central to learning and development in fields ranging from aviation to emergency medicine. However, little research into educational debriefing in prehospital medicine exists. This qualitative study explored the facilitation and effects of prehospital educational debriefing and identified obstacles to debriefing, using London’s Air Ambulance Pre-Hospital Care Course (PHCC) as a model. Method: Ethnographic observations of moulages and debriefs were conducted over two consecutive days of the PHCC in October 2019. Detailed contemporaneous field notes were made and analysed thematically. Subsequently, seven one-to-one, semi-structured interviews were conducted with four PHCC debrief facilitators and three course participants to explore their experiences of prehospital educational debriefing. Interview data were manually transcribed and analysed thematically. Results: Four overarching themes were identified: the approach to the facilitation of debriefs, effects of debriefing, facilitator development, and obstacles to debriefing. The unpredictable debriefing environment was seen as both hindering and paradoxically benefitting educational debriefing. Despite using varied debriefing structures, facilitators emphasised similar key debriefing components, including exploring participants’ reasoning and sharing experiences to improve learning and prevent future errors. Debriefing was associated with three principal effects: releasing emotion; learning and improving, particularly participants’ compound learning as they progressed through scenarios; and the application of learning to clinical practice. Facilitator training and feedback were central to facilitator learning and development. Several obstacles to debriefing were identified, including mismatch of participant and facilitator agendas, performance pressure, and time. Interestingly, when used appropriately in the educational environment, these obstacles may paradoxically enhance learning. Conclusions: Educational debriefing in prehospital medicine is complex. It requires the establishment of a safe learning environment, an understanding of participant agendas, and facilitator experience to maximise participant learning. Aspects unique to prehospital educational debriefing were identified, notably the unpredictable debriefing environment, interdisciplinary working, and the paradoxical benefit of educational obstacles for learning. This research also highlights aspects of educational debriefing not extensively detailed in the literature, such as compound participant learning, display of ‘professional honesty’ by facilitators, and facilitator learning, which require further exploration. Future research should also explore educational debriefing in other prehospital services.Keywords: debriefing, prehospital medicine, prehospital medical education, pre-hospital care course
Procedia PDF Downloads 2191323 Building User Behavioral Models by Processing Web Logs and Clustering Mechanisms
Authors: Madhuka G. P. D. Udantha, Gihan V. Dias, Surangika Ranathunga
Abstract:
Today, websites contain very interesting applications, but there are only a few methodologies for analyzing user navigation through a website and determining whether the website is put to correct use. Web logs are typically only examined when a major attack or malfunction occurs, yet they contain a lot of interesting information about users' dealings with the system. Analyzing web logs has become a challenge due to the huge log volume. Finding interesting patterns is not easy due to the size and distribution of the logs and the importance of minor details in each log. Web logs contain very important data about users and the site which have not been put to good use. Retrieving interesting information from logs gives an idea of what users need, allows grouping users according to their various needs, and helps improve the site to make it effective and efficient. The model we built is able to detect attacks or malfunctioning of the system and perform anomaly detection. Logs become more complex as the volume of traffic and the size and complexity of the website grow. Unsupervised techniques are used in this solution, which is fully automated; expert knowledge is only used in validation. In our approach, the logs are first cleaned and purified to bring them to a common platform with a standard format and structure. After the cleaning module, the web session builder is executed. It outputs two files, a Web Sessions file and an Indexed URLs file. The Indexed URLs file contains the list of URLs accessed and their indices, and the Web Sessions file lists the indices of each web session. Then the DBSCAN and EM algorithms are used iteratively and recursively to get the best clustering results for the web sessions. Using homogeneity, completeness, V-measure, intra- and inter-cluster distance, and the silhouette coefficient as evaluation metrics, the algorithms self-evaluate in order to feed better parameter values into subsequent runs. If a cluster is found to be too large, micro-clustering is used. Using the Cluster Signature Module, the clusters are annotated with a unique signature called a fingerprint. In this module, each cluster is fed to the Association Rule Learning Module; if it outputs confidence and support values of 1 for an access sequence, that sequence is a potential signature for the cluster. The occurrences of the access sequence are then checked in the other clusters; if it is found to be unique to the cluster considered, the cluster is annotated with that signature. These signatures are used for anomaly detection, preventing cyber attacks, real-time dashboards that visualize users accessing web pages, predicting user actions, and various other applications in finance, university websites, news and media websites, etc.Keywords: anomaly detection, clustering, pattern recognition, web sessions
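A minimal Python sketch of the clustering step: DBSCAN over web sessions with silhouette-based self-evaluation of the eps parameter. The binary URL-visit encoding of sessions and all parameter values are assumptions for illustration, not the paper's actual pipeline:

# Hedged sketch: clustering web sessions with DBSCAN and evaluating candidate
# parameter values with the silhouette coefficient. Session encoding and data
# are hypothetical.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score

# Each row is a session; each column an indexed URL (1 = visited) - hypothetical data
sessions = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [0, 1, 1, 1, 0],
])

best_eps, best_score = None, -1.0
for eps in (0.8, 1.0, 1.2, 1.5):                 # simple parameter sweep (self-evaluation)
    labels = DBSCAN(eps=eps, min_samples=2).fit_predict(sessions)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    if n_clusters < 2:
        continue                                  # silhouette needs at least two clusters
    score = silhouette_score(sessions, labels)
    if score > best_score:
        best_eps, best_score = eps, score

print(f"Best eps = {best_eps}, silhouette = {best_score:.2f}")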
Procedia PDF Downloads 2891322 A Brazilian Study Applied to the Regulatory Environmental Issues of Nanomaterials
Authors: Luciana S. Almeida
Abstract:
Nanotechnology has revolutionized the world of science and technology, bringing great expectations due to its great potential for application in the most varied industrial sectors. However, the same characteristics that make nanoparticles interesting from the point of view of technological application may be undesirable when they are released into the environment. The small size of nanoparticles facilitates their diffusion and transport in the atmosphere, water, and soil and facilitates the entry and accumulation of nanoparticles in living cells. The main objective of this study is to evaluate the environmental regulatory process for nanomaterials in the Brazilian scenario. Three specific objectives were outlined. The first is to carry out a global scientometric study, on a research platform, with the purpose of identifying the main lines of study of nanomaterials in the environmental area. The second is to verify how environmental agencies in other countries have been working on this issue by means of a bibliographic review. And the third is to carry out an assessment of the Brazilian Nanotechnology Draft Law 6741/2013 with the state environmental agencies, with the aim of identifying the agencies' knowledge of the subject and the resources available in the country for the implementation of the Policy. A questionnaire will be used as a tool for this evaluation to identify the operational elements and build indicators through the Environment of Evaluation Application, a computational application developed for creating questionnaires. At the end, the need to propose changes to the Draft Law of the National Nanotechnology Policy will be assessed. Initial studies, in relation to the first specific objective, have already identified that Brazil stands out in the production of scientific publications in the area of nanotechnology, although only a minority are studies focused on environmental impact. Regarding the general panorama of other countries, some findings have also emerged. The United States has included the nanoform of substances in an existing EPA (Environmental Protection Agency) program, the TSCA (Toxic Substances Control Act). The European Union issued a draft document amending Regulation 1907/2006 of the European Parliament and Council to cover the nanoform of substances. Both programs are based on the study and identification of environmental risks associated with nanomaterials, taking into consideration the product life cycle. In relation to Brazil, regarding the third specific objective, it is notable that the country does not have any regulations applicable to nanostructures, although there is a Draft Law in progress. In this document, it is possible to identify some requirements related to the environment, such as environmental inspection and licensing; industrial waste management; notification of accidents; and application of sanctions. However, it is not known whether these requirements are sufficient for the prevention of environmental impacts and whether national environmental agencies will know how to apply them correctly. This study intends to serve as a basis for future actions regarding environmental management applied to the use of nanotechnology in Brazil.Keywords: environment, management, nanotechnology, politics
Procedia PDF Downloads 1241321 Calcium Release- Activated Calcium Channels as a Target in Treatment of Allergic Asthma
Authors: Martina Šutovská, Marta Jošková, Ivana Kazimierová, Lenka Pappová, Maroš Adamkov, Soňa Fraňová
Abstract:
Bronchial asthma is characterized by increased bronchoconstrictor responses to provoking agonists, airway inflammation, and remodeling. All these processes involve Ca2+ influx through Ca2+-release-activated Ca2+ channels (CRAC), which are widely expressed in immune cells, the respiratory epithelium, and airway smooth muscle (ASM) cells. Our previous study pointed to the possible therapeutic potency of CRAC blockers using an experimental guinea pig asthma model. The present work analyzed the complex anti-asthmatic effect of a long-term administered CRAC blocker, including its impact on allergic inflammation, airway hyperreactivity, remodeling, and mucociliary clearance. Ovalbumin-induced allergic inflammation of the airways, according to Franova et al., was followed by 14 days of administration of the CRAC blocker (3-fluoropyridine-4-carboxylic acid, FPCA) at a dose of 1.5 mg/kg bw. For comparative purposes, salbutamol, budesonide, and saline were applied to control groups. The anti-inflammatory effect of FPCA was estimated from serum and bronchoalveolar lavage fluid (BALF) changes in IL-4, IL-5, IL-13 and TNF-α analyzed by Bio-Plex® assay, as well as immunohistochemical staining focused on the assessment of tryptase and c-Fos positivity in pulmonary samples. Airway hyperreactivity was evaluated in vivo according to Pennock et al. and in vitro by organ tissue bath methods. The anti-remodeling effect of FPCA was evaluated by immunohistochemical changes in the ASM actin and collagen III layers as well as mucin secretion. The impact on mucociliary clearance was determined by measuring ciliary beat frequency (CBF) in vitro using LabVIEW™ software. Long-term administration of FPCA to sensitized animals resulted in: i. a significant decrease in cytokine levels and in tryptase and c-Fos positivity, similar to the effect of budesonide; ii. a meaningful decrease in basal and bronchoconstrictor-induced airway hyperreactivity in vivo and in vitro, comparable to salbutamol; iii. significant inhibition of airway remodeling parameters; iv. insignificant changes in CBF. All these findings confirmed the complex anti-asthmatic effect of the CRAC channel blocker and established these structures as a rational target in the treatment of allergic bronchial asthma.Keywords: allergic asthma, CRAC channels, cytokines, respiratory epithelium
Procedia PDF Downloads 5231320 Pareto Optimal Material Allocation Mechanism
Authors: Peter Egri, Tamas Kis
Abstract:
Scheduling problems have been studied in algorithmic mechanism design research from the beginning. This paper focuses on a practically important but theoretically rather neglected field: the project scheduling problem where jobs connected by precedence constraints compete for various nonrenewable resources, such as materials. Although the centralized problem can be solved in polynomial time by applying the algorithm of Carlier and Rinnooy Kan from the Eighties, obtaining materials in a decentralized environment is usually far from optimal. It can be observed in practical production scheduling situations that project managers tend to cache the required materials as soon as possible in order to avoid later delays due to material shortages. This greedy practice usually leads both to excess stocks for some projects and materials and, simultaneously, to shortages for others. The aim of this study is to develop a model for the material allocation problem of a production plant, where a central decision maker—the inventory—should assign the resources arriving at different points in time to the jobs. Since the actual due dates are not known by the inventory, the mechanism design approach is applied, with the projects as the self-interested agents. The goal of the mechanism is to elicit the required information and allocate the available materials such that the maximal tardiness among the projects is minimized. It is assumed that, except for the due dates, the inventory is familiar with all other parameters of the problem. A further requirement is that, due to practical considerations, monetary transfer is not allowed. Therefore, a mechanism without money is sought, which excludes some widely applied solutions such as the Vickrey–Clarke–Groves scheme. In this work, a type of Serial Dictatorship Mechanism (SDM) is presented for the studied problem, including a polynomial-time algorithm for computing the material allocation. The resulting mechanism is both truthful and Pareto optimal. Thus the randomization over the possible priority orderings of the projects results in a universally truthful and Pareto optimal randomized mechanism. However, it is shown that, in contrast to problems like the many-to-many matching market, not every Pareto optimal solution can be generated with an SDM. In addition, no performance guarantee can be given compared to the optimal solution; therefore, this approximation characteristic is investigated with an experimental study. All in all, the current work studies a practically relevant scheduling problem and presents a novel truthful material allocation mechanism which eliminates the potential benefit of the greedy behavior that negatively influences the outcome. The resulting allocation is also shown to be Pareto optimal, which is the most widely used criterion describing a necessary condition for a reasonable solution.Keywords: material allocation, mechanism without money, polynomial-time mechanism, project scheduling
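A minimal Python sketch of the serial dictatorship idea: projects are served in a randomized priority order, and each in turn claims the earliest-arriving units of material it needs. This is a simplified illustration only, not the authors' polynomial-time algorithm, and the arrival times and demands are assumed:

# Hedged sketch of a serial dictatorship material allocation. The data and the
# greedy "earliest available units" rule are illustrative assumptions.
import random

# Material arrival times (one unit per entry) and per-project demand (assumed data)
arrivals = [1, 2, 4, 5, 8, 9]          # time points at which one unit arrives
demand = {"P1": 2, "P2": 3, "P3": 1}   # units required by each project

def serial_dictatorship(arrivals, demand, priority):
    """Allocate arriving units to projects following the given priority order."""
    remaining = sorted(arrivals)
    allocation = {}
    for project in priority:
        allocation[project] = remaining[:demand[project]]   # earliest available units
        remaining = remaining[demand[project]:]
    return allocation

priority = list(demand)
random.shuffle(priority)               # randomize the dictatorship order
alloc = serial_dictatorship(arrivals, demand, priority)
for project in priority:
    # A project can start its material-constrained work once its last unit has arrived
    print(project, "receives units arriving at", alloc[project], "ready at", max(alloc[project]))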
Procedia PDF Downloads 3341319 Synergy Surface Modification for High Performance Li-Rich Cathode
Authors: Aipeng Zhu, Yun Zhang
Abstract:
The growing and grievous environmental problems, together with the exhaustion of energy resources, create urgent demand for the development of high-energy-density batteries. Considering factors including capacity, resources, and environment, manganese-based lithium-rich layer-structured cathode materials xLi₂MnO₃⋅(1-x)LiMO₂ (M = Ni, Co, Mn, and other metals) are drawing increasing attention due to their high reversible capacities, high discharge potentials, and low cost. They are expected to be one of the most promising types of cathode materials for the next generation of Li-ion batteries (LIBs) with higher energy densities. Unfortunately, their commercial application is hindered by crucial drawbacks such as poor rate performance, limited cycle life, and continuous fading of the discharge potential. Despite decades of extensive studies and significant achievements in improving their cyclability and rate performance, they still cannot meet the requirements of commercial utilization. One major problem for lithium-rich layer-structured cathode materials (LLOs) is the side reaction during cycling, which leads to severe surface degradation. In this process, metal ions can dissolve in the electrolyte, and the surface phase change can hinder the intercalation/deintercalation of Li ions, resulting in low capacity retention and low working voltage. To optimize the LLO cathode material, surface coating is an efficient method. Considering price and stability, Al₂O₃ was used as the coating material in this research. Meanwhile, due to the low initial Coulombic efficiency (ICE), the pristine LLO was pretreated with KMnO₄ to increase the ICE. The precursor was prepared by a facile coprecipitation method. The as-prepared precursor was then thoroughly mixed with Li₂CO₃ and calcined in air at 500 °C for 5 h and 900 °C for 12 h to produce Li₁.₂[Ni₀.₂Mn₀.₆]O₂ (LNMO). The LNMO was then put into a 0.1 mL/g KMnO₄ solution and stirred for 3 h. The resultant material was filtered, washed with water, and dried in an oven. The obtained LLO was dispersed in Al(NO₃)₃ solution, and the mixture was lyophilized to ensure that the Al(NO₃)₃ was uniformly coated on the LLO. After lyophilization, the LLO was calcined at 500 °C for 3 h to obtain LNMO@LMO@ALO. The working electrodes were prepared by casting the mixture of active material, acetylene black, and binder (polyvinylidene fluoride) dissolved in N-methyl-2-pyrrolidone, with a mass ratio of 80:15:5, onto aluminum foil. The electrochemical performance tests showed that the multiply surface-modified material had a higher initial Coulombic efficiency (84%) and better capacity retention (91% after 100 cycles) compared with pristine LNMO (76% and 80%, respectively). These results suggest that the KMnO₄ pretreatment and Al₂O₃ coating can increase the ICE and cycling stability.Keywords: Li-rich materials, surface coating, lithium ion batteries, Al₂O₃
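A minimal Python sketch of how the initial Coulombic efficiency and capacity retention figures quoted above are computed from cycling data; the capacity values are hypothetical placeholders, not the measured data for LNMO@LMO@ALO:

# Hedged sketch: computing initial Coulombic efficiency (ICE) and capacity
# retention from cycling data. Capacities below are illustrative placeholders.
first_charge_mAh_g = 310.0         # first-cycle charge capacity (hypothetical)
first_discharge_mAh_g = 260.0      # first-cycle discharge capacity (hypothetical)
discharge_cycle_100_mAh_g = 237.0  # discharge capacity after 100 cycles (hypothetical)

ice = first_discharge_mAh_g / first_charge_mAh_g * 100
retention = discharge_cycle_100_mAh_g / first_discharge_mAh_g * 100

print(f"ICE = {ice:.0f}%")                               # ~84% with these placeholder numbers
print(f"Retention after 100 cycles = {retention:.0f}%")  # ~91% with these placeholder numbers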
Procedia PDF Downloads 135