Search results for: attention measurement
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6752

1292 Thus Spoke the Mouth: Problematizing Dalit Voice in Selected Poems

Authors: Barnali Saha

Abstract:

Dalit writing is the interventionist voice of the dispossessed subaltern in the cultural economy of society. As such, Dalit writing, including Dalit poetry, considers the contradictions that permeate the socio-cultural structure historically allocated and religiously sanctioned in the Indian subcontinent. As the epicenter of all Dalit experiences of trauma and violence, the poetics of the Dalit body is deeply rooted in the peripheral space socially assigned to it by anachronistic caste-based legislation. An appraisal of Dalit creative-critical work by writers like Sharan Kumar Limbale, Arjun Dangle, Namdeo Dhasal, Om Prakash Valmiki, Muktibodh and others underscores the conjunction of the physical, the psychical and the psychological in their interpretation of Dalit consciousness. They put forward the idea that Dalit poetry is begotten by the trauma of societal oppression and that, therefore, Dalit language and its revitalization are two elements obdurately linked to Dalit poetics. The present research paper seeks to read the problematization of Dalit agency through the conduit of the Dalit voice, wherein the anatomical category of the mouth is closely related to the question of Dalit identity. Theoretically aligned with Heidegger’s notion of language as the house of being, Bachelard’s assertion of the house as an ideal metaphor of poetic imagination, and Dylan Trigg’s view of the coeval existence of space and body, the paper examines a series of selected poems by Dalit poetic voices to show how their distinct Dalit point of view underscores Dalit speech and directs our attention to its historical abstraction. The paper further examines how speech as a category in Dalit writing places the Dalit somatic entity as a site of contestation, with the ‘Mouth’ as a loaded symbolic category inspiring rebellion and resistance. 
And as the quintessential purpose of Dalit literature is the unleashing of Dalit voice from the anti-verbal domain of social decrepitude, Dalit poetry needs to be critically read based on the experience of the mouth and the patois.

Keywords: Dalit, poetry, speech, mouth, subaltern, minority, exploitation, space

Procedia PDF Downloads 195
1291 Real-Time Working Environment Risk Analysis with Smart Textiles

Authors: Jose A. Diaz-Olivares, Nafise Mahdavian, Farhad Abtahi, Kaj Lindecrantz, Abdelakram Hafid, Fernando Seoane

Abstract:

Despite new recommendations and guidelines for the evaluation and prevention of occupational risks, work-related musculoskeletal disorders are still one of the biggest causes of work activity disruption, productivity loss, sick leave and chronic work disability. They affect millions of workers throughout Europe, with a large-scale economic and social burden. Efforts to date have failed to produce significant results, probably due to the limited availability and high costs of occupational risk assessment at work, especially when the methods are complex, consume excessive resources or depend on self-evaluations and observations of poor accuracy. To overcome these limitations, a pervasive system of real-time risk assessment tools has been developed. It offers a systematic approach with good precision, usability and resource efficiency, which is essential to facilitate the prevention of musculoskeletal disorders in the long term. The system allows different wearable sensors, placed on different limbs, to be combined for data collection and evaluation by a software solution, according to the needs and requirements of each individual working environment. This is done in a non-disruptive manner for both the occupational health expert and the workers. This solution allows us to support a range of research activities that require, as an essential starting point, the recording of ergonomically valuable data of very diverse origin, especially in real work environments. The software platform is presented here with a complementary smart clothing system for data acquisition, comprised of a T-shirt containing inertial measurement units (IMU), a vest sensorized with textile electronics, a wireless electrocardiogram (ECG) and thoracic electrical bio-impedance (TEB) recorder, and a glove sensorized with variable resistors dependent on the angular position of the wrist. 
The collected data is processed in real-time through a mobile application software solution, implemented in commercially available Android-based smartphones and tablet platforms. Based on the collection of this information and its analysis, real-time risk assessment and feedback about postural improvement is possible, adapted to different contexts. The result is a tool which provides added value to ergonomists and occupational health agents, as in situ analysis of postural behavior can assist in a quantitative manner in the evaluation of work techniques and the occupational environment.
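As a rough illustration of the glove channel described above, the sketch below converts a hypothetical flex-resistor reading into a wrist angle by linear calibration and flags postures past a threshold. The calibration resistances and the 45-degree risk threshold are invented for illustration; they are not values from the study.

```python
# Hypothetical sketch: converting a wrist-glove variable-resistor reading to a
# joint angle and flagging risky postures in real time. Calibration constants
# and the risk threshold are illustrative assumptions.

def resistance_to_angle(r_ohms, r_flat=10_000.0, r_bent=25_000.0, max_angle=90.0):
    """Linearly interpolate wrist angle (degrees) between two calibration
    points: r_flat at 0 degrees and r_bent at max_angle degrees."""
    fraction = (r_ohms - r_flat) / (r_bent - r_flat)
    return max(0.0, min(max_angle, fraction * max_angle))

def posture_risk(angle_deg, threshold_deg=45.0):
    """Binary risk flag: postures past the threshold count as 'at risk'."""
    return angle_deg > threshold_deg

# A short stream of samples, as the mobile application might receive them.
readings = [10_000, 14_000, 21_000, 24_500]
angles = [round(resistance_to_angle(r), 1) for r in readings]
flags = [posture_risk(a) for a in angles]
print(angles)  # [0.0, 24.0, 66.0, 87.0]
print(flags)   # [False, False, True, True]
```

In a real deployment, the risk rule would follow an ergonomic standard (e.g., angle-and-duration criteria) rather than a single fixed threshold.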

Keywords: ergonomics, mobile technologies, risk assessment, smart textiles

Procedia PDF Downloads 118
1290 Data Refinement Enhances the Accuracy of Short-Term Traffic Latency Prediction

Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong

Abstract:

Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost that yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance score in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. 
It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than of the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than a one-step prediction for the whole segment, especially when the latency prediction of the downstream sub-segments is trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.
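Two of the conclusions above can be illustrated with a toy sketch: the median resists the incident-driven outliers that distort the average, and the observation from three 5-minute steps earlier serves as the 15-minute persistence baseline. All numbers below are invented for illustration; they are not from the Taiwan Freeway data.

```python
# Toy illustration: (1) median vs. average latency under an outlier, and
# (2) a "value 15 minutes ago" persistence baseline for short-term prediction.
from statistics import mean, median

# 5-minute latency samples (minutes) for one segment; one incident outlier.
latencies = [12.1, 11.8, 12.3, 12.0, 55.0, 12.2]
print(mean(latencies))    # ~19.2 -> dragged up by the single outlier
print(median(latencies))  # ~12.15 -> close to typical conditions

def persistence_baseline(series, lag_steps=3):
    """Predict each value as the observation lag_steps earlier
    (3 steps of 5 minutes = the 15-minute-ago baseline)."""
    return [series[i - lag_steps] for i in range(lag_steps, len(series))]

obs = [12.0, 12.4, 12.1, 12.6, 13.0, 12.8]
preds = persistence_baseline(obs)  # predicts obs[3:] from obs[:3]
mse = mean((p - o) ** 2 for p, o in zip(preds, obs[3:]))
print(round(mse, 3))
```

A learned model such as XGBoost or an LSTM is then judged by how much it improves on this baseline's error.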

Keywords: data refinement, machine learning, mutual information, short-term latency prediction

Procedia PDF Downloads 169
1289 Applying the Global Trigger Tool in German Hospitals: A Retrospective Study in Surgery and Neurosurgery

Authors: Mareen Brosterhaus, Antje Hammer, Steffen Kalina, Stefan Grau, Anjali A. Roeth, Hany Ashmawy, Thomas Gross, Marcel Binnebosel, Wolfram T. Knoefel, Tanja Manser

Abstract:

Background: The identification of critical incidents in hospitals is an essential component of improving patient safety. To date, various methods have been used to measure and characterize such critical incidents. These methods are often viewed by physicians and nurses as external quality assurance, which creates obstacles to the reporting of events and the implementation of recommendations in practice. One way to overcome this problem is to use tools that directly involve staff in measuring indicators of the quality and safety of care in the department. One such instrument is the global trigger tool (GTT), which helps physicians and nurses identify adverse events by systematically reviewing randomly selected patient records. So-called ‘triggers’ (warning signals) point to possible adverse events. While the tool is already used internationally, its implementation in German hospitals has been very limited. Objectives: This study aimed to assess the feasibility and potential of the global trigger tool for identifying adverse events in German hospitals. Methods: A total of 120 patient records were randomly selected from two surgical departments and one neurosurgery department of three university hospitals in Germany, over a period of two months per department, between January and July 2017. The records were reviewed using an adaptation of the German version of the Institute for Healthcare Improvement Global Trigger Tool to identify triggers and adverse event rates per 1,000 patient-days and per 100 admissions. The severity of adverse events was classified using the National Coordinating Council for Medication Error Reporting and Prevention index. Results: A total of 53 adverse events were detected in the three departments. This corresponded to adverse event rates of 25.5 to 72.1 per 1,000 patient-days and 25.0 to 60.0 per 100 admissions across the three departments. 
98.1% of identified adverse events were associated with non-permanent harm, either without (Category E, 71.7%) or with (Category F, 26.4%) the need for prolonged hospitalization. One adverse event (1.9%) was associated with potentially permanent harm to the patient. We also identified practical challenges in the implementation of the tool, such as the need to adapt the global trigger tool to the respective department. Conclusions: The global trigger tool is a feasible and effective instrument for quality measurement when adapted to departmental specifics. Based on our experience, we recommend continuous use of the tool, thereby directly involving clinicians in quality improvement.
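The two rate denominators used above can be shown in a minimal sketch; the counts below are hypothetical, not the study's data.

```python
# Minimal sketch of the two adverse-event rates reported with the Global
# Trigger Tool method: events per 1,000 patient-days and events per 100
# admissions. The example counts are invented.

def rate_per_1000_patient_days(events, patient_days):
    return 1000.0 * events / patient_days

def rate_per_100_admissions(events, admissions):
    return 100.0 * events / admissions

events, patient_days, admissions = 18, 500, 40
print(rate_per_1000_patient_days(events, patient_days))  # 36.0
print(rate_per_100_admissions(events, admissions))       # 45.0
```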

Keywords: adverse events, global trigger tool, patient safety, record review

Procedia PDF Downloads 249
1288 The Mediating Role of Social Connectivity in the Effect of Positive Personality and Alexithymia on Life Satisfaction: Analysis Based on Structural Equation Model

Authors: Yulin Zhang, Kaixi Dong, Guozhen Zhao

Abstract:

Background: Different levels of life satisfaction are associated with individual differences, and understanding the mechanism between them will help to enhance individual well-being. On the one hand, traditional personality traits such as extraversion have, to the authors’ best knowledge, been considered the most stable and effective predictors of life satisfaction. On the other hand, individual emotional differences such as alexithymia (difficulty identifying and describing one’s own feelings) are also closely related to life satisfaction. With the development of positive psychology, positive personality traits such as virtues have attracted wide attention, and according to the broaden-and-build theory, social connectivity may mediate between emotion and life satisfaction. Therefore, the current study aims to explore the mediating role of social connectivity in the effect of positive personality and alexithymia on life satisfaction. Method: The study was conducted with 318 healthy Chinese college students aged 18 to 30. Positive personality (including interpersonal, vitality, and cautiousness strengths) was measured by the Chinese version of the Values in Action Inventory of Strengths (VIA-IS), alexithymia by the Toronto Alexithymia Scale (TAS), life satisfaction by the Satisfaction With Life Scale (SWLS), and social connectivity by six items used in previous studies. Each scale showed high reliability and validity. The mediating model was examined in Mplus 7.2 within a structural equation modeling (SEM) framework. Findings: The model fitted well, and the results revealed that both positive personality (95% confidence interval of the indirect effect: [0.023, 0.097]) and alexithymia (95% confidence interval of the indirect effect: [-0.270, -0.089]) significantly predicted life satisfaction through social connectivity. 
In addition, only positive personality directly predicted life satisfaction (95% confidence interval of the direct effect: [0.109, 0.260]); the direct effect of alexithymia was not significant. Conclusion: Alexithymia predicts life satisfaction only through social connectivity, which emphasizes the importance of social bonding in enhancing the well-being of Chinese college students with alexithymia. Positive personality predicts life satisfaction both directly and through social connectivity, which provides implications for enhancing the well-being of Chinese college students by cultivating their virtues and positive psychological qualities.
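A deliberately simplified sketch of the bootstrap logic behind such indirect-effect confidence intervals follows. The study itself used a full SEM in Mplus; here the two paths are plain least-squares slopes on simulated data, and the outcome regression is not adjusted for the predictor, so this is an illustration of the percentile-bootstrap idea rather than a reproduction of the analysis.

```python
# Simplified percentile-bootstrap CI for an indirect (mediated) effect a*b:
# a = slope of mediator on predictor, b = slope of outcome on mediator.
# A full SEM would estimate b adjusting for the predictor and use latent
# variables; the data here are simulated, not the study's.
import random

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def bootstrap_indirect_ci(x, m, y, n_boot=1000, alpha=0.05, seed=1):
    rng = random.Random(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample with replacement
        xb = [x[i] for i in idx]
        mb = [m[i] for i in idx]
        yb = [y[i] for i in idx]
        estimates.append(slope(xb, mb) * slope(mb, yb))  # a * b
    estimates.sort()
    lo = estimates[int(n_boot * alpha / 2)]
    hi = estimates[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Simulate a positive-personality -> social-connectivity -> life-satisfaction chain.
sim = random.Random(0)
x = [sim.gauss(0, 1) for _ in range(300)]
m = [0.5 * xi + sim.gauss(0, 1) for xi in x]
y = [0.6 * mi + sim.gauss(0, 1) for mi in m]
lo, hi = bootstrap_indirect_ci(x, m, y)
print(lo > 0)  # expected to exclude zero given the strong simulated effect
```

An interval that excludes zero, as in the study's [0.023, 0.097] for positive personality, is read as a significant indirect effect.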

Keywords: alexithymia, life satisfaction, positive personality, social connectivity

Procedia PDF Downloads 167
1287 The Positive Effects of Social Distancing on Individual Work Outcomes in the Context of COVID-19

Authors: Fan Wei, Tang Yipeng

Abstract:

The outbreak of COVID-19 in early 2020 raged around the world and severely affected people's work and lives. In the post-pandemic era, although the pandemic has been effectively controlled, people still need to maintain social distancing to prevent the further spread of the virus. Social distancing in the context of the pandemic has therefore aroused widespread scholarly attention. At present, most studies of social distancing examine its negative impact on the physical and mental state of special groups at the between-individual level, and they focus mainly on the forced, complete social distancing of the most severe period of the pandemic. Few studies have examined the impact of social distancing on working groups in the post-pandemic era at the within-individual level. To explore this problem, this paper constructs a cross-level moderated model based on conservation of resources theory from the perspective of psychological resources. A total of 81 participants were recruited to fill in three questionnaires each day for 10 working days, and 661 valid questionnaires were obtained. The empirical tests yielded the following conclusions: (1) At the within-individual level, daily social distancing is positively correlated with the next day's recovery, and an individual's low sociability moderates the relationship between social distancing and recovery. The indirect effect of daily social distancing through recovery is positively related to employees' work engagement and work-goal progress only when the individual has low sociability; for individuals with high sociability, none of these paths is significant. 
(2) At the within-individual level, there is a significant relationship between an individual's recovery and both work engagement and work-goal progress, indicating that the recovery of resources can produce positive work outcomes. Accordingly, this study suggests that in the post-pandemic era, social distancing can not only effectively prevent and control the pandemic but also have positive effects: employees can invest the time and energy freed from social activities into things that provide resources and help them recover.

Keywords: social distancing, recovery, work engagement, work goal progress, sociability

Procedia PDF Downloads 133
1286 Drawbacks of Second Generation Urban Re-Development in Addis Ababa

Authors: Ezana Haddis Weldeghebrael

Abstract:

The Addis Ababa City Administration is engaged in a massive facelift of the inner city. The paper, therefore, aims to analyze the challenges of the current urban regeneration effort, paying special attention to the Lideta and Basha Wolde Chilot projects. To this end, the paper has adopted a documentary research strategy to collect the data, and an institutionalist perspective as well as the concept of urban regeneration to analyze it. The sources were selected based on relevance and recency. Academic research outputs were used primarily; where scholarly publications were not available, institutional reports, newspaper articles, and expert presentations were used. The major findings of the research revealed that although the second generation of urban redevelopment projects has attempted to involve affected groups and succeeded in designing better neighborhoods, it is riddled with three major drawbacks. The first is institutional constraints: the absence of an urban redevelopment strategy as well as a housing policy, a broad definition of ‘public purpose’, little regard for informal businesses, limitations on rights groups, negotiation power not devolved to the sub-city level, and no plan for groups that cannot afford the down payment for low-cost apartments. The second is planning limitations: the absence of genuine affected-group participation, with public engagement only at a consultative level. The third is implementation failures: no regard for maintaining social bonds, non-participatory and ill-informed resettlement, interference from senior government officials, failure to protect the poor from speculators, corruption, and disregard for heritage buildings. 
Based on the findings, the paper concludes that the current inner-city redevelopment has failed to be socially sustainable, and it calls for the enactment of a housing policy as well as a redevelopment strategy, affected-group participation, on-site resettlement, empowering the sub-cities to manage the projects, and allowing housing rights groups to advocate for poor slum dwellers.

Keywords: participation, redevelopment, planning, implementation, consultation

Procedia PDF Downloads 427
1285 In Silico Study of Cell Surface Structures of Parabacteroides distasonis Involved in Its Maintenance Within the Gut Microbiota and Its Potential Pathogenicity

Authors: Jordan Chamarande, Lisiane Cunat, Corentine Alauzet, Catherine Cailliez-Grimal

Abstract:

Gut microbiota (GM) is now considered a new organ, mainly due to the microorganisms’ specific biochemical interactions with their host. Although the mechanisms underlying host-microbiota interactions are not fully described, it is now well established that cell surface molecules and structures of the GM play a key role in this relation. The study of surface structures of GM members is also fundamental because of their role in the establishment of species in the versatile and competitive environment of the digestive tract and because of their potential as virulence factors. Among these structures are capsular polysaccharides (CPS), fimbriae, pili and lipopolysaccharides (LPS), all well described for their central role in microorganism colonization and communication with the host epithelium. The health-promoting Parabacteroides distasonis, which is part of the core microbiome, has recently received a lot of attention, showing beneficial properties for its host and potential as a new biotherapeutic product. However, to the best of the authors’ knowledge, the cell surface molecules and structures of P. distasonis that allow its maintenance within the GM have not been identified. Moreover, although P. distasonis is widely recognized as an intestinal commensal species with benefits for its host, it has also been recognized as an opportunistic pathogen. In this study, we report gene clusters potentially involved in the synthesis of capsule, fimbriae-like and pili-like cell surface structures in 26 P. distasonis genomes and apply the new RfbA typing classification in order to better understand and characterize the beneficial/pathogenic behaviour of P. distasonis strains. In this context, 2 different types of fimbriae, 3 types of pilus and up to 14 capsular polysaccharide loci have been identified across the 26 genomes studied. Moreover, the addition of these data to the rfbA-type classification modified the outcome by rearranging rfbA genes and adding a fifth group to the classification. 
In conclusion, strain variability in terms of external proteinaceous structures could explain the inter-strain differences previously observed in P. distasonis adhesion capacities and its potential pathogenicity.

Keywords: gut microbiota, Parabacteroides distasonis, capsular polysaccharide, fimbriae, pilus, O-antigen, pathogenicity, probiotic, comparative genomics

Procedia PDF Downloads 103
1284 Identifying Necessary Words for Understanding Academic Articles in English as a Second or a Foreign Language

Authors: Stephen Wagman

Abstract:

This paper identifies three common structures in English sentences that are important for understanding academic texts, regardless of the characteristics or background of the readers or whether they are reading English as a second or a foreign language. Adapting a model from the humanities, the explication of texts used in literary studies, the paper analyses sample sentences to reveal structures that enable the reader not only to decide which words are necessary for understanding the main ideas but to make that decision without knowing the meaning of the words. By their very syntax, noun structures point to the key word for understanding them. As a rule, the key noun is followed by easily identifiable prepositions, relative pronouns, or verbs, and preceded by single adjectives. With few exceptions, the modifiers are unnecessary for understanding the idea of the sentence. In addition, sentences are often structured by lists in which the items frequently consist of parallel groups of words. The principle of a list is that all the items are similar in meaning, so it is not necessary to understand every item to understand the point of the list. This principle is especially important when the items are long or there is more than one list in the same sentence. The similarity in meaning of the items enables readers to reduce sentences that are hard to grasp to an understandable core without excessive use of a dictionary. Finally, the idea of subordination and the identification of the subordinate parts of sentences through connecting words make it possible for readers to focus on main ideas without having to sift through the less important and more numerous secondary structures. Sometimes a main idea requires a subordinate one to complete its meaning, but usually subordinate ideas are unnecessary for understanding the main point of the sentence and its part in the development of the argument from sentence to sentence. 
Moreover, the connecting words themselves indicate the functions of the subordinate structures, which most frequently show similarity and difference or reasons and results. Recognition of all of these structures can enable students not only to read more efficiently but to focus their attention on the development of the argument, and this, rather than a multitude of unknown vocabulary items, the repetition in lists, or the subordination in sentences, is the one necessary element for comprehension of academic articles.

Keywords: development of the argument, lists, noun structures, subordination

Procedia PDF Downloads 246
1283 Organic Rejection and Membrane Fouling with Inorganic Alumina Membrane for Industrial Wastewater Treatment

Authors: Rizwan Ahmad, Soomin Chang, Daeun Kwon, Jeonghwan Kim

Abstract:

Interest in inorganic membranes for industrial wastewater treatment is growing rapidly due to their excellent chemical and thermal stability compared with polymeric membranes. Nevertheless, understanding the rejection and fouling caused by the deposition of contaminants on the membrane surface and within the pores of inorganic porous membranes still requires much attention. In this study, microfiltration alumina membranes were developed and applied to industrial wastewater treatment to investigate the rejection efficiency of organic contaminants and membrane fouling under various operational conditions. Organic rejection and membrane fouling were investigated using flat-tubular alumina membranes developed for the treatment of industrial wastewaters. The flat-tubular alumina membranes were immersed in a fluidized membrane reactor to which granular activated carbon (GAC) particles were added. Fluidization was driven by recirculating the bulk industrial wastewater along the membrane surface through the reactor. In the absence of GAC particles, for hazardous anionic dye contaminants, the functional group of the organic contaminant was found to be one of the main factors affecting both membrane rejection and fouling rate. More fouling on the membrane surface was associated with the dipolar character of the contaminant; this was more pronounced at lower solution pH, thereby improving membrane rejection accordingly. A similar result was observed with a real metal-plating wastewater. A strong correlation was found: higher fouling rates resulted in higher organic rejection efficiencies. The hydrophilicity exhibited by the alumina membrane improved its organic rejection efficiency due to the formation of a hydrophilic fouling layer deposited on it. In addition, the low surface roughness of the alumina membrane resulted in a lower fouling rate. 
Regardless of the operational conditions applied in this study, fluidizing the GAC particles along the surface of the alumina membrane was very effective at enhancing organic removal efficiency above 95% and provided an excellent tool for reducing membrane fouling. A suction pressure of less than 0.1 bar was maintained at a permeate set-point flux of 25 L/m²hr throughout the operational period, without any backwashing or chemically enhanced cleaning of the membrane.
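The reported operating point can be turned into an order-of-magnitude total filtration resistance via Darcy's law, J = ΔP / (μ · R_t). The sketch below assumes the viscosity of water at room temperature; it is an illustration of the unit conversion, not a calculation from the paper.

```python
# Back-of-the-envelope sketch: translating a permeate flux of 25 L/m^2 hr at
# 0.1 bar suction into a total filtration resistance with Darcy's law,
# J = dP / (mu * R_t). Water viscosity at ~25 C is an assumption.

def total_resistance(flux_lmh, tmp_bar, viscosity_pa_s=1.0e-3):
    """Return total filtration resistance R_t in 1/m."""
    flux_ms = flux_lmh * 1e-3 / 3600.0  # L/m^2 hr -> m^3/(m^2 s), i.e. m/s
    tmp_pa = tmp_bar * 1e5              # bar -> Pa
    return tmp_pa / (viscosity_pa_s * flux_ms)

r_t = total_resistance(25.0, 0.1)
print(f"{r_t:.2e}")  # 1.44e+12, a typical magnitude (1/m) for low-fouling MF
```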

Keywords: alumina membrane, fluidized membrane reactor, industrial wastewater, membrane fouling, rejection

Procedia PDF Downloads 167
1282 Decommissioning of Nuclear Power Plants: The Current Position and Requirements

Authors: A. Stifi, S. Gentes

Abstract:

Undoubtedly, from a construction perspective, the use of explosives can remove a large facility such as a 40-storey building, which took almost 3 to 4 years to construct, in a few minutes. Usually, decommissioning, the last phase of the life cycle of any facility, is considered to be the shortest. However, this has proved wrong in the case of nuclear power plants. Statistics show that in the last 30 years, the construction of a nuclear power plant took an average of 6 years, whereas it is estimated that decommissioning such a plant may take a decade or more. This paper is about the decommissioning phase of nuclear power plants, which needs to be given more attention and encouragement by research institutes as well as the nuclear industry. Currently, there are 437 nuclear power reactors in operation and 70 reactors under construction. Around 139 nuclear facilities have already been shut down and are in different decommissioning stages, and approximately 347 nuclear reactors will enter the decommissioning phase in the next 20 years (assuming an operating time of 40 years per reactor). This raises two questions: (1) How ready are the nuclear and construction industries to face the challenges of decommissioning projects? (2) What is required for safe and reliable decommissioning project delivery? The decommissioning of nuclear facilities across the globe suffers severe time and budget overruns. The decommissioning processes are still largely executed by manual labour, while regulations continue to change. In terms of research and development, some projects and activities are being carried out in this area, but much more seems to be required. 
The near future of decommissioning can be improved through a sustainable development strategy in which all stakeholders agree to implement innovative technologies, especially for dismantling and decontamination processes, and to deliver reliable and safe decommissioning. The scope for technology transfer from other industries should be explored; for example, remotely operated robotic technologies used in the automobile and production industries to reduce time and improve efficiency and safety could be tried here. However, innovative technologies, although badly needed, are not enough on their own; the implementation of creative and innovative management methodologies should also be investigated and applied. Lean management, with its main concept of 'elimination of waste within a process', is a suitable example. Thus, cooperation between international organisations and the related industries, together with knowledge-sharing, may serve as a key factor for successful decommissioning projects.

Keywords: decommissioning of nuclear facilities, innovative technology, innovative management, sustainable development

Procedia PDF Downloads 471
1281 The Cost of Healthcare among Malaysian Community-Dwelling Elderly with Dementia

Authors: Roshanim Koris, Norashidah Mohamed Nor, Sharifah Azizah Haron, Normaz Wana Ismail, Syed Mohamed Aljunid Syed Junid, Amrizal Muhammad Nur, Asrul Akmal Shafie, Suraya Yusuff, Namaitijiang Maimaiti

Abstract:

An ageing population has huge implications for virtually every aspect of Malaysian society. The elderly consume a greater volume of healthcare not because they are older, but because they are sick: chronic comorbidities and the deterioration of cognitive ability lead the elderly's health to become worse. This study aims to provide a comprehensive estimate of the direct and indirect costs of healthcare used in a nationally representative sample of community-dwelling elderly with dementia, as well as the determinants of healthcare cost. A survey using multi-stage random sampling techniques recruited a final sample of 2274 elderly people (60 years and above) in the states of Johor, Perak, Selangor and Kelantan. The Mini Mental State Examination (MMSE) score was used to measure cognitive capability among the elderly; only those with a score of less than 19 marks were selected for further analysis and classified as having dementia. Using a two-part model, the findings indicate that household income and education level are variables that strongly influence healthcare cost among the elderly with dementia. The numbers of visits and admissions also significantly affect healthcare expenditure. The comorbidity that most strongly influences healthcare cost is cancer, and seeking treatment in private facilities also significantly affects healthcare cost among the demented elderly. The level of dementia severity is not significant in determining the cost. This study is expected to attract the government's attention and act as a wake-up call to be more concerned about the elderly, who are at high risk of chronic comorbidities and cognitive problems, by providing more appropriate health and social care facilities. Comorbidities are among the factors that can cause dementia among the elderly. It is hoped that this study will promote dementia as a priority issue in public health and social care in Malaysia.
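The two-part model mentioned above separates the decision to use care from the amount spent by those who use it. The sketch below shows the bare idea, with sample averages standing in for the covariate-adjusted regressions (on income, education, visits, comorbidities) the study would fit; the cost figures are invented.

```python
# Minimal sketch of a two-part model for healthcare costs: part one models the
# probability of any spending, part two models spending among users, and the
# expected cost is their product. Both parts are plain sample averages here;
# a real analysis would use a logit/probit for part one and a GLM for part two.

def two_part_expected_cost(costs):
    """costs: per-person annual healthcare costs, zeros allowed."""
    n = len(costs)
    users = [c for c in costs if c > 0]
    p_any = len(users) / n                    # part 1: P(cost > 0)
    mean_given_any = sum(users) / len(users)  # part 2: E[cost | cost > 0]
    return p_any * mean_given_any

costs = [0, 0, 120.0, 0, 480.0, 900.0, 0, 300.0]
print(two_part_expected_cost(costs))  # 0.5 * 450.0 = 225.0
```

The two-part structure matters because cost data have many zeros and a skewed positive tail, which a single regression handles poorly.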

Keywords: ageing population, dementia, elderly, healthcare cost, healthcare utilization

Procedia PDF Downloads 206
1280 Relevance Of Cognitive Rehabilitation Amongst Children Having Chronic Illnesses – A Theoretical Analysis

Authors: Pulari C. Milu Maria Anto

Abstract:

Background: Cognitive Rehabilitation/Retraining (CR) has been variously used in the research literature to represent non-pharmacological interventions that target cognitive impairments with the goal of ameliorating cognitive function and functional behaviors to optimize quality of life. Along with adults' cognitive impairments, the need to address acquired cognitive impairments (due to chronic illnesses such as CHD, congenital heart disease, or ALL, acute lymphoblastic leukemia) among child populations is inevitable. It also has to be emphasized in the same way as the cognitive impairments seen in children with neurodevelopmental disorders. Methods: All published brain imaging studies (Hermann, B. et al., 2002; Khalil, A. et al., 2004; Follin, C. et al., 2016, etc.) and studies emphasizing cognitive impairments in attention, memory, and/or executive function and behavioral aspects (Henkin, Y. et al., 2007; Bellinger, D. C., & Newburger, J. W., 2010; Cheung, Y. T., et al., 2016) that could be identified were reviewed. Based on a systematic review of the literature (2000-2021) across different brain imaging studies, the increased risk of neuropsychological and psychosocial impairments is briefly described, and the clinical and research gap in the area is discussed. Results: 30 papers, both Indian studies and foreign publications (Sage Journals, Delhi Psychiatry Journal, Wiley Online Library, APA PsycNet, Springer, Elsevier, Developmental Medicine and Child Neurology), were identified. Conclusions: In India, a very limited number of brain imaging and neuropsychological studies have been conducted indicating the cognitive deficits of children having, or having undergone, chronic illness. None of the studies has emphasized the relevance or the need of implementing CR among such children; even though it is high time to address this, it is still not established. The review of the current evidence aims to give rehabilitation professionals insight into establishing child-specific CR and to publish new findings regarding the implementation of CR among such children. This study will also raise awareness of the cognitive aspects of children with acquired cognitive deficits (due to chronic illness), especially during their critical developmental period.

Keywords: cognitive rehabilitation, neuropsychological impairments, congenital heart diseases, acute lymphoblastic leukemia, epilepsy, neuroplasticity

Procedia PDF Downloads 180
1279 Electromagnetic-Mechanical Stimulation on PC12 for Enhancement of Nerve Axonal Extension

Authors: E. Nakamachi, K. Matsumoto, K. Yamamoto, Y. Morita, H. Sakamoto

Abstract:

Recently, electromagnetic and mechanical stimulation have been recognized as effective extracellular environment stimulation techniques to enhance regeneration of defective peripheral nerve tissue. In this study, we developed a new hybrid bioreactor by adopting 50 Hz uniform alternating current (AC) magnetic stimulation and 4% strain mechanical stimulation. The guide tube for nerve regeneration is a mesh-structured tube made of a biodegradable polymer such as polylactic acid (PLA). However, when neural damage is large, there is a possibility that the peripheral nerve undergoes necrosis, so it is quite important to accelerate nerve tissue regeneration by enhancing the nerve axonal extension rate. Therefore, we designed and fabricated a system that can simultaneously load uniform AC magnetic field stimulation and stretch stimulation onto cells to enhance nerve axonal extension. We then evaluated system performance and the effectiveness of each stimulation for rat adrenal pheochromocytoma cells (PC12). First, we designed and fabricated the uniform AC magnetic field system and the stretch stimulation system. For the AC magnetic stimulation system, we focused on the use of a pole piece structure to allow in-situ microscopic observation. We designed an optimum pole piece structure using magnetic field finite element analyses and the response surface methodology, and fabricated the uniform AC magnetic field stimulation system as a bioreactor by adopting the analytically determined design specifications. We measured the magnetic flux density generated by the system and confirmed that the measured values show good agreement with the analytical results, with a uniform magnetic field observed. Second, we fabricated the cyclic stretch stimulation device under particular strain conditions, with the chamber made of polyoxymethylene (POM). We measured strains in the PC12 cell culture region to confirm strain uniformity. We found values slightly different from the target strain, but concluded that these differences were allowable in this mechanical stimulation system. We then evaluated the effectiveness of each stimulation in enhancing nerve axonal extension using PC12 cells. We confirmed that the average axonal extension length of PC12 under uniform AC magnetic stimulation was increased by 16% at 96 h in our bioreactor. We could not confirm axonal extension enhancement under the stretch stimulation condition, where we found exfoliation of cells. Further, the hybrid stimulation enhanced axonal extension, because the magnetic stimulation inhibits the exfoliation of cells. We therefore concluded that the enhancement of PC12 axonal extension is due to the magnetic stimulation rather than the mechanical stimulation, and confirmed the effectiveness of uniform AC magnetic field stimulation for nerve axonal extension using PC12 cells.

Keywords: nerve cell PC12, axonal extension, nerve regeneration, electromagnetic-mechanical stimulation, bioreactor

Procedia PDF Downloads 265
1278 The Relationships between Carbon Dioxide (CO2) Emissions, Energy Consumption, and GDP for Turkey: Time Series Analysis, 1980-2010

Authors: Jinhoa Lee

Abstract:

The relationships between environmental quality, energy use and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of CO2 emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive country-level case study using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, electricity), carbon dioxide (CO2) emissions and gross domestic product (GDP) for Turkey using time series analysis for the years 1980-2010. To investigate the relationships between the variables, this paper employs the Phillips-Perron (PP) test for stationarity, the Johansen maximum likelihood method for cointegration and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables. All the variables in this study show very strong significant effects on GDP in the country over the long term. The long-run equilibrium in the VECM suggests negative long-run causalities from consumption of petroleum products and the direct combustion of crude oil, coal and natural gas to GDP. Conversely, positive impacts of CO2 emissions and electricity consumption on GDP are found to be significant in Turkey during the period. There exists a short-run bidirectional relationship between electricity consumption and natural gas consumption: a positive unidirectional causality runs from electricity consumption to natural gas consumption, while a negative unidirectional causality runs from natural gas consumption to electricity consumption. Moreover, GDP has a negative effect on electricity consumption in Turkey in the short run. Overall, the results support arguments that there are relationships among environmental quality, energy use and economic output, but the associations can differ by the source of energy in the case of Turkey over the period 1980-2010.
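The error-correction idea behind a VECM is that when two cointegrated series drift apart, one of them adjusts back toward the long-run equilibrium, so the coefficient on the lagged equilibrium error is negative. A dependency-free toy illustration (simulated data, not the Turkish series; the long-run relation y = 2x is assumed known rather than estimated by the Johansen procedure):

```python
import random

random.seed(42)

# Toy error-correction model (ECM): y follows x in the long run, so the
# speed-of-adjustment coefficient on the lagged equilibrium error is negative.
n = 500
x = [0.0]
for _ in range(n - 1):
    x.append(x[-1] + random.gauss(0, 1))           # random walk (e.g. energy use)
y = [2.0 * xi + random.gauss(0, 0.5) for xi in x]  # cointegrated with x

# Equilibrium error from the (known) long-run relation y = 2x
ect = [yi - 2.0 * xi for xi, yi in zip(x, y)]

# Regress dy_t on ect_{t-1}; simple OLS slope = cov / var
dy = [y[t] - y[t - 1] for t in range(1, n)]
z = ect[:-1]
zbar = sum(z) / len(z)
dbar = sum(dy) / len(dy)
cov = sum((zi - zbar) * (di - dbar) for zi, di in zip(z, dy))
var = sum((zi - zbar) ** 2 for zi in z)
alpha = cov / var  # speed of adjustment; negative => y corrects toward equilibrium
print(f"estimated adjustment coefficient: {alpha:.2f}")
```

A full VECM would estimate the cointegrating vector jointly with several adjustment coefficients; the sketch only isolates the sign logic that underpins the long-run causality statements above.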

Keywords: CO2 emissions, energy consumption, GDP, Turkey, time series analysis

Procedia PDF Downloads 508
1277 Synthesis of Fluorescent PET-Type “Turn-Off” Triazolyl Coumarin Based Chemosensors for the Sensitive and Selective Sensing of Fe³⁺ Ions in Aqueous Solutions

Authors: Aidan Battison, Neliswa Mama

Abstract:

Environmental pollution by ionic species has been identified as one of the biggest challenges to the sustainable development of communities. The widespread use of organic and inorganic chemical products and the release of toxic chemical species from industrial waste have resulted in a need for advanced monitoring technologies for environmental protection, remediation and restoration. Some of the disadvantages of conventional sensing methods include expensive instrumentation, well-controlled experimental conditions, time-consuming procedures and sometimes complicated sample preparation. In contrast, the development of fluorescent chemosensors for biological and environmental detection of metal ions has attracted a great deal of attention due to their simplicity, high selectivity, eidetic recognition, rapid response and real-life monitoring. Coumarin derivatives S1 and S2 (Scheme 1) containing 1,2,3-triazole moieties at position -3- have been designed and synthesized from azide and alkyne derivatives by CuAAC “click” reactions for the detection of metal ions. These compounds displayed a strong preference for Fe3+ ions, with complexation resulting in fluorescence quenching through photo-induced electron transfer (PET) by the “sphere of action” static quenching model. The tested metal ions included Cd2+, Pb2+, Ag+, Na+, Ca2+, Cr3+, Fe3+, Al3+, Ba2+, Cu2+, Co2+, Hg2+, Zn2+ and Ni2+. The detection limits of S1 and S2 were determined to be 4.1 and 5.1 µM, respectively. Compound S1 displayed the greatest selectivity towards Fe3+ in the presence of competing metal cations. S1 could also be used for the detection of Fe3+ in a CH3CN/H2O mixture. The binding stoichiometry between S1 and Fe3+ was determined using both Job's plot and Benesi-Hildebrand analysis, and binding was shown to occur in a 1:1 ratio between the sensor and the metal cation. Reversibility studies between S1 and Fe3+ were conducted using EDTA. The binding site of Fe3+ on S1 was determined using 13C NMR and molecular modelling studies. Complexation was suggested to occur between the lone pair of electrons of the coumarin carbonyl and the triazole carbon double bond.

Keywords: chemosensor, "click" chemistry, coumarin, fluorescence, static quenching, triazole

Procedia PDF Downloads 163
1276 Evaluating Daylight Performance in an Office Environment in Malaysia, Using Venetian Blind System: Case Study

Authors: Fatemeh Deldarabdolmaleki, Mohamad Fakri Zaky Bin Ja'afar

Abstract:

Having a daylit space together with a view results in a pleasant and productive environment for office employees. A daylit space is a space which utilizes daylight as a basic source of illumination to fulfill users' visual demands and minimize electric energy consumption. Malaysian weather is hot and humid throughout the year because of the country's location in the equatorial belt. However, because most commercial buildings in Malaysia are air-conditioned, huge glass windows are normally installed in order to keep the physical and visual relation between inside and outside. As a result of the climatic situation and this trend, an ordinary office has high heat gain, glare, and discomfort for occupants. Balancing occupants' comfort and energy conservation in a tropical climate is a real challenge. This study concentrates on evaluating a venetian blind system using per-pixel analysis tools based on cut-out metrics suggested by the literature. The workplace area in a private office room was selected as a case study. An eight-day measurement experiment was conducted to investigate the effect of different venetian blind angles in an office area under daylight conditions in Serdang, Malaysia. The goal was to explore the daylight comfort of a commercially available venetian blind system, its daylight sufficiency and excess (8:00 AM to 5:00 PM), as well as glare. Recently developed software for analyzing High Dynamic Range Images (HDRI captured by a CCD camera), such as the Radiance-based Evalglare and hdrscope, helps to investigate luminance-based metrics. The main key factors are illuminance and luminance levels, mean and maximum luminance, daylight glare probability (DGP) and the luminance ratio of the selected mask regions. The findings show that in most cases the morning session needs artificial lighting in order to achieve daylight comfort. However, in some conditions (e.g. 10° and 40° slat angles) in the second half of the day the workplane illuminance level exceeds the maximum of 2000 lx. Generally, a rising trend is discovered in mean window luminance, and the most unpleasant cases occur after 2 P.M.; considering the luminance criteria rating, the uncomfortable conditions occur in the afternoon session. Surprisingly, even in the no-blind condition, extreme window/task ratios are not common. Regarding daylight glare probability, no DGP value higher than 0.35 was observed in this experiment.
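The hourly classification behind statements like "the morning session needs artificial lighting" and "illuminance exceeds the maximum of 2000 lx" can be sketched as a simple threshold check. The readings below are invented, and the 500 lx sufficiency floor is an assumption added for the sketch; the study itself states only the 2000 lx ceiling and the DGP 0.35 criterion:

```python
# Hypothetical hourly workplane illuminance (lx) and daylight glare probability
# (DGP) for one day, checked against three thresholds: an assumed 500 lx
# minimum for sufficiency, 2000 lx maximum to avoid excess, and DGP <= 0.35.
hours = list(range(8, 18))                       # 8:00 to 17:00
illuminance = [180, 320, 540, 760, 1450, 2100, 2350, 1900, 1200, 650]
dgp = [0.12, 0.15, 0.20, 0.24, 0.30, 0.34, 0.33, 0.28, 0.22, 0.18]

for h, lx, g in zip(hours, illuminance, dgp):
    if lx < 500:
        status = "insufficient (artificial lighting needed)"
    elif lx > 2000:
        status = "excessive daylight"
    elif g > 0.35:
        status = "glare risk"
    else:
        status = "comfortable"
    print(f"{h:02d}:00  {lx:5d} lx  DGP={g:.2f}  {status}")
```

In the invented data the early-morning hours fall below the sufficiency floor and the early-afternoon hours exceed 2000 lx, mirroring the pattern the abstract reports.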

Keywords: daylighting, energy simulation, office environment, Venetian blind

Procedia PDF Downloads 259
1275 Modelling Volatility of Cryptocurrencies: Evidence from GARCH Family of Models with Skewed Error Innovation Distributions

Authors: Timothy Kayode Samson, Adedoyin Isola Lawal

Abstract:

The past five years have shown a sharp increase in public interest in the crypto market, with its market capitalization growing from $100 billion in June 2017 to $2158.42 billion on April 5, 2022. Despite the outrageous volatility of cryptocurrencies, the use of skewed error innovation distributions in modelling the volatility behaviour of these digital currencies has not been given much research attention. Hence, this study models the volatility of the 5 largest cryptocurrencies by market capitalization (Bitcoin, Ethereum, Tether, Binance Coin, and USD Coin) using four variants of GARCH models (GJR-GARCH, sGARCH, EGARCH, and APARCH) estimated with three skewed error innovation distributions (skewed normal, skewed Student-t and skewed generalized error innovation distributions). Daily closing prices of these currencies were obtained from the Yahoo Finance website. Findings reveal that Binance Coin reported higher mean returns compared to other digital currencies, while the skewness indicates that Binance Coin, Tether, and USD Coin increased more than they decreased in value within the period of study. For both Bitcoin and Ethereum, negative skewness was obtained, meaning that within the period of study the returns of these currencies decreased more than they increased in value. Returns from these cryptocurrencies were found to be stationary but not normally distributed, with evidence of the ARCH effect. The skewness parameters in all best forecasting models were significant (p<.05), justifying the use of skewed error innovation distributions with a fatter tail than the normal, Student-t, and generalized error innovation distributions. For Binance Coin, EGARCH-sstd outperformed other volatility models, while for Bitcoin, Ethereum, Tether, and USD Coin, the best forecasting models were EGARCH-sstd, APARCH-sstd, EGARCH-sged, and GJR-GARCH-sstd, respectively. This suggests the superiority of the skewed Student-t distribution and skewed generalized error distribution over the skewed normal distribution.
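All the GARCH variants above share the same core recursion: today's conditional variance depends on yesterday's squared shock and yesterday's variance. A dependency-free simulation of the plain GARCH(1,1) case with illustrative parameters (the study's fitted models use skewed innovations; Gaussian draws are used here purely to keep the sketch to the standard library):

```python
import math
import random

random.seed(7)

# Minimal GARCH(1,1) simulation (parameters illustrative, not fitted to any
# cryptocurrency): sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
omega, alpha, beta = 0.05, 0.10, 0.85   # alpha + beta < 1 => stationary variance
n = 1000
sigma2 = omega / (1 - alpha - beta)     # start at the unconditional variance
returns, variances = [], []
for _ in range(n):
    r = math.sqrt(sigma2) * random.gauss(0, 1)   # symmetric innovations; a
    returns.append(r)                            # skewed Student-t would replace
    variances.append(sigma2)                     # gauss() in the fitted models
    sigma2 = omega + alpha * r * r + beta * sigma2

print(f"unconditional variance: {omega / (1 - alpha - beta):.2f}")
print(f"sample variance of returns: {sum(x * x for x in returns) / n:.2f}")
```

EGARCH, GJR-GARCH and APARCH modify this recursion to capture asymmetry (leverage effects), which is why they paired well with the skewed return series reported above.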

Keywords: skewed generalized error distribution, skewed normal distribution, skewed Student-t distribution, APARCH, EGARCH, sGARCH, GJR-GARCH

Procedia PDF Downloads 119
1274 Dematerialized Beings in Katherine Dunn's Geek Love: A Corporeal and Ethical Study under Posthumanities

Authors: Anum Javed

Abstract:

This study identifies the dynamic image of the human body that continues its metamorphosis in the virtual field of reality. It calls attention to the ways in which humans start co-evolving with other life forms, technology in particular, and are striving to establish a realm outside the physical framework of matter. The problem exceeds the area of technological ethics by explicably and explanatorily entering the space of literary texts and criticism. Textual analysis of Geek Love (1989) by Katherine Dunn is conjoined with the posthumanist perspectives of Pramod K. Nayar to explore psycho-somatic changes in man's nature of being. It uncovers the meaning people give to their experiences in this budding social and cultural phenomenon of material representation tied up with personal practices and technological innovations. It also observes an ethical, physical and psychological reassessment of man within the context of technological evolution. The study indicates the elements that have rendered morphological freedom and new materialism in man's consciousness. Moreover, this work inquires into what it means to be human in this time of accelerating change, where surgeries, implants, extensions, cloning and robotics have shaped a new sense of being. It attempts to go beyond the individual's body image and explores how objectifying media and culture have influenced people's judgement of others on new material grounds. It further argues for a decentring of the glorified image of man as an independent entity because of his energetic partnership with intelligent machines and external agents. The history of the future progress of technology is also mentioned. The methodology adopted is posthumanist techno-ethical textual analysis. This work necessitates a negotiating relationship between man and technology in order to achieve a harmonic and balanced interconnected existence. The study concludes by recommending a call for an ethical set of codes to be cultivated for techno-human habituation. Posthumanism ushers in a strong need to adopt new ethics within the terminology of neo-materialist humanism.

Keywords: corporeality, dematerialism, human ethos, posthumanism

Procedia PDF Downloads 147
1273 The Challenge of Assessing Social AI Threats

Authors: Kitty Kioskli, Theofanis Fotis, Nineta Polemi

Abstract:

Article 9 of the European Union (EU) Artificial Intelligence (AI) Act requires that risk management of AI systems include both technical and human oversight, while NIST AI RMF (Appendix C) and ENISA AI Framework recommendations claim that further research is needed to understand the current limitations of social threats and human-AI interaction. AI threats within social contexts significantly affect the security and trustworthiness of AI systems; they are interrelated and trigger technical threats as well. For example, lack of explainability (e.g. the complexity of models can be challenging for stakeholders to grasp) leads to misunderstandings, biases, and erroneous decisions, which in turn impact the privacy, security, and accountability of AI systems. Based on the NIST four fundamental criteria for explainability, explainability threats can be classified into four (4) sub-categories: a) Lack of supporting evidence: AI systems must provide supporting evidence or reasons for all their outputs. b) Lack of understandability: explanations offered by systems should be comprehensible to individual users. c) Lack of accuracy: the provided explanation should accurately represent the system's process of generating outputs. d) Out of scope: the system should only function within its designated conditions or when it possesses sufficient confidence in its outputs. Biases may also stem from historical data reflecting undesired behaviors. When present in the data, biases can permeate the models trained on them, thereby influencing the security and trustworthiness of AI systems. Socially related AI threats are recognized by various initiatives (e.g., the EU Ethics Guidelines for Trustworthy AI), standards (e.g. ISO/IEC TR 24368:2022 on AI ethical concerns, ISO/IEC AWI 42105 on guidance for human oversight of AI systems) and EU legislation (e.g. the General Data Protection Regulation 2016/679, the NIS 2 Directive 2022/2555, the Directive on the Resilience of Critical Entities 2022/2557, the EU AI Act, the Cyber Resilience Act). Measuring social threats, estimating the risks to AI systems associated with these threats and mitigating them is a research challenge. This paper presents the efforts of two European Commission projects (FAITH and THEMIS) from the Horizon Europe programme that analyse social threats by building cyber-social exercises in order to study human behaviour, traits, cognitive ability, personality, attitudes, interests, and other socio-technical profile characteristics. The research in these projects also includes the development of measurements and scales (psychometrics) for human-related vulnerabilities that can be used to estimate vulnerability severity more realistically, enhancing the CVSS 4.0 measurement.

Keywords: social threats, artificial Intelligence, mitigation, social experiment

Procedia PDF Downloads 65
1272 A Phenomenographic Examination of Work Motivation to Perform at the Municipal Corporation of Bangladesh

Authors: Md. Rifad Chowdhury

Abstract:

This research study investigates employees' conceptions of work motivation to perform at the municipal corporation in Bangladesh. The municipal corporation is one of the key administrative bodies of Bangladesh's local government, and its employees provide essential public services in the country's semi-urban areas. Work motivation has been defined as a result of interaction between the individual and the environment. Local government studies indicate that the work environment of the municipal corporation is unique because of its colonial and political history, several reform attempts, non-Western social perspectives, job functions, and traditional governance. The explorative purpose of this study is to find and analyse the conceptions of employees' work motivation within this environment to develop a better understanding of work motivation. In line with this purpose, a qualitative method has been adopted, which has remained an unpopular method among work motivation researchers in Bangladesh. Twenty-two semi-structured online interviews were conducted. A phenomenographic research methodology has been adopted to describe the limited number of qualitatively different ways of experiencing work motivation. During the analysis of the interview transcripts, the focus was on the employees' perspectives as they experience work motivation, or the second-order perspective, to explore and analyse the conceptions. Based on the participants' collective experiences and dimensions of variation across the different ways of experiencing, six conceptions of employee work motivation to perform at the municipal corporation were identified. The relationships between conceptions were further elaborated in terms of critical variations across the conceptions, and six dimensions of critical variation emerged within and between the conceptions. In the outcome space, the relationships between conceptions and dimensions of critical variation are presented in a logical structure. The findings expand the understanding of work motivation and the research context of phenomenography, and will contribute to the ongoing attention to contextual work motivational understanding from a Bangladeshi perspective and to phenomenographic research conceptions in organisational behaviour studies.

Keywords: work motivation, qualitative, phenomenography, local government

Procedia PDF Downloads 92
1271 Elastic Behaviour of Graphene Nanoplatelets Reinforced Epoxy Resin Composites

Authors: V. K. Srivastava

Abstract:

Graphene has recently attracted increasing attention in nanocomposite applications because it has 200 times greater strength than steel, making it the strongest material ever tested. Graphene, as the fundamental two-dimensional (2D) carbon structure with exceptionally high crystal and electronic quality, has emerged as a rapidly rising star in the field of materials science. Graphene, defined as a 2D crystal, is composed of monolayers of carbon atoms arranged in a honeycomb network of six-membered rings, which is of interest to both theoretical and experimental researchers worldwide. The name comes from graphite and alkene. Graphite itself consists of many graphene sheets stacked together by weak van der Waals forces; its properties are attributed to the monolayers of carbon atoms densely packed into the honeycomb structure. Due to the superior inherent properties of graphene nanoplatelets (GnP) over other nanofillers, GnP particles were added to epoxy resin in varying weight percentages. The DMA results show that storage modulus, loss modulus and tan δ (the ratio of the loss modulus to the storage modulus) versus temperature were all affected by the addition of GnP to the epoxy resin. In epoxy resin, damping (tan δ) is usually caused by movement of the molecular chain. The tan δ of the graphene nanoplatelet/epoxy resin composite is much lower than that of the epoxy resin alone. This finding suggests that the addition of graphene nanoplatelets effectively impedes movement of the molecular chain. The decrease in storage modulus can be interpreted by an increasing susceptibility to agglomeration, leading to less energy dissipation in the system under viscoelastic deformation. The results indicate that tan δ increased with temperature. Also, the results show that the nanohardness increases marginally with the increase of elastic modulus. GnP-filled epoxy resin gives a higher value than the epoxy resin alone, because GnP improves the mechanical properties of the epoxy resin. Debonding of GnP is clearly observed in micrographs showing agglomeration of fillers and inhomogeneous distribution. Therefore, the DMA and nanohardness studies indicate that the elastic modulus of the epoxy resin is increased by the addition of GnP fillers.
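The damping comparison above follows directly from the definition tan δ = E''/E': stiffening the matrix raises the storage modulus E' more than it changes the loss modulus E'', so tan δ falls. A tiny numeric illustration with hypothetical moduli (the study's measured values are not reproduced here):

```python
# Illustrative DMA readings (values hypothetical): tan delta is the ratio of
# loss modulus E'' to storage modulus E'. Adding GnP stiffens the matrix
# (higher E'), lowering tan delta relative to the neat epoxy resin.
neat_epoxy = {"storage_GPa": 2.8, "loss_GPa": 0.28}
gnp_epoxy = {"storage_GPa": 3.6, "loss_GPa": 0.25}   # e.g. a low-wt% GnP loading

def tan_delta(sample):
    """Damping factor: loss modulus divided by storage modulus."""
    return sample["loss_GPa"] / sample["storage_GPa"]

print(f"tan delta, neat epoxy: {tan_delta(neat_epoxy):.3f}")
print(f"tan delta, GnP/epoxy:  {tan_delta(gnp_epoxy):.3f}")
```

The lower composite value reproduces the qualitative finding that GnP impedes molecular-chain movement.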

Keywords: agglomeration, elastic modulus, epoxy resin, graphene nanoplatelet, loss modulus, nanohardness, storage modulus

Procedia PDF Downloads 264
1270 Combining Multiscale Patterns of Weather and Sea States into a Machine Learning Classifier for Mid-Term Prediction of Extreme Rainfall in North-Western Mediterranean Sea

Authors: Pinel Sebastien, Bourrin François, De Madron Du Rieu Xavier, Ludwig Wolfgang, Arnau Pedro

Abstract:

Heavy precipitation constitutes a major meteorological threat in the western Mediterranean. Research has investigated the relationship between the states of the Mediterranean Sea and the atmosphere and precipitation over short temporal windows. However, at larger temporal scales, the precursor signals of heavy rainfall in the sea and atmosphere have drawn little attention. Moreover, despite ongoing improvements in numerical weather prediction, medium-term forecasting of rainfall events remains a difficult task. Here, we aim to investigate the influence of early-spring environmental parameters on the following autumnal heavy precipitation. Hence, we develop a machine learning model to predict extreme autumnal rainfall with a 6-month lead time over the Spanish Catalan coastal area, based on i) the sea pattern (main current, LPC, and Sea Surface Temperature, SST) at the mesoscale, ii) 4 European weather teleconnection patterns (NAO, WeMo, SCAND, MO) at the synoptic scale, and iii) the hydrological regime of the main local river (the Rhône River). The accuracy of the developed classifier is evaluated via statistical analysis based on classification accuracy, logarithmic loss and the confusion matrix, by comparing with rainfall estimates from rain gauges and satellite observations (CHIRPS-2.0). Sensitivity tests are carried out by changing the model configuration, such as sea SST, sea LPC, river regime, and synoptic atmosphere configuration. The sensitivity analysis suggests a negligible influence from the hydrological regime, unlike SST, LPC, and specific teleconnection weather patterns. Finally, this study illustrates how public datasets can be integrated into a machine learning model for heavy rainfall prediction and may be of interest to local policymakers for management purposes.
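The evaluation metrics named above (classification accuracy, logarithmic loss, confusion matrix) can be computed directly from predicted probabilities and observed labels. A self-contained sketch with made-up labels and probabilities, not the study's actual predictions:

```python
import math

# Toy evaluation of a binary "extreme autumn rainfall" classifier against
# observations (labels and probabilities invented for illustration).
observed = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]        # 1 = extreme rainfall season
predicted_prob = [0.8, 0.2, 0.4, 0.7, 0.1, 0.6, 0.3, 0.2, 0.4, 0.1]
predicted = [1 if p >= 0.5 else 0 for p in predicted_prob]

# Confusion matrix counts
tp = sum(1 for o, p in zip(observed, predicted) if o == 1 and p == 1)
tn = sum(1 for o, p in zip(observed, predicted) if o == 0 and p == 0)
fp = sum(1 for o, p in zip(observed, predicted) if o == 0 and p == 1)
fn = sum(1 for o, p in zip(observed, predicted) if o == 1 and p == 0)
accuracy = (tp + tn) / len(observed)

# Logarithmic loss (lower is better)
log_loss = -sum(o * math.log(p) + (1 - o) * math.log(1 - p)
                for o, p in zip(observed, predicted_prob)) / len(observed)

print(f"confusion matrix: TP={tp} TN={tn} FP={fp} FN={fn}")
print(f"accuracy = {accuracy:.2f}, log loss = {log_loss:.3f}")
```

Log loss penalizes confident wrong probabilities, which makes it a useful complement to plain accuracy when extreme-rainfall seasons are rare.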

Keywords: extreme hazards, sensitivity analysis, heavy rainfall, machine learning, sea-atmosphere modeling, precipitation forecasting

Procedia PDF Downloads 137
1269 Railway Ballast Volumes Automated Estimation Based on LiDAR Data

Authors: Bahar Salavati Vie Le Sage, Ismaïl Ben Hariz, Flavien Viguier, Sirine Noura Kahil, Audrey Jacquin, Maxime Convert

Abstract:

The ballast layer plays a key role in railroad maintenance and the geometry of the track structure. Ballast also holds the track in place as trains roll over it. Track ballast is packed between the sleepers and on the sides of railway tracks. An imbalance in ballast volume on the tracks can lead to safety issues as well as quick degradation of the overall quality of the railway segment. If there is a lack of ballast in the track bed during the summer, there is a risk that the rails will expand and buckle slightly due to high temperatures. Furthermore, knowledge of the ballast quantities that will be excavated during renewal works is important for efficient ballast management. The volume of excavated ballast per meter of track can be calculated based on excavation depth, excavation width, volume of the track skeleton (sleepers and rails) and sleeper spacing. Since 2012, SNCF has been collecting 3D point cloud data covering its entire railway network by using 3D laser scanning technology (LiDAR). This vast amount of data represents a model of the entire railway infrastructure, allowing various simulations to be conducted for maintenance purposes. This paper aims to present an automated method for ballast volume estimation based on the processing of LiDAR data. The estimation of abnormal ballast volumes on the tracks is performed by analyzing the cross-section of the track. Further, since the amount of ballast required varies depending on the track configuration, knowledge of the ballast profile is required. Prior to track rehabilitation, excess ballast is often present in the ballast shoulders. Based on the 3D laser scans, a Digital Terrain Model (DTM) was generated, and automatic extraction of the ballast profiles from this data is carried out. The surplus ballast is then estimated by comparing this empirically obtained ballast profile with a geometric model of the theoretical ballast profile thresholds as dictated by maintenance standards. Ideally, this excess should be removed prior to renewal works and recycled to optimize the output of the ballast renewal machine. Based on these parameters, an application has been developed to allow the automatic measurement of ballast profiles. We evaluated the method on a 108-kilometer segment of railroad LiDAR scans, and the results show that the proposed algorithm detects ballast surplus amounting to values close to the total quantities of spoil ballast excavated.
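The core of the surplus estimation described above is a profile-to-profile comparison per cross-section: subtract the theoretical profile from the measured one and integrate the positive difference. A minimal sketch with invented elevations (the real method works on DTM-extracted profiles and standard-specific thresholds):

```python
# Sketch of surplus-ballast estimation from a single track cross-section:
# compare the measured ballast profile (from the LiDAR-derived DTM) against a
# theoretical maintenance profile, and integrate the positive difference with
# the trapezoidal rule. All profile elevations below are hypothetical.
lateral = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]    # metres from track centreline
measured = [0.60, 0.58, 0.55, 0.50, 0.42, 0.30, 0.15]     # measured elevation (m)
theoretical = [0.60, 0.57, 0.52, 0.44, 0.34, 0.22, 0.10]  # standard profile (m)

surplus = [max(m - t, 0.0) for m, t in zip(measured, theoretical)]

# Trapezoidal integration of the surplus band -> cross-sectional area (m^2),
# which is also the surplus volume per metre of track (m^3/m).
area = sum((surplus[i] + surplus[i + 1]) / 2 * (lateral[i + 1] - lateral[i])
           for i in range(len(lateral) - 1))
print(f"surplus cross-section: {area:.3f} m^2")
```

Repeating this at each scanned cross-section and summing along the track yields the total surplus the paper compares against the excavated spoil quantities.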

Keywords: ballast, railroad, LiDAR, point cloud, track ballast, 3D point

Procedia PDF Downloads 110
1268 Leader Self-sacrifice in Sports Organizations

Authors: Stefano Ruggieri, Rubinia C. Bonfanti

Abstract:

Research on leadership in sports organizations has proved extremely fruitful in recent decades, favoring the growth and diffusion of figures such as mental coaches, trainers, etc. Recent scholarly attention has been directed towards the phenomenon of leader self-sacrifice, wherein leaders who display such behavior are perceived by their followers as more effective, charismatic, and legitimate than those who prioritize self-interest. This growing interest reflects the importance of leaders who place the collective welfare above personal gain, as they inspire greater loyalty, trust, and dedication among their followers, ultimately fostering a more cohesive and high-performing team environment. However, there is limited literature on the mechanisms through which self-sacrifice influences both group dynamics (such as cohesion and team identification) and individual factors (such as self-competence). The aim of the study is to analyze the impact of leader self-sacrifice on cohesion, team identification and self-competence. Team identification is a crucial determinant of individual identity, delineated by the extent to which a team member aligns with a specific organizational team rather than broader social collectives. This association motivates members to synchronize their actions with the collective interests of the group, thereby fostering cohesion among its members and cultivating a shared sense of purpose and unity within the team. In the domain of team sports, particularly soccer and water polo, two studies involving 447 participants (men = 238, women = 209) between 22 and 35 years old (M = 26.36, SD = 5.51) were conducted. The first study employed a correlational methodology to investigate the predictive capacity of self-sacrifice on cohesion, team identification, self-efficacy, and self-competence.
The second study utilized an experimental design to explore the relationship between team identification and self-sacrifice.
Together, these studies provided comprehensive insights into the multifaceted nature of leader self-sacrifice and its profound implications for group cohesion and individual well-being within organizational settings. The findings underscored the pivotal role of leader self-sacrifice in not only fostering stronger bonds among team members but also in enhancing critical facets of group dynamics, ultimately contributing to the overall effectiveness and success of the team.
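The correlational step of Study 1 can be illustrated with a Pearson correlation between two rating scales. This is a toy sketch with invented scores, not the study's data or analysis pipeline:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

sacrifice = [2.0, 3.5, 4.0, 4.5, 5.0]   # invented leader self-sacrifice ratings
cohesion = [2.5, 3.0, 4.2, 4.4, 4.9]    # invented team cohesion scores
r = pearson_r(sacrifice, cohesion)       # positive r would support prediction
```

A positive and sizeable r is the pattern a predictive relationship between self-sacrifice and cohesion would produce.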

Keywords: cohesion, leadership, self-sacrifice, sports organizations, team-identification

Procedia PDF Downloads 46
1267 Experimental Study on the Heating Characteristics of Transcritical CO₂ Heat Pumps

Authors: Lingxiao Yang, Xin Wang, Bo Xu, Zhenqian Chen

Abstract:

Due to their outstanding environmental performance, higher heating temperature and excellent low-temperature performance, transcritical carbon dioxide (CO₂) heat pumps are receiving more and more attention. However, improperly set operating parameters have a serious negative impact on the performance of a transcritical CO₂ heat pump due to the properties of CO₂. In this study, the heat transfer characteristics of the gas cooler are studied based on the modified “three-stage” gas cooler, and then the effects of three operating parameters, compressor speed, gas cooler water-inlet flow rate and gas cooler water-inlet temperature, on the heating process of the system are investigated from the perspective of thermal quality and heat capacity. The results show that: in the heat transfer process of the gas cooler, the temperature distribution of CO₂ and water shows a typical “two-region” and “three-zone” pattern; a rise in the cooling pressure of CO₂ increases the thermal quality on the CO₂ side of the gas cooler, which in turn raises the heating temperature of the system; nevertheless, the elevated thermal quality on the CO₂ side can exacerbate the mismatch of heat capacity between the two sides of the gas cooler, thereby adversely affecting the system coefficient of performance (COP); furthermore, increasing the compressor speed mitigates the mismatch in heat capacity caused by elevated thermal quality, which is exacerbated by decreasing the gas cooler water-inlet flow rate and raising the gas cooler water-inlet temperature. As a representative example, varying the compressor speed results in a 7.1°C increase in heating temperature within the experimental range, accompanied by a 10.01% decrease in COP and an 11.36% increase in heating capacity. This study can not only provide an important reference for the theoretical analysis and control strategy of the transcritical CO₂ heat pump, but also guide related simulations and the design of the gas cooler.
However, the range of experimental parameters in the current study is small, and the conclusions drawn are not further analyzed quantitatively. Therefore, expanding the range of parameters studied and proposing corresponding quantitative conclusions and indicators with universal applicability could greatly increase the practical applicability of this study. This is also the goal of our future research.
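The two performance quantities discussed above can be sketched numerically. This is an illustrative data-reduction example with assumed operating values, not the authors' code; in particular, a single c_p for CO₂ is a strong simplification, since c_p varies sharply near the pseudo-critical region:

```python
def cop(heating_capacity_kw, compressor_power_kw):
    """Coefficient of performance: useful heating output per unit work input."""
    return heating_capacity_kw / compressor_power_kw

def capacity_rate_ratio(mdot_co2, cp_co2, mdot_water, cp_water):
    """Ratio of heat capacity rates (m_dot * c_p) on the two sides of the
    gas cooler; values far from 1 indicate the mismatch discussed above."""
    return (mdot_co2 * cp_co2) / (mdot_water * cp_water)

# Assumed operating point: 10 kW heating output, 3.2 kW compressor power.
system_cop = cop(10.0, 3.2)  # -> 3.125

# Assumed flows: 0.05 kg/s CO2 (cp ~ 2.5 kJ/kg.K, simplified constant)
# vs 0.04 kg/s water (cp ~ 4.18 kJ/kg.K).
ratio = capacity_rate_ratio(0.05, 2.5, 0.04, 4.18)
```

A ratio below 1 here means the water side carries the larger heat capacity rate; increasing compressor speed raises the CO₂ mass flow and pushes the ratio back toward 1, which is one way to read the mitigation effect reported above.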

Keywords: transcritical CO₂ heat pump, gas cooler, heat capacity, thermal quality

Procedia PDF Downloads 22
1266 Generation of High-Quality Synthetic CT Images from Cone Beam CT Images Using A.I. Based Generative Networks

Authors: Heeba A. Gurku

Abstract:

Introduction: Cone Beam CT (CBCT) images play an integral part in proper patient positioning for cancer patients undergoing radiation therapy, but these images are low in quality. The purpose of this study is to generate high-quality synthetic CT images from CBCT using generative models. Material and Methods: This study utilized two datasets from The Cancer Imaging Archive (TCIA): 1) a lung cancer dataset of 20 patients (with full-view CBCT images) and 2) a pancreatic cancer dataset of 40 patients (only the 27 patients with limited-view images were included in the study). Cycle Generative Adversarial Networks (Cycle GAN) and its variant, Attention Guided Generative Adversarial Networks (AGGAN), were used to generate the synthetic CTs. Models were evaluated visually and on four metrics, Structural Similarity Index Measure (SSIM), Peak Signal-to-Noise Ratio (PSNR), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE), comparing the synthetic CT and original CT images. Results: For the pancreatic dataset with limited-view CBCT images, our study showed that with the Cycle GAN model, MAE, RMSE and PSNR improved from 12.57 to 8.49, from 20.94 to 15.29 and from 21.85 to 24.63, respectively, but structural similarity only marginally increased from 0.78 to 0.79. Similar results were achieved with AGGAN, with no improvement over Cycle GAN. However, for the lung dataset with full-view CBCT images, Cycle GAN was able to reduce MAE significantly from 89.44 to 15.11, and AGGAN was able to reduce it to 19.77. Similarly, RMSE decreased from 92.68 to 23.50 with Cycle GAN and to 29.02 with AGGAN. SSIM and PSNR also improved significantly, from 0.17 to 0.59 and from 8.81 to 21.06 with Cycle GAN, respectively, while with AGGAN SSIM increased to 0.52 and PSNR increased to 19.31. In both datasets, the GAN models were able to reduce artifacts, reduce noise, and provide better resolution and contrast enhancement.
Conclusion and Recommendation: In both datasets, both Cycle GAN and AGGAN were able to significantly reduce MAE and RMSE and improve PSNR. However, the full-view lung dataset showed more improvement in SSIM and image quality than the limited-view pancreatic dataset.
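For reference, the three error/fidelity metrics can be computed as follows. This is a toy sketch on made-up 8-bit-range images, not the study's evaluation code; SSIM is a windowed, structure-aware metric usually taken from a library such as scikit-image and is omitted here, and for CT the data range would be chosen from the Hounsfield-unit window rather than 255:

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two images."""
    return float(np.mean(np.abs(a - b)))

def rmse(a, b):
    """Root mean square error between two images."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def psnr(a, b, data_range=255.0):
    """Peak signal-to-noise ratio in dB; higher means more similar."""
    mse = np.mean((a - b) ** 2)
    return float(20 * np.log10(data_range) - 10 * np.log10(mse))

reference = np.zeros((64, 64))          # stand-in for the original CT
synthetic = np.full((64, 64), 10.0)     # stand-in for the synthetic CT
error_mae = mae(reference, synthetic)   # -> 10.0
error_rmse = rmse(reference, synthetic) # -> 10.0
quality_psnr = psnr(reference, synthetic)
```

Note the direction of each metric: lower MAE/RMSE and higher PSNR/SSIM indicate a synthetic CT closer to the original.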

Keywords: CT images, CBCT images, cycle GAN, AGGAN

Procedia PDF Downloads 83
1265 Analysis of Thermal Comfort in Educational Buildings Using Computer Simulation: A Case Study in Federal University of Parana, Brazil

Authors: Ana Julia C. Kfouri

Abstract:

A prerequisite of any building design is to provide security to its users, taking the climate and its physical and physical-geometrical variables into account. It is also important to highlight the relevance of choosing the right material elements, which mediate between the occupant and the environment and must provide improved thermal comfort conditions with low environmental impact. Furthermore, technology, including computational simulation for building projects, is constantly advancing, and it should be used to develop sustainable buildings and to provide a higher quality of life for their users. In relation to comfort, the more satisfied the building users are, the better their intellectual performance will be. On this basis, the study of thermal comfort in educational buildings is particularly relevant, since the thermal characteristics of these environments are of vital importance to all users. Moreover, educational buildings are large constructions, and when they are poorly planned and executed they have negative impacts on the surrounding environment, as well as on user satisfaction, throughout their whole life cycle. In this line of thought, to evaluate university classroom conditions, a detailed case study of the thermal comfort situation at the Federal University of Parana (UFPR) was carried out. The main goal of the study is to perform a thermal analysis of three classrooms at UFPR in order to address the subjective and physical variables that influence thermal comfort inside the classroom. For the assessment of the subjective components, a questionnaire was applied in order to evaluate the users' perception of the local thermal conditions. Regarding the physical variables, on-site measurements were carried out, consisting of measurements of air temperature and air humidity, both inside and outside the building, as well as of meteorological variables, such as wind speed and direction, solar radiation and rainfall, collected from a weather station.
Then, a computer simulation in the EnergyPlus software was conducted to reproduce the air temperature and air humidity values of the three classrooms studied. The EnergyPlus outputs were analyzed and compared with the on-site measurement results in order to draw conclusions about the local thermal conditions. The methodological approach adopted in the study allowed a distinct perspective on an educational building, leading to a better understanding of the classrooms' thermal performance and of the reasons for such behavior. Finally, the study encourages reflection on the importance of thermal comfort for educational buildings and proposes thermal alternatives for future projects, as well as a discussion of the significant impact of using computer simulation in engineering solutions in order to improve the thermal performance of UFPR's buildings.
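The comparison between simulated and measured series can be quantified with calibration metrics such as the normalized mean bias error (NMBE) and the coefficient of variation of the RMSE (CV(RMSE)), as used, for example, in ASHRAE Guideline 14. This is a hypothetical sketch on invented temperature values, not the authors' procedure:

```python
from math import sqrt
from statistics import mean

def nmbe(measured, simulated):
    """NMBE in percent; near zero means little systematic bias."""
    n = len(measured)
    return 100 * sum(m - s for m, s in zip(measured, simulated)) / (n * mean(measured))

def cv_rmse(measured, simulated):
    """CV(RMSE) in percent; lower means a better point-by-point fit."""
    n = len(measured)
    rmse = sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated)) / n)
    return 100 * rmse / mean(measured)

measured = [20.0, 22.0, 24.0]    # invented indoor air temperatures (deg C)
simulated = [21.0, 21.0, 24.0]   # invented EnergyPlus outputs (deg C)
bias = nmbe(measured, simulated)       # -> 0.0 (errors cancel)
fit = cv_rmse(measured, simulated)     # scatter around the measurements
```

NMBE alone can hide compensating errors (as in this toy series, where over- and under-predictions cancel), which is why it is reported together with CV(RMSE).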

Keywords: computer simulation, educational buildings, EnergyPlus, humidity, temperature, thermal comfort

Procedia PDF Downloads 388
1264 Using Business Intelligence Capabilities to Improve the Quality of Decision-Making: A Case Study of Mellat Bank

Authors: Jalal Haghighat Monfared, Zahra Akbari

Abstract:

Today, business executives need useful information to make better decisions. Banks have also been using information tools so that, with the help of business intelligence, they can rapidly extract information from their sources and direct the decision-making process towards their desired goals. This research investigates whether there is a relationship between the quality of decision making and the business intelligence capabilities of Mellat Bank. Each of the factors studied is divided into several components, and these components and their relationships are measured by a questionnaire. The statistical population of this study consists of all managers and experts of Mellat Bank's General Departments who use business intelligence reports (190 people). A sample size of 123 was determined at random by statistical methods. In this research, the relevant statistical inference has been used for data analysis and hypothesis testing. In the first stage, the normality of the data was tested using the Kolmogorov-Smirnov test, and in the next stage, the construct validity of both variables and their resulting indexes was verified using confirmatory factor analysis. Finally, the research hypotheses were tested using structural equation modeling and Pearson's correlation coefficient. The results confirmed the existence of a positive relationship between decision quality and business intelligence capabilities at Mellat Bank. Among the various capabilities, including data quality, integration with other systems, user access, flexibility and risk management support, the flexibility of the business intelligence system was the most strongly correlated with the dependent variable of the present research. This shows that it is necessary for Mellat Bank to pay more attention to choosing business intelligence systems with high flexibility, in terms of the ability to produce custom-formatted reports.
Subsequently, the quality of the data in the business intelligence systems showed the next-strongest relationship with the quality of decision making. Therefore, improving data quality, in terms of whether the data source is internal or external, whether the data are quantitative or qualitative, the credibility of the data and the perceptions of those who use the business intelligence system, improves the quality of decision making at Mellat Bank.
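The first analysis stage mentioned above, a Kolmogorov-Smirnov check against a normal distribution, can be sketched from scratch as follows. The scores are made-up values and in practice a library routine (e.g. scipy.stats.kstest) would be used, which also supplies the p-value:

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """Cumulative distribution function of N(mu, sigma^2)."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

def ks_statistic(sample, mu, sigma):
    """One-sample KS statistic: the largest gap between the empirical CDF
    of the sample and the reference normal CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        c = normal_cdf(x, mu, sigma)
        d = max(d, abs((i + 1) / n - c), abs(c - i / n))
    return d

scores = [-1.0, 0.0, 1.0]               # invented standardized scores
d_stat = ks_statistic(scores, 0.0, 1.0) # small D favors normality
```

A small D (relative to the critical value for the sample size) means the empirical distribution stays close to the normal curve, which is what licenses the parametric tests used in the later stages.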

Keywords: business intelligence, business intelligence capability, decision making, decision quality

Procedia PDF Downloads 112
1263 Downregulation of Epidermal Growth Factor Receptor in Advanced Stage Laryngeal Squamous Cell Carcinoma

Authors: Sarocha Vivatvakin, Thanaporn Ratchataswan, Thiratest Leesutipornchai, Komkrit Ruangritchankul, Somboon Keelawat, Virachai Kerekhanjanarong, Patnarin Mahattanasakul, Saknan Bongsebandhu-Phubhakdi

Abstract:

In this era of globalization, much attention has been drawn to various molecular biomarkers that may have the potential to predict the progression of cancer. Epidermal growth factor receptor (EGFR) is the classic member of the ErbB family of membrane-associated intrinsic tyrosine kinase receptors. EGFR expression is found in several organs throughout the body, as it plays roles in the regulation of cell proliferation, survival, and differentiation under normal physiologic conditions. However, anomalous expression, whether over- or under-expression, is believed to be the underlying mechanism of pathologic conditions, including carcinogenesis. Even though there have been numerous discussions of EGFR as a prognostic tool in head and neck cancer, no consensus has yet been reached. The aims of the present study are to assess the correlation between the level of EGFR expression and demographic data as well as clinicopathological features, and to evaluate the ability of EGFR to serve as a reliable prognostic marker. Furthermore, this study also aims to investigate the probable pathophysiology that explains the findings. This retrospective study included 30 squamous cell laryngeal carcinoma patients treated at King Chulalongkorn Memorial Hospital from January 1, 2000, to December 31, 2004. EGFR expression was observed to be significantly downregulated with progression of the laryngeal cancer stage (one-way ANOVA, p = 0.001). A statistically significantly lower EGFR expression was recorded in the late stage of the disease compared to the early stage (unpaired t-test, p = 0.041). EGFR overexpression also showed a tendency to increase the recurrence of cancer (unpaired t-test, p = 0.128). A significant downregulation of EGFR expression was documented in advanced-stage laryngeal cancer. The results indicate that the EGFR level correlates with prognosis in terms of stage progression.
Thus, EGFR expression might be used as a biomarker for prognostic prediction in laryngeal squamous cell carcinoma.
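The unpaired (two-sample) t statistic used above to compare EGFR expression between early- and late-stage groups can be sketched as follows. This is a toy example on invented scores, not the study's measurements; the p-value would then come from the t distribution (e.g. via scipy.stats.ttest_ind):

```python
from statistics import mean, variance

def unpaired_t(a, b):
    """Pooled-variance two-sample t statistic (classic unpaired t-test)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

early = [1.0, 2.0, 3.0]   # invented EGFR expression scores, early stage
late = [2.0, 3.0, 4.0]    # invented EGFR expression scores, late stage
t_stat = unpaired_t(early, late)   # negative: early-stage mean is lower
```

With more than two stage groups, a one-way ANOVA (as reported above) plays the analogous role by comparing between-group to within-group variance.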

Keywords: downregulation, epidermal growth factor receptor, immunohistochemistry, laryngeal squamous cell carcinoma

Procedia PDF Downloads 111