Search results for: selection of intensity measures
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7562


632 Resilience-Vulnerability Interaction in the Context of Disasters and Complexity: Study Case in the Coastal Plain of Gulf of Mexico

Authors: Cesar Vazquez-Gonzalez, Sophie Avila-Foucat, Leonardo Ortiz-Lozano, Patricia Moreno-Casasola, Alejandro Granados-Barba

Abstract:

Over the last twenty years, academic and scientific literature has focused on understanding the processes and factors underlying the vulnerability and resilience of coastal social-ecological systems. Some scholars argue that resilience and vulnerability are isolated concepts because of their epistemological origins, while others note the existence of a strong resilience-vulnerability relationship. Here we present an ordinal logistic regression model based on an analytical framework of the dynamic resilience-vulnerability interaction along the adaptive cycle of complex systems and the phases of the disaster process (during, recovery, and learning). We demonstrate that 1) during the disturbance, absorptive capacity (resilience as a core of attributes) and external response capacity explain the probability that damage to household capitals is diminished, while exposure sets the threshold for how much disturbance households can absorb; 2) at recovery, absorptive capacity and external response capacity explain the probability that household capitals recover faster (resilience as an outcome) from damage; and 3) at learning, adaptive capacity (resilience as a core of attributes) explains the probability of households adopting adaptation measures based on the enhancement of physical capital. During the disturbance phase, exposure carries the greatest weight in the probability of damage to capitals, and households with absorptive and external response capacity elements absorbed the impact of floods better than households without these elements. At the recovery phase, households with absorptive and external response capacity recovered their capitals faster; however, the damage sets the threshold on recovery time. More importantly, diversity in financial capital increases the probability of recovering other capitals, but it becomes a liability in that the probability of recovering household finances over a longer time increases.
At the learning-reorganizing phase, adaptation (modifications to the house) increases the probability of less damage to physical capital, although the effect is small. In conclusion, resilience is both an outcome and a core of attributes that interacts with vulnerability along the adaptive cycle and the phases of the disaster process. Absorptive capacity can diminish the damage caused by floods; however, once exposure exceeds the thresholds, neither absorptive nor external response capacity is enough. Likewise, absorptive and external response capacity reduce the recovery time of capitals, but the damage sets the thresholds beyond which households are not capable of recovering their capitals.
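The proportional-odds form of ordinal logistic regression named in the abstract can be sketched as follows. The predictors, coefficients, and cutpoints below are hypothetical placeholders, not the fitted model from the study:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ordinal_probs(x, beta, cutpoints):
    """Proportional-odds model: P(Y <= j | x) = sigmoid(theta_j - x.beta);
    per-category probabilities are differences of adjacent cumulatives."""
    eta = sum(xi * bi for xi, bi in zip(x, beta))
    cum = [sigmoid(theta - eta) for theta in cutpoints] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# Hypothetical predictors: exposure, absorptive capacity, external response capacity
x = [1.2, 0.5, 0.8]
beta = [1.1, -0.6, -0.4]    # exposure raises damage odds; capacities lower them
cutpoints = [-0.5, 1.0]     # thresholds between low / medium / high damage
p = ordinal_probs(x, beta, cutpoints)  # probabilities of the three damage levels
```

A fitted model would estimate `beta` and `cutpoints` from the household survey data; the sketch only shows how the category probabilities follow from them.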

Keywords: absorptive capacity, adaptive capacity, capital, floods, recovery-learning, social-ecological systems

Procedia PDF Downloads 133
631 Culvert Blockage Evaluation Using Australian Rainfall And Runoff 2019

Authors: Rob Leslie, Taher Karimian

Abstract:

The blockage of cross drainage structures is a risk that needs to be understood and managed or mitigated through design. Blockage is a random event, influenced by site-specific factors, which needs to be quantified for design. Under- and overestimation of blockage can have major impacts on flood risk and on the cost associated with drainage structures. The importance of this matter is heightened for projects located within sensitive lands. It is a particularly complex problem for large linear infrastructure projects (e.g., rail corridors) located within floodplains, where blockage factors can influence flooding upstream and downstream of the infrastructure. The selection of appropriate blockage factors for hydraulic modeling has been the subject of extensive research by hydraulic engineers. This paper reviews the current Australian Rainfall and Runoff 2019 (ARR 2019) methodology for blockage assessment by applying the method to a transport corridor brownfield upgrade case study in New South Wales. The results of applying the method are also validated against asset data and maintenance records. ARR 2019, Book 6, Chapter 6 includes advice and an approach for estimating the blockage of bridges and culverts. This paper concentrates specifically on the blockage of cross drainage structures. The method has been developed to estimate the blockage level for culverts affected by sediment or debris during flooding. The objective of the approach is to evaluate a numerical blockage factor that can be utilized in a hydraulic assessment of cross drainage structures. The project included an assessment of over 200 cross drainage structures. In order to estimate a blockage factor for use in the hydraulic model, a process was developed that considers the qualitative factors (e.g., debris type, debris availability) and the site-specific hydraulic factors that influence blockage.
A site rating associated with the debris potential (i.e., availability, transportability, mobility) at each crossing was completed using the method outlined in the ARR 2019 guidelines. The hydraulic inputs (i.e., flow velocity, flow depth) and qualitative factors at each crossing were entered into a spreadsheet in which the design blockage level for each cross drainage structure was determined from the condition relating the inlet clear width, L10 (the average length of the longest 10% of the debris reaching the site), and the adjusted debris potential. Asset data, including site photos and maintenance records, were then reviewed and compared with the blockage assessment to check the validity of the results. The results of this assessment demonstrate that the blockage factors estimated at each crossing location using the ARR 2019 guidelines are well validated by the asset data. The primary finding of the study is that the ARR 2019 methodology is a suitable approach for culvert blockage assessment, validated here against a case study spanning a large geographical area and multiple sub-catchments. The study also found that the methodology can be effectively coded within a spreadsheet or similar analytical tool to automate its application.
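The paper's point that the method can be coded within a spreadsheet or similar tool can be illustrated with a minimal lookup on the inlet-clear-width-versus-L10 condition and the adjusted debris potential. The fractions below are placeholders, not the published ARR 2019 table values, which should be taken from Book 6, Chapter 6:

```python
def design_blockage(inlet_clear_width_m, l10_m, adjusted_debris_potential):
    """Illustrative design-blockage lookup in the spirit of ARR 2019 Book 6 Ch. 6.
    The fractions below are placeholders, NOT the published table values."""
    debris_fits = inlet_clear_width_m >= l10_m  # can typical long debris pass?
    table = {
        (True, "low"): 0.00, (True, "medium"): 0.10, (True, "high"): 0.25,
        (False, "low"): 0.25, (False, "medium"): 0.50, (False, "high"): 1.00,
    }
    return table[(debris_fits, adjusted_debris_potential)]

# A 1.2 m wide culvert inlet facing 1.8 m average debris lengths (hypothetical)
factor = design_blockage(1.2, 1.8, "high")
```

Applied per crossing, a function like this reproduces the spreadsheet workflow the authors describe for 200+ structures.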

Keywords: ARR 2019, blockage, culverts, methodology

Procedia PDF Downloads 361
630 The Influence of Hydrolyzed Cartilage Collagen on General Mobility and Wellbeing of an Active Population

Authors: Sara De Pelsmaeker, Catarina Ferreira da Silva, Janne Prawit

Abstract:

Recent studies show that enzymatically hydrolyzed collagen is absorbed and distributed to joint tissues, where it has analgesic and active anti-inflammatory properties. Reviews of the relevant literature also support this theory. However, these studies all use hydrolyzed collagen from animal hide or skin. This study looks into the effect of daily supplementation with hydrolyzed cartilage collagen (HCC), which has a different composition. A consumer study was set up using a double-blind placebo-controlled design, with a control group taking 0.5 g of maltodextrin twice a day and an experimental group taking 0.5 g of HCC twice a day, over a trial period of 12 weeks. A follow-up phase of 4 weeks without supplementation was included in the experiment to investigate the 'wash-out' phase. As the consumer study was conducted during the lockdown periods, a dedicated app was designed to follow up with the participants. The app had the advantage that the motivation of the participants was enhanced and the drop-out rate was lower than normally seen in consumer studies. Participants were recruited via various sports and health clubs across the UK, targeting a general population of people who considered themselves in good health. Participants were excluded if they were experiencing any medical condition or taking any prescribed medication. A minimum requirement was that they regularly engaged in some level of physical activity. The participants logged the type of activity that they conducted and its duration. Each week, participants provided feedback on their joint health and subjective pain using the validated pain measuring instrument, the Visual Analogue Scale (VAS). The weekly reporting section in the app was designed for simplicity, based on the accuracy demonstrated in previous similar studies in tracking participants' subjective pain measures.
At the beginning of the trial, each participant indicated their baseline joint pain. The results of this consumer study indicated that HCC significantly improved joint health and subjective pain scores compared to the placebo group. No significant differences were found between demographic groups (age or gender). The level of activity, ranging from high-intensity training to regular walking, did not significantly influence the effect of the HCC. The results of the wash-out phase indicated that when the participants stopped the HCC supplementation, their subjective pain scores increased again to baseline. In conclusion, the results give a positive indication that daily supplementation with HCC can contribute to the overall mobility and wellbeing of a generally active population.
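A between-group comparison of VAS change scores of the kind reported above can be sketched with Welch's two-sample t statistic. The data below are invented for illustration and are not the study's results:

```python
import statistics as st

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances); a p-value would
    additionally need the Welch-Satterthwaite degrees of freedom."""
    return (st.mean(a) - st.mean(b)) / (
        st.variance(a) / len(a) + st.variance(b) / len(b)) ** 0.5

# Hypothetical 12-week reductions in VAS pain score (baseline minus week 12)
hcc = [2.1, 1.8, 2.5, 1.9, 2.2, 2.0]
placebo = [0.4, 0.6, 0.2, 0.5, 0.3, 0.7]
t = welch_t(hcc, placebo)  # positive t favours the HCC group here
```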

Keywords: VAS-score, food supplement, mobility, joint health

Procedia PDF Downloads 162
629 Plantar Neuro-Receptor Activation in Total Knee Arthroplasty Patients: Impact on Clinical Function, Pain, and Stiffness - A Randomized Controlled Trial

Authors: Woolfrey K., Woolfrey M., Bolton C. L., Warchuk D.

Abstract:

Objectives: Osteoarthritis is the most common joint disease of adults worldwide. Despite total knee arthroplasty (TKA) demonstrating high levels of success, 20% of patients report dissatisfaction with their result. VOXX Wellness Stasis Socks are embedded with a proprietary pattern of neuro-receptor activation points that have been proven to activate a precise neuro-response, according to the pattern theory of haptic perception, which stimulates improvements in pain and function. The use of this technology in TKA patients may prove beneficial as an adjunct to recovery, as many patients suffer from deficits to their proprioceptive system caused by ligamentous damage and alterations to mechanoreceptors during the procedure. We hypothesized that VOXX Wellness Stasis Socks are a safe, cost-effective, and easily scalable strategy to support TKA patients through their recovery. Design: Double-blinded, placebo-controlled randomized trial. Participants: Patients scheduled to receive TKA were considered eligible for inclusion in the trial. Interventions: Intervention group (I): VOXX Wellness Stasis socks containing receptor point-activation technology. Control group (C): VOXX Wellness Stasis socks without receptor point-activation technology. Socks were worn during waking hours for 6 weeks. Main Outcome Measures: The Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) questionnaire was completed at baseline, 2 weeks, and 6 weeks to assess pain, stiffness, and physical function. Results: Data were analyzed using SPSS software. P-values, effect sizes, and confidence intervals are reported to assess the clinical relevance of the findings. Physical status classifications were compared using t-tests. Within-subject and between-subject differences in mean WOMAC scores were analyzed by ANOVA. Effect size was analyzed using Cramer's V. WOMAC scores for pain and stiffness showed a consistent improvement at 2 weeks post-op in the I group over the C group.
The WOMAC scores assessing physical function showed a consistent improvement at both 2 and 6 weeks post-op in the I group compared to the C group. Conclusions: VOXX proved to be a low-cost, safe intervention in TKA to help patients improve with regard to pain, stiffness, and physical function. Disclosures: None.
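The between-subject ANOVA on WOMAC scores mentioned above can be sketched from first principles. The improvement scores below are hypothetical, not trial data:

```python
def one_way_anova_f(*groups):
    """One-way between-groups ANOVA F statistic, computed from scratch."""
    all_x = [x for g in groups for x in g]
    grand = sum(all_x) / len(all_x)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (len(groups) - 1)
    ms_within = ss_within / (len(all_x) - len(groups))
    return ms_between / ms_within

# Hypothetical 6-week WOMAC physical-function improvements (points)
intervention = [12, 15, 11, 14, 13]
control = [6, 8, 5, 7, 9]
f_stat = one_way_anova_f(intervention, control)
```

In practice SPSS reports the same F together with its p-value; the sketch only shows where the statistic comes from.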

Keywords: osteoarthritis, RCT, pain management, total knee arthroplasty

Procedia PDF Downloads 531
628 Mitigation of Indoor Human Exposure to Traffic-Related Fine Particulate Matter (PM₂.₅)

Authors: Ruchi Sharma, Rajasekhar Balasubramanian

Abstract:

Motor vehicles emit a number of air pollutants, among which fine particulate matter (PM₂.₅) is of major concern in cities with high population density due to its negative impacts on air quality and human health. Typically, people spend more than 80% of their time indoors. Consequently, human exposure to traffic-related PM₂.₅ in indoor environments has received considerable attention. Most of the public residential buildings in tropical countries are designed for natural ventilation where indoor air quality tends to be strongly affected by the migration of air pollutants of outdoor origin. However, most of the previously reported traffic-related PM₂.₅ exposure assessment studies relied on ambient PM₂.₅ concentrations and thus, the health impact of traffic-related PM₂.₅ on occupants in naturally ventilated buildings remains largely unknown. Therefore, a systematic field study was conducted to assess indoor human exposure to traffic-related PM₂.₅ with and without mitigation measures in a typical naturally ventilated residential apartment situated near a road carrying a large volume of traffic. Three PM₂.₅ exposure scenarios were simulated in this study, i.e., Case 1: keeping all windows open with a ceiling fan on as per the usual practice, Case 2: keeping all windows fully closed as a mitigation measure, and Case 3: keeping all windows fully closed with the operation of a portable indoor air cleaner as an additional mitigation measure. The indoor to outdoor (I/O) ratios for PM₂.₅ mass concentrations were assessed and the effectiveness of using the indoor air cleaner was quantified. Additionally, potential human health risk based on the bioavailable fraction of toxic trace elements was also estimated for the three cases in order to identify a suitable mitigation measure for reducing PM₂.₅ exposure indoors. 
Traffic-related PM₂.₅ levels indoors exceeded the air quality guideline value (12 µg/m³) in Case 1, i.e., under natural ventilation conditions, due to the advective flow of outdoor air into the indoor environment. When the indoor air cleaner was operated, a significant reduction (p < 0.05) in indoor PM₂.₅ exposure levels was observed. Specifically, the effectiveness of the air cleaner in reducing indoor PM₂.₅ exposure was estimated to be about 74%. Moreover, the potential human health risk assessment also indicated a substantial reduction in potential health risk while the air cleaner was in use. This is the first study of its kind to evaluate indoor human exposure to traffic-related PM₂.₅ and identify a suitable exposure mitigation measure that can be implemented in densely populated cities to realize health benefits.
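The ~74% effectiveness figure is a simple percent reduction of indoor PM₂.₅ with versus without the cleaner; the concentrations below are hypothetical values chosen only to reproduce that percentage:

```python
def io_ratio(indoor, outdoor):
    """Indoor-to-outdoor (I/O) PM2.5 mass concentration ratio."""
    return indoor / outdoor

def cleaner_effectiveness(pm25_without, pm25_with):
    """Percent reduction in indoor PM2.5 attributable to the air cleaner."""
    return 100.0 * (1.0 - pm25_with / pm25_without)

# Hypothetical concentrations in µg/m³, chosen to reproduce the ~74% figure
effectiveness = round(cleaner_effectiveness(20.0, 5.2), 1)  # 74.0
```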

Keywords: fine particulate matter, indoor air cleaner, potential human health risk, vehicular emissions

Procedia PDF Downloads 126
627 A Lung Cancer Patient Grief Counseling Nursing Experience

Authors: Syue-Wen Lin

Abstract:

Objective: This article explores the nursing experience of a 64-year-old female lung cancer patient who underwent a thoracoscopic left lower lobectomy and treatment. The patient has a history of diabetes. The nursing process included cancer treatment, postoperative pain management, wound care and healing, and family grief counseling. Methods: The nursing period was from March 11 to March 15, 2024. During this time, strict aseptic wound dressing procedures and advanced wound care techniques were employed to promote wound healing and prevent infection. Postoperatively, re-intubation became necessary after the patient developed aspiration pneumonia with worsening symptoms. Given the patient's advanced cancer and deteriorating condition, the nursing team provided comprehensive grief counseling and care tailored to the patient's physical and psychological needs as well as the emotional needs of the family. Considering the complexity of the patient's condition, including advanced cancer, palliative care was also integrated into the overall nursing process to alleviate discomfort and provide psychological support. Results: Gordon's Functional Health Patterns were used for assessment, including evaluation of the patient's medical history, physical assessment, and interviews, to collect data on the patient's physical, psychological, social, and spiritual dimensions and to provide individualized nursing care. The interprofessional critical care team collaborated with the hospice team to understand the psychological state of the patient's family and develop a comprehensive approach to care. Family meetings were convened, and support was provided to the patient during the final stage of her life. Additionally, the combination of cancer care, pain management, wound care, and palliative care ensured comprehensive support for the patient throughout her recovery, thereby improving her quality of life.
Conclusion: Lung cancer and aspiration pneumonia present significant challenges to patients, and the nursing team not only provided critical care but also addressed individual patient needs through cancer care, pain management, wound care, and palliative care interventions. These measures effectively improved the patient's quality of life, provided compassionate palliative care at the end of life, and allowed her to spend the last mile of her life with her family. Nursing staff worked closely with the family to develop a comprehensive care plan ensuring that the patient received high-quality medical care as well as psychological support and a comfortable recovery environment.

Keywords: grief counseling, lung cancer, palliative care, nursing experience

Procedia PDF Downloads 26
626 Greenhouse Gasses’ Effect on Atmospheric Temperature Increase and the Observable Effects on Ecosystems

Authors: Alexander J. Severinsky

Abstract:

Radiative forcing by greenhouse gases (GHG) increases the temperature of the Earth's surface, more on land and less in the oceans, owing to their thermal capacities. Given this inertia, the temperature increase is delayed over time. Air temperature, however, is not delayed, as the thermal capacity of air is much lower. In this study, through analysis and synthesis of multidisciplinary science and data, an estimate of the atmospheric temperature increase is made. This estimate is then used to shed light on current observations of ice and snow loss, desertification and forest fires, and increased extreme air disturbances. The inquiry is motivated by the author's skepticism that the current changes can be explained by a ~1 °C rise in global average surface temperature within the last 50-60 years. The only other plausible cause to explore is a rise in atmospheric temperature. The study analyzes air temperature rise from three different scientific disciplines: thermodynamics, climate science experiments, and climatic historical studies. The results from these diverse disciplines are nearly the same, within ±1.6%. The direct radiative forcing of GHGs with a high level of scientific understanding is near 4.7 W/m² averaged over the Earth's entire surface in 2018, compared to that in pre-industrial times in the mid-1700s. The additional radiative forcing of fast feedbacks from various forms of water contributes approximately a further ~15 W/m². In 2018, these radiative forcings heated the atmosphere by approximately 5.1 °C, which will create a thermal-equilibrium average ground surface temperature increase of 4.6 °C to 4.8 °C by the end of this century. After 2018, the temperature will continue to rise without any additional increase in the concentration of GHGs, primarily carbon dioxide and methane. These findings on the radiative forcing of GHGs in 2018 were applied to estimate effects on major Earth ecosystems.
This additional forcing of nearly 20 W/m² causes an increase in ice melting at an additional rate of over 90 cm/year, a green-leaf temperature increase of nearly 5 °C, and a work-energy increase of air of approximately 40 J/mol. This explains the observed high rates of ice melting at all altitudes and latitudes, the spread of deserts and increase in forest fires, and the increased energy of tornadoes, typhoons, hurricanes, and extreme weather much more plausibly than the 1.5 °C increase in average global surface temperature over the same time interval. Planned mitigation and adaptation measures might prove much more effective when directed toward the reduction of existing GHGs in the atmosphere.
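The abstract's forcing-to-temperature step can be expressed as a linear response ΔT = λ·F, with λ back-solved from the abstract's own numbers (~5.1 °C for ~19.7 W/m²). This is a reading of the paper's arithmetic, not an independent climate sensitivity estimate:

```python
def delta_t(forcing_w_m2, sensitivity=0.26):
    """Linearized temperature response dT = lambda * F. The sensitivity
    (K per W/m^2) is back-solved from the abstract's own figures, not an
    independently established climate sensitivity."""
    return sensitivity * forcing_w_m2

direct_ghg = 4.7    # W/m^2, direct GHG forcing in 2018 (from the abstract)
feedbacks = 15.0    # W/m^2, fast water-related feedbacks (from the abstract)
warming = round(delta_t(direct_ghg + feedbacks), 1)  # ~5.1 degrees
```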

Keywords: greenhouse radiative force, greenhouse air temperature, greenhouse thermodynamics, greenhouse historical, greenhouse radiative force on ice, greenhouse radiative force on plants, greenhouse radiative force in air

Procedia PDF Downloads 104
625 Adaptation of the Scenario Test for Greek-speaking People with Aphasia: Reliability and Validity Study

Authors: Marina Charalambous, Phivos Phylactou, Thekla Elriz, Loukia Psychogios, Jean-Marie Annoni

Abstract:

Background: Evidence-based practices for the evaluation and treatment of people with aphasia (PWA) in Greek are mainly impairment-based. Functional and multimodal communication is usually under-assessed and neglected by clinicians. This study explores the adaptation and psychometric testing of the Greek (GR) version of The Scenario Test, which assesses the everyday functional communication of PWA in an interactive multimodal communication setting with the support of an active communication facilitator. Aims: To establish the reliability and validity of The Scenario Test-GR and discuss its clinical value. Methods & Procedures: The Scenario Test-GR was administered to 54 people with chronic stroke (6+ months post-stroke): 32 PWA and 22 people with stroke without aphasia. Participants were recruited from Greece and Cyprus. All measures were administered in an interview format. Standard psychometric criteria were applied to evaluate the reliability (internal consistency, test-retest, and interrater reliability) and validity (construct and known-groups validity) of The Scenario Test-GR. Video analysis was performed for the qualitative examination of the communication modes used. Outcomes & Results: The Scenario Test-GR shows high levels of reliability and validity. High internal consistency (Cronbach's α = .95), test-retest reliability (ICC = .99), and interrater reliability (ICC = .99) were found. Interrater agreement on individual items fell between good and excellent levels of agreement. Correlations with a tool measuring language function in aphasia (the Aphasia Severity Rating Scale of the Boston Diagnostic Aphasia Examination), a measure of functional communication (the Communicative Effectiveness Index), and two instruments examining the psychosocial impact of aphasia (the Stroke and Aphasia Quality of Life questionnaire and the Aphasia Impact Questionnaire) revealed good convergent validity (all ps < .05).
Results showed good known-groups validity (Mann-Whitney U = 96.5, p < .001), with significantly higher scores for participants without aphasia than for those with aphasia. Conclusions: The psychometric qualities of The Scenario Test-GR support the reliability and validity of the tool for the assessment of functional communication in Greek-speaking PWA. The Scenario Test-GR can be used to assess multimodal functional communication, orient aphasia rehabilitation goal setting toward the activity and participation level, and serve as an outcome measure of everyday communication. Future studies will focus on measuring sensitivity to change in PWA with severe non-fluent aphasia.
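The internal-consistency statistic reported above (Cronbach's α) can be computed directly from per-item scores; the scores below are hypothetical, not the study's data:

```python
import statistics as st

def cronbach_alpha(items):
    """Cronbach's alpha; `items` is a list of per-item score lists with
    respondents in the same order in each list."""
    k = len(items)
    sum_item_var = sum(st.variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - sum_item_var / st.variance(totals))

# Hypothetical scores of six participants on three test items
items = [[3, 2, 3, 1, 2, 3], [3, 1, 3, 1, 2, 2], [2, 2, 3, 1, 1, 3]]
alpha = cronbach_alpha(items)
```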

Keywords: the scenario test GR, functional communication assessment, people with aphasia (PWA), tool validation

Procedia PDF Downloads 128
624 The Effect of Online Analyzer Malfunction on the Performance of Sulfur Recovery Unit and Providing a Temporary Solution to Reduce the Emission Rate

Authors: Hamid Reza Mahdipoor, Mehdi Bahrami, Mohammad Bodaghi, Seyed Ali Akbar Mansoori

Abstract:

Nowadays, with stricter limitations to reduce emissions, considerable penalties are imposed if pollution limits are exceeded. Therefore, refineries, along with focusing on improving the quality of their products, also focus on producing products with the least environmental impact. The duty of the sulfur recovery unit (SRU) is to convert H₂S gas coming from the upstream units to elemental sulfur and to minimize the burning of sulfur compounds to SO₂. The Claus process is a common process for converting H₂S to sulfur, comprising a reaction furnace followed by catalytic reactors and sulfur condensers. In addition to a Claus section, SRUs usually include a tail gas treatment (TGT) section to decrease the concentration of SO₂ in the flue gas below the emission limits. To operate an SRU properly, the flow rate of combustion air to the reaction furnace must be adjusted so that the Claus reaction is performed according to stoichiometry. Accurate control of the air demand leads to optimum sulfur recovery during flow and composition fluctuations in the acid gas feed. Therefore, the major control system in the SRU is the air demand control loop, which includes a feed-forward control system based on predetermined feed flow rates and a feedback control system based on the signal from the tail gas online analyzer. The use of online analyzers requires compliance with the installation and operation instructions. Unfortunately, most of these analyzers in Iran are out of service for various reasons, such as the low priority given to environmental issues and a lack of access to after-sales services due to sanctions. In this paper, an SRU in Iran was simulated and calibrated using industrial experimental data. Afterward, the effect of a malfunction of the online analyzer on the performance of the SRU was investigated using the calibrated simulation.
The results showed that an increase in the SO₂ concentration in the tail gas led to an increase in the temperature of the reduction reactor in the TGT section. This temperature increase caused the failure of the TGT and raised the SO₂ concentration from 750 ppm to 35,000 ppm. In addition, the lack of a control system for adjusting the combustion air caused further increases in SO₂ emissions. In some processes, the major variable cannot be controlled directly because of difficulty in measurement or a long delay in the sampling system. In these cases, a secondary variable, which can be measured more easily, is controlled instead. With the correct selection of this variable, the main variable is controlled along with the secondary variable. This strategy for controlling a process system is referred to as "inferential control" and is considered in this paper. Therefore, a sensitivity analysis was performed to investigate the sensitivity of other measurable parameters to input disturbances. The results revealed that the outlet temperature of the first Claus reactor could be used for inferential control of the combustion air. Applying this method to the operation maximized the sulfur recovery in the Claus section.
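Inferential control as described, trimming the combustion air from the first Claus reactor's outlet temperature as a proxy for tail-gas composition, can be sketched as a single PI step. The setpoint, gains, and units below are illustrative, not plant values:

```python
def pi_air_trim(t_measured, t_setpoint, integral, kp=0.02, ki=0.001, dt=1.0):
    """One step of a PI loop trimming combustion-air flow using the first
    Claus reactor outlet temperature as an inferential proxy for tail-gas
    composition. Gains, units, and setpoint are illustrative only."""
    error = t_setpoint - t_measured
    integral += error * dt
    return kp * error + ki * integral, integral

# Reactor running 2 degrees hot -> the loop trims the air flow down
integral = 0.0
trim, integral = pi_air_trim(t_measured=312.0, t_setpoint=310.0, integral=integral)
```

In a real SRU the trim would bias the feed-forward air demand rather than replace it, mirroring the two-layer control scheme the abstract describes.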

Keywords: sulfur recovery, online analyzer, inferential control, SO₂ emission

Procedia PDF Downloads 75
623 Sweet to Bitter Perception Parageusia: Case of Posterior Inferior Cerebellar Artery Territory Diaschisis

Authors: I. S. Gandhi, D. N. Patel, M. Johnson, A. R. Hirsch

Abstract:

Although distortion of taste perception following a cerebrovascular event may seem a frivolous consequence of a classic stroke presentation, altered taste perception places patients at increased risk for malnutrition, weight loss, and depression, all of which negatively impact quality of life. Impaired taste perception can result from a wide variety of cerebrovascular lesions in various locations, including the pons, the insular cortices, and the ventral posteromedial nucleus of the thalamus. Wallenberg syndrome, also known as lateral medullary syndrome, has been described to affect taste; however, specific sweet-to-bitter dysgeusia from a posterior inferior cerebellar artery territory infarction is an infrequent event; as such, a case is presented. One year prior to presentation, this 64-year-old right-handed woman suffered a right posterior inferior cerebellar artery aneurysm rupture with resultant infarction, culminating in placement of a ventriculoperitoneal shunt. One and a half months after this event, she noticed the gradual onset of an inability to taste sweet, eventually progressing to all sweet food tasting bitter. Since the onset of her chemosensory problems, the patient has lost 60 pounds. Upon gustatory testing, the patient's taste thresholds showed ageusia to sucrose and hydrochloric acid, with normogeusia to sodium chloride, urea, and phenylthiocarbamide. The gustatory cortex comprises, in part, the right insular cortex and the right anterior operculum, which are primarily involved in the sensory modalities of taste. In this model, sweet is localized in the posterior-most and rostral aspect of the right insular cortex, notably adjacent to the region responsible for bitter taste. The sweet-to-bitter dysgeusia in our patient suggests a lesion in this location.
Although the primary lesion in this patient was located in the right medulla of the brainstem, neurodegeneration in the rostral and posterior-most aspect of the right insular cortex may have occurred due to diaschisis. Diaschisis describes neurophysiological changes that occur in regions remote from a focal brain lesion. Although hydrocephalus and vasospasm due to the aneurysmal rupture may explain the distal foci of impairment, the gradual onset of the dysgeusia is more indicative of diaschisis. The perception of sweet food now tasting bitter suggests that, in the absence of sweet taste reception, the intrinsic bitter taste of food is being stimulated rather than the sweet. In the evaluation and treatment of taste parageusia secondary to cerebrovascular injury, prophylactic neuroprotective measures may be worthwhile. Further investigation is warranted.

Keywords: diaschisis, dysgeusia, stroke, taste

Procedia PDF Downloads 180
622 Parameter Selection and Monitoring for Water-Powered Percussive Drilling in Green-Fields Mineral Exploration

Authors: S. J. Addinell, T. Richard, B. Evans

Abstract:

The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled tubing based greenfields mineral exploration drilling system utilising downhole water powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barron cover. This system has shown superior rates of penetration in water-rich hard rock formations at depths exceeding 500 meters. Several key challenges exist regarding the deployment and use of these bottom hole assemblies for mineral exploration, and this paper discusses some of the key technical challenges. This paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process is presented and shows a strong power law relationship for particle size distributions. Several percussive drilling parameters such as RPM, applied fluid pressure and weight on bit have been shown to influence the particle size distributions of the cuttings generated. This has direct influence on other drilling parameters such as flow loop performance, cuttings dewatering, and solids control. Real-time, accurate knowledge of percussive system operating parameters will assist the driller in maximising the efficiency of the drilling process. The applied fluid flow, fluid pressure, and rock properties are known to influence the natural oscillating frequency of the percussive hammer, but this paper also shows that drill bit design, drill bit wear and the applied weight on bit can also influence the oscillation frequency. Due to the changing drilling conditions and therefore changing operating parameters, real-time understanding of the natural operating frequency is paramount to achieving system optimisation. 
Several techniques to understand the oscillating frequency have been investigated and presented. With a conventional top drive drilling rig, spectral analysis of applied fluid pressure, hydraulic feed force pressure, hold back pressure and drill string vibrations has shown the presence of the operating frequency of the bottom hole tooling. With a coiled tubing drilling rig, however, which uses a positive displacement downhole motor to provide drill bit rotation, these signals are not available for interrogation at the surface, and another method must be considered. The investigation and analysis of ground vibrations using geophone sensors, similar to seismic-while-drilling techniques, has indicated the presence of the natural oscillating frequency of the percussive hammer. This method is shown to provide a robust technique for determining the downhole percussive oscillation frequency when used with a coiled tubing drill rig.
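As a rough sketch of the spectral-analysis step described above, the dominant oscillation frequency can be recovered from a vibration trace with a discrete Fourier transform. The signal below is synthetic: the 30 Hz component, the 1 kHz sample rate, and the noise level are illustrative assumptions standing in for a real geophone recording, not values from the study.

```python
import numpy as np

def dominant_frequency(signal, sample_rate_hz):
    """Return the frequency (Hz) of the strongest spectral peak of a 1-D signal."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))  # drop the DC offset
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)
    return freqs[np.argmax(spectrum)]

# Synthetic geophone trace: a 30 Hz "hammer" oscillation buried in noise
fs = 1000.0                           # sample rate, Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)       # 2 s recording window
rng = np.random.default_rng(0)
trace = np.sin(2 * np.pi * 30.0 * t) + 0.5 * rng.standard_normal(t.size)

print(dominant_frequency(trace, fs))  # close to 30.0
```

In a field system the same peak-picking would run on windowed segments of the geophone stream, so drift in the operating frequency with bit wear and weight on bit becomes visible in real time.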

Keywords: cuttings characterization, drilling optimization, oscillation frequency, percussive drilling, spectral analysis

Procedia PDF Downloads 230
621 The Origins of Representations: Cognitive and Brain Development

Authors: Athanasios Raftopoulos

Abstract:

In this paper, an attempt is made to explain the evolution or development of humans’ representational arsenal from its humble beginnings to its modern abstract symbols. Representations are physical entities that represent something else. To represent a thing (in a general sense of “thing”) means to use in the mind or in an external medium a sign that stands for it. The sign can be used as a proxy of the represented thing when the thing is absent. Representations come in many varieties, from signs that perceptually resemble what they represent to abstract symbols that are related to their representata through conventions. Relying on the distinction among indices, icons, and symbols, it is explained how symbolic representations gradually emerged from indices and icons. To understand the development or evolution of our representational arsenal, the development of the cognitive capacities that enabled the gradual emergence of representations of increasing complexity and expressive capability should be examined. The examination of these factors should rely on a careful assessment of the available empirical neuroscientific and paleo-anthropological evidence. These pieces of evidence should be synthesized to produce arguments whose conclusions provide clues concerning the developmental process of our representational capabilities. The analysis of the empirical findings in this paper shows that Homo erectus was able to use both icons and symbols. Icons were used as external representations, while symbols were used in language. The first step in the emergence of representations is that a purely causal sensory-motor schema involved in indices is decoupled from its normal causal sensory-motor functions and serves as a representation of the object that initially called it into play. Sensory-motor schemes are tied to specific contexts of the organism-environment interactions and are activated only within these contexts. 
For a representation of an object to be possible, this scheme must be de-contextualized so that the same object can be represented in different contexts; a decoupled schema loses its direct ties to reality and becomes mental content. The analysis suggests that symbols emerged due to selection pressures of the social environment. The need to establish and maintain social relationships in ever-enlarging groups that would benefit the group was a sufficient environmental pressure to lead to the appearance of the symbolic capacity. Symbols could serve this need because they can express abstract relationships, such as marriage or monogamy. Icons, by being firmly attached to what can be observed, could not go beyond surface properties to express abstract relations. The cognitive capacities required for having iconic and then symbolic representations were present in Homo erectus, which had a language that started without syntactic rules but was structured so as to mirror the structure of the world. This language became increasingly complex, and grammatical rules started to appear to allow for the construction of more complex expressions required to keep up with the increasing complexity of social niches. This created evolutionary pressures that eventually led to increasing cranial size and restructuring of the brain that allowed more complex representational systems to emerge.

Keywords: mental representations, iconic representations, symbols, human evolution

Procedia PDF Downloads 57
620 Managing Expatriates' Return: Repatriation Practices in a Sample of Firms in Portugal

Authors: Ana Pinheiro, Fatima Suleman

Abstract:

Literature has revealed strong awareness among companies in regard to expatriation, but issues associated with the repatriation of employees after an international assignment have been overlooked. Repatriation is one of the most challenging human resource practices; it affects how companies benefit from acquired skills and high-potential employees, and how they gain competitive advantage through networks developed during expatriation. However, the empirical evidence achieved so far suggests that expatriates have been disappointed because companies lack an effective repatriation strategy. Repatriates’ professional and emotional needs are often unrecognized, while repatriation is perceived as a non-issue by companies. The underlying assumption is that the return to the parent company, and to the original country, culture and language, does not demand any particular support. Unfortunately, this basic view has non-negligible consequences for repatriates, especially on expatriate retention and turnover rates after expatriation. The goal of our study is to examine the specific policies and practices adopted by companies to support employees after an international assignment. We assume that expatriation is a process which ends with repatriation. The latter is as crucial an issue as expatriation itself and requires due attention through the appropriate design of human resource management policies and tools. For this purpose, we use data from qualitative research based on interviews with a sample of firms operating in Portugal. We compare how firms accommodate concerns about repatriation in their policies and practices. The interviews therefore collect data on both the expatriation and repatriation processes, namely the selection and skills of candidates for expatriation, training, mentoring, communication and pay policies. The Portuguese labor market seems to be an interesting case study for mainly two reasons. 
On the one hand, the Portuguese Government is encouraging companies to internationalize in the context of an external market-oriented growth model. On the other hand, expatriation is being perceived as a job opportunity in a context of high unemployment rates among both the skilled and the non-skilled. This is ongoing research, and the data collected so far indicate that companies follow the pattern described in the literature. The interviewed companies recognize that the repatriation process matters even more than expatriation, but disregard specific human resource policies. They perceive that unfavorable labor market conditions discourage mobility across companies. It should be stressed that, according to the companies, employees now attach greater importance to stable jobs and far less importance to career development and other benefits after expatriation. However, there are still cases of turnover and difficulties of retention. Managers report non-negligible cases of turnover associated with the lack of effective repatriation programs and non-recognition of good performance. Repatriates seem to have acquired entrepreneurial spirit and skills and often create their own companies. These results suggest that even in the context of worsening labor market conditions, there should be greater awareness of the need to retain talented, experienced and highly skilled employees. Ultimately, other companies poach invaluable assets, while internationalized companies risk becoming mere training providers.

Keywords: expatriates, expatriation, international management, repatriation

Procedia PDF Downloads 336
619 Empirical Analysis of the Relationship between Voluntary Accounting Disclosures and Mongolian Stock Exchange Listed Companies’ Characteristics

Authors: Ernest Nweke

Abstract:

Mongolia has made giant strides in the development of its auditing and accounting system from Soviet-style to a market-oriented system. The high levels of domestic and foreign investment desired by the Mongolian government require that better and improved quality of corporate information and disclosure, consistent with international standards, be made available to investors. However, the Mongolian Certified Public Accountants (CPA) profession is still developing, and the quality of services provided by accounting firms in most cases does not comply with the International Financial Reporting Standards (IFRS) framework approved by the government for use in financial reporting. Against this backdrop, accounting and audit reforms, liberalization and deregulation, and the establishment of an efficient and effective professional monitoring and supervision regime are policy necessities. These will further enhance the Mongolian business environment, eliminate incompetence in the system, make the economy more attractive to investors and ultimately lift reporting standards and bring about improved accounting, auditing and disclosure practices among Mongolian firms. This paper examines the fundamental issues in the accounting and auditing environment in Mongolia and investigates the relationship between selected characteristics of Mongolian Stock Exchange (MSE) listed firms (profitability, leverage, firm size, firm auditor size, firm listing age, board size and proportion of independent directors) and voluntary accounting disclosures in their annual reports and accounts. The sample consists of the constituents of the MSE Top 20 Index, which represent over 95% of market capitalization. An empirical analysis of the hypothesized relationship was carried out using multiple regression in the EViews analytical software. 
Research results indicate that only a few of the company attributes positively impact voluntary accounting disclosures in Mongolian Stock Exchange-listed firms. The research is motivated by the absence of empirical evidence on the correlation between the quality of voluntary accounting disclosures made by listed companies in Mongolia and company characteristics, and its findings are therefore useful to both firms and regulatory authorities. The concluding part of the paper consists of research-based recommendations for listed firms and regulatory agencies on measures to put in place in order to enhance the quality of corporate financial reporting and disclosures in Mongolia.
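Outside EViews, the multiple-regression step can be sketched as ordinary least squares on a firm-attribute matrix. Every number below is hypothetical: the attributes (ROA, leverage, log assets, board size) and disclosure indices are invented for illustration, not data from the MSE sample.

```python
import numpy as np

# Hypothetical firm data: rows = firms; columns = profitability (ROA),
# leverage, log(total assets), board size. Illustrative values only.
X = np.array([
    [0.08, 0.45, 11.2, 7],
    [0.12, 0.30, 12.5, 9],
    [0.03, 0.60, 10.8, 5],
    [0.15, 0.25, 13.1, 11],
    [0.06, 0.50, 11.9, 8],
    [0.10, 0.40, 12.0, 6],
])
y = np.array([0.42, 0.61, 0.33, 0.70, 0.48, 0.55])  # voluntary disclosure index, 0-1

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(len(X)), X])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)

for name, b in zip(["intercept", "ROA", "leverage", "size", "board"], coefs):
    print(f"{name}: {b:+.3f}")
```

The sign and magnitude of each coefficient plays the role the paper assigns to each firm characteristic; a real replication would add significance tests and many more observations.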

Keywords: accounting, auditing, corporate disclosure, listed firms

Procedia PDF Downloads 103
618 Analysis and Design Modeling for Next Generation Network Intrusion Detection and Prevention System

Authors: Nareshkumar Harale, B. B. Meshram

Abstract:

The continued exponential growth of successful cyber intrusions against today’s businesses has made it abundantly clear that traditional perimeter security measures are no longer adequate and effective. The network trust architecture has evolved from trust-untrust to Zero-Trust. With Zero-Trust, essential security capabilities are deployed in a way that provides policy enforcement and protection for all users, devices, applications, data resources, and the communications traffic between them, regardless of their location. Information exchange over the Internet, in spite of the inclusion of advanced security controls, remains exposed to innovative and inventive cyberattacks. The TCP/IP protocol stack, the adopted standard for communication over networks, suffers from inherent design vulnerabilities; its communication and session management protocols, routing protocols and security protocols are the cause of major attacks. With the explosion of cyber security threats, such as viruses, worms, rootkits, malware and Denial of Service attacks, accomplishing efficient and effective intrusion detection and prevention has become both crucial and challenging. In this paper, we propose a design and analysis model for a next generation network intrusion detection and protection system as part of a layered security strategy. The proposed system design provides intrusion detection for a wide range of attacks with a layered architecture and framework. The proposed network intrusion classification framework deals with cyberattacks on standard TCP/IP protocols, routing protocols and security protocols. It thereby forms the basis for detection of attack classes and applies signature-based matching for known cyberattacks and data-mining-based machine learning approaches for unknown cyberattacks. Our implemented software can effectively detect attacks even when malicious connections are hidden within normal events. 
The unsupervised learning algorithm applied to network audit data trails results in unknown intrusion detection. Association rule mining algorithms generate new rules from collected audit trail data, resulting in increased intrusion prevention through integrated firewall systems. Intrusion response mechanisms can be initiated in real time, thereby minimizing the impact of network intrusions. Finally, we show how our approach can be validated and how the analysis results can be used for the detection of, and protection from, new network anomalies.
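To make the association-rule-mining step concrete, here is a minimal sketch that derives rules from toy audit-trail "transactions". It uses brute-force enumeration rather than the Apriori pruning a production miner would need, and the event attribute names are invented for illustration.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Brute-force frequent itemsets (fine for tiny attribute alphabets)."""
    items = sorted({i for t in transactions for i in t})
    n = len(transactions)
    freq = {}
    for size in range(1, len(items) + 1):
        for combo in combinations(items, size):
            support = sum(set(combo) <= t for t in transactions) / n
            if support >= min_support:
                freq[combo] = support
    return freq

def rules(freq, min_confidence):
    """Split each frequent itemset into lhs -> rhs rules above a confidence floor."""
    out = []
    for itemset, support in freq.items():
        if len(itemset) < 2:
            continue
        for k in range(1, len(itemset)):
            for lhs in combinations(itemset, k):
                conf = support / freq[lhs]  # subsets of frequent sets are frequent
                if conf >= min_confidence:
                    rhs = tuple(i for i in itemset if i not in lhs)
                    out.append((lhs, rhs, conf))
    return out

# Toy audit-trail "transactions": attributes observed per connection event
events = [
    {"syn_flood", "port_scan"},
    {"syn_flood", "port_scan", "spoofed_ip"},
    {"port_scan"},
    {"syn_flood", "port_scan"},
]
fs = frequent_itemsets(events, min_support=0.5)
print(rules(fs, min_confidence=0.9))
```

A resulting rule such as {syn_flood} -> {port_scan} with confidence 1.0 is the kind of pattern that could then be pushed as a candidate signature to an integrated firewall.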

Keywords: network intrusion detection, network intrusion prevention, association rule mining, system analysis and design

Procedia PDF Downloads 227
617 Antimicrobial Activities of Lactic Acid Bacteria from Fermented Foods and Probiotic Products

Authors: Alec Chabwinja, Cannan Tawonezvi, Jerneja Vidmar, Constance Chingwaru, Walter Chingwaru

Abstract:

Objective: To evaluate the potential of commercial fermented / probiotic products available in Zimbabwe or internationally, and strains of Lactobacillus plantarum (L. plantarum), as prophylaxis and therapy against diarrhoeal and sexually transmitted infections. Methods: The antimicrobial potential of cultures of lactobacilli enriched from four Zimbabwean commercial food/beverage products, namely Dairibord Lacto sour milk (DLSM), Probrand sour milk (PSM), Kefalos Vuka cheese (KVC) and Chibuku opaque beer (COB); three probiotic products obtainable in Europe and internationally; and four strains of L. plantarum obtained from Balkan traditional cheeses and Zimbabwean foods was assayed against clinical strains of Escherichia coli (E. coli) and non-clinical strains of Candida albicans and Rhodotorula spp. using the well diffusion method. Agar diffusion assays and a competitive exclusion assay were carried out on Mueller-Hinton agar. Results: Crude cultures of putative lactobacillus strains obtained from Zimbabwean dairy products (Probrand sour milk, Kefalos Vuka vuka cheese and Chibuku opaque beer) exhibited significantly greater antimicrobial activities against clinical strains of E. coli than strains of L. plantarum isolated from Balkan cheeses (CLP1, CLP2 or CLP3), crude microbial cultures from commercial paediatric probiotic products (BG, PJ and PL) or a culture of Lactobacillus rhamnosus LGG (p < 0.05). Furthermore, the following showed high antifungal activities against the two yeasts: supernatant-free microbial pellet (SFMP) from an extract of M. azedarach leaves (27 mm ± 2.5) > cell-free culture supernatants (CFCS) from Maaz Dairy sour milk and Mnandi sour milk (approximately 26 mm ± 1.8) > CFCS and SFMP from Amansi hodzeko (25 mm ± 1.5) > CFCS from Parinari curatellifolia fruit (24 mm ± 1.5), SFMP from P. curatellifolia fruit (24 mm ± 1.4) and SFMP from mahewu (20 mm ± 1.5). These cultures also showed high tolerance to acidic conditions (~pH 4). 
Conclusions: The putative lactobacilli from the commercial Zimbabwean dairy products (Probrand sour milk, Kefalos Vuka vuka cheese and Chibuku opaque beer), and three strains of L. plantarum from Balkan cheeses (CLP1, CLP2 or CLP3), exhibited high antibacterial activities, while the Maaz Dairy sour milk, Mnandi sour milk and Amansi hodzeko products had high antifungal activities. Our selection of Zimbabwean probiotic products has potential for further development into probiotic products for use in the control of diarrhoea caused by pathogenic strains of E. coli or yeast infections. Studies to characterise the probiotic potential of the live cultures in the products are underway.

Keywords: lactic acid bacteria, Staphylococcus aureus, Streptococcus spp, yeast, inhibition, acid tolerance

Procedia PDF Downloads 410
616 Clinicomycological Pattern of Superficial Fungal Infections among Primary School Children in Communities in Enugu, Nigeria

Authors: Nkeiruka Elsie Ezomike, Chinwe L. Onyekonwu, Anthony N. Ikefuna, Bede C. Ibe

Abstract:

Superficial fungal infections (SFIs) are one of the common cutaneous infections that affect children worldwide. They may lead to school absenteeism or school drop-out and hence a setback in the education of the child. Community-based studies in any locality are good reflections of the health conditions within that area. There is a dearth of information in the literature about SFIs among primary school children in Enugu. This study aimed to determine the clinicomycological pattern of SFIs among primary school children in rural and urban communities in Enugu. This was a comparative descriptive cross-sectional study among primary school children in Awgu (rural) and Enugu North (urban) Local Government Areas (LGAs). Subjects were selected over 6 months using a multi-stage sampling method. Information such as age, sex, parental education, and occupation was collected using questionnaires. Socioeconomic classes of the children were determined using the classification proposed by Oyedeji et al. Samples were collected from subjects with SFIs. Potassium hydroxide tests were done on the samples. The samples that tested positive were cultured for SFI by inoculating onto Sabouraud's dextrose chloramphenicol actidione agar. The characteristics of the isolates were identified according to their morphological features using Mycology Online, Atlas 2000, and Mycology Review 2003. Equal numbers of children were recruited from the two LGAs. A total of 1662 pupils were studied. The mean ages of the study subjects were 9.03 ± 2.10 years in rural and 10.46 ± 2.33 years in urban communities. The male to female ratio was 1.6:1 in rural and 1:1.1 in urban communities. The personal hygiene of the children was significantly related to the presence of SFIs. The overall prevalence of SFIs among the study participants was 45%. In rural communities, the prevalence was 29.6%; in urban communities, it was 60.4%. 
The types of SFIs were tinea capitis (the commonest), tinea corporis, pityriasis versicolor, tinea unguium, and tinea manuum, with prevalence rates lower in rural than urban communities. The clinical patterns were gray patch and black dot types of non-inflammatory tinea capitis, kerion, tinea corporis with trunk and limb distributions, and pityriasis versicolor with face, trunk and limb distributions. Gray patch was the most frequent pattern of SFI seen in rural and urban communities. The black dot type was more frequent in rural than urban communities. SFIs were frequent among children aged 5 to 8 years in rural and 9 to 12 years in urban communities. SFIs were more common in males in rural communities, whereas female dominance was observed in urban communities. SFIs were more frequent in children from low social class and those with poor hygiene. Trichophyton tonsurans and Trichophyton soudanense were the common mycological isolates in rural and urban communities, respectively. In conclusion, SFIs were less prevalent in rural than in urban communities. Trichophyton species were the most common fungal isolates in the communities. Health education of mothers and their children on SFIs and good personal hygiene will reduce the incidence of SFIs.

Keywords: clinicomycological pattern, communities, primary school children, superficial fungal infections

Procedia PDF Downloads 125
615 Printed Electronics for Enhanced Monitoring of Organ-on-Chip Culture Media Parameters

Authors: Alejandra Ben-Aissa, Martina Moreno, Luciano Sappia, Paul Lacharmoise, Ana Moya

Abstract:

Organ-on-Chip (OoC) stands out as a highly promising approach for drug testing, presenting a cost-effective and ethically superior alternative to conventional in vivo experiments. These cutting-edge devices emerge from the integration of tissue engineering and microfluidic technology, faithfully replicating the physiological conditions of targeted organs. Consequently, they offer a more precise understanding of drug responses without the ethical concerns associated with animal testing. When addressing the limitations that conventional, time-consuming techniques impose on OoC, Lab-on-Chip (LoC) emerges as a disruptive technology capable of providing real-time monitoring without compromising sample integrity. This work develops LoC platforms that can be integrated within OoC systems to monitor essential culture media parameters, including glucose, oxygen, and pH, facilitating the straightforward exchange of sensing units within a dynamic and controlled environment without disrupting cultures. This approach preserves the experimental setup, minimizes the impact on cells, and enables efficient, prolonged measurement. The LoC system is fabricated following the patented methodology protected by EU patent EP4317957A1. One of the key challenges, integrating sensors in a biocompatible, feasible, robust, and scalable manner, is addressed through fully printed sensors, ensuring a customized, cost-effective, and scalable solution. With this technique, sensor reliability is enhanced, providing high sensitivity and selectivity for accurate parameter monitoring. In the present study, the LoC is validated by measuring a complete culture medium. The oxygen sensor provided a measurement range from 0 mgO2/L to 6.3 mgO2/L. The pH sensor demonstrated a measurement range spanning 2 pH units to 9.5 pH units. Additionally, the glucose sensor achieved a measurement range from 0 mM to 11 mM. All measurements were performed with the sensors integrated in the LoC. 
In conclusion, this study showcases the impactful synergy of OoC technology with LoC systems using fully printed sensors, marking a significant step forward in ethical and effective biomedical research, particularly in drug development. This innovation not only meets current demands but also lays the groundwork for future advancements in precision and customization within scientific exploration.

Keywords: organ on chip, lab on chip, real time monitoring, biosensors

Procedia PDF Downloads 17
614 Analyzing Transit Network Design versus Urban Dispersion

Authors: Hugo Badia

Abstract:

This research addresses which transit network structure is most suitable for serving specific demand requirements under increasing urban dispersion. Two main approaches to network design are found in the literature. On the one hand, a traditional answer, widespread in our cities, develops a high number of lines to connect most origin-destination pairs by direct trips; an approach based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks where transferring is essential to complete most trips. To answer which of them is the best option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two structures; a direct trip-based network; and a transfer-based one, the latter two representing the alternative transit network designs. The model optimizes the network configuration with regard to the total cost for each structure. For a given scenario of dispersion, the best alternative is the structure with the minimum cost. This dispersion degree is defined in a simple way by considering that a single central area attracts all trips. If this area is small, we have a highly concentrated mobility pattern; if this area is very large, the city is highly decentralized. In this first step, we can determine the area of applicability for each structure as a function of that urban dispersion degree. The analytical results show that a radial structure is suitable when demand is highly centralized; however, when this demand starts to scatter, new transit lines should be implemented to avoid transfers. If urban dispersion advances, the introduction of more lines is no longer a good alternative; in this case, the best solution is a change of structure, from direct trips to a network based on transfers. 
The area of applicability of each network strategy is not constant; it depends on the characteristics of demand, city and transport technology. In the second step, we translate the analytical results to a real case study through the relationship between the dispersion parameters of the model and direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, measured by the Gini coefficient, and centralization, measured by an area-based centralization index. Once the real dispersion degree is estimated, we can identify in which area of applicability the city is located. In summary, from a strategic point of view, this methodology yields the best network design approach for a city by comparing the theoretical results with the real dispersion degree.
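As a small illustration of the concentration measure, the Gini coefficient over zonal trip attractions can be computed directly from its sorted-rank identity; the zone counts below are hypothetical, not values from the case study.

```python
def gini(values):
    """Gini coefficient of a non-negative distribution (0 = even, -> 1 = concentrated)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Sorted-rank identity for the mean-absolute-difference formulation
    weighted = sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs))
    return weighted / (n * total)

# Illustrative trip-attraction counts per zone (hypothetical values)
even_city = [100, 100, 100, 100]
concentrated_city = [370, 10, 10, 10]
print(gini(even_city))          # 0.0 -> dispersed demand
print(gini(concentrated_city))  # high -> centralized demand
```

A city's measured coefficient would then be compared against the model's dispersion thresholds to pick radial, direct-trip, or transfer-based structures.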

Keywords: analytical network design model, network structure, public transport, urban dispersion

Procedia PDF Downloads 230
613 Making Unorganized Social Groups Responsible for Climate Change: Structural Analysis

Authors: Vojtěch Svěrák

Abstract:

Climate change ethics have recently shifted away from individualistic paradigms towards concepts of shared or collective responsibility. Despite this evolving trend, a noticeable gap remains: a lack of research exclusively addressing the moral responsibility of specific unorganized social groups. The primary objective of the article is to fill this gap. The article employs the structuralist methodological approach proposed by some feminist philosophers, utilizing structural analysis to explain the existence of social groups. The argument is made for the integration of this framework with the so-called forward-looking Social Connection Model (SCM) of responsibility, which ascribes responsibilities to individuals based on their participation in social structures. The article offers an extension of this model to justify the responsibility of unorganized social groups. The major finding of the study is that although members of unorganized groups are loosely connected, collectively they instantiate specific external social structures and share social positioning, and a notion of responsibility can be grounded in that positioning. Specifically, if the structure produces harm or perpetuates injustices, and the group both benefits from and possesses the capacity to significantly influence the structure, a greater degree of responsibility should be attributed to the group as a whole. This thesis is applied and justified within the context of climate change, based on the asymmetrical positioning of different social groups. Climate change creates a triple inequality: in contribution, vulnerability, and mitigation. The study posits that different degrees of group responsibility could be drawn from these inequalities. 
Two social groups serve as a case study for the article: first, the Pakistani lower class, consisting of people living below the national poverty line, with a low greenhouse gas emissions rate, severe climate change-related vulnerability due to the lack of adaptation measures, and very limited options to participate in the mitigation of climate change. Second, the so-called polluter elite, defined by members' investments in polluting companies and high-carbon lifestyles, and thus with an interest in the continuation of structures leading to climate change. The first identified group cannot be held responsible for climate change, but its group interest lies in structural change and should be collectively pursued. On the other hand, the responsibility of the second identified group is significant and can be fulfilled by a justified demand for some political changes. The proposed approach of group responsibility is suggested to help navigate climate justice discourse and environmental policies, thus helping with the sustainability transition.

Keywords: collective responsibility, climate justice, climate change ethics, group responsibility, social ontology, structural analysis

Procedia PDF Downloads 60
612 Effects of School Culture and Curriculum on Gifted Adolescent Moral, Social, and Emotional Development: A Longitudinal Study of Urban Charter Gifted and Talented Programs

Authors: Rebekah Granger Ellis, Pat J. Austin, Marc P. Bonis, Richard B. Speaker, Jr.

Abstract:

Using two psychometric instruments, this study examined social and emotional intelligence and moral judgment levels of more than 300 gifted and talented high school students enrolled in arts-integrated, academic acceleration, and creative arts charter schools in a large, ethnically diverse city in the southeastern United States. Gifted and talented individuals possess distinguishable characteristics; these frequently appear as strengths, but often serious problems accompany them. Although many gifted adolescents thrive in their environments, some struggle in their school and community due to emotional intensity, motivation and achievement issues, lack of peers and isolation, identification problems, sensitivity to expectations and feelings, perfectionism, and other difficulties. These gifted students endure and survive in school rather than flourish. Gifted adolescents face special intrapersonal, interpersonal, and environmental problems. Furthermore, they experience greater levels of stress, disaffection, and isolation than non-gifted individuals due to their advanced cognitive abilities. Therefore, it is important to examine the long-term effects of participation in various gifted and talented programs on the socio-affective development of these adolescents. Numerous studies have researched moral, social, and emotional development in the areas of cognitive-developmental, psychoanalytic, and behavioral learning; however, in almost all cases, these three facets have been studied separately, leading to many divergent theories. Additionally, various frameworks and models purporting to encourage the different socio-affective branches of development have been debated in curriculum theory, yet research is inconclusive on the effectiveness of these programs. 
Most often studied is the socio-affective domain, which includes development and regulation of emotions; empathy development; interpersonal relations and social behaviors; personal and gender identity construction; and moral development, thinking, and judgment. Examining development in these domains can provide insight into why some gifted and talented adolescents are not always successful in adulthood despite advanced IQ scores. In particular, it can reveal whether the emotional, social, and moral capabilities of gifted and talented individuals are as advanced as their intellectual abilities, and how these capabilities are related to each other. This mixed methods longitudinal study examined students in urban gifted and talented charter schools for (1) socio-affective development levels and (2) whether a particular environment encourages developmental growth. Research questions guiding the study: (1) How do academically and artistically gifted 10th and 11th grade students perform on psychological scales of social and emotional intelligence and moral judgment? Do they differ from the normative sample? Do gender differences exist among gifted students? (2) Do adolescents who attend distinctive gifted charter schools differ in developmental profiles? Students’ performances on psychometric instruments were compared over time and by program type. To assess moral judgment (DIT-2) and socio-emotional intelligence (BarOn EQ-I: YV), participants took pre-, mid-, and post-tests during one academic school year. Quantitative differences in growth on these psychological scales (individuals and school-wide) were examined. If a school showed change, qualitative artifacts (culture, curricula, instructional methodology, stakeholder interviews) provided insight for environmental correlation.
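The school-wide growth comparison can be sketched as a mean of per-student pre/post differences; the DIT-2-style scores below are invented for illustration, and a full analysis would add a paired significance test and the mid-year administration.

```python
import statistics

def mean_growth(pre, post):
    """Mean and spread of per-student change between two test administrations."""
    diffs = [b - a for a, b in zip(pre, post)]
    return statistics.mean(diffs), statistics.stdev(diffs)

# Hypothetical DIT-2-style scores for one school (illustrative values only)
pre = [28.1, 31.4, 25.0, 30.2, 27.8]
post = [30.0, 33.1, 26.5, 31.0, 29.9]
m, s = mean_growth(pre, post)
print(f"mean growth = {m:.2f} (sd {s:.2f})")
```

Computing this per school and per program type is the quantitative comparison the study describes; schools with nonzero growth would then be examined qualitatively.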

Keywords: gifted and talented programs, moral judgment, social and emotional intelligence, socio-affective education

Procedia PDF Downloads 192
611 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data

Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora

Abstract:

Optimizing the drilling process for cost and efficiency requires the optimization of the rate of penetration (ROP). ROP is the measurement of the speed at which the wellbore is created, in units of feet per hour, and is the primary indicator of drilling efficiency. Maximizing the ROP can yield fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which may lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train the model in advance, and (2) real-time adjustment of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase of the system, geological and historical drilling data are aggregated. The top-rated wells, ranked by high ROP instances, are then distinguished. Those wells are filtered based on NPT incidents, and a cross-plot is generated for the controllable dynamic drilling parameters per ROP value. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase concludes by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value. This phase is performed before drilling commences. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering live adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology.
These minor incremental variations reveal new drilling conditions not explored before through offset wells. The data are then consolidated into a heat map as a function of ROP. A more optimal ROP performance is identified through the heat map and amended in the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built by utilizing the data points from the top-performing historical wells (20 wells). The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments resulted in improved ROP efficiency by over 20%, translating to at least a 10% saving in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. Those factors position the system to work for any newly drilled well in a developing field.
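The two numerical steps described above, the IDW-conditioned mean of parameters from offset wells and the small CRS perturbation of the controllables, can be sketched as follows. This is a minimal illustration under assumed 2-D well coordinates; the function names and the simple distance weighting are illustrative, not the authors' actual implementation:

```python
import numpy as np

def idw_parameters(offset_points, offset_params, target_point, power=2.0):
    """Estimate drilling parameters (e.g. WOB, RPM, GPM) at a target location
    as an inverse-distance-weighted mean of values from offset wells.

    offset_points : (n, 2) array of offset-well coordinates
    offset_params : (n, k) array of parameter values at those wells
    target_point  : (2,) coordinates of the planned well
    """
    d = np.linalg.norm(offset_points - target_point, axis=1)
    if np.any(d == 0):                      # exact hit: return that well's values
        return offset_params[np.argmin(d)]
    w = 1.0 / d**power                      # closer wells get larger weights
    return (w[:, None] * offset_params).sum(axis=0) / w.sum()

def crs_perturb(params, rng, max_frac=0.05):
    """One constrained-random-search step: deviate each controllable parameter
    by a random fraction within the 0-5% band described in the abstract."""
    return params * (1.0 + rng.uniform(-max_frac, max_frac, size=params.shape))
```

For a target well equidistant from two offset wells, `idw_parameters` returns the plain average of their parameter values; the `power` exponent controls how strongly nearby wells dominate.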

Keywords: drilling optimization, geological formations, machine learning, rate of penetration

Procedia PDF Downloads 131
610 Enzymatic Determination of Limonene in Red Clover Genotypes

Authors: Andrés Quiroz, Emilio Hormazabal, Ana Mutis, Fernando Ortega, Manuel Chacón-Fuentes, Leonardo Parra

Abstract:

Red clover (Trifolium pratense L.) is an important forage species in temperate regions of the world. The main limitation of this species worldwide is a lack of persistence related to the high mortality of plants due to a complex of biotic and abiotic factors, resulting in a life span of two or three seasons. Because of the importance of red clover in Chile, a red clover breeding program was started at the INIA Carillanca Research Center in 1989, with the main objective of improving the survival of plants, forage yield, and persistence. The main criteria for selecting new varieties have been based on agronomical parameters and biotic factors. The main biotic factor associated with red clover mortality in Chile is Hylastinus obscurus (Coleoptera: Curculionidae). Both larvae and adults feed on the roots, causing weakening and subsequent death of clover plants. Pesticides have not been successful in controlling infestations of this root borer; therefore, alternative strategies for controlling this pest are a high priority for red clover producers. The role of semiochemicals in the interaction between H. obscurus and red clover plants has been widely studied by our group. Specifically, limonene has been identified from red clover foliage as eliciting repellency in the root borer. Limonene is generated in the plant through two independent biosynthetic pathways: the mevalonic acid pathway, whose enzymes are localized in the cytosol, and the deoxyxylulose phosphate pathway, whose enzymes are found in plastids. In summary, limonene can be determined by an enzymatic bioassay using GPP as the substrate and by limonene synthase expression. Therefore, the main objective of this work was to study the genetic variation of limonene in material provided by INIA's red clover breeding program.
Protein extraction was carried out by homogenizing 250 mg of leaf tissue, suspending it in 6 mL of extraction buffer (PEG 1500, PVP-30, 20 mM MgCl₂, and antioxidants), and stirring on ice for 20 min. After centrifugation, aliquots of 2.5 mL were desalted on PD-10 columns, resulting in a final volume of 3.5 mL. Protein determination was performed according to Bradford with BSA as a standard. Monoterpene synthase assays were performed with 50 µL of protein extract transferred into gas-tight 2 mL crimp-seal vials after the addition of 4 µL MgCl₂ and 41 µL assay buffer. The assay was started by adding 5 µL of a GPP solution, and the mixture was incubated for 30 min at 40 °C. Biosynthesized limonene was quantified in a GC equipped with a chiral column, using synthetic R- and S-limonene standards. The enzymatic production of R- and S-limonene from different Superqueli-Carillanca genotypes is shown in this work. Preliminary results showed significant differences in limonene content among the genotypes analyzed. These results constitute an important basis for selecting genotypes with a high content of this repellent monoterpene against H. obscurus.

Keywords: head space, limonene enzymatic determination, red clover, Hylastinus obscurus

Procedia PDF Downloads 266
609 Cai Guo-Qiang: A Chinese Artist at the Cutting-Edge of Global Art

Authors: Marta Blavia

Abstract:

Magiciens de la terre, organized in 1989 by the Centre Pompidou, became 'the first worldwide exhibition of contemporary art' by presenting artists from Western and non-Western countries, including three Chinese artists. For the first time, the West turned its eyes to other countries not as exotic sources of inspiration but as places where contemporary art was also being created. One year later, Chine: demain pour hier was inaugurated as the first Chinese avant-garde group exhibition in the West. Among the artists included was Cai Guo-Qiang who, like many other Chinese artists, had left his home country in the eighties in pursuit of greater creative freedom. By exploring non-Western artistic perspectives, both landmark exhibitions questioned the predominance of the Eurocentric vision in the construction of art history. More than anything else, however, these exhibitions laid the groundwork for the rise of the phenomenon now called 'global contemporary art'. At the same time, 1989 was also a turning point in Chinese art history. Because of the Tiananmen student protests, the Chinese government undertook a series of measures to suppress any kind of avant-garde artistic activity after a decade of relative openness. During the eighties, and especially after the Tiananmen crackdown, some important artists began to leave China and move overseas, such as Xu Bing and Ai Weiwei (USA); Chen Zhen and Huang Yong Ping (France); and Cai Guo-Qiang (Japan). After emigrating, Chinese overseas artists began to develop projects in accordance with their new environments and audiences as well as to appear in numerous international exhibitions. With their creations, which moved freely between a variety of Eastern and Western art sources, these artists were crucial agents in the emergence of global contemporary art.
Like other overseas Chinese artists, Cai Guo-Qiang’s career took off during the 1990s and early 2000s, right at the moment when the Western art world started to look beyond itself. Little by little, he developed a very personal artistic language that redefines Chinese ideas, symbols, and traditional materials in a new world order marked by globalization. Cai Guo-Qiang participated in many of the exhibitions that contributed to shaping global contemporary art: Encountering the Others (1992); the 45th Venice Biennale (1993); Inside Out: New Chinese Art (1997); and the 48th Venice Biennale (1999), where he recreated the monumental Chinese social realist work Rent Collection Courtyard, which earned him the Golden Lion Award. By examining the different stages of Cai Guo-Qiang’s artistic path as well as the transnational dimensions of his creations, this paper aims to offer a comprehensive survey of the construction of the discourse of global contemporary art.

Keywords: Cai Guo-Qiang, Chinese artists overseas, emergence global art, transnational art

Procedia PDF Downloads 284
608 Khiaban (the Street) as an Ancient Percept of the Iranian Urban Landscape: An Aesthetic Reading of Lalehzar Street, the First Modern Khiaban in Iran

Authors: Mohammad Atashinbar

Abstract:

Lalehzar was one of the main streets in central Tehran in the late Qajar and first Pahlavi periods (1880-1940) and a center of attention for the government. It was a popular promenade during the last decade of the reign of Nasser al-Din Shah (1880-1895). However, the street lost its prosperous status under the second Pahlavi and evolved from a modern cultural street into a commercial corridor. Lalehzar's decline was the result of the emigration of the upper class from the inner city to the northern part of Tehran and the consequent transfer of amenities and luxury goods with them. It seems that during Lalehzar's six decades of prosperity, this khiâbân received an aesthetic look, which made it enjoyable and appreciated by Tehran’s people. Various post-revolutionary urban management measures have been taken to revive Lalehzar and improve the quality of its urban life. Since the beginning of the Safavid era, the khiâbân was accompanied by the concept of urban space, and its characteristics are explained by reference to the main axis of the Persian Garden, with rows of trees, streams, and a line of flowers on both sides. The construction of a street inside the city as an urban space draws on a mental concept of a spiritual and exciting space, especially in the forms common in the Persian Garden. Before that, the khiâbân was a religious and mythical concept, and we can even say that the mastery of this concept led to its appearance in the garden. In Tehran, Lalehzar Street is a gateway to modernity. The aesthetic changes in Lalehzar Street, inspired by Nasser al-Din Shah's journey to Europe around 1870, coincided with changes in architectural and urban landscape movements around the world between 1880 and 1940. The Shah was impressed by modernist urbanism and, in particular, the Champs-Élysées in Paris. A tree-lined promenade with the hallmarks of the Persian Garden fit Nasser al-Din Shah's mental image of beauty.
In this mental image, the main axis of the Persian Garden has the characteristics of a promenade; therefore, the origins of the aesthetics of Lalehzar Street lie in the aesthetics of the khiâbân. Admitting that the Champs-Élysées served as a model for Lalehzar, it seems that the Shah wanted to associate the Champs-Élysées with Lalehzar and highlight its landscape aspects by building this street. Assuming that percepts have their own aesthetics, this proposal seeks to analyze the aesthetic evolution of the khiâbân as a percept toward the street as a component of the urban landscape in Lalehzar. The research reviews the aesthetic aspects of Lalehzar between 1880 and 1940 using iconographic analysis, based on the available historical data, to find the leading aesthetic principles of this street. The aesthetic view of Lalehzar as an artwork is one of the main achievements of this study.

Keywords: Lalehzar, aesthetics, percept, Tehran, street

Procedia PDF Downloads 151
607 Pre- and Post-Brexit Experiences of the Bulgarian Working Class Migrants: Qualitative and Quantitative Approaches

Authors: Mariyan Tomov

Abstract:

Bulgarian working-class immigrants are increasingly concerned with the UK’s recent immigration policies in the context of Brexit. The new ID system would exclude many people currently working in Britain and would break the usual immigrant travel patterns. Post-Brexit Britain would aim to turn away seasonal immigrants. Measures for keeping long-term and lifelong immigrants have been implemented, and migrants who aim to remain in Britain and establish a household there would be more privileged than temporary or seasonal workers. The results of such regulating mechanisms come at the expense of migrants’ longings for a ‘normal’ existence, especially for those coming from Central and Eastern Europe. Based on in-depth interviews with Bulgarian working-class immigrants, the study found that their major concerns following the decision of the UK to leave the EU relate to the freedom to travel, reside, and work in the UK. Furthermore, many of the interviewed women are concerned that they could lose some of the EU's fundamental rights, such as maternity leave and the protection of pregnant women from unlawful dismissal. The rise in commodity prices and university fees and the limited access to public services, healthcare, and social benefits in the UK are also discussed in the paper. The most serious problem, according to the interviews, is that the attitude towards Bulgarians and other immigrants in the UK is deteriorating. Both traditional and social media in the UK often portray the migrants negatively by claiming that they take British jobs while simultaneously abusing the welfare system. As a result, Bulgarian migrants often face social exclusion, which might have a negative influence on their health and welfare. In this sense, some of the interviewees stress that the most important changes after Brexit must take place in British society itself.
The aim of the proposed study is to provide a better understanding of Bulgarian migrants’ economic, health, and sociocultural experiences in the context of Brexit. Methodologically, the proposed paper relies on: 1. analysing ethnographic materials dedicated to the pre- and post-migratory experiences of Bulgarian working-class migrants, using SPSS; 2. semi-structured interviews conducted with more than 50 Bulgarian working-class migrants [N > 50] in the UK, aged between 18 and 65, with communication via Viber/Skype or face-to-face interaction; 3. analysis guided by theoretical frameworks. The paper has been developed within the framework of the research projects of the National Scientific Fund of Bulgaria: DCOST 01/25-20.02.2017, supporting COST Action CA16111 ‘International Ethnic and Immigrant Minorities Survey Data Network’.

Keywords: Bulgarian migrants in UK, economic experiences, sociocultural experiences, Brexit

Procedia PDF Downloads 127
606 Theta-Phase Gamma-Amplitude Coupling as a Neurophysiological Marker in Neuroleptic-Naive Schizophrenia

Authors: Jun Won Kim

Abstract:

Objective: Theta-phase gamma-amplitude coupling (TGC) was used as a novel evidence-based tool to reflect the dysfunctional cortico-thalamic interaction in patients with schizophrenia. However, to the best of our knowledge, no studies have reported the diagnostic utility of TGC in the resting-state electroencephalogram (EEG) of neuroleptic-naive patients with schizophrenia compared to healthy controls. Thus, the purpose of this EEG study was to understand the underlying mechanisms in patients with schizophrenia by comparing resting-state TGC between the two groups and to evaluate the diagnostic utility of TGC. Method: The subjects included 90 patients with schizophrenia and 90 healthy controls. All patients were diagnosed with schizophrenia according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) by two independent psychiatrists using semi-structured clinical interviews. Because the patients were either drug-naïve (first episode) or had not been taking psychoactive drugs for one month before the study, we could exclude the influence of medications. Six frequency bands were defined for the spectral analyses: delta (1–4 Hz), theta (4–8 Hz), slow alpha (8–10 Hz), fast alpha (10–13.5 Hz), beta (13.5–30 Hz), and gamma (30–80 Hz). The spectral power of the EEG data was calculated with the fast Fourier transform using the 'spectrogram.m' function of the Signal Processing Toolbox in Matlab. An analysis of covariance (ANCOVA) was performed to compare the TGC results between the groups, adjusted using a Bonferroni correction (P < 0.05/19 = 0.0026). Receiver operating characteristic (ROC) analysis was conducted to examine the discriminating ability of the TGC data for schizophrenia diagnosis. Results: The patients with schizophrenia showed a significant increase in resting-state TGC at all electrodes.
The delta, theta, slow alpha, fast alpha, and beta powers showed low accuracies of 62.2%, 58.4%, 56.9%, 60.9%, and 59.0%, respectively, in discriminating the patients with schizophrenia from the healthy controls. The ROC analysis performed on the TGC data generated the most accurate result among the EEG measures, displaying an overall classification accuracy of 92.5%. Conclusion: As TGC includes phase, which carries information about neuronal interactions in the EEG recording, TGC is expected to be useful for understanding the mechanisms of the dysfunctional cortico-thalamic interaction in patients with schizophrenia. The resting-state TGC value was increased in the patients with schizophrenia compared to that in the healthy controls and had a higher discriminating ability than the other parameters. These findings may be related to the compensatory hyper-arousal patterns of the dysfunctional default-mode network (DMN) in schizophrenia. Further research exploring the association between TGC and medical or psychiatric conditions that may confound EEG signals will help clarify the potential utility of TGC.
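The abstract does not specify which TGC estimator was used, so the sketch below illustrates one common formulation, the mean vector length, computed from Hilbert-derived theta phase and gamma amplitude. The band edges match those in the abstract, but the filter design and estimator choice are assumptions, not the authors' exact pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    # Zero-phase Butterworth band-pass between lo and hi Hz
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def tgc_mvl(x, fs, theta=(4, 8), gamma=(30, 80)):
    """Theta-phase gamma-amplitude coupling as the mean vector length:
    the magnitude of the gamma envelope averaged as a vector at the theta phase."""
    phase = np.angle(hilbert(bandpass(x, theta[0], theta[1], fs)))  # theta phase
    amp = np.abs(hilbert(bandpass(x, gamma[0], gamma[1], fs)))      # gamma envelope
    return np.abs(np.mean(amp * np.exp(1j * phase)))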

Keywords: quantitative electroencephalography (QEEG), theta-phase gamma-amplitude coupling (TGC), schizophrenia, diagnostic utility

Procedia PDF Downloads 143
605 Efficacy and Safety of Computerized Cognitive Training Combined with SSRIs for Treating Cognitive Impairment Among Patients with Late-Life Depression: A 12-Week, Randomized Controlled Study

Authors: Xiao Wang, Qinge Zhang

Abstract:

Background: This randomized, open-label study examined the therapeutic effects of computerized cognitive training (CCT) combined with selective serotonin reuptake inhibitors (SSRIs) on cognitive impairment among patients with late-life depression (LLD). Method: Study data were collected from May 5, 2021, to April 21, 2023. Outpatients who met the diagnostic criteria for major depressive disorder according to the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) and the study's severity criteria (a total score on the 17-item Hamilton Depression Rating Scale (HAMD-17) ≥ 18 and a total score on the Montreal Cognitive Assessment scale (MoCA) < 26) were randomly assigned to receive up to 12 weeks of CCT plus SSRIs (n=57) or SSRIs plus control treatment (n=61). The primary outcome was the between-group difference in the change in Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog) scores from baseline to week 12. The secondary outcomes included changes in the HAMD-17 score, Hamilton Anxiety Scale (HAMA) score, and Neuropsychiatric Inventory (NPI) score. Mixed model repeated measures (MMRM) analysis was performed on the modified intention-to-treat (mITT) and completer populations. Results: The full analysis set (FAS) included 118 patients (CCT plus SSRIs group, n=57; SSRIs plus control group, n=61). Over the 12-week study period, the reduction in the ADAS-Cog total score was significant (P < 0.001) in both groups, and MMRM analysis revealed a significantly greater improvement in cognitive function (reduction in ADAS-Cog total scores) from baseline to post-treatment in the CCT plus SSRIs group than in the SSRIs plus control group (F(1,115) = 13.65, least-squares mean difference [95% CI]: −2.77 [−3.73, −1.81], p < 0.001). There were also significantly greater improvements in depressive symptoms (measured by the HAMD-17) in the CCT plus SSRIs group than in the control group (MMRM, estimated mean difference between groups [95% CI]: −3.59 [−5.02, −2.15], p < 0.001).
The least-squares mean changes in the HAMA and NPI scores between baseline and week 8 were greater in the CCT plus SSRIs group than in the control group (all P < 0.05). There was no significant difference between the groups in response rates or remission rates using the last-observation-carried-forward (LOCF) method (all P > 0.05). The most frequent adverse events (AEs) in both groups were dry mouth, somnolence, and constipation, with no significant difference in the incidence of adverse events between the two groups. Conclusions: CCT combined with SSRIs was efficacious and well tolerated in LLD patients with cognitive impairment.
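The MMRM group-by-time analysis described above can be approximated with a mixed-effects model. The sketch below fits a random-intercept model with a group-by-week interaction to synthetic long-format data; the variable names, effect sizes, and the random-intercept covariance structure are illustrative assumptions, not the trial's actual model or data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# Hypothetical long-format trial data: ADAS-Cog scores at weeks 0/4/8/12,
# with an assumed steeper improvement slope in the combined-treatment arm
rows = []
for arm, slope in [("CCT+SSRI", -0.30), ("SSRI", -0.12)]:
    for subj in range(40):
        base = rng.normal(20, 3)            # baseline ADAS-Cog
        for week in (0, 4, 8, 12):
            rows.append({"subject": f"{arm}-{subj}", "group": arm, "week": week,
                         "adas": base + slope * week + rng.normal(0, 1)})
df = pd.DataFrame(rows)

# Random-intercept model: fixed effects for group, week, and their interaction
model = smf.mixedlm("adas ~ week * group", df, groups=df["subject"]).fit()
# The interaction term estimates how much slower the SSRI-only arm improves
print(model.params["week:group[T.SSRI]"])
```

A positive interaction coefficient here means the SSRI-only arm's ADAS-Cog declines more slowly, i.e. the combined arm improves faster, which mirrors the direction of the reported primary outcome. An actual MMRM would typically use an unstructured within-subject covariance rather than a random intercept.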

Keywords: late-life depression, cognitive function, computerized cognitive training, SSRIs

Procedia PDF Downloads 51
604 The Potential for Maritime Tourism: An African Perspective

Authors: Lynn C. Jonas

Abstract:

The African continent is rich in coastal history, heritage, and culture, presenting immense potential for the development of maritime tourism. Shipping and its related components are generally associated with the maritime industry, while tourism's link is through the various forms of nautical tourism. Activities may include cruising, yachting, visits to lighthouses, ports, and harbors, and excursions to related sites of cultural, historical, or ecological significance. Hundreds of years of explorers pursuing trade routes between Europe, Africa, and the Far East have left a string of shipwrecks along the continent's coastal areas. These shipwrecks present diving opportunities on artificial reefs and marine heritage to be explored in various ways in the maritime cultural zones. Along the South African coast, for example, six Portuguese shipwrecks highlight the Bartolomeu Dias legacy of exploration, and there are a number of warships in Tanzanian waters. Furthermore, decades of colonial rule have left the continent with an intricate cultural heritage, enmeshed in European languages and architecture and interlinked, in many instances, with the hard-fought independence of littoral states. There is potential for coastal trails to be developed to follow these historical events: at one point in history, France had colonized 35 African states, and 32 African states were colonized by Britain. Countries such as Cameroon still carry the Francophone-versus-Anglophone legacy that resulted from this shift in colonizers. Beyond the colonial history of the African continent, there is also the uncomfortable heritage of the slave trade. To a certain extent, these coastal slave-trade posts are considered attractive to a niche tourism audience; however, there is potential for education and interpretive measures to grow this as a tourism product.
Notwithstanding these potential opportunities, there are numerous challenges to consider, such as poor maritime infrastructure and maritime security concerns, including piracy and transnational crimes such as weapons and migrant smuggling and drug and human trafficking. These and related maritime issues contribute to concerns over the porous nature of African ocean gateways, adding to security concerns for tourists. This theoretical paper considers these trends and how they may contribute to the growth and development of maritime tourism on the African continent. African perspectives on the growth potential of tourism in coastal and marine spaces are needed, particularly with a focus on embracing the continent's tumultuous past as part of its heritage. This has the potential to contribute to the creation of a sense of ownership of opportunities.

Keywords: coastal trade routes, maritime tourism, shipwrecks, slave trade routes

Procedia PDF Downloads 19
603 Deep Mill Level Zone (DMLZ) of Ertsberg East Skarn System, Papua; Correlation between Structure and Mineralization to Determined Characteristic Orebody of DMLZ Mine

Authors: Bambang Antoro, Lasito Soebari, Geoffrey de Jong, Fernandy Meiriyanto, Michael Siahaan, Eko Wibowo, Pormando Silalahi, Ruswanto, Adi Budirumantyo

Abstract:

The Ertsberg East Skarn System (EESS) is located in the Ertsberg Mining District, Papua, Indonesia. EESS is a sub-vertical zone of copper-gold mineralization hosted in both diorite (vein-style mineralization) and skarn (disseminated and vein-style mineralization). The Deep Mill Level Zone (DMLZ) is a mining zone in the lower part of the East Ertsberg Skarn System that produces copper and gold. The DMLZ deposit is located below the Deep Ore Zone deposit, between the 3125 m and 2590 m elevations; it measures roughly 1,200 m in length and is between 350 and 500 m in width. Mining of the DMLZ was planned to start in Q2 2015 at an ore extraction rate of about 60,000 tpd using the block cave mining method (the block cave contains 516 Mt). Mineralization and the associated hydrothermal alteration in the DMLZ are hosted and enclosed by a large stock (the Main Ertsberg Intrusion) that is barren on all sides and above the DMLZ. Late porphyry dikes that cut through the Main Ertsberg Intrusion are spatially associated with the center of the DMLZ hydrothermal system. The DMLZ orebody is hosted in diorite and skarn, both dominated by vein-style mineralization. The percentages of material mined at the DMLZ, compared with current reserves, are: diorite 46% (0.46% Cu, 0.56 ppm Au, 0.83% EqCu); skarn 39% (1.4% Cu, 0.95 ppm Au, 2.05% EqCu); hornfels 8% (0.84% Cu, 0.82 ppm Au, 1.39% EqCu); and marble 7%, possibly mined as waste. Correlation between the Ertsberg intrusion, major structures, and vein-style mineralization is important for determining the characteristics of the orebody in the DMLZ mine. In general, the Deep Mill Level Zone has two types of vein-filling mineralization, one for each host: in the diorite host, the vein system is filled by chalcopyrite-bornite-quartz and pyrite; in the skarn host, the veins are filled by chalcopyrite-bornite-pyrite and magnetite without quartz.
Based on their orientations, the stockwork veins in the diorite host and the shallow veins in the skarn host generally trend NW-SE and NE-SW with shallow to moderate dips. The Deep Mill Level Zone is controlled by two main major faults; geologists have identified and verified local structures between the major structures, trending NW-SE and NE-SW, with characteristic slickensides, shearing, gouge, and water-gas channels, and some have been re-healed.

Keywords: copper-gold, DMLZ, skarn, structure

Procedia PDF Downloads 501