Search results for: distributed frequent itemset mining
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3985


55 A Corpus-Based Study on the Lexical, Syntactic and Sequential Features across Interpreting Types

Authors: Qianxi Lv, Junying Liang

Abstract:

Among the various modes of interpreting, simultaneous interpreting (SI) is regarded as a ‘complex’ and ‘extreme condition’ among cognitive tasks, whereas consecutive interpreting (CI) does not require interpreters to share processing capacity between tasks. Given that SI exerts great cognitive demand, it is reasonable to posit that the output of SI may be more compromised than that of CI in its linguistic features. The bulk of the research has stressed the varying cognitive demands and processes involved in different modes of interpreting; however, related empirical research is sparse. In keeping with our interest in the quantitative linguistic factors discriminating between SI and CI, the current study examines potential lexical simplification, syntactic complexity and sequential organization mechanisms with a self-built inter-modal corpus of transcribed simultaneous and consecutive interpretation, translated speech and original speech texts, totalling 321,960 running words. Lexical features are extracted in terms of lexical density, list head coverage, hapax legomena and type-token ratio, as well as core vocabulary percentage. Dependency distance, an index of syntactic complexity reflective of processing demand, is employed. The frequency motif, a non-grammatically-bound sequential unit, is also used to visualize the local function distribution of the interpreting output. While SI is generally regarded as multitasking under high cognitive load, our findings show that CI may tax cognitive resources more heavily, or at least differently, and hence yields more lexically and syntactically simplified output. In addition, the sequential features show that SI and CI organize sequences from the source text into the output in different ways, each minimizing cognitive load in its own manner. We interpret these results within a framework in which cognitive demand is exerted on both the maintenance and coordination components of Working Memory.
On the one hand, the information maintained in CI is inherently larger in volume than in SI. On the other hand, time constraints directly influence the sentence reformulation process. The temporal pressure of the input in SI forces interpreters to keep only a small chunk of information in the focus of attention. SI interpreters therefore usually produce output that largely retains the source structure, so as to release information from working memory immediately after it is formulated in the target language. Conversely, CI interpreters receive at least a few sentences before reformulation and are more self-paced; they may thus tend to retain and generate information in a way that lessens the demand. In other words, interpreters cope with the high demand in the reformulation phase of CI by generating output with densely distributed function words, more content words of higher frequency values and fewer variations, simpler structures, and more frequently used language sequences. We consequently propose a revised effort model based on these results to better illustrate cognitive demand during both interpreting types.
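Two of the lexical measures named above are straightforward to compute. A minimal sketch, assuming naive whitespace tokenization and a toy function-word list (both simplifications of the study's actual corpus pipeline):

```python
def lexical_features(text, function_words):
    """Compute type-token ratio and lexical density for a short text.

    Tokenization is naive whitespace splitting; a real corpus pipeline
    would use proper tokenizers and frequency lists.
    """
    tokens = text.lower().split()
    types = set(tokens)
    ttr = len(types) / len(tokens)               # type-token ratio
    content = [t for t in tokens if t not in function_words]
    density = len(content) / len(tokens)         # lexical density
    return ttr, density

# Toy example: 6 tokens, 5 types, 3 content words.
ttr, density = lexical_features("the cat sat on the mat", {"the", "on"})
```

A lower type-token ratio and lower lexical density in one interpreting mode would indicate the kind of lexical simplification the abstract discusses.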

Keywords: cognitive demand, corpus-based, dependency distance, frequency motif, interpreting types, lexical simplification, sequential units distribution, syntactic complexity

Procedia PDF Downloads 175
54 Concept Mapping to Reach Consensus on an Antibiotic Smart Use Strategy Model to Promote and Support Appropriate Antibiotic Prescribing in a Hospital, Thailand

Authors: Phenphak Horadee, Rodchares Hanrinth, Saithip Suttiruksa

Abstract:

Inappropriate use of antibiotics occurs in several hospitals in Thailand. Drug use evaluation (DUE) is one strategy to overcome this difficulty. However, most community hospitals still perform incomplete evaluations, resulting in overuse of antibiotics at high cost. Consequently, drug-resistant bacteria have been on the rise due to inappropriate antibiotic use. The aim of this study was to involve stakeholders in conceptualizing, developing, and prioritizing a feasible intervention strategy to promote and support appropriate antibiotic prescribing in a community hospital in Thailand. Four antibiotics were studied: Meropenem, Piperacillin/tazobactam, Amoxicillin/clavulanic acid, and Vancomycin. The study was conducted over the 1-year period between March 1, 2018, and March 31, 2019, in a community hospital in the northeastern part of Thailand. Concept mapping was used with a purposive sample including doctors (one an administrator), pharmacists, and nurses involved in the drug use evaluation of antibiotics. In-depth interviews with each participant and survey research were conducted to identify the problems underlying inappropriate use of antibiotics in the drug use evaluation system. Seventy-seven percent of DUE reports showed appropriate antibiotic prescribing, which still fell short of the goal of 80 percent appropriateness. Meropenem led the other antibiotics in inappropriate prescribing. The causes of the unsuccessful DUE program were classified into three themes: personnel; lack of public relations and communication; and unsupportive policy and impractical regulations. During the first meeting, stakeholders (n = 21) generated candidate interventions.
During the second meeting, participants, almost the same group as in the first meeting (n = 21), were asked to independently rate the feasibility and importance of each idea and to categorize the ideas into relevant clusters to facilitate multidimensional scaling and hierarchical cluster analysis. The outputs of the analysis included the idea list, cluster list, point map, point rating map, cluster map, and cluster rating map. All of these were distributed to participants (n = 21) during the third meeting to reach consensus on an intervention model. The final proposed strategy included 29 feasible and crucial interventions in seven clusters: development of an information technology system; establishing policy and translating it into an action plan; proactive public relations for the policy, action plan and workflow; cooperation of multidisciplinary teams in drug use evaluation; work review and evaluation with performance reporting; promoting and developing professional and clinical skills for staff through training programs; and developing a practical drug use evaluation guideline for antibiotics. These interventions are relevant to, and fit, several antibiotic stewardship strategies recommended by international organizations, such as participation of multidisciplinary teams, development of information technology to support antibiotic smart use, and communication. The interventions were prioritized for implementation over a 1-year period. Once the feasibility of each activity or plan is established, the proposed program could be applied and integrated into hospital policy after the plans are evaluated. Effective interventions could then be promoted to other community hospitals to support antibiotic smart use.
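The feasibility/importance rating step above can be sketched numerically. A hypothetical "go-zone" analysis, a standard concept-mapping output (the abstract itself reports cluster and rating maps), averages each idea's ratings and flags ideas above the grand mean on both dimensions:

```python
def go_zone(ratings):
    """ratings: {idea: [(feasibility, importance), ...]} per participant.

    Flags ideas rated above the grand mean on both dimensions
    ('go-zone' candidates for implementation). Idea names and
    rating values here are purely illustrative.
    """
    means = {idea: (sum(f for f, _ in rs) / len(rs),
                    sum(i for _, i in rs) / len(rs))
             for idea, rs in ratings.items()}
    grand_f = sum(f for f, _ in means.values()) / len(means)
    grand_i = sum(i for _, i in means.values()) / len(means)
    return {idea: (f >= grand_f and i >= grand_i)
            for idea, (f, i) in means.items()}

# Hypothetical ratings from two participants on two ideas (1-5 scale).
flags = go_zone({"IT system": [(5, 5), (4, 4)],
                 "policy":    [(2, 3), (3, 2)]})
```

In practice the per-idea means would also feed the point rating map, and the cluster-level averages the cluster rating map.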

Keywords: antibiotic, concept mapping, drug use evaluation, multidisciplinary teams

Procedia PDF Downloads 118
53 Possible Involvement of DNA-methyltransferase and Histone Deacetylase in the Regulation of Virulence Potential of Acanthamoeba castellanii

Authors: Yi H. Wong, Li L. Chan, Chee O. Leong, Stephen Ambu, Joon W. Mak, Priyadashi S. Sahu

Abstract:

Background: Acanthamoeba is a free-living opportunistic protist ubiquitously distributed in the environment. Virulent Acanthamoeba can cause fatal encephalitis in immunocompromised patients and potentially blinding keratitis in immunocompetent contact lens wearers. Approximately 24 species have been identified, but only A. castellanii, A. polyphaga and A. culbertsoni are commonly associated with human infections. To date, the precise molecular basis for Acanthamoeba pathogenesis remains unclear. Previous studies reported that Acanthamoeba virulence can be diminished through prolonged axenic culture but revived through serial mouse passages. As no clear explanation for this reversible pathogenesis has been established, we postulate that the epigenetic regulators DNA-methyltransferase (DNMT) and histone deacetylase (HDAC) could be involved in granting the virulence plasticity of Acanthamoeba spp. Methods: Four rounds of mouse passages were conducted to revive the virulence potential of a virulence-attenuated Acanthamoeba castellanii strain (ATCC 50492). Briefly, each mouse (n = 6/group) was inoculated intraperitoneally with Acanthamoeba cells (2×10⁵ trophozoites/mouse) and kept for 2 months. Acanthamoeba cells were isolated from infected mouse organs by culture and subjected to the subsequent mouse passage. In vitro cytopathic, encystment and gelatinolytic assays were conducted to evaluate the virulence characteristics of the Acanthamoeba isolates from each passage. PCR primers were custom designed to target the two members (DNMT1 and DNMT2) of the DNMT gene family and five members (HDAC1 to 5) of the HDAC gene family. Quantitative real-time PCR (qPCR) was performed to detect and quantify the relative expression of the two gene families in each Acanthamoeba isolate. Beta-tubulin of A. castellanii (GenBank accession no. XP_004353728) was included as a housekeeping gene for data normalisation.
PCR mixtures were also analyzed by electrophoresis for amplicon detection. All statistical analyses were performed using the paired one-tailed Student's t-test. Results: Our pathogenicity tests showed that the virulence-reactivated Acanthamoeba had a greater cytopathic effect on Vero cells, better resistance to encystment challenge, and higher gelatinolytic activity catalysed by serine protease. The qPCR assay showed that DNMT1 expression was significantly higher in the virulence-reactivated than in the virulence-attenuated Acanthamoeba strain (p ≤ 0.01). The specificity of the primers targeting DNMT1 was confirmed by sequence analysis of the PCR amplicons, which showed 97% similarity to the published DNA-methyltransferase gene of A. castellanii (GenBank accession no. XM_004332804.1). Of the five primer pairs targeting the HDAC family genes, only HDAC4 expression differed significantly between the two variant strains. In contrast to DNMT1, HDAC4 expression was much higher in the virulence-attenuated strain. Conclusion: Our mouse passages successfully restored the virulence of the attenuated strain. Our findings suggest that DNA-methyltransferase (DNMT1) and histone deacetylase (HDAC4) expression is associated with the virulence potential of Acanthamoeba spp.
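Relative expression from qPCR normalised to a housekeeping gene, as done here with beta-tubulin, is conventionally computed with the 2^(−ΔΔCt) (Livak) method. A sketch with hypothetical Ct values (not the study's data):

```python
def fold_change(ct_target_sample, ct_ref_sample,
                ct_target_control, ct_ref_control):
    """Relative expression by the 2^(-ddCt) method.

    All Ct values passed in are hypothetical illustrations.
    """
    d_ct_sample = ct_target_sample - ct_ref_sample      # normalise to housekeeping gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control                  # relative to control strain
    return 2 ** (-dd_ct)

# Illustrative DNMT1 Ct values: reactivated strain (24.0) vs attenuated (26.0),
# both against a beta-tubulin reference Ct of 20.0.
fc = fold_change(24.0, 20.0, 26.0, 20.0)  # 4-fold higher in the reactivated strain
```

A lower Ct for the target gene, at constant reference Ct, translates directly into a higher fold change, which is the sense in which "significantly higher DNMT1 expression" is reported.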

Keywords: acanthamoeba, DNA-methyltransferase, histone deacetylase, virulence-associated proteins

Procedia PDF Downloads 287
52 Protocol for Dynamic Load Distributed Low Latency Web-Based Augmented Reality and Virtual Reality

Authors: Rohit T. P., Sahil Athrij, Sasi Gopalan

Abstract:

Currently, the content entertainment industry is dominated by mobile devices. As trends slowly shift towards Augmented/Virtual Reality applications, the computational demands on these devices are increasing exponentially, and we are already reaching the limits of hardware optimization. This paper proposes a software solution to this problem. By leveraging cloud computing, work can be offloaded from mobile devices to dedicated rendering servers that are far more powerful, but this introduces the problem of latency. This paper introduces a protocol that can achieve a high-performance, low-latency Augmented/Virtual Reality experience. There are two parts to the protocol. 1) In-flight compression: The main cause of latency in the system is the time required to transmit the camera frame from client to server. The round-trip time is directly proportional to the amount of data transmitted, so it can be reduced by compressing frames before sending. Standard compression algorithms like JPEG yield only minor size reductions. Since the images to be compressed are consecutive camera frames, there will be few changes between two consecutive images, so inter-frame compression is preferred. Inter-frame compression can be implemented efficiently using WebGL, but WebGL implementations limit floating-point precision to 16 bits on most devices. This can introduce noise into the image through rounding errors, which accumulate over time. The problem is solved with an improved interframe compression algorithm that detects changes between frames and reuses unchanged pixels from the previous frame, eliminating the need for floating-point subtraction and thereby cutting down on noise. Change detection is also improved drastically by taking the weighted average difference of pixels instead of the absolute difference.
The kernel weights for this comparison can be fine-tuned to match the type of image being compressed. 2) Dynamic load distribution: Conventional cloud computing architectures offload as much work as possible to the servers, but this approach increases bandwidth usage and server costs. The optimal solution is obtained when the device utilizes 100% of its resources and the rest is done by the server. The protocol balances the load between server and client by performing a fraction of the computation on the device, depending on the device's power and network conditions, and is responsible for dynamically partitioning the tasks. Special flags communicate the workload fraction between client and server and are updated at a constant interval of time (or frames). The whole protocol is designed to be client-agnostic. Flags are available to the client for resetting the frame, indicating latency, switching modes, etc. The server can react to client-side changes on the fly and adapt by switching to different pipelines. The server is designed to spread the load effectively and thereby scale horizontally; this is achieved by isolating client connections into different processes.
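The kernel-weighted change-detection step of the interframe scheme can be sketched as follows. This is pure Python on grayscale frames for clarity (the paper's implementation targets WebGL), and the 3×3 kernel weights and threshold are illustrative:

```python
def weighted_diff(prev, curr, x, y, kernel):
    """Kernel-weighted average absolute difference around pixel (x, y)."""
    h, w = len(prev), len(prev[0])
    k = len(kernel) // 2
    total = wsum = 0.0
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:
                wgt = kernel[dy + k][dx + k]
                total += wgt * abs(curr[yy][xx] - prev[yy][xx])
                wsum += wgt
    return total / wsum if wsum else 0.0

def delta_encode(prev, curr, kernel, threshold):
    """Reuse the previous frame's pixel wherever the weighted difference
    stays below the threshold; only flagged pixels need transmission."""
    out, mask = [], []
    for y in range(len(curr)):
        orow, mrow = [], []
        for x in range(len(curr[0])):
            changed = weighted_diff(prev, curr, x, y, kernel) >= threshold
            orow.append(curr[y][x] if changed else prev[y][x])
            mrow.append(changed)
        out.append(orow)
        mask.append(mrow)
    return out, mask

# A single bright pixel appears in an otherwise static 4x4 grayscale frame.
prev = [[0] * 4 for _ in range(4)]
curr = [row[:] for row in prev]
curr[1][1] = 200
kernel = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]  # illustrative Gaussian-like weights
out, mask = delta_encode(prev, curr, kernel, threshold=10.0)
```

Because only integer comparisons and reuse of unchanged pixels are involved, no floating-point subtraction of pixel values is needed in the reconstruction path, which is the noise-avoidance property the protocol relies on.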

Keywords: 2D kernelling, augmented reality, cloud computing, dynamic load distribution, immersive experience, mobile computing, motion tracking, protocols, real-time systems, web-based augmented reality application

Procedia PDF Downloads 72
51 Structural Behavior of Subsoil Depending on Constitutive Model in Calculation Model of Pavement Structure-Subsoil System

Authors: M. Kadela

Abstract:

The load caused by traffic should be transferred harmlessly through the road structure: onto the stiff upper layers of the structure (e.g., the asphalt wearing and binder courses), through the base and sub-base layers, and onto the subsoil, directly or through an improved subsoil layer. A reliable description of the interaction in the “road construction – subsoil” system should therefore be one of the basic requirements for assessing the internal forces in the structure and its durability. Analyses of road constructions are based on elements of mechanics, which allow computational models to be created, and on experimental results embodied in the criteria of fatigue life analyses. This approach is a fundamental feature of the commonly used mechanistic methods, which allow arbitrarily complex numerical models to be used in evaluations of the fatigue life of structures. Considering the behaviour of the “road construction – subsoil” system, it is commonly accepted that, as a result of repetitive loads on the subsoil under the pavement, a relatively small deformation grows in the initial phase, this growth then dies out, and the deformation becomes completely reversible. The reliability of a calculation model depends on the appropriate use, for a given type of analysis, of constitutive relationships. The phenomena occurring in the initial stage of the “road construction – subsoil” system are unfortunately difficult to interpret in the modeling process. The classic interpretation of material behaviour in the elastic-plastic (e-p) model is that the elastic phase (e) passes into the elastic-plastic phase (e-p) with increasing load (or with growth of deformation in the damaged structure).
The paper presents the essence of the calibration process of the cooperating subsystems in the calculation model of the “road construction – subsoil” system, created for mechanistic analysis. The calibration process was directed at showing the impact of the applied constitutive models on the model's deformation and stress response. The proper comparative base for assessing the reliability of the created models should, however, be the actual, monitored “road construction – subsoil” system. The paper therefore also presents the behaviour of the subsoil under cyclic load transmitted by the pavement layers. The response of the subsoil to cyclic load is recorded in situ by an observation system (sensors) installed on a testing ground prepared for this purpose, part of the test road near Katowice, Poland. A different behaviour of the homogeneous subsoil under the pavement is observed in different seasons of the year: the pavement works as a flexible structure in summer and as a rigid plate in winter. Although the observed character of the subsoil response is the same regardless of the applied load and area values, this response can be divided into: a zone of indirect action of the applied load, extending to a depth of 1.0 m under the pavement, and a zone of small strain, extending to about 2.0 m. This work was supported by the ongoing research project “Stabilization of weak soil by application of layer of foamed concrete used in contact with subsoil” (LIDER/022/537/L-4/NCBR/2013), financed by The National Centre for Research and Development within the LIDER Programme. M. Kadela is with the Department of Building Construction Elements and Building Structures on Mining Areas, Building Research Institute, Silesian Branch, Katowice, Poland (e-mail: m.kadela@itb.pl).

Keywords: road structure, constitutive model, calculation model, pavement, soil, FEA, response of soil, monitored system

Procedia PDF Downloads 353
50 Teachers' Engagement in Teaching: Exploring Australian Teachers' Attribute Constructs of Resilience, Adaptability, Commitment, Self/Collective Efficacy Beliefs

Authors: Lynn Sheridan, Dennis Alonzo, Hoa Nguyen, Andy Gao, Tracy Durksen

Abstract:

Disruptions to teaching (e.g., COVID-related) have increased work demands for teachers, creating an opportunity for research to explore evidence-informed steps to support them. Collective evidence indicates that teachers' personal attributes in the workplace (e.g., self-efficacy beliefs) promote success in teaching and support teacher engagement. Teacher engagement plays a role in students' learning and teachers' effectiveness: engaged teachers are better at overcoming work-related stress and burnout and are more likely to take on active roles. Teachers' commitment is influenced by a host of personal factors (e.g., teacher well-being) and environmental factors (e.g., job stresses). The job demands-resources model provided a conceptual basis for examining how teachers' well-being is influenced by job demands and job resources. Job demands potentially evoke strain and exceed the employee's capability to adapt; job resources are what the job offers individual teachers (e.g., organisational support), helping to reduce job demands. Applying the job demands-resources model involves gathering an evidence base on, and connections to, personal attributes (job resources). The study explored the association between the constructs of resilience, adaptability, commitment and self/collective efficacy and a teacher's engagement with the job. The paper sought to elaborate on the model and determine the associations between key constructs of well-being (resilience, adaptability), commitment, and motivation (self- and collective-efficacy beliefs) and teachers' engagement in teaching. Data collection involved an online multi-dimensional instrument using validated items, distributed from 2020 to 2022 and designed to identify construct relationships. There were 170 participants. Data analysis: reliability coefficients, means, standard deviations, skewness, and kurtosis statistics were computed for the six variables.
All scales have good reliability coefficients (.72-.96). A confirmatory factor analysis (CFA) and structural equation model (SEM) were performed to provide measurement support and to obtain latent correlations among factors; the final analysis used structural equation modelling. Several fit indices were used to evaluate model fit, including the chi-square statistic and the root mean square error of approximation. The correlations among constructs were all positive, the highest between teacher engagement and resilience (r = .80) and the lowest between teacher adaptability and collective teacher efficacy (r = .22). Given these associations, we proceeded with the CFA, which yielded adequate fit: χ²(270, 1019) = 1836.79, p < .001, RMSEA = .04, CFI = .94, TLI = .93, SRMR = .04. All values were within threshold values, indicating a good model fit. Results indicate that increasing teacher self-efficacy beliefs will increase a teacher's level of engagement, and that teacher adaptability and resilience are positively associated with self-efficacy beliefs, as are collective teacher efficacy beliefs. Implications for school leaders and school systems: 1. invest in increasing teachers' sense of efficacy to manage work demands; 2. adopt leadership approaches that enhance teachers' adaptability and resilience; and 3. support a culture of collective efficacy. Preparing teachers for now and for the future is an important reminder to policymakers and school leaders of the importance of supporting teachers' personal attributes when they face the challenging demands of the job.
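The scale reliability coefficients reported above are typically Cronbach's alpha. A sketch of the computation on hypothetical Likert-style item responses (not the study's data):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for scores[respondent][item], using sample variances.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    k = len(scores[0])                      # number of items in the scale
    def var(xs):                            # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical responses: 4 respondents x 3 items on a 1-5 scale.
alpha = cronbach_alpha([[3, 4, 3], [4, 5, 4], [2, 3, 2], [5, 5, 5]])
```

Values in the .72-.96 range reported for the six scales indicate item sets that covary strongly, as in this toy example.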

Keywords: collective teacher efficacy, teacher self-efficacy, job demands, teacher engagement

Procedia PDF Downloads 123
49 A Simulation Study of Direct Injection Compressed Natural Gas Spark Ignition Engine Performance Utilizing Turbulent Jet Ignition with Controlled Air Charge

Authors: Siyamak Ziyaei, Siti Khalijah Mazlan, Petros Lappas

Abstract:

Compressed Natural Gas (CNG) consists mainly of methane (CH₄) and has a low carbon-to-hydrogen ratio relative to other hydrocarbons. As a result, it has the potential to reduce CO₂ emissions by more than 20% relative to conventional fuels like diesel or gasoline. Although natural gas (NG) has environmental advantages over other hydrocarbon fuels, whether gaseous or liquid, its main component, CH₄, burns more slowly than conventional fuels, and a higher-pressure, leaner cylinder environment accentuates this slow-burn characteristic. Lean combustion and high compression ratios are well-known methods for increasing the efficiency of internal combustion engines. To achieve successful lean CNG combustion in Spark Ignition (SI) engines, a strong ignition system is essential to avoid engine misfires, especially in ultra-lean conditions. Turbulent Jet Ignition (TJI) is an ignition system that employs a pre-combustion chamber to ignite the lean fuel mixture in the main combustion chamber using a fraction of the total fuel per cycle. TJI enables ultra-lean combustion by providing distributed ignition sites through orifices. The fast burn rate provided by TJI makes the ordinary SI engine comparable, in terms of thermal efficiency, to other combustion systems such as Homogeneous Charge Compression Ignition (HCCI) or Controlled Auto-Ignition (CAI), through increased levels of dilution and without the need for sophisticated control systems. Due to the physical geometry of TJIs, which contain small orifices connecting the pre-chamber to the main chamber, scavenging is one of the main factors reducing TJI performance. Specifically, supplying the right mixture of fuel and air has been identified as a key challenge, because an insufficient amount of air is pushed into the pre-chamber during each compression stroke.
There is also the problem that residual combustion gases such as CO₂, CO and NOx from the previous combustion cycle dilute the pre-chamber fuel-air mixture, preventing rapid combustion in the pre-chamber. An air-controlled active TJI is presented in this paper to address these issues. By supplying air to the pre-chamber at sufficient pressure, residual gases are exhausted and the air-fuel ratio within the pre-chamber is controlled, thereby improving combustion quality. This paper investigates the 3D-simulated combustion characteristics of a Direct-Injected CNG (DI-CNG) fuelled SI engine with a pre-chamber equipped with an air channel, using AVL FIRE software. Experiments and simulations were performed at the Worldwide Mapping Point (WWMP) of 1500 Revolutions Per Minute (RPM) and 3.3 bar Indicated Mean Effective Pressure (IMEP), using only conventional spark plugs as the baseline. After validating the simulation data, baseline engine conditions were set for all simulation scenarios at λ = 1. Pre-chambers with and without an auxiliary fuel supply were then simulated. In the simulated DI-CNG SI engine, active TJI was observed to perform better than passive TJI and the spark plug. In conclusion, the active pre-chamber with an air channel demonstrated improved thermal efficiency (ηth) over its counterparts and conventional spark ignition systems.

Keywords: turbulent jet ignition, active air control turbulent jet ignition, pre-chamber ignition system, active and passive pre-chamber, thermal efficiency, methane combustion, internal combustion engine combustion emissions

Procedia PDF Downloads 85
48 Heat Transfer Phenomena Identification of a Non-Active Floor in a Stack-Ventilated Building in Summertime: Empirical Study

Authors: Miguel Chen Austin, Denis Bruneau, Alain Sempey, Laurent Mora, Alain Sommier

Abstract:

An experimental study in a Plus Energy House (PEH) prototype was conducted in August 2016. It aimed to highlight the energy charge and discharge of a concrete-slab floor subjected to day-night-cycle heat exchanges in southwestern France and to identify the heat transfer phenomena that take place in both processes, charge and discharge. The features of this PEH most significant to this study are the following: (i) a non-active slab covering the major part of the floor surface of the house, which includes a 68 mm thick concrete upper layer; (ii) solar window shades on the north and south facades, along with a large south-facing eave; (iii) large double-glazed windows covering most of the south facade; and (iv) a natural ventilation system (NVS) composed of ten automated openings of different dimensions: four on the south facade, four on the north facade and two on the north-oriented shed roof. To capture the energy charge and discharge processes of the non-active slab, heat flux and temperature measurement techniques were implemented, along with airspeed measurements. Ten measurement poles (MP) were distributed over the concrete-floor surface; each MP represented a measurement zone where air and surface temperatures, and convection and radiation heat fluxes, were measured. Airspeed was measured at only two points over the slab surface, near the south facade. To identify the heat transfer phenomena taking part in the charge and discharge processes, relevant dimensionless parameters were used together with statistical analysis, and the heat transfer phenomena were identified on this basis. The processed experimental data showed that two periods could be identified at a glance: charge (heat gain, positive values) and discharge (heat losses, negative values).
During the charge period, radiation heat exchanges at the floor surface were significantly higher than convection; during the discharge period, convection heat exchanges were significantly higher than radiation. Spatially, both convection and radiation heat exchanges were higher near the natural ventilation openings and smaller far from them, as expected. Experimental correlations were determined using a linear regression model, relating the Nusselt number to relevant parameters: the Peclet, Rayleigh, and Richardson numbers. This led to the determination of the convective heat transfer coefficient and its comparison with the coefficient obtained from measurements. The results show that forced and natural convection coexist during the discharge period; more accurate correlations were found with the Peclet number than with the Rayleigh number, which may suggest that forced convection is stronger than natural convection. Yet the airspeed levels encountered suggest that natural rather than forced convection should take place, although the Richardson number values encountered indicate otherwise. During the charge period, air-velocity levels indicate that almost no air motion occurs, which might lead to heat transfer by diffusion instead of convection.
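Correlations of the form Nu = a·Pe^b are usually fitted by ordinary least squares in log space, which is what a linear regression on dimensionless groups amounts to. A minimal sketch, with synthetic coefficients rather than the study's:

```python
from math import exp, log

def fit_power_law(x, y):
    """Fit y = a * x**b by ordinary least squares on log-transformed data."""
    xs, ys = [log(v) for v in x], [log(v) for v in y]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = (sum((u - mx) * (v - my) for u, v in zip(xs, ys))
         / sum((u - mx) ** 2 for u in xs))       # slope = exponent
    a = exp(my - b * mx)                          # intercept = prefactor
    return a, b

# Synthetic data generated from Nu = 0.5 * Pe**0.6, recovered by the fit:
pe = [100.0, 300.0, 1000.0, 3000.0]
nu = [0.5 * p ** 0.6 for p in pe]
a, b = fit_power_law(pe, nu)
```

Fitting the same routine against the Rayleigh number instead of the Peclet number, and comparing residuals, is the kind of comparison that led the authors to favour the Peclet-based (forced-convection) correlation.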

Keywords: heat flux measurement, natural ventilation, non-active concrete slab, plus energy house

Procedia PDF Downloads 413
47 Distribution System Modelling: A Holistic Approach for Harmonic Studies

Authors: Stanislav Babaev, Vladimir Cuk, Sjef Cobben, Jan Desmet

Abstract:

The procedures for performing harmonic studies of medium-voltage distribution feeders have been relatively mature topics since the early 1980s. The efforts of electric power engineers and researchers were mainly focused on handling large harmonic non-linear loads connected sparsely at several buses of medium-voltage feeders. To assess the impact of these loads on the voltage quality of the distribution system, specific modeling and simulation strategies were proposed. These methodologies could deliver reasonable estimation accuracy under the requirements of minimal computational effort and reduced complexity. To uphold these requirements, certain analysis assumptions were made, which became de facto standards for harmonic analysis guidelines. Typical assumptions include balanced study conditions and a negligible impact of the frequency-impedance characteristics of the various power system components; in the latter, skin and proximity effects are usually omitted, and resistance and reactance values are modeled from theoretical equations. Further simplifications of the modelling routine have led to the commonly accepted practice of neglecting phase-angle diversity effects. This is mainly associated with the developed load models, which only in a handful of cases represent the complete harmonic behavior of a device or account for the harmonic interaction between grid harmonic voltages and harmonic currents. While these modelling practices have proven reasonably effective at medium-voltage levels, similar approaches have been adopted for low-voltage distribution systems.
Given modern conditions, the massive increase in the use of residential electronic devices, the recent and ongoing boom in electric vehicles, and the large-scale installation of distributed solar power, the harmonics in today's low-voltage grids are characterized by a high degree of variability and show substantial diversity, leading to a certain level of cancellation effects. New modelling algorithms that overcome the earlier assumptions clearly have to be accepted. In this work, a simulation approach that addresses some of these typical assumptions is proposed. A practical low-voltage feeder is modeled in PowerFactory. To demonstrate the importance of the diversity effect and harmonic interaction, previously developed measurement-based models of a photovoltaic inverter and a battery charger are used as loads. A Python-based script that supplies a varying background voltage distortion profile and the associated harmonic current response of the loads forms the core of the unbalanced simulation. Furthermore, the impact of uncertainty in the feeder's frequency-impedance characteristics on total harmonic distortion levels is shown, along with scenarios involving linear resistive loads, which further alter the system impedance. The comparative analysis demonstrates substantial differences from cases in which all the assumptions are in place, and the results indicate that new modelling and simulation procedures need to be adopted for low-voltage distribution systems with high penetration of non-linear loads and renewable generation.
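The phase-angle diversity effect discussed above is easy to illustrate: summing one harmonic order across several loads as complex phasors yields less than the arithmetic sum of the magnitudes. A sketch with hypothetical load currents (not the paper's measurement-based models):

```python
import cmath

def aggregate_harmonic(magnitudes, phases_deg):
    """Vector (phasor) sum of one harmonic order across several loads."""
    return abs(sum(m * cmath.exp(1j * cmath.pi * p / 180.0)
                   for m, p in zip(magnitudes, phases_deg)))

# Two loads each injecting 1 A of 3rd harmonic, 180 degrees apart:
arithmetic_sum = 1.0 + 1.0                                  # diversity ignored
vector_sum = aggregate_harmonic([1.0, 1.0], [0.0, 180.0])   # near-total cancellation
```

A study that adds harmonic magnitudes without phase angles, as the classic simplified load models do, would predict twice the feeder current that the phasor sum actually produces in this worst case, which is why the diversity effect matters for low-voltage THD estimates.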

Keywords: electric power system, harmonic distortion, power quality, public low-voltage network, harmonic modelling

Procedia PDF Downloads 157
46 Forming Form, Motivation and Their Biolinguistic Hypothesis: The Case of Consonant Iconicity in Tashelhiyt Amazigh and English

Authors: Noury Bakrim

Abstract:

When dealing with motivation/arbitrariness, forming form (Forma Formans) and morphodynamics are to be grasped as relevant implications of enunciation/enactment, schematization within the specificity of language as sound/meaning articulation. Thus, the fact that a language is a form does not contradict stasis/dynamic enunciation (reflexivity vs double articulation). Moreover, some languages exemplify the role of the forming form, uttering, and schematization (roots in Semitic languages, the Chinese case). Beyond the evolutionary biosemiotic process (form/substance bifurcation, the split between realization/representation), non-isomorphism/asymmetry between linguistic form/norm and linguistic realization (phonetics for instance) opens up a new horizon problematizing the role of Brain – sensorimotor contribution in the continuous forming form. Therefore, we hypothesize biotization as both process/trace co-constructing motivation/forming form. Henceforth, referring to our findings concerning distribution and motivation patterns within Berber written texts (pulse based obstruents and nasal-lateral levels in poetry) and oral storytelling (consonant intensity clustering in quantitative and semantic/prosodic motivation), we understand consonant clustering, motivation and schematization as a complex phenomenon partaking in patterns of oral/written iconic prosody and reflexive metalinguistic representation opening the stable form. We focus our inquiry on both Amazigh and English clusters (/spl/, /spr/) and iconic consonant iteration in [gnunnuy] (to roll/tumble), [smummuy] (to moan sadly or crankily). For instance, the syllabic structures of /splæʃ/ and /splæt/ imply an anamorphic representation of the state of the world: splash, impact on aquatic surfaces/splat impact on the ground.
The pair has stridency and distribution as distinctive features which specify its phonetic realization (and part of its meaning): /ʃ/ is [+ strident] and /t/ is [+ distributed] on the vocal tract. Schematization is then a process relating physiology and code as an arthron, a vocal/bodily, vocal/practical shaping of the motor-articulatory system, leading to syntactic/semantic thematization (agent/patient roles in /spl/, /sm/ and other clusters, or the tense uvular /qq/ at the initial position in Berber). Furthermore, the productivity of serial syllable sequencing in Berber points out different forms of expressivity. We postulate two components of motivated formalization: i) the process of memory paradigmatization, relating to sequence modeling under sensorimotor/verbal specific categories (production/perception), and ii) the process of phonotactic selection - prosodic unconscious/subconscious distribution by virtue of iconicity. Based on multiple tests, including a questionnaire, phonotactic/visual recognition, and oral/written reproduction, we aim at patterning/conceptualizing consonant schematization and motivation among EFL and Amazigh (Berber) learners and speakers, integrating biolinguistic hypotheses.

Keywords: consonant motivation and prosody, language and order of life, anamorphic representation, represented representation, biotization, sensori-motor and brain representation, form, formalization and schematization

Procedia PDF Downloads 142
45 Investigation of the Possible Beneficial and Protective Effects of an Ethanolic Extract from Sarcopoterium spinosum Fruits

Authors: Hawraa Zbeeb, Hala Khalifeh, Mohamad Khalil, Francesca Storace, Francesca Baldini, Giulio Lupidi, Laura Vergani

Abstract:

Sarcopoterium spinosum, a widely distributed spiny shrub belonging to the Rosaceae family, is rich in essential and beneficial constituents. In fact, S. spinosum fruits and roots are traditionally used as herbal medicine in the eastern Mediterranean region, and the shrub is mentioned as a medicinal plant in a large number of ethnobotanical surveys. Aqueous root extracts from S. spinosum are used by traditional medicinal practitioners for weight loss and for the treatment of diabetes and pain. Moreover, the anti-diabetic activity of S. spinosum root extract has been reported in different studies, but the beneficial effects of the aerial parts, especially the fruits, have not been elucidated yet. The aim of the present study was to investigate the in vitro antioxidant and lipid-lowering properties of an ethanolic extract from S. spinosum fruits using both hepatic (FaO) and endothelial (HECV) cells, in an attempt to evaluate its possible employment as a nutraceutical supplement. First, in vitro spectrophotometric assays were employed to characterize the extract. The total phenol content (TPC) was evaluated by the Folin-Ciocalteu spectrophotometric method, and the radical scavenging activity was tested by 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2'-azino-bis-3-ethylbenzothiazoline-6-sulfonic acid (ABTS) assays. After that, the beneficial effects of the extract were tested on cells. FaO cells treated for 3 hours with a 0.75 mM oleate/palmitate mix (1:2 molar ratio) mimic in vitro a moderate hepatosteatosis. HECV cells exposed for 1 hour to 100 µM H₂O₂ mimic an oxidative insult leading to oxidative stress conditions. After the metabolic and oxidative insult, both cell lines were treated with increasing concentrations of the S. spinosum extract (1, 10, 25 µg/mL) for 24 hours. The results showed that the S. spinosum ethanolic extract is rather rich in phenols (TPC of 18.6 mg GAE/g dry extract).
Moreover, the extract showed good scavenging ability in vitro (IC₅₀ of 15.9 µg/mL and 10.9 µg/mL measured by DPPH and ABTS assays, respectively). When the extract was tested on cells, the results showed that it could ameliorate some markers of cell dysfunction. All three concentrations of the extract led to a significant decrease in the intracellular triglyceride (TG) content of steatotic FaO cells, measured by spectrophotometric assay. On the other hand, treatment of HECV cells with increasing concentrations of the extract did not significantly decrease either lipid peroxidation, measured by the thiobarbituric acid reactive substances (TBARS) assay, or reactive oxygen species (ROS) production, measured by fluorometric analysis after DCF staining. Interestingly, the ethanolic extract was able to accelerate the wound repair of confluent HECV cells with respect to H₂O₂-insulted cells, as measured by the T-scratch assay. Taken together, these results indicate that the ethanolic extract from S. spinosum fruits is rich in phenolic compounds, exerts considerable lipid-lowering activity in vitro on steatotic hepatocytes, and accelerates wound repair in endothelial cells. In light of this, the ethanolic extract from S. spinosum fruits could be a potential candidate for nutraceutical applications.
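An IC₅₀ of the kind reported above is typically derived from a dose-response curve of radical scavenging activity. A minimal sketch of that calculation, using fabricated DPPH absorbance readings (not the study's raw data) and simple linear interpolation:

```python
import numpy as np

# Illustrative DPPH data: extract concentrations (ug/mL) and measured
# absorbances; a_control is the absorbance of DPPH without extract.
conc = np.array([5.0, 10.0, 20.0, 40.0])
absorbance = np.array([0.72, 0.58, 0.35, 0.18])
a_control = 0.85

# Radical scavenging activity (% inhibition) at each concentration.
inhibition = (a_control - absorbance) / a_control * 100.0

# IC50: the concentration giving 50 % inhibition, by interpolation
# between the two bracketing points of the dose-response curve.
ic50 = np.interp(50.0, inhibition, conc)
print(f"IC50 = {ic50:.1f} ug/mL")
```

In practice a sigmoidal fit is often preferred over linear interpolation, but the principle, locating the concentration at 50 % inhibition, is the same.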

Keywords: antioxidant activity, ethanolic extract, lipid-lowering activity, phenolic compounds, Sarcopoterium spinosum fruits

Procedia PDF Downloads 173
44 A Case Study of Brownfield Revitalization in Taiwan

Authors: Jen Wang, Wei-Chia Hsu, Zih-Sin Wang, Ching-Ping Chu, Bo-Shiou Guo

Abstract:

In the late 19th century, the Jinguashi ore deposit in northern Taiwan was discovered, accompanied by flourishing mining activities. However, tons of contaminants, including heavy metals, sulfur dioxide, and total petroleum hydrocarbons (TPH), were released into the surroundings and caused environmental problems. Site T was a copper smelter located on the coastal hill near the Jinguashi ore deposit. Over more than ten years of operation, a variety of contaminants were emitted, polluting the surrounding soil and groundwater. In order to exhaust fumes produced by the smelting process, three stacks were built along the hill behind the factory. The sediment inside the stacks contains high concentrations of heavy metals such as arsenic, lead, and copper. Moreover, the soil around the discarded stacks suffered serious contamination as deposits leached from ruptures in the stacks. Consequently, Site T (including the factory and its surroundings) was declared a pollution remediation site, and visiting the site and land-use activities on it are forbidden. However, the natural landscape and cultural attractions of Site T are spectacular and attract many visitors annually. Moreover, land resources are extremely precious in Taiwan, and the Taiwan Environmental Protection Administration (EPA) is actively promoting its contaminated-land revitalization policy. Therefore, this study took Site T as a case study of brownfield revitalization planning, aiming to remediate the site and reactivate its land resources. Land-use suitability analysis and risk mapping were applied in this study to develop appropriate risk management measures and a redevelopment plan for the site.
In the land-use suitability analysis, surrounding factors such as environmentally sensitive areas, biological resources, land use, contamination, culture, and landscapes were taken into consideration to assess the development potential of each area; health risk mapping was introduced to visualize the risk assessment results based on the site contamination investigation. According to the land-use suitability analysis, the site was divided into four zones: a priority area (for high-efficiency development), a secondary area (for co-development with the priority area), a conditional area (for reusing existing buildings), and a limited area (for eco-tourism and education). According to the investigation, polychlorinated biphenyls (PCBs), heavy metals, and TPH were considered the target contaminants, while oral, inhalation, and dermal routes were the major exposure pathways in the health risk assessment. According to the health risk map, the highest risk was found on the southwest and eastern sides. Based on the results, the development plan focused on zoning and land use. It was recommended that Site T be divided into a public facility zone, a public architectonic art zone, a viewing zone, an existing-building preservation zone, a historic building zone, and a cultural landscape zone for various purposes. In addition, risk management measures, including sustained remediation, elimination of exposure pathways, and administrative management, were applied to ensure that particular places are suitable for visiting and to protect visitors' health. The consolidated results were corroborated by analyzing aspects of law, land acquisition methods, maintenance and management, and public participation. Therefore, this study has reference value for promoting the contaminated-land revitalization policy in Taiwan.

Keywords: brownfield revitalization, land-use suitability analysis, health risk map, risk management

Procedia PDF Downloads 182
43 The Impact of Riparian Alien Plant Removal on Aquatic Invertebrate Communities in the Upper Reaches of Luvuvhu River Catchment, Limpopo Province

Authors: Rifilwe Victor Modiba, Stefan Hendric Foord

Abstract:

Invasive alien plants (IAPs) have considerable negative impacts on freshwater habitats, and South Africa has implemented an innovative Working for Water (WfW) programme for the systematic removal of these plants, aimed at, amongst other objectives, restoring biodiversity and ecosystem services in these threatened habitats. These restoration processes are expensive and have to be evidence-based. In this study, in-stream macroinvertebrate and adult Odonata assemblages were used as indicators of restoration success by quantifying the response of biodiversity metrics for these two groups to the removal of IAPs in a strategic water resource of South Africa that is extensively invaded. The study consisted of a replicated design that included 45 sampling units, viz. 15 invaded, 15 uninvaded, and 15 cleared sites stratified across the upper reaches of six sub-catchments of the Luvuvhu river catchment, Limpopo Province. Cleared sites were only considered if they had received at least two WfW treatments in the last 3 years. The benthic macroinvertebrate and adult Odonata assemblages in each of these sampling units were surveyed between November and March of 2013/2014 and 2014/2015, respectively. Generalized Linear Models (GLMs) with a log link function and Poisson error distribution were fitted across the three invasion classes (invaded, cleared, and uninvaded) for metrics whose residuals were not normally distributed or had unequal variance, and for abundance. Redundancy analysis (RDA) was done for EPTO genera (Ephemeroptera, Plecoptera, Trichoptera and Odonata) and adult Odonata species abundance. GLMs were also fitted for the abundance of genera and Odonata species associated with the RDA environmental factors. Sixty-four benthic macroinvertebrate families, 57 EPTO genera, and 45 adult Odonata species were recorded across all 45 sampling units. There was no significant difference between the SASS5 total score, ASPT, and family richness of the three invasion classes.
Although clearing had only a weak positive effect on adult Odonata species richness, it had a positive impact on Dragonfly Biotic Index (DBI) scores. These differences were mainly the result of significantly larger DBI scores in the cleared sites as compared to the invaded sites. The results suggest that water quality is positively impacted by repeated clearing, pointing to the importance of follow-up procedures after initial clearing. Adult Odonata diversity, as measured by richness, endemicity, threat, and distribution, responded positively to all forms of clearing. Clearing had a significant impact on Odonata assemblage structure but did not affect EPTO structure. Variation partitioning showed that 21.8% of the variation in the EPTO assemblage could be explained by spatial and environmental variables, while 16% of the variation in Odonata structure was explained by spatial and environmental variables. The response of the diversity metrics to clearing increased in significance at finer taxonomic resolutions, particularly for adult Odonata, whose metrics significantly improved with clearing and whose structure responded to both invasion and clearing. The study recommends the use of the DBI for surveying river health where hydraulic biotopes are poor.
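A GLM with a log link and Poisson errors, as used above for the count metrics, can be fitted by iteratively reweighted least squares (IRLS). The sketch below implements that fit in plain NumPy on synthetic counts across three invasion classes; the data and effect sizes are fabricated for illustration, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: counts (e.g. taxon abundance) across three
# invasion classes, coded with dummy variables.
n = 300
group = rng.integers(0, 3, n)        # 0=invaded, 1=cleared, 2=uninvaded
X = np.column_stack([np.ones(n), group == 1, group == 2]).astype(float)
beta_true = np.array([1.0, 0.4, 0.7])  # log-scale effects (illustrative)
y = rng.poisson(np.exp(X @ beta_true))

def poisson_glm_irls(X, y, n_iter=25):
    """Fit a Poisson GLM with log link by iteratively reweighted LS."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu          # working response
        W = mu                           # IRLS weights for the log link
        XtW = X.T * W
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

beta_hat = poisson_glm_irls(X, y)
print(beta_hat)  # should be close to beta_true
```

In practice one would use a statistics package (e.g. R's `glm` or statsmodels) which also reports standard errors and deviance, but the estimating equations are exactly these.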

Keywords: DBI, evidence-based conservation, EPTO, macroinvertebrates

Procedia PDF Downloads 184
42 Fair Federated Learning in Wireless Communications

Authors: Shayan Mohajer Hamidi

Abstract:

Federated Learning (FL) has emerged as a promising paradigm for training machine learning models on distributed data without the need for centralized data aggregation. In the realm of wireless communications, FL has the potential to leverage the vast amounts of data generated by wireless devices to improve model performance and enable intelligent applications. However, the fairness aspect of FL in wireless communications remains largely unexplored. This abstract presents an idea for fair federated learning in wireless communications, addressing the challenges of imbalanced data distribution, privacy preservation, and resource allocation. Firstly, the proposed approach aims to tackle the issue of imbalanced data distribution in wireless networks. In typical FL scenarios, the distribution of data across wireless devices can be highly skewed, resulting in unfair model updates. To address this, we propose a weighted aggregation strategy that assigns higher importance to devices with fewer samples during the aggregation process. By incorporating fairness-aware weighting mechanisms, the proposed approach ensures that each participating device's contribution is proportional to its data distribution, thereby mitigating the impact of data imbalance on model performance. Secondly, privacy preservation is a critical concern in federated learning, especially in wireless communications where sensitive user data is involved. The proposed approach incorporates privacy-enhancing techniques, such as differential privacy, to protect user privacy during the model training process. By adding carefully calibrated noise to the gradient updates, the proposed approach ensures that the privacy of individual devices is preserved without compromising the overall model accuracy. 
Moreover, the approach considers the heterogeneity of devices in terms of computational capabilities and energy constraints, allowing devices to adaptively adjust the level of privacy preservation to strike a balance between privacy and utility. Thirdly, efficient resource allocation is crucial for federated learning in wireless communications, as devices operate under limited bandwidth, energy, and computational resources. The proposed approach leverages optimization techniques to allocate resources effectively among the participating devices, considering factors such as data quality, network conditions, and device capabilities. By intelligently distributing the computational load, communication bandwidth, and energy consumption, the proposed approach minimizes resource wastage and ensures a fair and efficient FL process in wireless networks. To evaluate the performance of the proposed fair federated learning approach, extensive simulations and experiments will be conducted. The experiments will involve a diverse set of wireless devices, ranging from smartphones to Internet of Things (IoT) devices, operating in various scenarios with different data distributions and network conditions. The evaluation metrics will include model accuracy, fairness measures, privacy preservation, and resource utilization. The expected outcomes of this research include improved model performance, fair allocation of resources, enhanced privacy preservation, and a better understanding of the challenges and solutions for fair federated learning in wireless communications. The proposed approach has the potential to revolutionize wireless communication systems by enabling intelligent applications while addressing fairness concerns and preserving user privacy.
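The weighted aggregation and differential-privacy steps described above can be sketched as follows. The inverse-sample-count weighting rule, the clipping bound, and the noise scale below are illustrative assumptions standing in for the paper's fairness-aware weighting and calibrated noise, not its exact mechanism.

```python
import numpy as np

rng = np.random.default_rng(42)

def fair_aggregate(updates, sample_counts, clip=1.0, noise_std=0.1):
    """Sketch of fairness-aware aggregation with DP-style noise.

    Unlike plain FedAvg (weights proportional to sample counts), this
    illustrative scheme up-weights data-poor devices so model updates
    are less dominated by data imbalance.
    """
    counts = np.asarray(sample_counts, dtype=float)
    weights = 1.0 / counts            # higher weight for fewer samples
    weights /= weights.sum()          # normalize to a convex combination

    aggregated = np.zeros_like(updates[0])
    for w, u in zip(weights, updates):
        norm = np.linalg.norm(u)
        u = u * min(1.0, clip / norm)  # clip update: bounds DP sensitivity
        aggregated += w * u

    # Gaussian mechanism: noise masks any single device's contribution.
    aggregated += rng.normal(0.0, noise_std * clip, size=aggregated.shape)
    return aggregated, weights

updates = [rng.normal(size=4) for _ in range(3)]
agg, w = fair_aggregate(updates, sample_counts=[1000, 100, 10])
print(w)  # the device with 10 samples gets the largest weight
```

The trade-off noted in the abstract is visible here: a larger `noise_std` strengthens privacy but perturbs the aggregated model more, and devices could adapt this parameter to their own privacy requirements.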

Keywords: federated learning, wireless communications, fairness, imbalanced data, privacy preservation, resource allocation, differential privacy, optimization

Procedia PDF Downloads 75
41 Trajectory Optimization for Autonomous Deep Space Missions

Authors: Anne Schattel, Mitja Echim, Christof Büskens

Abstract:

Trajectory planning for deep space missions has recently become a topic of great interest. Flying to space objects like asteroids serves two main goals: one is to find rare earth elements, the other to gain scientific knowledge of the origin of the world. Due to the enormous spatial distances, such explorer missions have to be performed unmanned and autonomously. The mathematical fields of optimization and optimal control can be used to realize autonomous missions while conserving resources and making them safer. The resulting algorithms may be applied to other, earth-bound applications such as deep-sea navigation and autonomous driving as well. The project KaNaRiA ('Kognitionsbasierte, autonome Navigation am Beispiel des Ressourcenabbaus im All') investigates the possibilities of cognitive autonomous navigation using the example of an asteroid mining mission, including the cruise phase and approach as well as the asteroid rendezvous, landing, and surface exploration. To verify and test all methods, an interactive, real-time capable simulation using virtual reality is being developed within KaNaRiA. This paper focuses on the specific challenge of guidance during the cruise phase of the spacecraft, i.e., trajectory optimization and optimal control, including first solutions and results. In principle, there exist two ways to solve optimal control problems (OCPs), the so-called indirect and direct methods. The indirect methods have been studied for several decades, and their usage requires advanced skills in optimal control theory. The main idea of direct approaches, also known as transcription techniques, is to transform the infinite-dimensional OCP into a finite-dimensional non-linear optimization problem (NLP) via discretization of states and controls. These direct methods are applied in this paper. The resulting high-dimensional NLP with constraints can be solved efficiently by special NLP methods, e.g.
sequential quadratic programming (SQP) or interior point methods (IP). The movement of the spacecraft due to gravitational influences of the sun and other planets, as well as the thrust commands, is described through ordinary differential equations (ODEs). The competitive mission aims like short flight times and low energy consumption are considered by using a multi-criteria objective function. The resulting non-linear high-dimensional optimization problems are solved by using the software package WORHP ('We Optimize Really Huge Problems'), a software routine combining SQP at an outer level and IP to solve underlying quadratic subproblems. An application-adapted model of impulsive thrusting, as well as a model of an electrically powered spacecraft propulsion system, is introduced. Different priorities and possibilities of a space mission regarding energy cost and flight time duration are investigated by choosing different weighting factors for the multi-criteria objective function. Varying mission trajectories are analyzed and compared, both aiming at different destination asteroids and using different propulsion systems. For the transcription, the robust method of full discretization is used. The results strengthen the need for trajectory optimization as a foundation for autonomous decision making during deep space missions. Simultaneously they show the enormous increase in possibilities for flight maneuvers by being able to consider different and opposite mission objectives.
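The full-discretization idea can be illustrated on a toy problem: a double integrator (x'' = u, a crude one-dimensional stand-in for thrusted motion, not the KaNaRiA/WORHP spacecraft model) discretized with explicit Euler. States and controls at every grid point become NLP variables, and the dynamics become equality ("defect") constraints the solver must drive to zero.

```python
import numpy as np

N = 50           # number of grid intervals
h = 1.0 / N      # step size on a normalized time grid

def defects(pos, vel, u):
    """Equality constraints of the transcribed NLP: the discretized
    dynamics must hold between consecutive grid points."""
    d_pos = pos[1:] - (pos[:-1] + h * vel[:-1])
    d_vel = vel[1:] - (vel[:-1] + h * u[:-1])
    return np.concatenate([d_pos, d_vel])

# A dynamically consistent trajectory (constant thrust u = 2): the
# defect constraints are satisfied exactly, i.e. this decision vector
# is a feasible point of the NLP that SQP/IP methods would refine
# toward optimality under a (multi-criteria) objective.
u = np.full(N + 1, 2.0)
vel = np.zeros(N + 1)
pos = np.zeros(N + 1)
for k in range(N):
    pos[k + 1] = pos[k] + h * vel[k]
    vel[k + 1] = vel[k] + h * u[k]

print(np.abs(defects(pos, vel, u)).max())  # ~0: feasible point
```

Real transcriptions use higher-order collocation schemes and hand the resulting sparse NLP to a solver such as WORHP; the structure (decision vector of states and controls, defect constraints per interval) is the same.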

Keywords: deep space navigation, guidance, multi-objective, non-linear optimization, optimal control, trajectory planning

Procedia PDF Downloads 411
40 Prospective Museum Visitor Management Based on Prospect Theory: A Pragmatic Approach

Authors: Athina Thanou, Eirini Eleni Tsiropoulou, Symeon Papavassiliou

Abstract:

The problem of museum visitor experience and congestion management – in various forms - has come increasingly under the spotlight over the last few years, since overcrowding can significantly decrease the quality of visitors’ experience. Evidence suggests that on busy days the amount of time a visitor spends inside a crowded house museum can fall by up to 60% compared to a quiet mid-week day. In this paper we consider the aforementioned problem, by treating museums as evolving social systems that induce constraints. However, in a cultural heritage space, as opposed to the majority of social environments, the momentum of the experience is primarily controlled by the visitor himself. Visitors typically behave selfishly regarding the maximization of their own Quality of Experience (QoE) - commonly expressed through a utility function that takes several parameters into consideration, with crowd density and waiting/visiting time being among the key ones. In such a setting, congestion occurs when either the utility of one visitor decreases due to the behavior of other persons, or when costs of undertaking an activity rise due to the presence of other persons. We initially investigate how visitors’ behavioral risk attitudes, as captured and represented by prospect theory, affect their decisions in resource sharing settings, where visitors’ decisions and experiences are strongly interdependent. Different from the majority of existing studies and literature, we highlight that visitors are not risk neutral utility maximizers, but they demonstrate risk-aware behavior according to their personal risk characteristics. 
In our work, exhibits are organized into two groups: a) “safe exhibits” that correspond to less congested ones, where the visitors receive guaranteed satisfaction in accordance with the visiting time invested, and b) common pool of resources (CPR) exhibits, which are the most popular exhibits with possibly increased congestion and uncertain outcome in terms of visitor satisfaction. A key difference is that the visitor satisfaction due to CPR strongly depends not only on the invested time decision of a specific visitor, but also on that of the rest of the visitors. In the latter case, the over-investment in time, or equivalently the increased congestion potentially leads to “exhibit failure”, interpreted as the visitors gain no satisfaction from their observation of this exhibit due to high congestion. We present a framework where each visitor in a distributed manner determines his time investment in safe or CPR exhibits to optimize his QoE. Based on this framework, we analyze and evaluate how visitors, acting as prospect-theoretic decision-makers, respond and react to the various pricing policies imposed by the museum curators. Based on detailed evaluation results and experiments, we present interesting observations, regarding the impact of several parameters and characteristics such as visitor heterogeneity and use of alternative pricing policies, on scalability, user satisfaction, museum capacity, resource fragility, and operation point stability. Furthermore, we study and present the effectiveness of alternative pricing mechanisms, when used as implicit tools, to deal with the congestion management problem in the museums, and potentially decrease the exhibit failure probability (fragility), while considering the visitor risk preferences.
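The risk-aware (rather than risk-neutral) behavior attributed to visitors above is conventionally captured by the Kahneman-Tversky value function: concave over gains, convex and steeper over losses. A minimal sketch, using the classic 1992 parameter estimates purely for illustration (the paper's calibrated utilities are not reproduced here):

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: risk-averse over gains,
    loss-averse over losses (lam > 1). Parameters are the classic
    1992 estimates, used here only for illustration."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# Loss aversion: time wasted at a congested ("failed") CPR exhibit
# looms larger than an equal amount of time well spent.
gain = prospect_value(10.0)    # e.g. 10 minutes of satisfying viewing
loss = prospect_value(-10.0)   # e.g. 10 minutes lost to overcrowding
print(gain, loss)
```

Because |v(-x)| > v(x), a prospect-theoretic visitor over-weights the risk of exhibit failure, which is exactly the lever the pricing policies discussed above can act on.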

Keywords: museum resource and visitor management, congestion management, prospect theory, cyber physical social systems

Procedia PDF Downloads 181
39 Exploring the Cultural Values of Nursing Personnel Utilizing Hofstede's Cultural Dimensions

Authors: Ma Chu Jui

Abstract:

Culture plays a pivotal role in shaping societal responses to change and fostering adaptability. In the realm of healthcare provision, hospitals serve as dynamic settings molded by the cultural consciousness of healthcare professionals. This intricate interplay extends to their expectations of leadership, communication styles, and attitudes towards patient care. Recognizing the cultural inclinations of healthcare professionals becomes imperative in navigating this complex landscape. This study will utilize Hofstede's Value Survey Module 2013 (VSM 2013) as a comprehensive analytical tool. The targeted participants for this research are in-service nursing professionals with a tenure of at least three months, specifically employed in the nursing department of an Eastern hospital. This quantitative approach seeks to quantify diverse cultural tendencies among the targeted nursing professionals, elucidating not only abstract cultural concepts but also revealing their cultural inclinations across different dimensions. The study anticipates gathering between 400 to 500 responses, ensuring a robust dataset for a comprehensive analysis. The focused approach on nursing professionals within the Eastern hospital setting enhances the relevance and specificity of the cultural insights obtained. The research aims to contribute valuable knowledge to the understanding of cultural tendencies among in-service nursing personnel in the nursing department of this specific Eastern hospital. The VSM 2013 will be initially distributed to this specific group to collect responses, aiming to calculate scores on each of Hofstede's six cultural dimensions—Power Distance Index (PDI), Individualism vs. Collectivism (IDV), Uncertainty Avoidance Index (UAI), Masculinity vs. Femininity (MAS), Long-Term Orientation vs. Short-Term Normative Orientation (LTO), and Indulgence vs. Restraint (IVR). 
The study unveils a significant correlation between the different cultural dimensions and healthcare professionals' tendencies: understanding leadership expectations through PDI, grasping behavioral patterns via IDV, acknowledging risk acceptance through UAI, and understanding their long-term and short-term behaviors through LTO. These tendencies extend to communication styles and attitudes towards patient care. These findings provide valuable insights into the nuanced interconnections between cultural factors and healthcare practices. Through a detailed analysis of the varying levels of these cultural dimensions, we gain a comprehensive understanding of the predominant inclinations among the majority of healthcare professionals. This nuanced perspective adds depth to our comprehension of how cultural values shape their approach to leadership, communication, and patient care, contributing to a more holistic understanding of the healthcare landscape. A profound comprehension of the cultural paradigms embraced by healthcare professionals holds transformative potential. Beyond mere understanding, it acts as a catalyst for elevating the caliber of healthcare services. This heightened awareness fosters cohesive collaboration among healthcare teams, paving the way for the establishment of a unified healthcare ethos. By cultivating shared values, our study envisions a healthcare environment characterized by enhanced quality, improved teamwork, and ultimately, a more favorable and patient-centric healthcare landscape. In essence, our research underscores the critical role of cultural awareness in shaping the future of healthcare delivery.
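Each VSM 2013 dimension index has the same computational shape: a weighted difference of mean scores on paired questionnaire items, plus a constant that shifts the scale. The sketch below shows that shape for the Power Distance Index; the item numbers follow the published PDI formula, but the weights and items for each dimension should be taken from the VSM 2013 manual, and the mean responses here are fabricated for illustration.

```python
def vsm_dimension(mean_a, mean_b, mean_c, mean_d, w1, w2, constant=0.0):
    """Generic shape of a Hofstede VSM 2013 dimension index:
    a weighted difference of mean item scores plus a constant,
    e.g. PDI = 35*(m07 - m02) + 25*(m20 - m23) + C(pd)."""
    return w1 * (mean_a - mean_b) + w2 * (mean_c - mean_d) + constant

# Hypothetical mean responses (1-5 scale) on the four questionnaire
# items entering the Power Distance Index.
m07, m02, m20, m23 = 3.2, 2.6, 3.8, 3.1
pdi = vsm_dimension(m07, m02, m20, m23, w1=35, w2=25)
print(f"PDI = {pdi:.1f}")
```

The constant C is chosen by the researcher to keep scores in a convenient range; only differences between groups are meaningful, which is why the planned 400-500 responses matter for stable item means.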

Keywords: Hofstede's cultural dimensions, cultural values in healthcare, cultural awareness in nursing

Procedia PDF Downloads 64
38 The Healing 'Touch' of Music: A Neuro-Acoustics Approach to Understand Its Therapeutic Effect

Authors: Jagmeet S. Kanwal, Julia F. Langley

Abstract:

Music can heal the body, but a mechanistic understanding of this phenomenon is lacking. This study explores the effects of music presentation on neurologic and physiologic responses leading to metabolic changes in the human body. The mind and body co-exist in a corporeal entity, and within this framework, sickness ensues when the mind-body balance goes awry. It is further hypothesized that music has the capacity to directly reset this balance. Two lines of inquiry, taken together, can provide a mechanistic understanding of this phenomenon: 1) empirical evidence for a sound-sensitive pressure sensor system in the body, and 2) the notion of a “healing center” within the brain that is activated by specific patterns of sounds. From an acoustics perspective, music is spatially distributed as pressure waves ranging from a few centimeters to several meters in wavelength. These waves interact and propagate in three dimensions in unique ways, depending on the wavelength. Furthermore, music creates dynamically changing wave-fronts. Frequencies between 200 Hz and 1 kHz generate wavelengths that range from about 5'6" down to 1 foot. These dimensions are in the range of the body size of most people, making it plausible that these pressure waves can geometrically interact with the body surface and create distinct patterns of pressure stimulation across the skin surface. For humans, short-wavelength, high-frequency (> 200 Hz) sounds are best received via cochlear receptors. For low-frequency (< 200 Hz), long-wavelength sound vibrations, however, the whole body may act as an ideal receiver. A vast array of highly sensitive pressure receptors (Pacinian corpuscles) is present just beneath the skin surface, as well as in the tendons, bones, several organs in the abdomen, and the sexual organs.
Per the available empirical evidence, these receptors contribute to music perception by allowing the whole body to function as a sound receiver, and knowledge of how they function is essential to fully understanding the therapeutic effect of music. Neuroscientific studies have established that music stimulates the limbic system that can trigger states of anxiety, arousal, fear, and other emotions. These emotional states of brain activity play a crucial role in filtering top-down feedback from thoughts and bottom-up sensory inputs to the autonomic system, which automatically regulates bodily functions. Music likely exerts its pleasurable and healing effects by enhancing functional and effective connectivity and feedback mechanisms between brain regions that mediate reward, autonomic, and cognitive processing. Stimulation of pressure receptors under the skin by low-frequency music-induced sensations can activate multiple centers in the brain, including the amygdala, the cingulate cortex, and nucleus accumbens. Melodies in music in the low (< 600 Hz) frequency range may augment auditory inputs after convergence of the pressure-sensitive inputs from the vagus nerve onto emotive processing regions within the limbic system. The integration of music-generated auditory and somato-visceral inputs may lead to a synergistic input to the brain that promotes healing. Thus, music can literally heal humans through “touch” as it energizes the brain’s autonomic system for restoring homeostasis.
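The body-scale geometric argument above rests on the relation λ = c / f. A quick check, assuming the speed of sound in air at roughly room temperature (~343 m/s):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C (assumed)

def wavelength_m(frequency_hz):
    """Wavelength of a pressure wave: lambda = c / f."""
    return SPEED_OF_SOUND / frequency_hz

for f in (200.0, 600.0, 1000.0):
    print(f"{f:6.0f} Hz -> {wavelength_m(f):.2f} m")
# 200 Hz -> ~1.7 m and 1 kHz -> ~0.34 m, i.e. roughly the
# body-height-to-one-foot range cited in the abstract.
```

Within body tissue the propagation speed is higher than in air, so in-body wavelengths would be longer; the air figures above only bound the surface-interaction argument.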

Keywords: acoustics, brain, music healing, pressure receptors

Procedia PDF Downloads 166
37 Making the Right Call for Falls: Evaluating the Efficacy of a Multi-Faceted Trust Wide Approach to Improving Patient Safety Post Falls

Authors: Jawaad Saleem, Hannah Wright, Peter Sommerville, Adrian Hopper

Abstract:

Introduction: Inpatient falls are the most commonly reported patient safety incidents and carry a significant burden in terms of resources, morbidity, and mortality. Ensuring adequate post-falls management of patients by staff is therefore paramount to maintaining patient safety, especially in out-of-hours and resource-stretched settings. Aims: This quality improvement project aims to improve the current practice of falls management at Guy's and St Thomas' Hospital, London, as compared with our 2016 quality improvement project findings. Furthermore, it looks to increase junior doctors' confidence in managing falls and their use of new guidance protocols. Methods: The multifaceted interventions implemented included: the development of new trust-wide guidelines, available on the intranet, detailing management pathways for patients after falls; the production of 2000 lanyard cards summarising these guidelines, distributed amongst junior doctors and staff; and a ‘safety signal’ email sent from the trust chief medical officer to all staff raising awareness of falls and the guidelines. Formal falls teaching was also implemented for new doctors at induction. Using an established incident database, 189 consecutive falls in 2017 were retrospectively analysed electronically and compared with the variables measured in 2016, after the interventions. A separate serious incident database was used to analyse 50 falls from May 2015 to March 2018 to ascertain the statistical significance of the impact of our interventions on serious incidents. A questionnaire similar to that used in 2016 was administered to the 2017 cohort of foundation year one (FY1) doctors, and the results were compared. Results: Questionnaire data demonstrated improved awareness and use of the guidelines, increased confidence, and an increase in training. 97% of FY1 trainees felt that the interventions had increased their awareness of the impact of falls on patients in the trust. 
Data from the incident database demonstrated that the time to review patients after a fall had decreased from an average of 130 to 86 minutes. Improvement was also demonstrated in the reduced time to order and schedule X-ray and CT imaging (3 and 5 hours, respectively). Data from the serious incident database show that ‘the time from fall until harm was detected’ was statistically significantly lower (P = 0.044) post intervention. We also showed that the incidence of significant delays in detecting harm (> 10 hours) was reduced post intervention. Conclusions: Our interventions have helped to significantly reduce the average time to assess patients and to order and schedule appropriate imaging after falls. Delays of over ten hours in detecting serious injuries after falls were commonplace; since the intervention, their frequency has markedly reduced. We suggest this will lead to patient harm being identified sooner, fewer clinical incidents relating to falls, and thus improved overall patient safety. Our interventions have also helped increase clinical staff confidence in, management of, and awareness of falls in the trust. Next steps include expanding teaching sessions and improving multidisciplinary team involvement to sustain this improvement.

Keywords: patient safety, quality improvement, serious incidents, falls, clinical care

Procedia PDF Downloads 123
36 Effect of Coated Sodium Butyrate (CM3000®) On Zootechnical Performance, Immune Status and Necrotic Enteritis After Experimental Infection of Broiler Chickens

Authors: Mohamed Ahmed Tony, Mohamed Hamoud

Abstract:

The present study was conducted to determine the effect of a commercially coated slow-release sodium butyrate (CM3000®) as a feed additive on zootechnical performance, immune status and the severity of Clostridium perfringens infection after experimental challenge. Three hundred 1-d-old broiler chicks (Cobb 500) were randomly distributed into 3 treatment groups (4 replicates each) with 25 chicks per replicate on floor pens. Control (C) birds were offered non-supplemented basal diets. Treatments 1 and 2 (T1 and T2) were fed diets containing CM3000® at 300 and 500 g/ton feed, respectively, during the entire experimental period (35 days). Feed and water were offered ad libitum. Feed consumption and body weight were recorded weekly to calculate body weight gain and feed conversion. Blood samples were collected to evaluate the immune status of the birds against Newcastle disease vaccines using the haemagglutination inhibition (HI) test. At the end of the experimental period, 20 birds were chosen randomly from each group (5 birds from each pen) to compare carcass yield. At day 16 of age, 20 birds from each group (5 birds/replicate) were bacteriologically examined and proved to be free from Clostridium perfringens. These birds were then challenged orally with 1 ml of buffer containing 10⁶ CFU/ml of a local Clostridium perfringens isolate prepared from farms affected by necrotic enteritis (NE). Birds were observed daily for any signs of NE. Birds that died in the challenged group were necropsied to determine the cause of death. On day 28 of age, the surviving chickens were killed by cervical dislocation and necropsied immediately. Intestinal tracts were removed, and intestinal lesions were scored. Tissue samples of the duodenum, jejunum, ileum and cecum were collected for histopathological examination. All collected data were statistically analyzed using IBM SPSS® version 19 software. Means were compared by one-way ANOVA (P<0.05) followed by the Duncan post hoc test. 
The results revealed that body weight gain was significantly (P<0.05) improved in chicks fed both doses of CM3000® compared to the control. Final body weight gain in T1 and T2 was 2064.94 and 2141.37 g/bird, respectively, while in the control group it was 1952.78 g/bird. In addition, supplementation of diets with CM3000® significantly increased feed intake (P<0.05). Total feed intake in T1 and T2 was 3186.32 and 3273.29 g/bird, respectively, whereas feed intake in the control group was 3081.95 g/bird. The best feed conversion was recorded in the T2 group (1.53); feed conversion in the control and T1 groups was 1.58 and 1.54, respectively. Dressing percentage, liver weights and the other carcass yields did not differ between treatments. The butyrate significantly enhanced the immune responses measured against Newcastle disease vaccines. Sodium butyrate also significantly reduced NE lesions and improved the health of the intestinal tissues in the samples collected from challenged T1 and T2 chickens versus those collected from the control group. In conclusion, exogenous administration of slow-release butyrate (CM3000®) is capable of improving performance and enhancing immunity and NE disease resistance in broiler chickens.
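The group comparison described above (one-way ANOVA at P < 0.05, computed in SPSS) can be sketched in Python with SciPy. The replicate values below are invented placeholders, not the study's data, and SciPy has no built-in Duncan post hoc test, so only the ANOVA step is shown:

```python
# Illustrative sketch of the one-way ANOVA used in the study (the
# original analysis was done in IBM SPSS). The weight-gain values below
# are made-up placeholder replicate means, NOT the study's data.
from scipy.stats import f_oneway

control = [1950, 1945, 1960, 1956]  # hypothetical g/bird per replicate
t1      = [2060, 2070, 2058, 2072]
t2      = [2138, 2145, 2140, 2142]

f_stat, p_value = f_oneway(control, t1, t2)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")
if p_value < 0.05:
    print("Group means differ significantly; proceed to a post hoc test")
```

A post hoc test (Duncan in the study; Tukey HSD is the commonly available alternative in Python's statsmodels) would then identify which pairs of groups differ.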

Keywords: sodium butyrate, broiler chicken, zootechnical performance, immunity, necrotic enteritis

Procedia PDF Downloads 81
35 Runoff Estimates of Rapidly Urbanizing Indian Cities: An Integrated Modeling Approach

Authors: Rupesh S. Gundewar, Kanchan C. Khare

Abstract:

Runoff contribution from urban areas comes mainly from manmade structures and a few natural contributors. The manmade structures are buildings, roads and other paved areas, whereas the natural contributors are groundwater, overland flows, etc. Runoff alleviation is achieved by manmade as well as natural storages. Manmade storages are storage tanks or other storage structures such as soakaways or soak pits, which are more common in western and European countries. Natural storages are catchment slope, infiltration, catchment length, channel rerouting, drainage density, depression storage, etc. A literature survey on the manmade and natural storages/inflows has yielded the percentage contribution of each individually. Sanders et al. report that a vegetation canopy reduces runoff by 7% to 12%. Nassif et al. report that catchment slope has an impact on rainfall runoff of 16% on bare standard soil and 24% on grassed soil. Infiltration, being dependent on the pervious/impervious ratio, is catchment specific, but the literature reports a 15% to 30% loss of rainfall runoff in various catchment study areas. Catchment length and channel rerouting, too, play a considerable role in the reduction of rainfall runoff. Ground infiltration inflow adds to the runoff where the groundwater table is very shallow and the soil saturates even in a lower-intensity storm; together with surface inflow, it contributes about 2% of the total runoff volume. Considering the various factors contributing to runoff, the literature survey indicates that an integrated modelling approach needs to be considered. Traditional storm water network models are able to predict to a fair/acceptable degree of accuracy provided no interaction with receiving water (river, sea, canal, etc.), ground infiltration, treatment works, etc. is assumed. 
When such interactions are significant, it becomes difficult to reproduce the actual flood extent using the traditional discrete modelling approach, and as a result the true flooding situation is very rarely represented accurately. Since the development of spatially distributed hydrologic models, predictions have become more accurate, at the cost of requiring more accurate spatial information. The integrated approach provides a greater understanding of the performance of the entire catchment. It enables the source of flow in the system to be identified, shows how it is conveyed, and reveals its impact on the receiving body. It also confirms important pain points, hydraulic controls and sources of flooding that could not be easily understood with a discrete modelling approach. This also enables decision makers to identify solutions which can be spread throughout the catchment rather than being concentrated at the single point where the problem exists. Thus it can be concluded from the literature survey that the representation of urban details can be a key differentiator in successfully understanding flooding issues. The intent of this study is to accurately predict the runoff from impermeable areas in an urban area in India. A representative area for which data were available has been selected, and predictions have been made which are corroborated with the actual measured data.

Keywords: runoff, urbanization, impermeable response, flooding

Procedia PDF Downloads 248
34 Laying the Proto-Ontological Conditions for Floating Architecture as a Climate Adaptation Solution for Rising Sea Levels: Conceptual Framework and Definition of a Performance Based Design

Authors: L. Calcagni, A. Battisti, M. Hensel, D. S. Hensel

Abstract:

Since the beginning of the 21st century, we have seen a dynamic growth of water-based (WB) architecture, mainly due to the increasing threat of floods caused by sea level rise and heavy rains, all correlated with climate change. At the same time, the shortage of land available for urban development also led architects, engineers, and policymakers to reclaim the seabed or to build floating structures. Furthermore, the drive to produce energy from renewable resources has expanded the sector of offshore research, mining, and energy industry which seeks new types of WB structures. In light of these considerations, the time is ripe to consider floating architecture as a full-fledged building typology. Currently, there is no universally recognized academic definition of a floating building. Research on floating architecture lacks a proper, commonly shared vocabulary and typology distinction. Moreover, there is no global international legal framework for urban development on water, and there is no structured performance based building design (PBBD) approach for floating architecture in most countries, let alone national regulatory systems. Thus, first of all, the research intends to overcome the semantic and typological issues through the conceptualization of floating architecture, laying the proto-ontological conditions for floating development, and secondly to identify the parameters to be considered in the definition of a specific PBBD framework, setting the scene for national planning strategies. The theoretical overview and re-semanticization process involve the attribution of a new meaning to the term floating architecture. This terminological work of semantic redetermination is carried out through a systematic literature review and involves quantitative and historical research as well as logical argumentation methods. 
As it is expected that floating urban development is most likely to take place as an extension of coastal areas, the needs and design criteria are definitely more similar to those of the urban environment than to those of the offshore industry. Therefore, the identification and categorization of parameters –looking towards the potential formation of a PBBD framework for floating development– take the urban and architectural guidelines and regulations as the starting point, adopting the missing aspects, such as hydrodynamics (i.e., stability and buoyancy), from the offshore and shipping regulatory frameworks. This study is carried out through an evidence-based assessment of regulatory systems that are effective in different countries around the world, addressing on-land and on-water architecture as well as the offshore and shipping industries. It involves evidence-based research and logical argumentation methods. Overall, inhabiting water is proposed not only as a viable response to the problem of rising sea levels, and thus as a resilient frontier for urban development, but also as a response to energy insecurity, clean-water and food shortages, environmental concerns, and urbanization, in line with Blue Economy principles and Agenda 2030. This review shows that floating architecture is, to all intents and purposes, an urban adaptation measure and a solution towards self-sufficiency and energy-saving objectives. Moreover, the adopted methodology is, to a large extent, open to further improvements and integrations, rather than rigid and completely predetermined. Along with new designs and functions that will come into play in the practice field, eventually, life on water will seem no more unusual than life on land, especially by virtue of the multiple advantages it provides not only to users but also to the environment.

Keywords: adaptation measures, building typology, floating architecture, performance based building design, rising sea levels

Procedia PDF Downloads 96
33 The Science of Health Care Delivery: Improving Patient-Centered Care through an Innovative Education Model

Authors: Alison C. Essary, Victor Trastek

Abstract:

Introduction: The current state of the health care system in the U.S. is characterized by an unprecedented number of people living with multiple chronic conditions, an unsustainable rise in health care costs, inadequate access to care, and wide variation in health outcomes throughout the country. An estimated two-thirds of Americans are living with two or more chronic conditions, contributing to 75% of all health care spending. In 2013, the School for the Science of Health Care Delivery (SHCD) was charged with redesigning the health care system through education and research. Faculty in business, law, and public policy, and thought leaders in health care delivery, administration, public health and health IT, created undergraduate, graduate, and executive academic programs to address this pressing need. Faculty and students work across disciplines, and with community partners and employers, to improve care delivery and increase value for patients. Methods: Curricula apply content in health care administration and operations within the clinical context. Graduate modules are team-taught by faculty across academic units to model team-based practice. Seminars, team-based assignments, faculty mentoring, and applied projects are integral to student success. Cohort-driven models enhance networking and collaboration. This observational study evaluated two years of admissions data and one year of graduate data to assess program outcomes and inform the current graduate-level curricula. Descriptive statistics include means and percentages. Results: In fall 2013, the program received 51 applications. The mean GPA of the entering class of 37 students was 3.38. Ninety-seven percent of the fall 2013 cohort successfully completed the program (n=35). Sixty-six percent are currently employed in the health care industry (n=23). 
Of the remaining 12 graduates, two successfully matriculated to medical school; one works in the original field of study; four await results on the MCAT or DAT, and five were lost to follow-up. The attrition of one student was attributed to non-academic reasons. In fall 2014, the program expanded to include both on-ground and online cohorts. Applications were evenly distributed between on-ground (n=70) and online (n=68). Thirty-eight students enrolled in the on-ground program; the mean GPA was 3.95. Ninety-five percent of students successfully completed the program (n=36). Thirty-six students enrolled in the online program; the mean GPA was 3.85. Graduate outcomes are pending. Discussion: Challenges include demographic variability between online and on-ground students; yet both profiles are similar in that students intend to become change agents in the health care system. In the past two years, on-ground applications have increased by 31%, persistence to graduation is > 95%, the mean GPA is 3.67, graduates report admission to six U.S. medical schools, the Mayo Medical School integrates SHCD content within its curricula, and there is national interest in collaborating on industry and academic partnerships. This places SHCD at the forefront of developing innovative curricula to improve high-value, patient-centered care.

Keywords: delivery science, education, health care delivery, high-value care, innovation in education, patient-centered

Procedia PDF Downloads 282
32 Qualitative Evaluation of the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts in the Context of Agile, Lean and Hybrid Project Management Approaches

Authors: Maria Ledinskaya

Abstract:

This paper examines the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts in the context of Agile, Lean, and Hybrid project management. It is part case study and part literature review. To date, relatively little has been written about non-traditional project management approaches in heritage conservation. This paper seeks to introduce Agile, Lean, and Hybrid project management concepts from business, software development, and manufacturing fields to museum conservation, by referencing their practical application on a recent museum-based conservation project. The Morris Collection Conservation Project was carried out in 2019-2021 in Norwich, UK, and concerned the remedial conservation of around 150 Abstract Constructivist artworks bequeathed to the Sainsbury Centre for Visual Arts by private collectors Michael and Joyce Morris. The first part introduces the chronological timeline and key elements of the project. It describes a medium-size conservation project of moderate complexity, which was planned and delivered in an environment with multiple known unknowns – unresearched collection, unknown condition and materials, unconfirmed budget. The project was also impacted by the unknown unknowns of the COVID-19 pandemic, such as indeterminate lockdowns, and the need to accommodate social distancing and remote communications. The author, a staff conservator at the Sainsbury Centre who acted as project manager on the Morris Collection Conservation Project, presents an incremental, iterative, and value-based approach to managing a conservation project in an uncertain environment. Subsequent sections examine the project from the point of view of Traditional, Agile, Lean, and Hybrid project management. 
The author argues that most academic writing on project management in conservation has focussed on a Traditional plan-driven approach – also known as Waterfall project management – which has significant drawbacks in today’s museum environment, due to its over-reliance on prediction-based planning and its low tolerance to change. In the last 20 years, alternative Agile, Lean and Hybrid approaches to project management have been widely adopted in software development, manufacturing, and other industries, although their recognition in the museum sector has been slow. Using examples from the Morris Collection Conservation Project, the author introduces key principles and tools of Agile, Lean, and Hybrid project management and presents a series of arguments on the effectiveness of these alternative methodologies in museum conservation, as well as the ethical and practical challenges to their implementation. These project management approaches are discussed in the context of consequentialist, relativist, and utilitarian developments in contemporary conservation ethics, particularly with respect to change management, bespoke ethics, shared decision-making, and value-based cost-benefit conservation strategy. The author concludes that the Morris Collection Conservation Project had multiple Agile and Lean features which were instrumental to the successful delivery of the project. These key features are identified as distributed decision making, a co-located cross-disciplinary team, servant leadership, focus on value-added work, flexible planning done in shorter sprint cycles, light documentation, and emphasis on reducing procedural, financial, and logistical waste. Overall, the author’s findings point largely in favour of a Hybrid model which combines traditional and alternative project processes and tools to suit the specific needs of the project.

Keywords: project management, conservation, waterfall, agile, lean, hybrid

Procedia PDF Downloads 98
31 Broad Host Range Bacteriophage Cocktail for Reduction of Staphylococcus aureus as Potential Therapy for Atopic Dermatitis

Authors: Tamar Lin, Nufar Buchshtab, Yifat Elharar, Julian Nicenboim, Rotem Edgar, Iddo Weiner, Lior Zelcbuch, Ariel Cohen, Sharon Kredo-Russo, Inbar Gahali-Sass, Naomi Zak, Sailaja Puttagunta, Merav Bassan

Abstract:

Background: Atopic dermatitis (AD) is a chronic, relapsing inflammatory skin disorder that is characterized by dry skin and flares of eczematous lesions and intense pruritus. Multiple lines of evidence suggest that AD is associated with increased colonization by Staphylococcus aureus, which contributes to disease pathogenesis through the release of virulence factors that affect both keratinocytes and immune cells, leading to disruption of the skin barrier and immune cell dysfunction. The aim of the current study is to develop a bacteriophage-based product that specifically targets S. aureus. Methods: For phage discovery, environmental samples were screened on 118 S. aureus strains isolated from skin samples, followed by multiple enrichment steps. Natural phages were isolated, subjected to next-generation sequencing (NGS), and analyzed using proprietary bioinformatics tools for undesirable genes (toxins, antibiotic resistance genes, lysogeny potential), taxonomic classification, and purity. Phage host range was determined by an efficiency of plating (EOP) value above 0.1 and by the ability of the cocktail to completely lyse liquid bacterial cultures under different growth conditions (e.g., temperature, bacterial stage). Results: Sequencing analysis demonstrated that the 118 S. aureus clinical strains were distributed across the phylogenetic tree of all available RefSeq S. aureus genomes (~10,750 strains). Screening environmental samples on the S. aureus isolates resulted in the isolation of 50 lytic phages from different genera, including Silviavirus, Kayvirus, Podoviridae, and a novel unidentified phage. NGS sequencing confirmed the absence of toxic elements in the phages’ genomes. The host range of the individual phages, as measured by the EOP, ranged from 41% (48/118) to 79% (93/118). Host range studies in liquid culture revealed that a subset of the phages can infect a broad range of S. aureus strains in different metabolic states, including the stationary state. Combining the single-phage EOP results of selected phages yielded a broad host range cocktail which infected 92% (109/118) of the strains. When tested in vitro in a liquid infection assay, clearance was achieved in 87% (103/118) of the strains, with no evidence of phage resistance throughout the study (24 hours). An S. aureus host was identified that can be used for the production of all the phages in the cocktail at high titers suitable for large-scale manufacturing. This host was validated for the absence of contaminating prophages using advanced NGS methods combined with multiple production cycles. The phages are produced under optimized scale-up conditions and are being used for the development of a topical formulation (BX005) that may be administered to subjects with atopic dermatitis. Conclusions: A cocktail of natural phages targeting S. aureus was effective in reducing bacterial burden across multiple assays. Phage products may offer safe and effective steroid-sparing options for atopic dermatitis.
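The host-range bookkeeping described above can be sketched as follows. This is a hypothetical illustration: the strain names, phage names, and EOP values are invented, and only the stated rules are encoded (a strain counts as covered when EOP > 0.1, and a cocktail's host range is the union of its members' host ranges):

```python
# Hypothetical sketch of host-range accounting: a strain is "covered"
# by a phage when its efficiency of plating (EOP) exceeds 0.1, and a
# cocktail covers a strain when any member phage does. All names and
# EOP values below are invented for illustration.
EOP_THRESHOLD = 0.1

eop = {  # phage -> {strain: EOP relative to the propagation host}
    "phageA": {"s1": 1.0, "s2": 0.5, "s3": 0.01},
    "phageB": {"s1": 0.02, "s2": 0.9, "s3": 0.8},
}
strains = ["s1", "s2", "s3"]

def host_range(phage: str) -> set:
    """Strains on which the phage plates with EOP above the threshold."""
    return {s for s in strains if eop[phage].get(s, 0.0) > EOP_THRESHOLD}

cocktail = set().union(*(host_range(p) for p in eop))
for p in eop:
    print(p, f"{len(host_range(p))}/{len(strains)} strains")
print("cocktail", f"{len(cocktail)}/{len(strains)} strains")
```

In the toy data, neither phage alone covers every strain, but the two-phage cocktail does, mirroring how the study's selected phages combine to reach 92% coverage.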

Keywords: atopic dermatitis, bacteriophage cocktail, host range, Staphylococcus aureus

Procedia PDF Downloads 151
30 CT Image-Based Dense Facial Soft Tissue Thickness Measurement by Open-Source Tools in a Chinese Population

Authors: Ye Xue, Zhenhua Deng

Abstract:

Objectives: Facial soft tissue thickness (FSTT) data can be obtained from CT scans by measuring the face-to-skull distances at sparsely distributed anatomical landmarks manually located on the face and skull. However, automated measurement at dense points on 3D facial and skull models using open-source software has become a viable option due to the development of computer-assisted imaging technologies. By utilizing dense FSTT information, it becomes feasible to generate plausible automated facial approximations. Therefore, establishing a comprehensive, detailed and densely calculated FSTT database is crucial to enhancing the accuracy of facial approximation. Materials and methods: This study utilized head CT scans from 250 Chinese adults of Han ethnicity, with 170 participants originally born and residing in northern China and 80 participants in southern China. The age of the participants ranged from 14 to 82 years, and all samples were divided into five non-overlapping age groups. Additionally, samples were divided into three categories based on BMI information. The 3D Slicer software was utilized to segment bone and soft tissue based on different Hounsfield unit (HU) thresholds, and surface models of the face and skull were reconstructed for all samples from the CT data. The following procedures were performed using MeshLab: converting the face models into hollowed, cropped surface models, and automatically measuring the Hausdorff distance (referred to as FSTT) between the skull and face models. Hausdorff point clouds were colorized based on depth value and exported as PLY files. A histogram of the depth distributions could be viewed and subdivided into smaller increments. All PLY files were visualized to show the Hausdorff distance value of each vertex. Basic descriptive statistics (i.e., mean, maximum, minimum, standard deviation, etc.) and the distribution of FSTT were analyzed considering sex, age, BMI and birthplace. 
Statistical methods employed included multiple regression analysis, ANOVA, and principal component analysis (PCA). Results: The distribution of FSTT is mainly influenced by BMI and sex, as further supported by the results of the PCA. Additionally, FSTT values exceeding 30 mm were found to be more sensitive to sex. Birthplace-related differences were observed in the forehead, orbital, mandibular, and zygoma regions. Specifically, there are distribution variances in the depth range of 20-30 mm, particularly in the mandibular region. Northern males exhibit thinner FSTT in the frontal region of the forehead compared to southern males, while females show fewer distribution differences between north and south, except for the zygoma region. The observed distribution variance in the orbital region could be attributed to differences in orbital size and shape. Discussion: This study provides a database of the distribution of FSTT in Chinese individuals and suggests that open-source tools perform well for FSTT measurement. By incorporating birthplace as an influential factor in the distribution of FSTT, a greater level of detail can be achieved in facial approximation.
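The core measurement step described above (MeshLab's Hausdorff distance between face and skull models) can be sketched in Python. This is a simplified stand-in using toy point clouds and SciPy nearest-neighbour queries rather than MeshLab's mesh-based filter; in real use, the vertices of the reconstructed surface models would be loaded instead:

```python
# Simplified sketch of the per-vertex face-to-skull distance measured
# in the study (done there with MeshLab's Hausdorff Distance filter).
# The "face" and "skull" surfaces below are toy point clouds; real use
# would load the vertices of the reconstructed PLY models instead.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
skull = rng.normal(size=(1000, 3))           # stand-in skull vertices
face = skull + np.array([0.0, 0.0, 0.05])    # face offset outward by 0.05

tree = cKDTree(skull)
depths, _ = tree.query(face)  # distance from each face vertex to the
                              # nearest skull point (the "FSTT" depth)

print(f"mean depth: {depths.mean():.4f}")
print(f"max  depth: {depths.max():.4f}")  # one-sided Hausdorff, face -> skull
```

The per-vertex `depths` array is the quantity the study colorizes, histograms, and summarizes with descriptive statistics.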

Keywords: forensic anthropology, forensic imaging, cranial facial reconstruction, facial soft tissue thickness, CT, open-source tool

Procedia PDF Downloads 57
29 The Safe Introduction of Tocilizumab for the Treatment of SARS-CoV-2 Pneumonia at an East London District General Hospital

Authors: Andrew Read, Alice Parry, Kate Woods

Abstract:

Since the advent of the SARS-CoV-2 pandemic, the search for medications that can reduce mortality and morbidity has been a global research priority. Several multi-center trials have recently demonstrated improved mortality associated with the use of Tocilizumab, an interleukin-6 receptor antagonist, in patients with severe SARS-CoV-2 pneumonia. Initial data supported administration in patients requiring respiratory support (non-invasive or invasive ventilation), but more recent data have shown benefit in all hypoxic patients. At the height of the second wave of COVID-19 infections in London, our hospital introduced the use of Tocilizumab for patients with severe COVID-19. Tocilizumab is licensed for use in chronic inflammatory conditions and has been associated with an increased risk of severe bacterial and fungal infections, as well as reactivation of chronic viral infections (e.g., hepatitis B). It is a specialist drug that suppresses the formation of C-reactive protein (CRP) for 6 – 12 weeks, and it is not widely used by the general medical community. We aimed to assess Tocilizumab use in our hospital and to implement changes to the protocol as required to ensure administration was safe and appropriate. A retrospective study design was used to assess prescriptions over an initial 3-week period in both intensive care and the medical wards, amounting to a total of 13 patients. The initial data collection identified four key areas of concern: adherence to national and local inclusion and exclusion criteria; collection of appropriate screening bloods prior to administration; documentation of informed consent or a best-interest decision; and documentation of Tocilizumab administration on patient discharge information, to alert future healthcare providers that typical measures of inflammation and infection, such as CRP, are unreliable for up to 3 months. 
Data were collected from electronic notes, blood results and observation charts, and cross-referenced with pharmacy data. Initial results showed that all four key areas were completed in approximately 50% of cases. Of particular concern was adherence to the exclusion criteria, such as current evidence of bacterial infection, and ensuring the correct screening bloods were sent to exclude infections such as hepatitis. To remedy this and improve patient safety, the initial data were presented to the relevant healthcare professionals. Subsequently, three interventions were introduced, and education on each was provided to hospital staff. An electronic ‘order set’ collating the appropriate screening bloods was created, simplifying the screening process. Pre-formed electronic documentation that can be inserted into the notes was created to provide a framework for consent discussions and reduce the time needed for junior doctors to complete this task. Additionally, a ‘Tocilizumab administration card’ was created and distributed via pharmacy to each patient on discharge, to ensure future healthcare professionals were aware of the potential effects of Tocilizumab administration, including suppression of CRP. Following these changes, repeat data collection over two months illustrated that each of the four safety aspects was met with a 100% success rate in every patient. Although this demonstrates good progress and effective interventions, the challenge will be to maintain it. The audit data collection is ongoing.

Keywords: education, patient safety, SARS-CoV-2, Tocilizumab

Procedia PDF Downloads 174
28 i2kit: A Tool for Immutable Infrastructure Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservice architectures are increasingly common in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency and time to market for business logic. On the other hand, these architectures also introduce challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution and isolation of processes. However, other issues remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing or data persistence (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs in the control layer, affecting running applications, specific expertise is required for ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input for i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers.
Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set in other microservices as an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation, and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing and persistence. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer brings disadvantages that outweigh this benefit. Resource allocation is greatly improved by using linuxkit, which has a very small footprint (around 35 MB). The system is also more secure, since linuxkit installs only the minimum set of dependencies needed to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.
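The definition-to-template transformation described above can be sketched as follows. This is a hypothetical illustration, not i2kit's actual input schema or output: the field names (`name`, `replicas`, `containers`) and the CloudFormation resource shapes are assumptions chosen to mirror the paper's description of one load-balanced group of immutable VMs per microservice.

```python
# Hypothetical sketch: turning a declarative microservice definition into a
# minimal CloudFormation-style template, in the spirit of i2kit. The input
# schema and resource names are illustrative assumptions, not i2kit's API.

def to_cloudformation(service):
    """Map one microservice (a pod of containers) to a VM group + load balancer."""
    name = service["name"]
    resources = {
        # One fixed-size group of immutable VMs built from a linuxkit AMI;
        # a new deployment would replace these machines, never mutate them.
        f"{name}Group": {
            "Type": "AWS::AutoScaling::AutoScalingGroup",
            "Properties": {
                "MinSize": service.get("replicas", 1),
                "MaxSize": service.get("replicas", 1),
            },
        },
        # Cloud-vendor load balancer exposing the service; its endpoint is
        # what other microservices receive via an environment variable.
        f"{name}LB": {
            "Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
            "Properties": {"Name": f"{name}-lb"},
        },
    }
    return {"AWSTemplateFormatVersion": "2010-09-09", "Resources": resources}


template = to_cloudformation({"name": "Api", "replicas": 3,
                              "containers": ["api", "sidecar"]})
print(sorted(template["Resources"]))  # → ['ApiGroup', 'ApiLB']
```

Note how no control-layer resource appears in the output: networking, balancing and discovery are delegated entirely to the cloud vendor, which is the design choice the paper defends.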

Keywords: container, deployment, immutable infrastructure, microservice

Procedia PDF Downloads 177
27 Enhancing Plant Throughput in Mineral Processing Through Multimodal Artificial Intelligence

Authors: Muhammad Bilal Shaikh

Abstract:

Mineral processing plants play a pivotal role in extracting valuable minerals from raw ores, contributing significantly to various industries. However, the optimization of plant throughput remains a complex challenge, necessitating innovative approaches for increased efficiency and productivity. This research paper investigates the application of Multimodal Artificial Intelligence (MAI) techniques to address this challenge, aiming to improve overall plant throughput in mineral processing operations. The integration of multimodal AI leverages a combination of diverse data sources, including sensor data, images, and textual information, to provide a holistic understanding of the complex processes involved in mineral extraction. The paper explores the synergies between various AI modalities, such as machine learning, computer vision, and natural language processing, to create a comprehensive and adaptive system for optimizing mineral processing plants. The primary focus of the research is on developing advanced predictive models that can accurately forecast various parameters affecting plant throughput. Utilizing historical process data, machine learning algorithms are trained to identify patterns, correlations, and dependencies within the intricate network of mineral processing operations. This enables real-time decision-making and process optimization, ultimately leading to enhanced plant throughput. Incorporating computer vision into the multimodal AI framework allows for the analysis of visual data from sensors and cameras positioned throughout the plant. This visual input aids in monitoring equipment conditions, identifying anomalies, and optimizing the flow of raw materials. The combination of machine learning and computer vision enables the creation of predictive maintenance strategies, reducing downtime and improving the overall reliability of mineral processing plants. 
Furthermore, the integration of natural language processing facilitates the extraction of valuable insights from unstructured textual data, such as maintenance logs, research papers, and operator reports. By understanding and analyzing this textual information, the multimodal AI system can identify trends, potential bottlenecks, and areas for improvement in plant operations. This comprehensive approach enables a more nuanced understanding of the factors influencing throughput and allows for targeted interventions. The research also explores the challenges associated with implementing multimodal AI in mineral processing plants, including data integration, model interpretability, and scalability. Addressing these challenges is crucial for the successful deployment of AI solutions in real-world industrial settings. To validate the effectiveness of the proposed multimodal AI framework, the research conducts case studies in collaboration with mineral processing plants. The results demonstrate tangible improvements in plant throughput, efficiency, and cost-effectiveness. The paper concludes with insights into the broader implications of implementing multimodal AI in mineral processing and its potential to revolutionize the industry by providing a robust, adaptive, and data-driven approach to optimizing plant operations. In summary, this research contributes to the evolving field of mineral processing by showcasing the transformative potential of multimodal artificial intelligence in enhancing plant throughput. The proposed framework offers a holistic solution that integrates machine learning, computer vision, and natural language processing to address the intricacies of mineral extraction processes, paving the way for a more efficient and sustainable future in the mineral processing industry.
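The core of the predictive-modelling idea above can be illustrated with a minimal sketch: fitting a least-squares model that forecasts plant throughput from a historical sensor reading. The data and the choice of mill power draw as the predictor are invented for illustration; the paper's actual models and features are not reported here.

```python
# Minimal sketch of the predictive-modelling idea: fit a one-variable
# least-squares model forecasting plant throughput (t/h) from a sensor
# reading (e.g. mill power draw, MW). All numbers below are invented.

def fit_ols(xs, ys):
    """Ordinary least squares for y = a*x + b, in closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx  # slope, intercept

power = [9.5, 10.0, 10.5, 11.0, 11.5]    # hypothetical mill power (MW)
throughput = [480, 500, 520, 540, 560]   # hypothetical throughput (t/h)

a, b = fit_ols(power, throughput)
forecast = a * 12.0 + b                  # forecast throughput at 12 MW
print(round(forecast, 1))                # → 580.0
```

A production system would of course use richer multimodal features (images, text-derived signals) and more capable models, but the pipeline is the same: learn the mapping from historical process data, then forecast in real time to support operating decisions.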

Keywords: multimodal AI, computer vision, NLP, mineral processing, mining

Procedia PDF Downloads 65
26 Glucose Uptake Rate of Insulin-Resistant Human Liver Carcinoma Cells (IR/HepG2) by Flavonoids from Enicostema littorale via IR/IRS1/AKT Pathway

Authors: Priyanka Mokashi, Aparna Khanna, Nancy Pandita

Abstract:

Diabetes mellitus is a chronic metabolic disorder projected to be the 7th leading cause of death by 2030. The current line of treatment for diabetes mellitus comprises oral antidiabetic drugs (biguanides, sulfonylureas, meglitinides, thiazolidinediones and alpha-glucosidase inhibitors) and insulin therapy, depending on whether the patient has type 1 or type 2 diabetes mellitus. However, these treatments have disadvantages, ranging from the development of drug resistance to the adverse effects they cause. As an alternative to these synthetic agents, natural products provide new insight for the development of more efficient and safer drugs owing to their therapeutic value. Enicostema littorale Blume (A. Raynal) is a traditional Indian plant belonging to the Gentianaceae family. It is widely distributed in Asia, Africa, and South America. There are a few reports on swertiamarin, a major component of this plant, for its antidiabetic activity. However, the antidiabetic activity of flavonoids from E. littorale and their mechanism of action have not yet been elucidated. Flavonoids have a positive relationship with disease prevention and can act on various molecular targets and regulate different signaling pathways in pancreatic β-cells, adipocytes, hepatocytes and skeletal myofibers. They may exert beneficial effects in diabetes by (i) improving hyperglycemia through regulation of glucose metabolism in hepatocytes; (ii) enhancing insulin secretion, reducing apoptosis and promoting proliferation of pancreatic β-cells; (iii) increasing glucose uptake in hepatocytes, skeletal muscle and white adipose tissue; and (iv) reducing insulin resistance, inflammation and oxidative stress. Therefore, we isolated four flavonoid-rich fractions, Fraction A (FA), Fraction B (FB), Fraction C (FC) and Fraction D (FD), from the crude hot alcoholic (AH) extract of E. littorale and identified their constituents by LC/MS. In total, eight flavonoids were identified on the basis of their fragmentation patterns.
Fraction FA showed the presence of swertisin, isovitexin, and saponarin; FB showed genkwanin, quercetin and isovitexin; FC showed apigenin, swertisin, quercetin, 5-O-glucosylswertisin and 5-O-glucosylisoswertisin; whereas FD showed the presence of swertisin only. Further, these fractions were assessed for their antidiabetic activity by stimulation of glucose uptake in an insulin-resistant HepG2 cell line model (IR/HepG2). The results showed that FD, containing the C-glycoside swertisin, significantly increased the glucose uptake rate of IR/HepG2 cells at a concentration of 10 µg/ml as compared to the positive control metformin (0.5 mM), as determined by the glucose oxidase-peroxidase method. It has been reported that enhancement of glucose uptake occurs due to the translocation of GLUT4 vesicles to the cell membrane through the IR/IRS1/AKT pathway. Therefore, we studied the expression of three genes, IRS1, AKT and GLUT4, by real-time PCR to evaluate whether the fractions act through the same pathway. The glucose uptake rate increased in FD-treated IR/HepG2 cells due to the activation of insulin receptor substrate-1 (IRS1), followed by protein kinase B (AKT) through phosphoinositide 3-kinase (PI3K), leading to translocation of GLUT4 vesicles to the cell membrane, thereby enhancing glucose uptake and insulin sensitivity of insulin-resistant HepG2 cells. Hence, this up-regulation indicates the mechanism of action through which FD (swertisin) may act as an antidiabetic candidate in the treatment of type 2 diabetes mellitus.
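Real-time PCR comparisons of the kind described above (IRS1, AKT and GLUT4 expression in treated versus untreated IR/HepG2 cells) are commonly quantified with the 2^(-ΔΔCt) relative-expression method. The sketch below shows that calculation; the abstract does not report its raw data, so the Ct values and the choice of reference gene are invented for illustration, and the method itself is an assumption about how such data are typically analysed.

```python
# Hedged illustration of the 2^(-ΔΔCt) relative-expression calculation
# commonly used for real-time PCR data such as IRS1/AKT/GLUT4 expression
# in treated vs. untreated cells. All Ct values below are invented.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression via 2^(-ΔΔCt); values > 1 indicate up-regulation."""
    d_ct_treated = ct_target_treated - ct_ref_treated  # normalise to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    return 2 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values (reference gene, e.g. GAPDH, at Ct 18 in both):
print(round(fold_change(24.0, 18.0, 26.0, 18.0), 2))  # → 4.0 (4-fold up-regulation)
```

Because Ct is a cycle count on a log2 scale, each unit of ΔΔCt corresponds to a doubling or halving of expression, which is why the up-regulation reported for FD-treated cells reflects a multiplicative change in transcript abundance.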

Keywords: E. littorale, glucose transporter, glucose uptake rate, insulin resistance

Procedia PDF Downloads 306