Search results for: yield criteria
87 Simple Finite-Element Procedure for Modeling Crack Propagation in Reinforced Concrete Bridge Deck under Repetitive Moving Truck Wheel Loads
Authors: Rajwanlop Kumpoopong, Sukit Yindeesuk, Pornchai Silarom
Abstract:
Modeling cracks in concrete is complicated by its strain-softening behavior, which requires the use of sophisticated energy criteria of fracture mechanics to assure stable and convergent solutions in finite-element (FE) analysis, particularly for relatively large structures. However, for small-scale structures such as beams and slabs, a simpler approach that relies on retaining some shear stiffness in the cracking plane has been adopted in the literature to model the strain-softening behavior of concrete under monotonically increasing loading. According to the shear retaining approach, each element is assumed to be an isotropic material prior to cracking of the concrete. Once an element is cracked, the isotropic element is replaced with an orthotropic element in which the new orthotropic stiffness matrix is formulated with respect to the crack orientation. A shear transfer factor of 0.5 is used parallel to the crack plane. The shear retaining approach is adopted in this research to model cracks in an RC bridge deck, with some modifications to take into account the effect of repetitive moving truck wheel loads, as they cause fatigue cracking of concrete. The first modification is the introduction of fatigue tests of concrete and reinforcing steel and the Palmgren-Miner linear criterion of cumulative damage into the conventional FE analysis. For a given loading, the number of cycles to failure of each concrete or RC element can be calculated from the fatigue (S-N) curves of concrete and reinforcing steel. The elements with the minimum number of cycles to failure are the failed elements. For the elements that do not fail, damage is accumulated according to the Palmgren-Miner linear criterion of cumulative damage. The stiffness of each failed element is modified and the procedure is repeated until the deck slab fails. The total number of load cycles to failure of the deck slab can then be obtained, from which the S-N curve of the deck slab can be simulated.
The second modification concerns the shear transfer factor. Moving loading causes continuous rubbing of the crack interfaces, which greatly reduces the shear transfer mechanism. It is therefore conservatively assumed in this study that the analysis for the moving-loading case is conducted with a shear transfer factor of zero. A customized FE program has been developed in MATLAB to accommodate these modifications. The developed procedure has been validated against the fatigue test of a 1/6.6-scale AASHTO bridge deck under both fixed-point repetitive loading and moving loading reported in the literature. Results show good agreement between experimental and simulated S-N curves as well as between observed and simulated crack patterns. A significant contribution of the developed procedure is a series of S-N relations that can now be simulated at any desired level of cracking, in addition to the experimentally derived S-N relation at failure of the deck slab. This permits the systematic investigation of crack propagation, or deterioration, of RC bridge decks, which should be useful information for highway agencies seeking to prolong the life of their bridge decks.
Keywords: bridge deck, cracking, deterioration, fatigue, finite-element, moving truck, reinforced concrete
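The damage-accumulation step described in the abstract can be sketched in a few lines of Python. The log-linear S-N relation below and its fit constants `a` and `b` are hypothetical placeholders for illustration only, not values from the paper:

```python
def cycles_to_failure(stress_ratio, a=1.0, b=0.0685):
    """Hypothetical concrete S-N relation of the common log-linear form
    S = a - b * log10(N), inverted to N = 10**((a - S) / b).
    The constants a and b are illustrative, not fitted to real data."""
    return 10 ** ((a - stress_ratio) / b)

def accumulate_damage(load_blocks):
    """Palmgren-Miner linear damage rule: D = sum(n_i / N_i) over all
    load blocks; element failure is predicted once D reaches 1."""
    damage = 0.0
    for n_cycles, stress_ratio in load_blocks:
        damage += n_cycles / cycles_to_failure(stress_ratio)
    return damage

# two load blocks: (applied cycles, stress ratio S = sigma_max / strength)
blocks = [(50, 0.85), (1000, 0.75)]
print(accumulate_damage(blocks))  # D < 1: element survives, damage carried over
```

In the procedure described above, the elements whose accumulated damage first reaches 1 would be the "failed" elements whose stiffness is then modified before the next analysis pass.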
Procedia PDF Downloads 258
86 International Coffee Trade in Solidarity with the Zapatista Rebellion: Anthropological Perspectives on Commercial Ethics within Political Antagonistic Movements
Authors: Miria Gambardella
Abstract:
The influence of solidarity demonstrations towards the Zapatista National Liberation Army has been constant over the years, both locally and internationally, guaranteeing visibility to the cause, shaping the movement’s choices, and influencing its hopes of impact worldwide. Most of the coffee produced by the autonomous cooperatives of Chiapas is exported, making the coffee trade the main source of income from international solidarity networks. The question arises of the implications of the relations established between the communities in resistance in Southeastern Mexico and international solidarity movements, specifically of the strategies adopted to reconcile the army's demands for autonomy with the economic asymmetries between the Zapatista cooperatives producing coffee and the European collectives who hold purchasing power. In order to deepen the inquiry into these topics, a year-long multi-site investigation was carried out. The first six months of fieldwork were based in Barcelona, where Zapatista coffee was first traded in Spain and where one of the historical and most important European solidarity groups can be found. The last six months of fieldwork were carried out directly in Chiapas, in contact with coffee producers, Zapatista political authorities, international activists, vendors, and the rest of the network implicated in coffee production, roasting, and sale. The investigation was based on qualitative research methods, including participant observation, focus groups, and semi-structured interviews. The analysis did not only retrace the steps of the market chain as if it were a linear and unilateral process; rather, it aimed at exploring the actors’ reciprocal perceptions, roles, and dynamics of power. Demonstrations of solidarity and the money circulation they imply aim at changing the system in place and building alternatives, among other things, on the economic level.
This work analyzes the formulation of discourse and the organization of solidarity activities that aim at building opportunities for action within a highly politicized economic sphere to which access must be regularly legitimized. The meaning conveyed by coffee is constructed on a symbolic level by the attribution of moral criteria to transactions. These criteria participate in the construction of imaginaries that circulate through solidarity movements with the Zapatista rebellion. Commercial exchanges linked to solidarity networks turned out to represent much more than monetary transactions. The social, cultural, and political spheres are invested by ethics, which penetrates all aspects of militant action. It is at this level that the boundaries of the different collective actors connect, contaminating each other: merely following the money flow would have been too limiting to account for a reality in which the imaginary is one of the main currencies. The notions of “trust”, “dignity” and “reciprocity” are repeatedly mobilized to negotiate discontinuous and multidirectional flows in the attempt to balance and justify commercial relations in a politicized context that characterizes its own identity by demonizing the “market economy” and its dehumanizing powers.
Keywords: coffee trade, economic anthropology, international cooperation, Zapatista National Liberation Army
Procedia PDF Downloads 88
85 Screening and Improved Production of an Extracellular β-Fructofuranosidase from Bacillus Sp
Authors: Lynette Lincoln, Sunil S. More
Abstract:
With the rising demand for sugar today, world sugar production is projected to reach 203 million tonnes by 2021. Hydrolysis of sucrose (table sugar) into an equimolar mixture of glucose and fructose is catalyzed by β-D-fructofuranoside fructohydrolase (EC 3.2.1.26), commonly called invertase. Invertase is widely applied for fluid-filled centers in chocolates, in the preparation of artificial honey, as a sweetener, and especially to ensure that foodstuffs remain fresh, moist and soft for longer spans. From an industrial perspective, properties such as increased solubility, increased osmotic pressure and prevention of sugar crystallization in food products are highly desired. Screening for invertase does not involve a plate assay or other qualitative test to determine enzyme production. In this study, we use a three-step screening strategy for the identification of a novel bacterial isolate from soil that is positive for invertase production. In the primary step, serial dilutions of soil collected from sugarcane fields (black soil, Maddur region of Mandya district, Karnataka, India) were grown on a Czapek-Dox medium (pH 5.0) containing sucrose as the sole C-source. Only colonies with the capability to utilize and break down sucrose exhibited growth: bacterial isolates released invertase in order to take up sucrose, splitting the disaccharide into simple sugars. Secondly, invertase activity was determined from the cell-free extract by measuring the glucose released into the medium at 540 nm. Morphological observation of the most potent bacterium was performed through several identification tests using Bergey’s manual, which showed the genus of the isolate to be Bacillus. Furthermore, this potent bacterial colony was subjected to 16S rDNA PCR amplification, and a single discrete PCR amplicon band of 1500 bp was observed.
The 16S rDNA sequence was aligned with BLAST against the NCBI GenBank database to obtain the sequence match with the maximum identity score. Molecular sequencing and identification were performed by Xcelris Labs Ltd. (Ahmedabad, India). The colony was identified as Bacillus sp. BAB-3434, indicating the first novel strain for extracellular invertase production. Molasses, a by-product of the sugarcane industry, is a dark viscous liquid obtained upon crystallization of sugar. Enhanced invertase production and optimization studies were carried out using a one-factor-at-a-time approach. Crucial parameters such as time course (24 h), pH (6.0), temperature (45 °C), inoculum size (2% v/v), N-source (yeast extract, 0.2% w/v) and C-source (molasses, 4% v/v) were found to be optimum, demonstrating an increased yield. The findings of this study reveal a simple screening method for an extracellular invertase from a rapidly growing Bacillus sp., and the selection of the factors that most elevate enzyme activity; in particular, the utilization of molasses, which served both as an ideal substrate and as C-source, results in cost-effective production under submerged conditions. The invert mixture could be a replacement for table sugar, which is an economic advantage and reduces the tedious work of sugar growers. Ongoing studies involve purification of the extracellular invertase and determination of its transfructosylating activity, as at high concentrations of sucrose invertase produces fructooligosaccharides (FOS), which possess prebiotic properties.
Keywords: Bacillus sp., invertase, molasses, screening, submerged fermentation
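The colorimetric activity determination at 540 nm implies a glucose standard curve and a units calculation; a minimal sketch follows. All numbers, the linear calibration, and the function names are illustrative assumptions, not data or code from the study:

```python
import numpy as np

def calibration_line(conc_umol, absorbance):
    """Least-squares fit of a glucose standard curve:
    absorbance at 540 nm vs. known glucose amount (umol)."""
    slope, intercept = np.polyfit(conc_umol, absorbance, 1)
    return slope, intercept

def invertase_activity(abs_sample, slope, intercept, reaction_min, enzyme_ml):
    """Activity in units per mL, where one unit releases 1 umol of
    glucose per minute under the assay conditions."""
    glucose_umol = (abs_sample - intercept) / slope
    return glucose_umol / (reaction_min * enzyme_ml)

# hypothetical standards and a sample reading
slope, intercept = calibration_line([0.0, 1.0, 2.0, 4.0], [0.0, 0.2, 0.4, 0.8])
print(invertase_activity(0.4, slope, intercept, reaction_min=10, enzyme_ml=0.5))
```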
Procedia PDF Downloads 233
84 Challenges and Lessons of Mentoring Processes for Novice Principals: An Exploratory Case Study of Induction Programs in Chile
Authors: Carolina Cuéllar, Paz González
Abstract:
Research has shown that school leadership has a significant indirect effect on students’ achievements. In Chile, evidence has also revealed that this impact is stronger in vulnerable schools. With the aim of strengthening school leadership, public policy has taken up the challenge of enhancing capabilities of novice principals through the implementation of induction programs, which include a mentoring component, entrusting the task of delivering these programs to universities. The importance of using mentoring or coaching models in the preparation of novice school leaders has been emphasized in the international literature. Thus, it can be affirmed that building leadership capacity through partnership is crucial to facilitate cognitive and affective support required in the initial phase of the principal career, gain role clarification and socialization in context, stimulate reflective leadership practice, among others. In Chile, mentoring is a recent phenomenon in the field of school leadership and it is even more new in the preparation of new principals who work in public schools. This study, funded by the Chilean Ministry of Education, sought to explore the challenges and lessons arising from the design and implementation of mentoring processes which are part of the induction programs, according to the perception of the different actors involved: ministerial agents, university coordinators, mentors and novice principals. The investigation used a qualitative design, based on a study of three cases (three induction programs). The sources of information were 46 semi-structured interviews, applied in two moments (at the beginning and end of mentoring). Content analysis technique was employed. Data focused on the uniqueness of each case and the commonalities within the cases. Five main challenges and lessons emerged in the design and implementation of mentoring within the induction programs for new principals from Chilean public schools. 
They comprised the need of (i) developing a shared conceptual framework on mentoring among the institutions and actors involved, which helps align expectations for the mentoring component within the induction programs and assists in establishing a theory of action of mentoring that is relevant to the public school context; (ii) recognizing through actions and decisions at different levels that the role of a mentor differs from the role of a principal, which challenges the idea that an effective principal will always be an effective mentor; (iii) improving mentors’ selection and preparation processes through the definition of common guiding criteria to ensure that a mentor takes responsibility for developing the critical judgment of novice principals, which implies not limiting the mentor’s actions to assisting in compliance with prescriptive practices and standards; (iv) generating common evaluative models with goals, instruments and indicators consistent with the characteristics of mentoring processes, which helps to assess expected results and impact; and (v) including the design of a mentoring structure as an outcome of the induction programs, which helps sustain mentoring within schools as a collective professional development practice. Results showcased interwoven elements that entail continuous negotiations at different levels. Taking action on them will contribute to policy efforts aimed at professionalizing the leadership role in public schools.
Keywords: induction programs, mentoring, novice principals, school leadership preparation
Procedia PDF Downloads 127
83 Temporal and Spacial Adaptation Strategies in Aerodynamic Simulation of Bluff Bodies Using Vortex Particle Methods
Authors: Dario Milani, Guido Morgenthal
Abstract:
Fluid-dynamic computation of wind-induced forces on bluff bodies, e.g., light flexible civil structures or airplane wings at high incidence approaching the ground, is one of the major criteria governing their design. For such structures a significant dynamic response may result, requiring the use of small-scale devices such as guide vanes in bridge design to control these effects. The focus of this paper is the numerical simulation of the bluff body problem involving multiscale phenomena induced by small-scale devices. One solution method for the CFD simulation that is relatively successful in this class of applications is the Vortex Particle Method (VPM). The method is based on a grid-free Lagrangian formulation of the Navier-Stokes equations, where the velocity field is modeled by particles representing local vorticity. These vortices are convected by the free-stream velocity as well as diffused. This representation yields the main advantages of low numerical diffusion; compact discretization, as the vorticity is strongly localized; implicit accounting for the free-space boundary conditions typical of this class of FSI problems; and a natural representation of the vortex creation process inherent in bluff body flows. When the particle resolution reaches the Kolmogorov dissipation length, the method becomes a Direct Numerical Simulation (DNS). However, it is crucial to note that any solution method aims at balancing computational cost against achievable accuracy. In the classical VPM, if the fluid domain is discretized by Np particles, the computational cost is O(Np²). For the coupled FSI problem of interest, for example large structures such as long-span bridges, the aerodynamic behavior may be influenced or even dominated by small structural details such as barriers, handrails or fairings.
For such geometrically complex and dimensionally large structures, resolving the complete domain with the conventional VPM particle discretization may become prohibitively expensive to compute even for moderate numbers of particles. It is possible to reduce this cost either by reducing the number of particles or by controlling their local distribution. It is also possible to increase the accuracy of the solution, without substantially increasing the global computational cost, by computing a correction of the particle-particle interaction in certain regions of interest. In this paper, different strategies are presented to extend the conventional VPM so as to reduce the computational cost whilst resolving the required details of the flow. The methods include temporal sub-stepping, to increase the accuracy of the particle convection in certain regions, as well as dynamic re-discretization of the particle map to control both the global and the local number of particles. Finally, these methods are applied to a test case, and the resulting improvements in the efficiency as well as the accuracy of the proposed extensions are presented, together with their relevant applications.
Keywords: adaptation, fluid dynamics, remeshing, substepping, vortex particle method
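The O(Np²) cost quoted above comes from the all-pairs particle interaction. A minimal NumPy sketch of a smoothed 2-D Biot-Savart evaluation is shown below; the smoothing kernel, cut-off parameter and names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def biot_savart_2d(positions, circulations, delta=0.05):
    """Naive O(N^2) velocity evaluation for 2-D vortex particles using
    a smoothed Biot-Savart kernel with cut-off radius `delta`."""
    n = len(positions)
    velocities = np.zeros_like(positions)
    for i in range(n):  # outer loop over targets; inner sum vectorized
        dx = positions[i, 0] - positions[:, 0]
        dy = positions[i, 1] - positions[:, 1]
        r2 = dx**2 + dy**2 + delta**2          # smoothed squared distance
        # induced velocity of a point vortex: (u, v) = Gamma/(2*pi*r^2) * (-dy, dx)
        velocities[i, 0] = np.sum(-circulations * dy / (2.0 * np.pi * r2))
        velocities[i, 1] = np.sum(circulations * dx / (2.0 * np.pi * r2))
    return velocities

# two co-rotating vortices of equal circulation
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
print(biot_savart_2d(pos, np.array([1.0, 1.0])))
```

Sub-stepping and remeshing, as proposed in the paper, reduce how often and on how many particles this quadratic evaluation must be performed, rather than changing the evaluation itself.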
Procedia PDF Downloads 263
82 Deconstructing Reintegration Services for Survivors of Human Trafficking: A Feminist Analysis of Australian and Thai Government and Non-Government Responses
Authors: Jessica J. Gillies
Abstract:
Awareness of the tragedy that is human trafficking has increased exponentially over the past two decades. The four pillars widely recognised as global solutions to the problem are prevention, prosecution, protection, and partnership between government and non-government organisations. While ‘sex-trafficking’ initially received major attention, this focus has shifted to other industries that conceal broader experiences of exploitation. However, within the regions of focus for this study, namely Australia and Thailand, trafficking for the purpose of sexual exploitation remains the commonly uncovered narrative of criminal justice investigations. In these regions anti-trafficking action is characterised by government-led prevention and prosecution efforts; whereas protection and reintegration practices have received criticism. Typically, non-government organisations straddle the critical chasm between policy and practice; therefore, they are perfectly positioned to contribute valuable experiential knowledge toward understanding how both sectors can support survivors in the post-trafficking experience. The aim of this research is to inform improved partnerships throughout government and non-government post-trafficking services by illuminating gaps in protection and reintegration initiatives. This research will explore government and non-government responses to human trafficking in Thailand and Australia, in order to understand how meaning is constructed in this context and how the construction of meaning effects survivors in the post-trafficking experience. A qualitative, three-stage methodology was adopted for this study. The initial stage of enquiry consisted of a discursive analysis, in order to deconstruct the broader discourses surrounding human trafficking. The data included empirical papers, grey literature such as publicly available government and non-government reports, and anti-trafficking policy documents. 
The second and third stages of enquiry will attempt to further explore the findings of the discourse analysis and will focus more specifically on protection and reintegration in Australia and Thailand. Stages two and three will incorporate process observations in government and non-government survivor support services, and semi-structured interviews with employees and volunteers within these settings. Two key findings emerged from the discursive analysis. The first exposed conflicting feminist arguments embedded throughout anti-trafficking discourse. Informed by conflicting feminist discourses on sex work, a discursive relationship has been constructed between sex-industry policy and anti-trafficking policy. In response to this finding, data emerging from the process observations and semi-structured interviews will be interpreted using a feminist theoretical framework. The second finding builds on the construction identified in the first. The discursive construction of sex trafficking appears to have influenced perceptions of the legitimacy of survivors, and therefore the support they receive in the post-trafficking experience. For example, women who willingly migrate for employment in the sex industry and on arrival face exploitative conditions are not perceived as deserving of the same support as women who are not merely coerced but physically forced into such circumstances, yet both meet the criteria for victims of human trafficking. The forthcoming study is intended to contribute toward building knowledge and understanding of the implications of this construction of legitimacy, and to contextualise it in reference to government-led protection and reintegration support services for survivors in the post-trafficking experience.
Keywords: Australia, government, human trafficking, non-government, reintegration, Thailand
Procedia PDF Downloads 112
81 Psoriasis Diagnostic Test Development: Exploratory Study
Authors: Salam N. Abdo, Orien L. Tulp, George P. Einstein
Abstract:
The purpose of this exploratory study was to gather insights into psoriasis etiology, treatment, and patient experience for the development of a psoriasis and psoriatic arthritis diagnostic test. Data collection methods consisted of a comprehensive meta-analysis of relevant studies and a psoriasis patient survey. Established meta-analysis guidelines were used for the selection and qualitative comparative analysis of psoriasis and psoriatic arthritis research studies. Only studies that clearly discussed psoriasis etiology, treatment, and patient experience were reviewed and analyzed, to establish a qualitative database for the study. Using the insights gained from the meta-analysis, an existing psoriasis patient survey was modified and administered to collect additional data as well as to triangulate the results. The hypothesis is that specific types of psoriatic disease have a specific etiology and pathophysiologic pattern. The following etiology categories were identified: bacterial, environmental/microbial, genetic, immune, infectious, trauma/stress, and viral. Additional results, obtained from the meta-analysis and confirmed by the patient survey, were the common age of onset (early to mid-20s) and type of psoriasis (plaque; mild; symmetrical; scalp, chest, and extremities, specifically elbows and knees). Almost 70% of patients reported no prescription drug use due to severe side effects and prohibitive cost. These results will guide the development of the psoriasis and psoriatic arthritis diagnostic test. A significant number of medical publications classify psoriatic arthritis as an inflammatory disease of unknown etiology; numerous meta-analyses thus struggle to report meaningful conclusions, since no definitive results have been reported to date. Therefore, a return to the basics is an essential step toward any future meaningful results.
To date, the medical literature supports the view that psoriatic disease in its current classification may be misidentifying subcategories, which in turn hinders the success of studies conducted so far. Moreover, there has been enormous commercial support for pursuing various immune-modulation therapies, thus following a narrow hypothesis of mechanism of action that has yet to yield resolution of the disease state. Recurrence and complication rates may be considered unacceptable in a significant number of these studies. The aim of the ongoing study is to focus on a narrow subgroup of the patient population, as identified by this exploratory study via meta-analysis and patient survey, and to conduct an exhaustive work-up aimed at mechanism of action and causality before proposing a cure or therapeutic modality. Remission in psoriasis has been achieved and documented in the medical literature through, for example, immune modulation, phototherapy, and various over-the-counter agents, including salts and tar. However, there is no psoriasis and psoriatic arthritis diagnostic test to date to guide the diagnosis and treatment of this debilitating and, thus far, incurable disease. Because psoriasis affects approximately 2% of the population, the results of this study may affect the treatment and improve the quality of life of a significant number of psoriasis patients, potentially millions in the United States alone and many more millions worldwide.
Keywords: biologics, early diagnosis, etiology, immune disease, immune modulation therapy, inflammation skin disorder, phototherapy, plaque psoriasis, psoriasis, psoriasis classification, psoriasis disease marker, psoriasis diagnostic test, psoriasis marker, psoriasis mechanism of action, psoriasis treatment, psoriatic arthritis, psoriatic disease, psoriatic disease marker, psoriatic patient experience, psoriatic patient quality of life, remission, salt therapy, targeted immune therapy
Procedia PDF Downloads 119
80 Management of the Experts in the Research Evaluation System of the University: Based on National Research University Higher School of Economics Example
Authors: Alena Nesterenko, Svetlana Petrikova
Abstract:
Research evaluation is one of the most important elements of the self-regulation and development of researchers, as it is an impartial and independent assessment process. The method of expert evaluations, as a scientific instrument for solving complicated non-formalized problems, is firstly a scientifically sound way to conduct an assessment that maximizes the effectiveness of work at every step, and secondly a means of applying quantitative methods to the evaluation, assessment of expert opinion and collective processing of the results. These two features distinguish the method of expert evaluations from the long-established expertise widespread in many areas of knowledge. Different typical problems require different types of expert evaluation methods. Several issues arise with these methods: expert selection, management of the assessment procedure, processing of the results and remuneration of the experts. To address these issues, an on-line system was created with the primary purpose of developing a versatile application for many workgroups with matching approaches to scientific work management. The online documentation assessment and statistics system allows:
- To realize within one platform the independent activities of different workgroups (e.g. expert officers, managers).
- To establish different workspaces for the corresponding workgroups, where custom user databases can be created according to particular needs.
- To form the required output documents for each workgroup.
- To configure information gathering for each workgroup (forms of assessment, tests, inventories).
- To create and operate personal databases of remote users.
- To set up automatic notification through e-mail.
The next stage is the development of quantitative and qualitative criteria to form a database of experts.
The inventory was designed so that the experts submit not only their personal data, place of work and scientific degree but also keywords describing their expertise, academic interests, ORCID, Researcher ID, SPIN-code RSCI, Scopus AuthorID, knowledge of languages, and primary scientific publications. For each project, competition assessments are processed in accordance with the ordering party's demands, in the form of appraised inventories, commentaries (50-250 characters) and an overall review (1500 characters) in which the expert states the absence of any conflict of interest. Evaluation is conducted as follows: as applications are added to the database, the expert officer selects experts, generally two persons per application. Experts are selected according to the keywords; this method proved effective, unlike the OECD classifier. In the last stage, the choice of experts is approved by the supervisor, and e-mails are sent to the experts inviting them to assess the project. An expert supervisor monitors the experts' reports so that all formalities are in place (time frame, propriety, correspondence). If the difference between assessments exceeds four points, a third evaluation is appointed. When an expert finishes work on his expert opinion, the system shows a contract marked ‘new’; the managers process the contract, and the expert receives an e-mail stating that the contract is formed and ready to be signed. Once all formalities are concluded, the expert receives remuneration for his work. The specifics of the interaction of the expert officer with the experts will be presented in the report.
Keywords: expertise, management of research evaluation, method of expert evaluations, research evaluation
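The arbitration rule described above (a third evaluation is appointed when the two assessments differ by more than four points) reduces to a one-line check; a hypothetical sketch, with an invented function name not taken from the actual system:

```python
def needs_third_review(score_a, score_b, threshold=4):
    """Return True when two experts' scores diverge by more than
    `threshold` points, triggering a third evaluation. Name, signature
    and threshold semantics are illustrative assumptions."""
    return abs(score_a - score_b) > threshold

print(needs_third_review(3, 9))  # scores differ by 6 points
```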
Procedia PDF Downloads 208
79 The Prospects of Optimized KOH/Cellulose 'Papers' as Hierarchically Porous Electrode Materials for Supercapacitor Devices
Authors: Dina Ibrahim Abouelamaiem, Ana Jorge Sobrido, Magdalena Titirici, Paul R. Shearing, Daniel J. L. Brett
Abstract:
Global warming and the scarcity of fossil fuels have had a radical impact on the world economy and ecosystem. The urgent need for alternative energy sources has hence elicited extensive research into efficient and sustainable means of energy conversion and storage. Among various electrochemical systems, supercapacitors have attracted significant attention in the last decade due to their high power supply, long cycle life compared to batteries, and simple mechanism. Recently, the performance of these devices has drastically improved, as the tuning of nanomaterials has provided efficient charge and storage mechanisms. Carbon materials, in various forms, are believed to pioneer the next generation of supercapacitors due to their attractive properties, which include high electronic conductivity, high surface area and easy processing and functionalization. Cellulose has eco-friendly attributes that make it feasible to replace man-made fibers, and its carbonization yields carbons including activated carbon and graphite fibers. Activated carbons are in turn the most exploited candidates for supercapacitor electrode materials and can be complemented with pseudocapacitive materials to achieve high energy and power densities. In this work, the optimum functionalization conditions of cellulose have been investigated for supercapacitor electrode materials. The precursor was treated with potassium hydroxide (KOH) at different KOH/cellulose ratios prior to carbonization in an inert nitrogen atmosphere at 850 °C. The chalky products were washed, dried and characterized with different techniques, including transmission electron microscopy (TEM), X-ray tomography and nitrogen adsorption-desorption isotherms. The morphological characteristics and their effect on the electrochemical performance were investigated in two- and three-electrode systems.
The KOH/cellulose ratios of 0.5:1 and 1:1 exhibited the highest performance owing to their unique hierarchical porous network structure, high surface areas and low cell resistances. Both samples gave the best results in three-electrode systems and coin cells, with specific gravimetric capacitances as high as 187 F g⁻¹ and 20 F g⁻¹ at a current density of 1 A g⁻¹ and retention rates of 72% and 70%, respectively. This is attributed to the morphology of the samples, which consists of a well-balanced micro-, meso- and macro-porosity network. This study reveals that electrochemical performance does not depend solely on high surface area but also on an optimum pore size distribution, specifically at low current densities. The micro- and meso-pore contribution to the final pore structure was found to dominate at low KOH loadings, reaching ‘equilibrium’ with the macropores at the optimum KOH loading, after which macropores dictate the porous network. A wide range of pore sizes is critical for the mobility and penetration of electrolyte ions in the porous structures. These findings highlight the influence of various morphological factors on double-layer capacitance and high rate performance. In addition, they open a platform for investigating the optimized conditions for double-layer capacitance, which can be coupled with pseudocapacitive materials to yield higher energy densities and capacities.
Keywords: carbon, electrochemical performance, electrodes, KOH/cellulose optimized ratio, morphology, supercapacitor
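The reported gravimetric capacitances and retention rates follow the standard galvanostatic-discharge relations; a small sketch using the textbook formula C = I·Δt/(m·ΔV) (for illustration only; the paper's exact evaluation procedure is not stated in the abstract):

```python
def gravimetric_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v):
    """Specific capacitance in F/g from a galvanostatic discharge:
    C = I * dt / (m * dV)."""
    return current_a * discharge_time_s / (mass_g * voltage_window_v)

def retention_percent(c_final, c_initial):
    """Capacitance retention after cycling, in percent."""
    return 100.0 * c_final / c_initial

# e.g. a hypothetical 1 g electrode discharged at 1 A over a 1 V window in 187 s
print(gravimetric_capacitance(1.0, 187.0, 1.0, 1.0))  # 187.0 F/g
```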
Procedia PDF Downloads 221
78 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation
Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong
Abstract:
Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. It is known that CT images are inherently more prone to artefacts due to their image formation process, in which a large number of independent detectors are involved and assumed to yield consistent measurements. There are a number of different artefact types, including noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. Thus, it is desirable to remove nuisance factors from the degraded image, leaving the fundamental intrinsic information that can provide better interpretation of the anatomical and pathological characteristics. However, this is considered a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on the deep neural network framework in which denoising auto-encoders are stacked to build multiple layers. The denoising auto-encoder is a variant of the classical auto-encoder that takes an input and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction of the same size as the input data. The reconstruction error can be measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme using residual-driven dropout is applied, determined based on the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with the back-propagation algorithm.
In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors, including artefacts, based on the classical Total Variation problem, which can be efficiently optimized by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders along with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of the PSNR, and the qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to image interpretation tasks in a variety of medical imaging applications. This work was supported by the MSIP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation
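The decomposition step described above can be sketched as a Rudin-Osher-Fatemi (ROF) Total Variation minimization. For brevity, the primal-dual solver mentioned in the abstract is replaced here by plain gradient descent on a smoothed TV energy, so this is an illustrative approximation rather than the authors' implementation:

```python
import numpy as np

def tv_decompose(img, lam=0.2, step=0.1, iters=100, eps=1e-8):
    """Split img into an intrinsic (piecewise-smooth) part u and a residual
    v = img - u by gradient descent on the smoothed ROF energy
    E(u) = 0.5 * ||u - img||^2 + lam * sum sqrt(|grad u|^2 + eps)."""
    f = img.astype(float)
    u = f.copy()
    for _ in range(iters):
        # forward differences (periodic boundary via roll)
        gx = np.roll(u, -1, axis=1) - u
        gy = np.roll(u, -1, axis=0) - u
        norm = np.sqrt(gx**2 + gy**2 + eps)
        px, py = gx / norm, gy / norm
        # backward-difference divergence of the normalized gradient field
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - f) - lam * div)
    return u, f - u  # intrinsic part, nuisance residual
```

In the pipeline above, u would be fed to the stacked denoising auto-encoders, while the residual v carries the artefact-like nuisance component.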
Procedia PDF Downloads 190
77 Reviving the Past, Enhancing the Future: Preservation of Urban Heritage Connectivity as a Tool for Developing Liveability in Historical Cities in Jordan, Using Salt City as a Case Study
Authors: Sahar Yousef, Chantelle Niblock, Gul Kacmaz
Abstract:
Salt City, in the context of Jordan’s heritage landscape, is a significant case to explore when it comes to the interaction between the tangible and intangible qualities of liveable cities. Most city centers, including Jerash, Salt, Irbid, and Amman, are historical locations. Six of these extraordinary sites have been designated UNESCO World Heritage Sites. Jordan is widely acknowledged as a developing country characterized by swift urbanization and unrestrained expansion, which exacerbate the challenges associated with the preservation of historic urban areas. The aim of this study is to examine and analyze the existing condition of heritage connectivity within heritage city centers, including outdoor staircases, pedestrian pathways, footpaths, and other public spaces. A case-study-style analysis of the urban core of As-Salt is the focus of this investigation. Salt City is widely acknowledged for its substantial tangible and intangible cultural heritage and was designated ‘The Place of Tolerance and Urban Hospitality’ by UNESCO in 2021. Liveability in urban heritage, particularly in historic city centers, incorporates several factors that affect our well-being; its enhancement is a critical issue in contemporary society. The dynamic interaction between humans and historical materials, which serves as a vehicle for the expression of their identity and historical narrative, constitutes preservation that transcends simple conservation. This form of engagement enables people to appreciate the diversity of their heritage, recognising both their past and their envisioned futures. Heritage preservation is inextricably linked to a larger physical and emotional context; therefore, it is difficult to examine it in isolation. Urban environments, including roads, structures, and other infrastructure, are undergoing unprecedented physical design and construction requirements.
Concurrently, heritage reinforces a sense of affiliation with a particular location or space and unifies individuals with their ancestry, thereby defining their identity. However, a considerable body of research has focused on the conservation of heritage buildings in a fragmented manner without considering their integration within a holistic urban context. Insufficient attention is given to the significance of the physical and social roles played by the heritage staircases and paths that serve as connectors between these valued historical buildings. Given that liveability is a complex matter with several dimensions, the research uses a consensus-based methodology. The discussion starts by making initial observations on the physical context and societal norms inside the urban center while simultaneously establishing the definitions of liveability and connectivity and examining the key criteria associated with these concepts. It then identifies the key elements that contribute to liveable connectivity within the framework of urban heritage in Jordanian city centers. Some of the outcomes that will be discussed in the presentation are: (1) There is not enough connectivity between heritage buildings, as can be seen, for example, between buildings in Jada and Qala'. (2) Most of the outdoor spaces suffer from physical issues that hinder their use by the public, as in Salalem. (3) Existing activities in the city center are not well attended because of a lack of communication between the organisers and the citizens.
Keywords: connectivity, Jordan, liveability, Salt City, tangible and intangible heritage, urban heritage
Procedia PDF Downloads 72
76 Mixed Mode Fracture Analyses Using Finite Element Method of Edge Cracked Heavy Annulus Pulley
Authors: Bijit Kalita, K. V. N. Surendra
Abstract:
The pulley works under both compressive loading, due to the contacting belt in tension, and a central torque that causes rotation. In a power transmission system, the belt pulley assemblies present a contact problem in the form of two mating cylindrical parts. In this work, we modeled a pulley as a heavy two-dimensional circular disk. Stress analysis due to contact loading in the pulley mechanism is performed. Finite element analysis (FEA) is conducted for a pulley to investigate the stresses experienced on its inner and outer periphery. The belt drive is the most frequently used mechanism to transmit power in heavy-duty applications such as automotive engines and industrial machines. Usually, very heavy circular disks are used as pulleys. A pulley may also be called a drum and may have a groove between two flanges around its circumference. A rope, belt, cable or chain can be the driving element of a pulley system that runs over the pulley inside the groove. A pulley experiences normal and shear tractions on its contact region in the process of motion transmission. The region may be the belt-pulley contact surface or the pulley-shaft contact surface. In 1895, Hertz solved the elastic contact problem for point contact and line contact of ideally smooth objects. Since then, this theory has generally been used for computing the actual contact zone. Detailed stress analysis in the contact region of such pulleys is quite necessary to prevent early failure. In this paper, the results of the finite element analyses carried out on the compressed disk of a belt pulley arrangement using fracture mechanics concepts are shown. Based on the literature on contact stress problems arising in a wide field of applications, the stress distribution generated on the shaft-pulley and belt-pulley interfaces due to the application of high tension and torque was evaluated in this study using FEA concepts.
Finally, the results obtained from ANSYS (APDL) were compared with Hertzian contact theory. The study is mainly focused on the fatigue life estimation of a rotating part as a component of an engine assembly using the well-known Paris law. Digital Image Correlation (DIC) analyses have been performed using open-source software. From the displacements computed using the images acquired at minimum and maximum force, the displacement field amplitude is computed. From these fields, the crack path is defined, and the stress intensity factors and crack tip position are extracted. A non-linear least-squares projection is used to estimate fatigue crack growth. Further study will extend to various applications of rotating machinery, such as rotating flywheel disks, jet engines, compressor disks and roller disk cutters, where the Stress Intensity Factor (SIF) calculation plays a significant role in the accuracy and reliability of a safe design. Additionally, this study will be extended to predict crack propagation in the pulley using the maximum tangential stress (MTS) criterion for mixed mode fracture.
Keywords: crack-tip deformations, contact stress, stress concentration, stress intensity factor
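Fatigue life estimation with the Paris law da/dN = C (ΔK)^m, where ΔK = Y Δσ √(πa), amounts to integrating the number of cycles from an initial to a critical crack length. A minimal numerical sketch; the material constants used below are illustrative, not values from this study:

```python
import math

def paris_life(a0, ac, dsigma, C, m, Y=1.0, steps=100_000):
    """Cycles to grow a crack from a0 to ac under the Paris law
    da/dN = C * (dK)^m with dK = Y * dsigma * sqrt(pi * a).
    Midpoint-rule integration of N = integral of da / (C * dK^m)."""
    da = (ac - a0) / steps
    cycles, a = 0.0, a0
    for _ in range(steps):
        dk = Y * dsigma * math.sqrt(math.pi * (a + 0.5 * da))
        cycles += da / (C * dk**m)
        a += da
    return cycles
```

For m ≠ 2 the integral also has the closed form N = 2 (a0^(1-m/2) - ac^(1-m/2)) / ((m-2) C (Y Δσ √π)^m), which is convenient for checking the numerical result.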
Procedia PDF Downloads 125
75 Thematic Analysis of Ramayana Narrative Scroll Paintings: A Need for Knowledge Preservation
Authors: Shatarupa Thakurta Roy
Abstract:
Alongside the limelight of mainstream academic practice in Indian art exists a significant body of habitual art practices that are mutually susceptible in their contemporary forms. Narrative folk paintings of regional India have successfully conveyed social messages to their audiences through vivid pictures and oration. The paper presents images from narrative scroll paintings on the ‘Ramayana’ theme from various neighboring states and districts in India, describing their subtle differences in style of execution, method, and use of material. Despite sharing commonness in the choice of subject matter, habitual and ceremonial Indian folk art in its formative phase thrived within isolated locations, yielding remarkable variety in art styles. The differences in style arose district-wise, caste-wise and even gender-wise. An open flow is only evident in the contemporary expressions, as a result of substantial changes in social structures, modes of communication, cross-cultural exposure and multimedia interactivity. To decipher the complex nature of the popular cultural taste of contemporary India, it is important to categorically identify its root in vernacular symbolism. The realization of modernity through European primitivism was elevated as a perplexed identity in the Indian cultural margin in the light of nationalist and postcolonial ideology. To trace the guiding factor that has still managed to retain ‘Indianness’ in today’s Indian art, researchers need evidence from the past that is yet to be listed in most instances. The artworks are commonly created on ephemeral foundations, are often found in an endangered state, and are hence not suited to frequent handling. Museums lack proper technological guidelines to preserve them. Even though restoration activities are emerging in the country, the existing withered and damaged artworks are at threat of perishing.
An immediacy of digital archiving is therefore envisioned as an alternative to save this cultural legacy. The method of this study is two-fold. It primarily justifies the richness of the evidence by conducting categorical aesthetic analysis. The study is supported by comments on the stylistic variants, thematic aspects, and iconographic identities alongside their anthropological and anthropomorphic significance. Further, it explores possible ways of cultural preservation to ensure cultural sustainability, including technological intervention in the form of digital transformation as an altered paradigm for better accessibility to the available resources. The study duly emphasizes visual description in order to culturally interpret and judge the rare visual evidence, following Feldman’s four-step method of formal analysis combined with thematic explanation. A habitual design that emerges and thrives within complex social circumstances may experience change, placing its principal philosophy at risk through shuffling and alteration over time. A tradition that respires in the modern setup struggles to maintain the timeless values that drive its creative flow. Thus, the paper hypothesizes the survival and further growth of this practice within the dynamics of time and concludes by realizing the urgency of transforming the implicitness of its knowledge into explicit records.
Keywords: aesthetic, identity, implicitness, paradigm
Procedia PDF Downloads 371
74 The Istrian Istrovenetian-Croatian Bilingual Corpus
Authors: Nada Poropat Jeletic, Gordana Hrzica
Abstract:
Bilingual conversational corpora represent a meaningful and the most comprehensive data source for investigating genuine contact phenomena in non-monitored bilingual speech production. They can be particularly useful for bilingual research since some features of bilingual interaction can hardly be accessed with more traditional methodologies (e.g., elicitation tasks). The method of language sampling provides the resources for describing language interaction in a bilingual community and/or in bilingual situations (e.g., code-switching, amount of each language used, number of languages used, etc.). To capture these phenomena in genuine communication situations, such sampling should be as close as possible to spontaneous communication. Bilingual spoken corpus design is methodologically demanding. Therefore, this paper aims at describing the methodological challenges that apply to the design of the conversational Istrian Istrovenetian-Croatian Bilingual Corpus. Croatian is the first official language of the Croatian-Italian officially bilingual Istria County, while Istrovenetian is a diatopic subvariety of Venetian, a long-lasting lingua franca in the Istrian peninsula, the mother tongue of the members of the Italian National Community in Istria and the primary code of informal everyday communication among the Istrian Italophone population. Within the CLARIN infrastructure, TalkBank is being used, as it provides relevant procedures for designing and analyzing bilingual corpora. Furthermore, its public availability allows for easy replication of studies and cumulative progress as a research community builds up around the corpus, while the tools developed within the field of corpus linguistics enable easy retrieval and analysis of information. The method of language sampling employed is kept at the level of spontaneous communication, in order to maximise the naturalness of the collected conversational data.
All speakers have provided written informed consent in which they agree to be recorded at a random point within the period of one month after signing the consent. Participants are administered a background questionnaire providing information about their socioeconomic status and the exposure to and usage of languages in the participants’ social networks. The recorded data are being transcribed, phonologically adapted within a standard-sized orthographic form, coded and segmented (speech streams are segmented into communication units based on syntactic criteria), and marked following the CHAT transcription system and its associated CLAN suite of programmes within the TalkBank toolkit. The corpus currently consists of transcribed sound recordings of 36 bilingual speakers, while the target is to publish the whole corpus by the end of 2020 by sampling spontaneous conversations among approximately 100 speakers from all the bilingual areas of Istria to ensure representativeness (the participants are being recruited across three generations of native bilingual speakers in all the bilingual areas of the peninsula). Conversational corpora are still rare in TalkBank, so the Corpus will contribute to BilingBank as a highly relevant and scientifically reliable resource for an internationally established and active research community. The research on communities with societal bilingualism will contribute to the growing body of research on bilingualism and multilingualism, especially regarding topics such as language dominance, language attrition and loss, interference, and code-switching.
Keywords: conversational corpora, bilingual corpora, code-switching, language sampling, corpus design methodology
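Once utterances are transcribed in CHAT, simple scripts over the exported lines can quantify contact phenomena such as code-switching. As a hedged illustration (the sample utterance and the reliance on the @s embedded-language suffix are constructed for this sketch, not corpus data), a switch-point counter might look like:

```python
import re

def count_switch_points(utterance, matrix="ven"):
    """Count intra-utterance switch points in a CHAT-style line where
    embedded-language words carry an @s:lang suffix (e.g. 'dobro@s:hrv');
    unmarked words are assumed to belong to the matrix language."""
    langs = []
    for word in utterance.split():
        match = re.search(r"@s:(\w+)", word)
        langs.append(match.group(1) if match else matrix)
    # a switch point is any boundary between words of different languages
    return sum(1 for a, b in zip(langs, langs[1:]) if a != b)
```

Aggregating such counts per speaker and per conversation is one way to operationalize the "amount of each language used" measure mentioned above.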
Procedia PDF Downloads 146
73 Illness-Related PTSD Among Type 1 Diabetes Patients
Authors: Omer Zvi Shaked, Amir Tirosh
Abstract:
Type 1 Diabetes (T1DM) is an incurable chronic illness with no known preventive measures. Excess insulin therapy can lead to hypoglycemia, with neuroglycopenic symptoms such as shakiness, nausea, sweating, irritability, fatigue, excessive thirst or hunger, weakness, seizure, and coma. Severe Hypoglycemia (SH) is also considered a highly aversive event, since it may put patients at risk of injury and death, which matches the criteria of a traumatic event. SH has a prevalence of around 20%, which makes it a primary medical issue. One of the results of SH is an intense emotional fear reaction resembling post-traumatic stress symptoms (PTS), causing many patients to avoid insulin therapy and social activities in order to avoid the possibility of hypoglycemia. As a result, they are at risk of irreversible health deterioration and medical complications. Fear of Hypoglycemia (FOH) is, therefore, a major disturbance for T1DM patients. FOH differs from prevalent post-traumatic stress reactions to other forms of traumatic events, since the threat to life continuously exists in the patient's body. It is therefore highly probable that orthodox interventions are not sufficient to help patients after SH regain healthy social function and proper medical treatment. Accordingly, the current presentation will demonstrate the results of a study conducted among T1DM patients after SH. The study was designed in two stages. First, a preliminary qualitative phenomenological study among ten patients after SH was conducted.
Analysis revealed that after SH, patients confuse stress symptoms with hypoglycemia symptoms, divide life into before and after the event, report a constant sense of fear, a loss of freedom, a significant decrease in social functioning, a catastrophic thinking pattern, a dichotomous split between the self and the body, an internalization of illness identity, a loss of internal locus of control, a damaged self-representation, and severe loneliness at never being understood by others. The second stage was a two-step intervention study among five patients after SH. The first part of the intervention included three months of third-wave CBT therapy. The contents of the therapeutic process were: acceptance of fear and tolerance of stress; cognitive defusion combined with emotional self-regulation; the adoption of an active position relying on personal values; and self-compassion. Then, the intervention included one week of practical real-time 24/7 support by trained medical personnel, alongside gradual exposure to increased insulin therapy in a protected environment. The results of the intervention are a decrease in stress symptoms, increased social functioning, increased well-being, and decreased avoidance of medical treatment. The presentation will discuss the unique emotional state of T1DM patients after SH and the effectiveness of the intervention for patients with chronic conditions after a traumatic event. The presentation will make evident the unique situation of illness-related PTSD and will also demonstrate the need for multi-professional collaboration between social work and medical care for populations with chronic medical conditions. Limitations of the study and recommendations for further research will be discussed.
Keywords: type 1 diabetes, chronic illness, post-traumatic stress, illness-related PTSD
Procedia PDF Downloads 177
72 Tangible Losses, Intangible Traumas: Re-envisioning Recovery Following the Lytton Creek Fire 2021 through Place Attachment Lens
Authors: Tugba Altin
Abstract:
In an era marked by pronounced climate change consequences, communities confront traumatic events that yield both tangible and intangible repercussions. Such events not only cause discernible damage to the landscape but also deeply affect intangible aspects, including emotional distress and disruptions to cultural landscapes. The Lytton Creek Fire of 2021 serves as a case in point. Beyond the visible destruction, the less overt but profoundly impactful disturbance to place attachment (PA) is scrutinized. PA, representing the emotional and cognitive bonds individuals establish with their environments, is crucial for understanding how such events impact cultural identity and connection to the land. The study underscores the significance of addressing both tangible and intangible traumas for holistic community recovery. As communities renegotiate their affiliations with altered environments, the cultural landscape emerges as instrumental in shaping place-based identities. This renewed understanding is pivotal for reshaping adaptation planning. The research advocates for adaptation strategies rooted in the lived experiences and testimonies of the affected populations. By incorporating both the tangible and intangible facets of trauma, planning efforts can be more culturally attuned and emotionally insightful, fostering true resonance with the affected communities. Through such a comprehensive lens, this study enriches the climate change discourse, emphasizing the intertwined nature of tangible recovery and the imperative of emotional and cultural healing after environmental disasters. Following the pronounced aftermath of the Lytton Creek Fire in 2021, the research aims to understand in depth its impact on place attachment (PA), encompassing the emotional and cognitive bonds individuals form with their environments.
The interpretive phenomenological approach, enriched by a hermeneutic framework, is adopted, emphasizing the experiences of the Lytton community and co-researchers. Phenomenology informed the understanding of 'place' as the focal point of attachment, providing insights into its formation and evolution after traumatic events. Data collection departs from conventional methods. Instead of traditional interviews, walking audio sessions and photo elicitation methods are utilized. These allow co-researchers to immerse themselves in the environment, re-experience, and articulate memories and feelings in real time. Walking audio facilitates reflections on spatial narratives post-trauma, while photo voice captures intangible emotions, enabling the visualization of place-based experiences. The analysis is collaborative, ensuring the co-researchers' experiences and interpretations are central. Emphasizing their agency in knowledge production, the process is rigorous, facilitated by the blend of interpretive phenomenology and hermeneutic insights. The findings underscore the need for adaptation and recovery efforts to address emotional traumas alongside tangible damages. By exploring PA post-disaster, the research not only fills a significant gap but also advocates for an inclusive approach to community recovery. Furthermore, the participatory methodologies employed challenge traditional research paradigms, heralding potential shifts in qualitative research norms.
Keywords: wildfire recovery, place attachment, trauma recovery, cultural landscape, visual methodologies
Procedia PDF Downloads 95
71 Advances and Challenges in Assessing Students’ Learning Competencies in 21st Century Higher Education
Authors: O. Zlatkin-Troitschanskaia, J. Fischer, C. Lautenbach, H. A. Pant
Abstract:
In 21st century higher education (HE), the diversity among students has increased in recent years due to internationalization and higher mobility. Offering and providing equal and fair opportunities based on students’ individual skills and abilities, instead of their social or cultural background, is one of the major aims of HE. In this context, valid, objective and transparent assessments of students’ preconditions and academic competencies in HE are required. However, as analyses of the current state of research and practice show, a substantial research gap on assessment practices in HE still exists, calling for the development of effective solutions. These demands lead to significant conceptual and methodological challenges. Funded by the German Federal Ministry of Education and Research, the research program 'Modeling and Measuring Competencies in Higher Education – Validation and Methodological Challenges' (KoKoHs) focusses on addressing these challenges in HE assessment practice by modeling and validating objective test instruments. Comprising 16 cross-university collaborative projects, the Germany-wide research program contributes to bridging the research gap in current assessment research and practice by concentrating on practical and policy-related challenges of assessment in HE. In this paper, we present a differentiated overview of existing assessments in HE at the national and international level. Based on the state of research, we describe the theoretical and conceptual framework of the KoKoHs Program as well as the results of the validation studies, including their key outcomes. More precisely, this includes an insight into more than 40 developed assessments covering a broad range of transparent and objective methods for validly measuring domain-specific and generic knowledge and skills in five major study areas (Economics, Social Science, Teacher Education, Medicine and Psychology).
Computer-, video- and simulation-based instruments have been applied and validated to measure over 20,000 students at the beginning, middle and end of their (bachelor and master) studies at more than 300 HE institutions throughout Germany, or during their practical training phase, traineeship or occupation. Focussing on the validity of the assessments, all test instruments have been analyzed comprehensively, using a broad range of methods and observing the validity criteria of the Standards for Educational and Psychological Testing developed by the American Educational Research Association, the American Psychological Association and the National Council on Measurement in Education. The results of the developed assessments presented in this paper provide valuable outcomes for predicting students’ skills and abilities at the beginning and the end of their studies, as well as their learning development and performance. This allows for a differentiated view of the diversity among students. Based on the research results, practical implications and recommendations are formulated. In particular, appropriate and effective learning opportunities for students can be created to support their learning development, promote their individual potential, and reduce knowledge and skill gaps. Overall, the presented research on competency assessment is highly relevant to national and international HE practice.
Keywords: 21st century skills, academic competencies, innovative assessments, KoKoHs
Procedia PDF Downloads 142
70 A Parallel Cellular Automaton Model of Tumor Growth for Multicore and GPU Programming
Authors: Manuel I. Capel, Antonio Tomeu, Alberto Salguero
Abstract:
Tumor growth from a transformed cancer cell up to a clinically apparent mass spans a range of spatial and temporal magnitudes. Through computer simulations, Cellular Automata (CA) can accurately describe the complexity of the development of tumors. Tumor development prognosis can now be made, without making patients undergo annoying medical examinations or painful invasive procedures, if we develop appropriate CA-based software tools. In silico testing mainly refers to Computational Biology research studies applicable to clinical actions in Medicine. Establishing sound computer-based models of cellular behavior certainly reduces costs and saves precious time with respect to carrying out experiments in vitro at labs or in vivo with living cells and organisms. These aim to produce scientifically relevant results compared to traditional in vitro testing, which is slow, expensive, and does not generally have acceptable reproducibility under the same conditions. For speeding up computer simulations of cellular models, the specific literature shows recent proposals based on the CA approach that include advanced techniques, such as the clever use of efficient supporting data structures when modeling with deterministic or stochastic cellular automata. Multiparadigm and multiscale simulation of tumor dynamics is just beginning to be developed by the concerned research community. The use of stochastic cellular automata (SCA), whose parallel programming implementations are open to yielding high computational performance, is of much interest to explore up to its computational limits. There have been some approaches based on optimizations to advance multiparadigm models of tumor growth, which mainly pursue improved performance of these models through guaranteed efficient memory accesses, or by considering the dynamic evolution of the memory space (grids, trees, …) that holds crucial data in simulations.
In our opinion, the different optimizations mentioned above are not decisive enough to achieve the high-performance computing power that cell-behavior simulation programs actually need. The possibility of using multicore and GPU parallelism as a promising multiplatform framework to develop new programming techniques to speed up the computation time of simulations has only started to be explored in the last few years. This paper presents a model that incorporates parallel processing, identifying the synchronization necessary for speeding up tumor growth simulations implemented in Java and C++ programming environments. The speed-up provided by specific parallel syntactic constructs, such as executors (thread pools) in Java, is studied. The new parallel tumor growth model is tested using implementations in the Java and C++ languages on two different platforms: an Intel Core i-X chipset and an HPC cluster of processors at our university. The parallelization of the Poleszczuk and Enderling model (normally used by researchers in mathematical oncology) proposed here is analyzed with respect to performance gain. We intend to apply the model and overall parallelization technique presented here to solid tumors of specific affiliation such as prostate, breast, or colon. Our final objective is to set up a multiparadigm model capable of modelling angiogenesis, the growth inhibition induced by chemotaxis, and the effect of therapies based on the presence of cytotoxic/cytostatic drugs.
Keywords: cellular automaton, tumor growth model, simulation, multicore and manycore programming, parallel programming, high performance computing, speed up
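The synchronization pattern identified above, computing each new generation strictly from the previous one so that grid regions can be updated independently by a thread pool, can be sketched briefly. The growth rule below is a deliberately simple deterministic toy (an empty cell becomes occupied if any Moore neighbour is occupied), not the Poleszczuk and Enderling model; the point is the double-buffered, row-partitioned update analogous to Java executors:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def _next_row(args):
    """Next state of one row, read entirely from the old grid (double buffering)."""
    grid, y = args
    h, w = grid.shape
    row = grid[y].copy()
    for x in range(w):
        if grid[y, x] == 0:
            # Moore neighbourhood clipped at the grid border
            window = grid[max(y - 1, 0):min(y + 2, h), max(x - 1, 0):min(x + 2, w)]
            if (window == 1).any():
                row[x] = 1
    return y, row

def step(grid, workers=4):
    """One synchronous CA step: rows are independent given the old grid,
    so they can be dispatched to a thread pool."""
    new = np.empty_like(grid)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for y, row in pool.map(_next_row, ((grid, y) for y in range(grid.shape[0]))):
            new[y] = row
    return new

# Grow a toy tumor from a single transformed cell
grid = np.zeros((5, 5), dtype=np.uint8)
grid[2, 2] = 1
grid = step(grid)
print(int(grid.sum()))  # 9: the seed plus its eight Moore neighbours
```

In a real stochastic model each worker would additionally need its own random number stream; the double-buffering itself is what keeps the update order-independent.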
69 International Indigenous Employment Empirical Research: A Community-Based Participatory Research Content Analysis
Authors: Melanie Grier, Adam Murry
Abstract:
Objective: Worldwide, Indigenous Peoples experience underemployment and poverty at disproportionately higher rates than non-Indigenous people, despite similar rates of employment seeking. Euro-colonial conquest and genocidal assimilation policies are implicated as perpetuating poverty, which research consistently links to health and wellbeing disparities. Many of the contributors to poverty, such as inadequate income and lack of access to medical care, can be directly or indirectly linked to underemployment. Calls have been made to prioritize Indigenous perspectives in Industrial-Organizational (I/O) psychology research, yet the literature on Indigenous employment remains scarce. What does exist is disciplinarily diverse, topically scattered, and lacking evidence of community-based participatory research (CBPR) practices, a research project approach which prioritizes community leadership, partnership, and betterment and reduces the potential for harm. Due to the harmful colonial legacy of extractive scientific inquiry "on" rather than "with" Indigenous groups, Indigenous leaders and research funding agencies advocate for academic researchers to adopt reparative research methodologies such as CBPR to be used when studying issues pertaining to Indigenous Peoples or individuals. However, the frequency and consistency of CBPR implementation within scholarly discourse are unknown. Therefore, this project’s goal is two-fold: (1) to understand what comprises CBPR in Indigenous research and (2) to determine if CBPR has been historically used in Indigenous employment research. Method: Using a systematic literature review process, sixteen articles about CBPR use with Indigenous groups were selected, and content was analyzed to identify key components comprising CBPR usage. An Indigenous CBPR components framework was constructed and subsequently utilized to analyze the Indigenous employment empirical literature. 
A similar systematic literature review process was followed to search for relevant empirical articles on Indigenous employment. A total of 120 articles were identified across six global regions: Australia, New Zealand, Canada, America, the Pacific Islands, and Greenland/Norway. Each empirical study was procedurally examined and coded for inclusion of the criteria using content analysis directives. Results: Analysis revealed that, in total, CBPR elements were used 14% of the time in Indigenous employment research. Most studies (n=69; 58%) neglected to mention using any CBPR components, while just two studies (2%) discussed implementing all sixteen. The most significant determinant of overall CBPR use was community member partnership (CP) in the research process. Studies from New Zealand were most likely to use CBPR components, followed by Canada, Australia, and America. While CBPR use did increase slowly over time, meaningful temporal trends were not found. Further, CBPR use did not directly correspond with the total number of topical articles published that year. Conclusions: Community-initiated and engaged research approaches must be better utilized in employment studies involving Indigenous Peoples. Future research efforts must be particularly attentive to community-driven objectives and research protocols, emphasizing specific areas of concern relevant to the field of I/O psychology, such as organizational support, recruitment, and selection.
Keywords: community-based participatory research, content analysis, employment, indigenous research, international, reconciliation, recruitment, reparative research, selection, systematic literature review
68 The Dark History of American Psychiatry: Racism and Ethical Provider Responsibility
Authors: Mary Katherine Hoth
Abstract:
Despite racial and ethnic disparities in American psychiatry being well-documented, there remains an apathetic attitude among nurses and providers within the field to engage in active antiracism and provide equitable, recovery-oriented care. It is insufficient to be a “colorblind” nurse or provider and state that all care provided is identical for every patient. Maintaining an attitude of “colorblindness” perpetuates the racism prevalent throughout healthcare and leads to negative patient outcomes. The purpose of this literature review is to highlight how the historical beginnings of psychiatry have evolved into the disparities seen in today’s practice, as well as to provide some insight into methods that providers and nurses can employ to actively participate in challenging these racial disparities. Background: The application of psychiatric medicine to White people versus Black, Indigenous, and other People of Color has been distinctly different as a direct result of chattel slavery and the development of pseudoscientific “diagnoses” in the 19th century. This weaponization of the mental health of Black people continues to this day. Population: The populations discussed are Black, Indigenous, and other People of Color, with a primary focus on Black people’s experiences with their mental health and the field of psychiatry. Methods: A literature review was conducted using the CINAHL, EBSCO, MEDLINE, and PubMed databases with the following terms: psychiatry, mental health, racism, substance use, suicide, trauma-informed care, disparities, and recovery-oriented care. Articles were further filtered based on meeting the criteria of peer-reviewed, full-text availability, written in English, and published between 2018 and 2023. Findings: Black patients are more likely to be diagnosed with psychotic disorders and prescribed antipsychotic medications compared to White patients, who are more often diagnosed with mood disorders and prescribed antidepressants. 
This same disparity is also seen in children and adolescents, where Black children are more likely to be diagnosed with behavior problems such as Oppositional Defiant Disorder (ODD), while White children with the same presentation are more likely to be diagnosed with Attention-Deficit/Hyperactivity Disorder. Medication advertisements for antipsychotics like Haldol as recently as 1974 portrayed a Black man labeled as “agitated” and “aggressive”, a trope we still see today in police violence cases. The majority of nursing and medical school programs do not provide education on racism and how to actively combat it in practice, leaving many healthcare professionals acutely uneducated and unaware of their own biases and racism, as well as of structural and institutional racism. Conclusions: Racism will continue to grow wherever it is given time, space, and energy. Providers and nurses have an ethical obligation to educate themselves, actively deconstruct their personal racism and bias, and continuously engage in active antiracism by dismantling racism wherever it is encountered, be it structural, institutional, or scientific racism. Agents of change at the patient care level not only improve the outcomes of Black patients but will also lead the way in ensuring that Black, Indigenous, and other People of Color are included in research on methods and medications in psychiatry in the future.
Keywords: disparities, psychiatry, racism, recovery-oriented care, trauma-informed care
67 Assessing Image Quality in Mobile Radiography: A Phantom-Based Evaluation of a New Lightweight Mobile X-Ray Equipment
Authors: May Bazzi, Shafik Tokmaj, Younes Saberi, Mats Geijer, Tony Jurkiewicz, Patrik Sund, Anna Bjällmark
Abstract:
Mobile radiography, employing portable X-ray equipment, has become a routine procedure within hospital settings, with chest X-rays in intensive care units standing out as the most prevalent mobile X-ray examinations. This approach is not limited to hospitals alone, as it extends its benefits to imaging patients in various settings, particularly those too frail to be transported, such as elderly care residents in nursing homes. Moreover, the utility of mobile X-ray isn't confined solely to traditional healthcare recipients; it has proven to be a valuable resource for vulnerable populations, including the homeless, drug users, asylum seekers, and patients with multiple co-morbidities. Mobile X-rays reduce patient stress, minimize costly hospitalizations, and offer cost-effective imaging. While studies confirm its reliability, further research is needed, especially regarding image quality. Recent advancements in lightweight equipment with enhanced battery and detector technology provide the potential for nearly handheld radiography. The main aim of this study was to evaluate a new lightweight mobile X-ray system with two different detectors and compare the image quality with a modern stationary system. Methods: A total of 74 images of the chest (chest anterior-posterior (AP) views and chest lateral views) and pelvic/hip region (AP pelvis views, hip AP views, and hip cross-table lateral views) were acquired on a whole-body phantom (Kyotokagaku, Japan), utilizing varying image parameters. These images were obtained using a stationary system - 18 images (Mediel, Sweden), a mobile X-ray system with a second-generation detector - 28 images (FDR D-EVO II; Fujifilm, Japan) and a mobile X-ray system with a third-generation detector - 28 images (FDR D-EVO III; Fujifilm, Japan). Image quality was assessed by visual grading analysis (VGA), which is a method to measure image quality by assessing the visibility and accurate reproduction of anatomical structures within the images. 
A total of 33 image criteria were used in the analysis. A panel of two experienced radiologists, two experienced radiographers, and two final-term radiographer students evaluated the image quality on a 5-grade ordinal scale using the software ViewDEX 3.0 (Viewer for Digital Evaluation of X-ray images, Sweden). Data were analyzed using visual grading characteristics (VGC) analysis. The dose was measured by the dose-area product (DAP) reported by the respective systems. Results: The mobile X-ray equipment (both detectors) showed significantly better image quality than the stationary equipment for the pelvis, hip AP, and hip cross-table lateral images, with AUC_VGA values ranging from 0.64 to 0.92, while chest images showed mixed results. The number of images rated as having sufficient quality for diagnostic use was significantly higher for mobile X-ray generations 2 and 3 compared with the stationary X-ray system. The DAP values were higher for the stationary than for the mobile system. Conclusions: The new lightweight radiographic equipment had an image quality at least as good as that of a fixed system, at a lower radiation dose. Future studies should focus on clinical images and consider radiographers' viewpoints for a comprehensive assessment.
Keywords: mobile x-ray, visual grading analysis, radiographer, radiation dose
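The AUC_VGA figures above come from visual grading characteristics analysis, which compares the ordinal ratings given to one system against another much as an ROC analysis compares test scores. A minimal sketch of the underlying nonparametric area computation (the example ratings are hypothetical; real VGC software also provides confidence intervals, which this omits):

```python
def vgc_auc(ratings_a, ratings_b):
    """Nonparametric area under the VGC curve: the probability that a
    randomly drawn ordinal rating for system A exceeds one for system B,
    counting ties as 1/2 (equivalent to a Mann-Whitney U statistic).
    0.5 means the systems are rated equally; >0.5 favors system A."""
    pairs = [(a, b) for a in ratings_a for b in ratings_b]
    favorable = sum(1.0 if a > b else 0.5 if a == b else 0.0
                    for a, b in pairs)
    return favorable / len(pairs)
```

With ratings on the study's 5-grade scale, a value of vgc_auc(mobile, stationary) above 0.5 would correspond to the mobile system being graded higher, matching the 0.64-0.92 range reported for the pelvis and hip views.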
66 A Regulator's Assessment of Consumer Risk When Evaluating a User Test for an Umbrella Brand Name in an over the Counter Medicine
Authors: A. Bhatt, C. Bassi, H. Farragher, J. Musk
Abstract:
Background: All medicines placed on the EU market are legally required to be accompanied by labelling and a package leaflet, which provide comprehensive information enabling their safe and appropriate use. Mock-ups, with the results of assessments using a target patient group, must be submitted for a marketing authorisation application. Consumers need confidence in non-prescription, OTC medicines in order to manage their minor ailments, and umbrella brands assist purchasing decisions by enabling easy identification within a particular therapeutic area. A number of regulatory agencies have risk management tools and guidelines to assist in developing umbrella brands for OTC medicines; however, assessment and decision making are subjective and inconsistent. This study presents an evaluation in the UK prompted by the US FDA warning concerning methaemoglobinaemia, issued after 21 reported cases (11 in children under 2 years) caused by OTC oral analgesics containing benzocaine. Methods: A standard face-to-face methodology of 25 structured, task-based user interviews, using a standard questionnaire and rating scale with consumers aged 15-91 years, was conducted independently between June and October 2015 in their homes. We evaluated whether individuals could discriminate between the labelling, safety information, and warnings on the cartons and PILs of 3 different OTC medicine packs with the same umbrella name. Each pack was presented with a differing information hierarchy, using different coloured cartons, and contained one of 3 different active ingredients: benzocaine (oromucosal spray) and two lozenges containing, respectively, 2,4-dichlorobenzyl alcohol with amylmetacresol, and hexylresorcinol (for the symptomatic relief of sore throat pain). 
The test was designed to determine whether the warnings on the carton and leaflet were sufficiently prominent and accessible to alert users that one product contained benzocaine, with a risk of methaemoglobinaemia, and to refer them to the leaflet for the signs of the condition and what to do should it occur. Results: Two consumers did not locate the warnings on the side of the pack but eventually found them on the back, and two suggestions were made to further improve the accessibility of the methaemoglobinaemia warning. Using a gold pack design for the oromucosal spray, all consumers could differentiate between the 3 drugs, the minimum age particulars, the pharmaceutical form, and the risk factor methaemoglobinaemia. The warnings for benzocaine were deemed to be clear or very clear; the appearance of the 3 packs was either very well differentiated or quite well differentiated. The PIL test passed on all criteria. All consumers could use the product correctly and identify risk factors, ensuring the critical information necessary for safe use was legible and easily accessible so that confusion and errors were minimised. Conclusion: Patients with known methaemoglobinaemia are likely to be vigilant in checking for benzocaine-containing products, despite similar umbrella brand names across a range of active ingredients. Despite these findings, the package design and spray format were not deemed sufficient to mitigate the potential safety risks associated with differences in target populations and contraindications when submitted to the Regulatory Agency. Although risk management tools are increasingly used by agencies to provide objective assurance of package safety, further transparency, reduced subjectivity, and proportionate assessment of risk should be demonstrated.
Keywords: labelling, OTC, risk, user testing
65 Emotional State and Cognitive Workload during a Flight Simulation: Heart Rate Study
Authors: Damien Mouratille, Antonio R. Hidalgo-Muñoz, Nadine Matton, Yves Rouillard, Mickael Causse, Radouane El Yagoubi
Abstract:
Background: The monitoring of physiological activity related to mental workload (MW) in pilots will be useful for improving aviation safety by anticipating human performance degradation. The electrocardiogram (ECG) can reveal MW fluctuations due to cognitive workload and/or emotional state, since this measure exhibits autonomic nervous system modulations. Arguably, heart rate (HR) is one of its most intuitive and reliable parameters. It is particularly interesting to analyze the interaction between cognitive requirements and emotion in ecological settings such as a flight simulator. This study aims to explore, by means of HR, the relation between cognitive demands and emotional activation. Presumably, the effects of cognition and emotion overloads are not necessarily cumulative. Methodology: Eight healthy volunteers in possession of the Private Pilot License were recruited (male; 20.8±3.2 years). The ECG signal was recorded throughout the experiment by placing two electrodes on the clavicle and left pectoral of the participants. The HR was computed within 4-minute segments. The NASA-TLX and Big Five inventories were used to assess subjective workload and to consider the influence of individual personality differences. The experiment consisted of completing two dual-tasks of approximately 30 minutes' duration in an AL50 flight simulator. Each dual-task required the simultaneous accomplishment of both a pre-established flight plan and an additional task based on target stimulus discrimination inserted between Air Traffic Control instructions. This secondary task allowed us to vary the cognitive workload from low (LC) to high (HC) levels by combining auditory and visual numerical stimuli to which participants had to respond according to specific criteria. Regarding the emotional condition, the two dual-tasks were designed to ensure analogous difficulty in terms of the cognitive demands solicited. The first was performed by the pilot alone, i.e. the Low Arousal (LA) condition. 
In contrast, the second generated High Arousal (HA), since the pilot was supervised by two evaluators, filmed, and involved in a mock competition with the rest of the participants. Results: Performance on the secondary task showed significantly faster reaction times (RT) for the HA compared to the LA condition (p=.003). Moreover, faster RT was found for LC compared to HC (p < .001). No interaction was found. Concerning the HR measure, despite the lack of main effects, an interaction between emotion and cognition was evidenced (p=.028). Post hoc analysis showed a smaller HR for HA compared to LA only under LC (p=.049). Conclusion: The control of an aircraft is a very complex task involving strong cognitive demands, and it depends on the emotional state of pilots. According to the behavioral data, the experimental setup satisfactorily generated different emotional and cognitive levels. As suggested by the interaction found in the HR measure, these two factors do not seem to have a cumulative impact on the sympathetic nervous system. Apparently, low cognitive workload makes pilots more sensitive to emotional variations. These results hint at the independence of data processing and emotional regulation. Further physiological data are necessary to confirm and disentangle this relation. This procedure may be useful for objectively monitoring pilots' mental workload.
Keywords: cognitive demands, emotion, flight simulator, heart rate, mental workload
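A minimal sketch of the segment-wise HR computation described above, assuming R-peak detection has already been done upstream (the 240 s window comes from the study's 4-minute segments; the input format, a list of R-peak times in seconds, is an assumption for illustration):

```python
def mean_hr_per_segment(r_peaks, seg_len=240.0):
    """Mean heart rate (beats/min) per consecutive time segment.

    Each R-R interval is attributed to the segment in which it starts;
    mean HR for a segment is 60 divided by its mean R-R interval.
    """
    # R-R intervals paired with the time of their starting peak
    rr = [(t0, t1 - t0) for t0, t1 in zip(r_peaks, r_peaks[1:])]
    n_segments = int(r_peaks[-1] // seg_len) + 1
    hr = []
    for k in range(n_segments):
        seg = [d for t0, d in rr if k * seg_len <= t0 < (k + 1) * seg_len]
        hr.append(60.0 * len(seg) / sum(seg) if seg else float("nan"))
    return hr
```

Averaging R-R intervals before inverting (rather than averaging instantaneous HR values) is the conventional choice, since HR is the reciprocal of the interval and the two averages differ on variable rhythms.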
64 Impact of Primary Care Telemedicine Consultations On Health Care Resource Utilisation: A Systematic Review
Authors: Anastasia Constantinou, Stephen Morris
Abstract:
Background: The adoption of synchronous and asynchronous telemedicine modalities for primary care consultations has increased exponentially since the COVID-19 pandemic. However, there is limited understanding of how virtual consultations influence healthcare resource utilization and other quality measures, including safety, timeliness, efficiency, patient and provider satisfaction, cost-effectiveness, and environmental impact. Aim: To quantify the rates of follow-up visits, emergency department visits, hospitalizations, requests for investigations, and prescriptions, and to comment on the effect on different quality measures associated with the different telemedicine modalities used for primary care services and for primary care referrals to secondary care. Design and setting: Systematic review in primary care. Methods: A systematic search was carried out across three databases (Medline, PubMed, and Scopus) between August and November 2023, using terms related to telemedicine, general practice, electronic referrals, follow-up, use, and efficiency, supported by citation searching. This was followed by screening according to pre-defined criteria, data extraction, and critical appraisal. Narrative synthesis and meta-analysis of quantitative data were used to summarize the findings. Results: The search identified 2230 studies; 50 studies are included in this review. Asynchronous modalities were prevalent in both primary care services (68%) and referrals from primary care to secondary care (83%), and most of the study participants were female (63.3%), with a mean age of 48.2 years. The average follow-up rate for virtual consultations in primary care was 28.4% (eVisits: 36.8%, secure messages: 18.7%, videoconference: 23.5%), with no significant difference between them or with face-to-face (F2F) consultations. 
There was an average annual reduction in primary care visits of 0.09/patient, an increase in telephone visits of 0.20/patient, an increase in ED encounters of 0.011/patient, an increase in hospitalizations of 0.02/patient, and an increase in out-of-hours visits of 0.019/patient. Laboratory testing was requested on average for 10.9% of telemedicine patients, imaging or procedures for 5.6%, and prescriptions for 58.7% of patients. Looking at referrals to secondary care, on average 36.7% of virtual referrals required a follow-up visit, with the average follow-up rate for electronic referrals being higher than for videoconferencing (39.2% vs 23%, p=0.167). Technical failures were reported on average for 1.4% of virtual consultations in primary care. Using carbon footprint estimates, we calculate that the use of telemedicine in primary care services can potentially provide a net decrease in carbon footprint of 0.592 kgCO2/patient/year. When follow-up rates are taken into account, we estimate that virtual consultations reduce the carbon footprint of primary care services by a factor of 2.3, and of secondary care referrals by a factor of 2.2. No major concerns regarding quality of care or patient satisfaction were identified. 5 of the 7 studies that addressed cost-effectiveness reported increased savings. Conclusions: Telemedicine provides quality, cost-effective, and environmentally sustainable care for patients in primary care, with inconclusive evidence regarding the rates of subsequent healthcare utilization. The evidence is limited by heterogeneous, small-scale studies and a lack of prospective comparative studies. Further research to identify the most appropriate telemedicine modality for different patient populations, clinical presentations, and forms of service provision (e.g. 
used to follow up patients rather than for initial diagnosis), as well as further education for patients and providers alike on how to make the best use of this service, is expected to improve outcomes and influence practice.
Keywords: telemedicine, healthcare utilisation, digital interventions, environmental impact, sustainable healthcare
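The net-saving arithmetic behind figures like the 0.592 kgCO2/patient/year above can be made explicit. A hedged sketch of one plausible accounting only (the review does not publish its formula; the parameter names and any values used with them are hypothetical, for illustration):

```python
def net_carbon_saving(consults_per_year, travel_kgco2_per_visit,
                      telemed_kgco2_per_consult, followup_rate):
    """Net kgCO2 saved per patient per year when consultations go virtual.

    A fraction `followup_rate` of virtual consultations still ends in an
    in-person visit, so that share of travel emissions is not avoided;
    every virtual consultation also adds its own small IT-related footprint.
    """
    avoided_travel = consults_per_year * (1.0 - followup_rate) * travel_kgco2_per_visit
    telemed_overhead = consults_per_year * telemed_kgco2_per_consult
    return avoided_travel - telemed_overhead
```

Under this style of accounting, the "reduced by a factor of 2.3" results correspond to comparing the total footprint with and without the virtual pathway, and the sign of the net saving flips once the follow-up rate or the IT overhead grows large enough.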
63 The Path to Ruthium: Insights into the Creation of a New Element
Authors: Goodluck Akaoma Ordu
Abstract:
Ruthium (Rth) represents a theoretical superheavy element with an atomic number of 119, proposed within the context of advanced materials science and nuclear physics. The conceptualization of Rth involves theoretical frameworks that anticipate its atomic structure, including a hypothesized stable isotope, Rth-320, characterized by 119 protons and 201 neutrons. The synthesis of Ruthium (Rth) hinges on intricate nuclear fusion processes conducted in state-of-the-art particle accelerators, notably utilizing Calcium-48 (Ca-48) as a projectile nucleus and Einsteinium-253 (Es-253) as a target nucleus. These experiments aim to induce fusion reactions that yield Ruthium isotopes, such as Rth-301, accompanied by neutron emission. Theoretical predictions outline various physical and chemical properties attributed to Ruthium (Rth). It is envisaged to possess a high density, estimated at around 25 g/cm³, with melting and boiling points anticipated to be exceptionally high, approximately 4000 K and 6000 K, respectively. Chemical studies suggest potential oxidation states of +2, +3, and +4, indicating a versatile reactivity, particularly with halogens and chalcogens. The atomic structure of Ruthium (Rth) is postulated to feature an electron configuration of [Rn] 5f^14 6d^10 7s^2 7p^2, reflecting its position in the periodic table as a superheavy element. However, the creation and study of superheavy elements like Ruthium (Rth) pose significant challenges. These elements typically exhibit very short half-lives, posing difficulties in their stabilization and detection. Research efforts are focused on identifying the most stable isotopes of Ruthium (Rth) and developing advanced detection methodologies to confirm their existence and properties. Specialized detectors are essential in observing decay patterns unique to Ruthium (Rth), such as alpha decay or fission signatures, which serve as key indicators of its presence and characteristics. 
The potential applications of Ruthium (Rth) span across diverse technological domains, promising innovations in energy production, material strength enhancement, and sensor technology. Incorporating Ruthium (Rth) into advanced energy systems, such as the Arc Reactor concept, could potentially amplify energy output efficiencies. Similarly, integrating Ruthium (Rth) into structural materials, exemplified by projects like the NanoArc gauntlet, could bolster mechanical properties and resilience. Furthermore, Ruthium (Rth)-based sensors hold promise for achieving heightened sensitivity and performance in various sensing applications. Looking ahead, the study of Ruthium (Rth) represents a frontier in both fundamental science and applied research. It underscores the quest to expand the periodic table and explore the limits of atomic stability and reactivity. Future research directions aim to delve deeper into Ruthium (Rth)'s atomic properties under varying conditions, paving the way for innovations in nanotechnology, quantum materials, and beyond. The synthesis and characterization of Ruthium (Rth) stand as a testament to human ingenuity and technological advancement, pushing the boundaries of scientific understanding and engineering capabilities. In conclusion, Ruthium (Rth) embodies the intersection of theoretical speculation and experimental pursuit in the realm of superheavy elements. It symbolizes the relentless pursuit of scientific excellence and the potential for transformative technological breakthroughs. As research continues to unravel the mysteries of Ruthium (Rth), it holds the promise of reshaping materials science and opening new frontiers in technological innovation.
Keywords: superheavy element, nuclear fusion, bombardment, particle accelerator, nuclear physics, particle physics
62 Pre-Cancerigene Injuries Related to Human Papillomavirus: Importance of Cervicography as a Complementary Diagnosis Method
Authors: Denise De Fátima Fernandes Barbosa, Tyane Mayara Ferreira Oliveira, Diego Jorge Maia Lima, Paula Renata Amorim Lessa, Ana Karina Bezerra Pinheiro, Cintia Gondim Pereira Calou, Glauberto Da Silva Quirino, Hellen Lívia Oliveira Catunda, Tatiana Gomes Guedes, Nicolau Da Costa
Abstract:
The aim of this study is to evaluate the use of Digital Cervicography (DC) in the diagnosis of precancerous lesions related to Human Papillomavirus (HPV). This is a cross-sectional, evaluative study with a quantitative approach, conducted in a health unit linked to the Pro Dean of Extension of the Federal University of Ceará from July to August 2015, with a sample of 33 women. Data collection was conducted through interviews using a structured instrument. The DC technique followed the standardization of Franco (2005). Polymerase Chain Reaction (PCR) was performed to identify high-risk HPV genotypes. The DC images were evaluated and classified by 3 judges. The results of DC and PCR were classified as positive, negative, or inconclusive. Data from the collection instruments were compiled and analyzed with the Statistical Package for the Social Sciences (SPSS), using descriptive statistics and cross-tabulations. Sociodemographic, sexual, and reproductive variables were analyzed through absolute frequencies (N) and their respective percentages (%). The kappa coefficient (κ) was applied to determine the agreement of the DC reports among the evaluators with PCR, and also among the judges regarding the DC results. Pearson's chi-square test was used for the analysis of sociodemographic, sexual, and reproductive variables against the PCR reports. Values of p<0.05 were considered statistically significant. Ethical aspects of research involving human beings were respected, in accordance with Resolution 466/2012. Regarding the sociodemographic profile, the most prevalent age groups, equally represented, were 21-30 and 41-50 years (24.2% each). Most participants self-reported as brown (84.8%), and 96.9% had completed or were attending primary or secondary school. 51.5% were married, 72.7% Catholic, 54.5% employed, and 48.5% had an income between one and two minimum wages. 
As for the sexual and reproductive characteristics, most were heterosexual (93.9%) and did not use condoms during sexual intercourse (72.7%). 51.5% had a previous history of a Sexually Transmitted Infection (STI), with HPV the most prevalent STI (76.5%). 57.6% did not use contraception, 78.8% underwent the cervical cancer prevention examination (PCCU) at an interval of one year or less, 72.7% had no family history of cervical cancer, 63.6% were multiparous, and 97% were not vaccinated against HPV. DC showed a good level of agreement between raters (κ=0.542) and had a specificity of 77.8% and a sensitivity of 25% when its results were compared with PCR. Only the variable race showed a statistically significant association with PCR (p=0.042). DC had 100% acceptance amongst the women in the sample, suggesting that further studies with this method could establish it as a viable technique. The DC positivity criteria were developed by nurses, and these professionals also perform the PCCU in Brazil, which means that DC can be an important complementary diagnostic method for assessing the quality of these professionals' examinations.
Keywords: gynecological examination, human papillomavirus, nursing, papillomavirus infections, uterine neoplasms
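The agreement and accuracy statistics quoted above (κ, sensitivity, specificity) can be computed directly from paired classifications. A minimal sketch assuming binary positive/negative labels (the example data are hypothetical; the study's own positive/negative/inconclusive coding would need an extra category):

```python
def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters classifying the same items:
    observed agreement corrected for chance agreement."""
    n = len(r1)
    categories = set(r1) | set(r2)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    p_chance = sum((r1.count(c) / n) * (r2.count(c) / n) for c in categories)
    return (p_obs - p_chance) / (1 - p_chance)

def sens_spec(test, truth):
    """Sensitivity and specificity of `test` against reference `truth`
    (both sequences of 'pos'/'neg' labels)."""
    tp = sum(t == g == "pos" for t, g in zip(test, truth))
    tn = sum(t == g == "neg" for t, g in zip(test, truth))
    fn = sum(t == "neg" and g == "pos" for t, g in zip(test, truth))
    fp = sum(t == "pos" and g == "neg" for t, g in zip(test, truth))
    return tp / (tp + fn), tn / (tn + fp)
```

A low sensitivity with a reasonable specificity, as reported here (25% vs 77.8%), means the screening method misses many PCR-positive cases while rarely flagging PCR-negative ones.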
61 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction
Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal
Abstract:
Traditionally, monsoon forecasts have encountered many difficulties that stem from numerous issues such as the lack of adequate upper-air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, the representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes and each of which carries a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, in time, and across different variables from different models. This is the basic concept behind the multi-model superensemble, which comprises a training and a forecast phase. The training phase learns from the recent past performance of the models and is used to determine statistical weights from a least-squares minimization via a simple multiple regression. These weights are then used in the forecast phase. The superensemble forecasts carry the highest skill compared to the simple ensemble mean, the bias-corrected ensemble mean, and the best model among the participating member models. This approach is a powerful post-processing method for the estimation of weather forecast parameters, reducing the direct model output errors. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed, mean sea level pressure, etc., in this paper the approach is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability. 
The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., the European Centre for Medium-Range Weather Forecasts (Europe), the National Centers for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada), and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), which is one of the most complete data sets available. The novel approaches include a dynamical model selection approach, in which the superior models are selected from the participating member models at each grid point and for each forecast step in the training period. A multi-model superensemble trained on similar conditions is also discussed in the present study; it is based on the assumption that training with similar types of conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods available in the literature that incorporate a 'neighborhood' around each grid point, to allow for spatial error or uncertainty, have also been experimented with in the above-mentioned approaches. The comparison of these schemes against the observations verifies that the newly developed approaches provide a more unified and skillful prediction of the summer monsoon (viz. June to September) rainfall than the conventional multi-model approach and the member models.
Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction
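The training phase described above reduces, at each grid point, to an ordinary least-squares regression of observations on member-model forecasts. A minimal two-member sketch (the full superensemble method regresses anomalies about the training-period means and adds the observed mean back in the forecast phase, a step this simplification omits):

```python
def train_weights(obs, f1, f2):
    """Least-squares weights (a1, a2) minimizing
    sum((obs - a1*f1 - a2*f2)**2), via the 2x2 normal equations."""
    s11 = sum(x * x for x in f1)
    s22 = sum(x * x for x in f2)
    s12 = sum(x * y for x, y in zip(f1, f2))
    b1 = sum(x * o for x, o in zip(f1, obs))
    b2 = sum(x * o for x, o in zip(f2, obs))
    det = s11 * s22 - s12 * s12
    return ((b1 * s22 - b2 * s12) / det, (b2 * s11 - b1 * s12) / det)

def superensemble(weights, f1, f2):
    """Forecast phase: apply the trained weights to new member forecasts."""
    a1, a2 = weights
    return [a1 * x + a2 * y for x, y in zip(f1, f2)]
```

In the paper's setting this regression is solved independently for every grid point (and, in the dynamical-selection variant, only over the subset of models judged superior there), so a consistently biased member simply receives a small or negative weight.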
Procedia PDF Downloads 139
60 Describing Cognitive Decline in Alzheimer's Disease via a Picture Description Writing Task
Authors: Marielle Leijten, Catherine Meulemans, Sven De Maeyer, Luuk Van Waes
Abstract:
For the diagnosis of Alzheimer's disease (AD), a large variety of neuropsychological tests are available. In some of these tests, linguistic processing - both oral and written - is an important factor. Language disturbances might serve as a strong indicator of an underlying neurodegenerative disorder like AD. However, current diagnostic instruments for language assessment mainly focus on product measures, such as text length or number of errors, ignoring the importance of the process that leads to written or spoken language production. In this study, it is our aim to describe and test differences between cognitively healthy and cognitively impaired elderly on the basis of a selection of writing process variables (inter- and intrapersonal characteristics). These process variables mainly relate to pause times, because the number, length, and location of pauses have proven to be important indicators of the cognitive complexity of a process. Method: Participants were chosen on the basis of a number of basic criteria necessary to collect reliable writing process data. Furthermore, we opted to match the thirteen cognitively impaired patients (8 MCI and 5 AD) with thirteen cognitively healthy elderly. At the start of the experiment, participants were each given a number of tests, such as the Mini-Mental State Examination (MMSE), the Geriatric Depression Scale (GDS), the forward and backward digit span, and the Edinburgh Handedness Inventory (EHI). Also, a questionnaire was used to collect socio-demographic information (age, gender, education) on the subjects as well as more details on their level of computer literacy. The tests and questionnaire were followed by two typing tasks and two picture description tasks. For the typing tasks, participants had to copy (type) characters, words, and sentences from a screen, whereas the picture description tasks each consisted of an image they had to describe in a few sentences.
Both the typing and the picture description tasks were logged with Inputlog, a keystroke logging tool that allows us to log and time-stamp keystroke activity in order to reconstruct and describe text production processes. The main rationale behind keystroke logging is that writing fluency and flow reveal traces of the underlying cognitive processes. This explains the analytical focus on pause (length, number, distribution, location, etc.) and revision (number, type, operation, embeddedness, location, etc.) characteristics. As in speech, pause times are seen as indexical of cognitive effort. Results: Preliminary analysis already showed some promising results concerning pause times before, within, and after words. For all variables, mixed effects models were used that included participants as a random effect and MMSE scores, GDS scores, and word categories (such as determiners and nouns) as fixed effects. For pause times before and after words, cognitively impaired patients paused longer than healthy elderly. These variables did not show an interaction effect between group (cognitively impaired or healthy elderly) and word category. However, pause times within words did show an interaction effect, indicating that pause times within certain word categories differ significantly between patients and healthy elderly.
Keywords: Alzheimer's disease, keystroke logging, matching, writing process
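As a rough illustration of how pause location (before, within, or after a word) can be derived from a time-stamped keystroke log: the toy function below is our own sketch, not Inputlog's actual algorithm, and the 200 ms pause threshold is an assumption for the example only:

```python
THRESHOLD_MS = 200  # hypothetical pause threshold, for illustration only

def classify_pauses(log, threshold=THRESHOLD_MS):
    """log: list of (timestamp_ms, char) tuples in typing order.

    Returns (interval_ms, location) for each inter-key interval that
    exceeds the threshold, classified relative to word boundaries.
    """
    pauses = []
    for (t0, c0), (t1, c1) in zip(log, log[1:]):
        interval = t1 - t0
        if interval < threshold:
            continue
        if c0 == " " and c1 != " ":
            location = "before word"   # pause precedes the first letter of a word
        elif c1 == " ":
            location = "after word"    # pause ends at a word boundary
        else:
            location = "within word"   # pause falls between letters of one word
        pauses.append((interval, location))
    return pauses

# A tiny fabricated log of someone typing "the cat" with two long pauses.
log = [(0, "t"), (80, "h"), (160, "e"), (240, " "),
       (700, "c"), (790, "a"), (1300, "t")]
print(classify_pauses(log))  # [(460, 'before word'), (510, 'within word')]
```

Aggregating such intervals per participant and per word category yields exactly the kind of dependent variables the mixed effects models above are fitted to.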
59 Early Predictive Signs for Kasai Procedure Success
Authors: Medan Isaeva, Anna Degtyareva
Abstract:
Context: Biliary atresia is a common reason for liver transplants in children, and the Kasai procedure can potentially be successful in avoiding the need for transplantation. However, it is important to identify factors that influence surgical outcomes in order to optimize treatment and improve patient outcomes. Research aim: The aim of this study was to develop prognostic models to assess the outcomes of the Kasai procedure in children with biliary atresia. Methodology: This retrospective study analyzed data from 166 children with biliary atresia who underwent the Kasai procedure between 2002 and 2021. The effectiveness of the operation was assessed based on specific criteria, including post-operative stool color, jaundice reduction, and bilirubin levels. The study involved a comparative analysis of various parameters, such as gestational age, birth weight, age at operation, physical development, liver and spleen sizes, and laboratory values including bilirubin, ALT, AST, and others, measured pre- and post-operation. Ultrasonographic evaluations were also conducted pre-operation, assessing the hepatobiliary system and related quantitative parameters. The study was carried out by two experienced specialists in pediatric hepatology. Comparative analysis and multifactorial logistic regression were used as the primary statistical methods. Findings: The study identified several statistically significant predictors of a successful Kasai procedure, including the presence of the gallbladder and levels of cholesterol and direct bilirubin post-operation. A detectable gallbladder was associated with a higher probability of surgical success, while elevated post-operative cholesterol and direct bilirubin levels were indicative of a reduced chance of positive outcomes. Theoretical importance: The findings of this study contribute to the optimization of treatment strategies for children with biliary atresia undergoing the Kasai procedure. 
By identifying early predictive signs of success, clinicians can modify treatment plans and manage patient care more effectively and proactively. Data collection and analysis procedures: Data for this analysis were obtained from the health records of patients who underwent the Kasai procedure. Comparative analysis and multifactorial logistic regression were employed to analyze the data and identify significant predictors. Question addressed: The study addressed the question of identifying predictive factors for the success of the Kasai procedure in children with biliary atresia. Conclusion: The developed prognostic models serve as valuable tools for the early detection of patients who are less likely to benefit from the Kasai procedure, enabling clinicians to modify treatment plans and manage patient care more effectively and proactively. Potential limitations of the study: The study has several limitations. Its retrospective nature may introduce biases and inconsistencies in data collection. Being single-center, the results might not be generalizable to wider populations due to variations in surgical and postoperative practices. Also, potential influencing factors beyond the clinical, laboratory, and ultrasonographic parameters considered in this study were not explored, and these could affect the outcomes of the Kasai operation. Future studies could benefit from including a broader range of factors.
Keywords: biliary atresia, Kasai operation, prognostic model, native liver survival
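A minimal sketch of how a three-predictor prognostic model of this kind can be fitted by logistic regression. The data below are synthetic stand-ins generated with the reported directions of effect (gallbladder visibility raising, post-operative cholesterol and direct bilirubin lowering the odds of success); they are not the study's patient records, and the coefficient values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 166  # cohort size from the abstract; everything else is synthetic

# Standardized synthetic predictors.
gallbladder = rng.integers(0, 2, n).astype(float)   # visible on ultrasound: 1/0
cholesterol = rng.normal(0.0, 1.0, n)               # post-op, z-scored
bilirubin = rng.normal(0.0, 1.0, n)                 # post-op direct, z-scored
X = np.column_stack([np.ones(n), gallbladder, cholesterol, bilirubin])

# Simulated outcomes consistent with the reported directions of effect.
true_beta = np.array([0.0, 1.5, -1.0, -1.0])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)

# Maximum-likelihood fit by gradient ascent on the log-likelihood.
beta = np.zeros(4)
for _ in range(5000):
    p_hat = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += (0.1 / n) * X.T @ (y - p_hat)
```

The sign and magnitude of each fitted coefficient then quantify how strongly the corresponding predictor shifts the odds of a successful outcome, which is the role the significant predictors play in the prognostic models described above.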
58 3D Non-Linear Analyses by Using Finite Element Method about the Prediction of the Cracking in Post-Tensioned Dapped-End Beams
Authors: Jatziri Y. Moreno-Martínez, Arturo Galván, Israel Enrique Herrera Díaz, José Ramón Gasca Tirado
Abstract:
In recent years, for the elevated viaducts in Mexico City, a construction system based on precast/pre-stressed concrete elements has been used, in which the bridge girders are divided in two parts by imposing a hinged support in sections where the bending moments originated by the gravity loads in a continuous beam are minimal. Precast concrete girders with dapped ends exhibit complex stress configurations that make them more vulnerable to cracking due to flexure-shear interaction. The design procedures for the ends of dapped girders are well established and are based primarily on experimental tests performed for different configurations of reinforcement. The critical failure modes that can govern the design have been identified, and for each of them, methods for computing the reinforcing steel needed to achieve adequate safety against failure have been proposed. Nevertheless, the design recommendations do not include procedures for controlling diagonal cracking at the re-entrant corner under service loading. These cracks could cause water penetration and degradation because of corrosion of the steel reinforcement. The lack of visual access to the area makes it difficult to detect this damage and take timely corrective action. Three-dimensional non-linear numerical models based on the Finite Element Method were developed to study cracking at the re-entrant corner of dapped-end beams, using the software package ANSYS v. 11.0. Cracking was numerically simulated using the smeared crack approach. The concrete was modeled with three-dimensional solid elements (SOLID65) capable of cracking in tension and crushing in compression. A Drucker-Prager yield surface was used to include the plastic deformations. The longitudinal post-tensioning was modeled using LINK8 elements with multilinear isotropic hardening and von Mises plasticity.
The reinforcement was introduced with a smeared approach. The numerical models were calibrated using experimental tests carried out at the Instituto de Ingeniería, Universidad Nacional Autónoma de México. These numerical models reproduced the characteristics of the specimens: a typical solution based on vertical stirrups (hangers) and on vertical and horizontal hoops, with post-tensioned steel contributing 74% of the flexural resistance. The post-tensioning is provided by four steel wires with a 5/8'' (16 mm) diameter. Each wire was tensioned to 147 kN and induced an average compressive stress of 4.90 MPa on the concrete section of the dapped end. The loading protocol consisted of applying symmetrical loading up to the service load (180 kN). Given the good correlation between the experimental and numerical models, additional numerical models with different percentages of post-tensioning were analyzed in order to find out how much the post-tensioning influences the appearance of cracking in the re-entrant corner of dapped-end beams. It was concluded that increasing the percentage of post-tensioning decreases the displacements and delays the appearance of cracking in the re-entrant corner. The authors acknowledge the Universidad de Guanajuato, Campus Celaya-Salvatierra, and the financial support of PRODEP-SEP (UGTO-PTC-460) of the Mexican government. The first author acknowledges the Instituto de Ingeniería, Universidad Nacional Autónoma de México.
Keywords: concrete dapped-end beams, cracking control, finite element analysis, post-tension
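The post-tensioning figures quoted above can be cross-checked with a line of arithmetic; the implied concrete section area is our own inference from those numbers, not a value given in the abstract:

```python
# Back-of-the-envelope check of the post-tensioning data in the text:
# four 5/8'' wires tensioned to 147 kN each, producing an average
# compressive stress of about 4.90 MPa on the dapped-end section.
n_wires = 4
force_per_wire_kN = 147.0
stress_MPa = 4.90  # 1 MPa = 1 N/mm^2

total_force_N = n_wires * force_per_wire_kN * 1e3  # total post-tension force
implied_area_mm2 = total_force_N / stress_MPa      # area the stress implies

print(total_force_N, round(implied_area_mm2))
```

The 588 kN total force and the roughly 0.12 m2 implied section area are mutually consistent with the 4.90 MPa average stress reported, which is a useful sanity check when varying the post-tensioning percentage in the parametric models.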