Search results for: multiple subordinated modeling
520 Design of Smart Catheter for Vascular Applications Using Optical Fiber Sensor
Authors: Lamiek Abraham, Xinli Du, Yohan Noh, Polin Hsu, Tingting Wu, Tom Logan, Ifan Yen
Abstract:
In the field of minimally invasive surgery, smart medical instruments such as catheters and guidewires are typically used at a remote distance to gain access to the diseased artery, often negotiating tortuous, complex, and diseased vessels in the process. Three optical fiber sensors, each with a diameter of 1.5 mm and spaced 120° apart, are proposed to be mounted into a catheter-based pump device with a diameter of 10 mm. These sensors are configured to address the challenges surgeons face during insertion through curved major vessels such as the aortic arch. Moreover, these sensors provide information on wall contact (rubbing) and shape sensing. This study presents experimental and mathematical models of the optical fiber sensors with two degrees of freedom. Two eight-gear-shaped tubes made of 3D-printed thermoplastic polyurethane (TPU) are connected to each other. The optical fiber sensors are mounted inside the first tube for protection from external light, with the TPU tube serving as a prototype catheter. The second tube is used as a flat reflector for the light-intensity-modulation-based optical fiber sensors. The first tube is attached to a linear guide for insertion and withdrawal purposes and can be turned manually by 45° by manipulating the tube gear. A 3D hard-material phantom that mimics the anatomical structure of the aortic arch was developed, in which the tests were carried out. During insertion of the sensors into the 3D phantom, datasets were obtained in terms of voltage, distance, and position of the sensors. These datasets reflect the light-intensity-modulation characteristics of the optical fiber sensors with a plane projection of the aortic arch shape. Mathematical modeling of the light intensity was carried out based on the projection plane and the experimental set-up. The performance of the system was evaluated in terms of its accuracy in navigating through the curvature and the information it provides on the position of the sensors, by investigating 40 single insertions of the sensors into the 3D phantom. The experiment demonstrated that the sensors were effectively steered through the 3D phantom curvature and to desired target references in both degrees of freedom. The performance of the sensors echoes light reflectance theory: the smaller the radius of curvature, the more of the emitted LED light is reflected and received by the photodiode. The mathematical model results are in good agreement with the experimental results and with the operating principle of light intensity modulation of the optical fiber sensors. A prototype catheter made of TPU with three optical fiber sensors mounted inside has been developed that is capable of navigating through different radii of curvature with two degrees of freedom. The proposed system supports operators with pre-scan data, making maneuvering and bending through curved major vessels easier, more accurate, and safer. The mathematical modelling accurately fits the experimental results.
Keywords: intensity-modulated optical fiber sensor, mathematical model, plane projection, shape sensing
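A minimal sketch of the light-intensity-modulation principle described in this abstract is to fit a voltage-versus-distance calibration curve and invert it to recover the fiber-to-reflector gap. The inverse-square functional form, the calibration values, and all names below are illustrative assumptions, not the authors' projection-plane model.

```python
# Sketch: fit a light-intensity-modulation curve to (distance, voltage) calibration data
# from a reflective optical fiber sensor, then invert it to estimate the gap.
# The functional form and the numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical calibration data: gap to the reflector (mm) and photodiode voltage (V)
distance_mm = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
voltage_v = np.array([3.10, 1.95, 1.32, 0.97, 0.74, 0.59, 0.48, 0.41])

def intensity_model(d, v0, d0):
    """Received intensity falls off roughly with the square of the fiber-reflector gap."""
    return v0 / (1.0 + d / d0) ** 2

(v0, d0), _ = curve_fit(intensity_model, distance_mm, voltage_v, p0=[3.0, 1.0])
print(f"fitted v0 = {v0:.3f} V, d0 = {d0:.3f} mm")

def voltage_to_distance(v):
    """Invert the fitted curve: a voltage reading gives the gap, and hence local curvature."""
    return d0 * (np.sqrt(v0 / v) - 1.0)

print(f"estimated gap at 1.0 V: {voltage_to_distance(1.0):.2f} mm")
```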
Procedia PDF Downloads 254
519 Analytical and Numerical Modeling of Strongly Rotating Rarefied Gas Flows
Authors: S. Pradhan, V. Kumaran
Abstract:
Centrifugal gas separation processes effect separation by utilizing the difference in the mole fraction in a high speed rotating cylinder caused by the difference in molecular mass, and consequently the centrifugal force density. These have been widely used in isotope separation because chemical separation methods cannot be used to separate isotopes of the same chemical species. More recently, centrifugal separation has also been explored for the separation of gases such as carbon dioxide and methane. The efficiency of separation is critically dependent on the secondary flow generated due to temperature gradients at the cylinder wall or due to inserts, and it is important to formulate accurate models for this secondary flow. The widely used Onsager model for secondary flow is restricted to very long cylinders where the length is large compared to the diameter, the limit of high stratification parameter, where the gas is restricted to a thin layer near the wall of the cylinder, and it assumes that there is no mass difference in the two species while calculating the secondary flow. There are two objectives of the present analysis of the rarefied gas flow in a rotating cylinder. The first is to remove the restriction of high stratification parameter, and to generalize the solutions to low rotation speeds where the stratification parameter may be O (1), and to apply for dissimilar gases considering the difference in molecular mass of the two species. Secondly, we would like to compare the predictions with molecular simulations based on the direct simulation Monte Carlo (DSMC) method for rarefied gas flows, in order to quantify the errors resulting from the approximations at different aspect ratios, Reynolds number and stratification parameter. In this study, we have obtained analytical and numerical solutions for the secondary flows generated at the cylinder curved surface and at the end-caps due to linear wall temperature gradient and external gas inflow/outflow at the axis of the cylinder. The effect of sources of mass, momentum and energy within the flow domain are also analyzed. The results of the analytical solutions are compared with the results of DSMC simulations for three types of forcing, a wall temperature gradient, inflow/outflow of gas along the axis, and mass/momentum input due to inserts within the flow. The comparison reveals that the boundary conditions in the simulations and analysis have to be matched with care. The commonly used diffuse reflection boundary conditions at solid walls in DSMC simulations result in a non-zero slip velocity as well as a temperature slip (gas temperature at the wall is different from wall temperature). These have to be incorporated in the analysis in order to make quantitative predictions. In the case of mass/momentum/energy sources within the flow, it is necessary to ensure that the homogeneous boundary conditions are accurately satisfied in the simulations. When these precautions are taken, there is excellent agreement between analysis and simulations, to within 10 %, even when the stratification parameter is as low as 0.707, the Reynolds number is as low as 100 and the aspect ratio (length/diameter) of the cylinder is as low as 2, and the secondary flow velocity is as high as 0.2 times the maximum base flow velocity.Keywords: rotating flows, generalized onsager and carrier-Maslen model, DSMC simulations, rarefied gas flow
Procedia PDF Downloads 399
518 Identification of Three Strategies to Enhance University Students’ Professional Identity, Using Hierarchical Regression Analysis
Authors: Alba Barbara-i-Molinero, Rosalia Cascon-Pereira, Ana Beatriz Hernandez
Abstract:
Students’ transitions from high school to university are challenged by the lack of continuity between the two contexts. This mismatch directly affects students by generating feelings of anxiety and uncertainty, which increases dropout rates and reduces students’ academic success. This discontinuity arises because ‘transitions concern a restructuring of what the person does and who the person perceives him or herself to be’. Hence, identity becomes essential in these transitions. Generally, identity is the answer to questions such as who am I? or who are we? It integrates personal identity and as many social identities as the groups the individual feels he/she is a part of. A case in point for constructing a social identity is identification with a profession. For this reason, one way to ease the tension generated during transitions is to apply strategies oriented toward enhancing students’ professional identity at their point of entry to the higher education institution. That would create a sense of continuity between the high school and higher education contexts, increasing their professional identity strength. To develop strategies oriented toward enhancing students’ professional identity, it is important to analyze what influences it. Several factors influence professional identity (e.g., professional status, the recommendation of family and peers, the academic environment, or the chosen bachelor’s degree). There is a gap in the literature analyzing the impact of these factors on more than one bachelor’s degree. In this regard, our study takes an additional step with the aim of evaluating the influence of several factors on professional identity using a cohort of university students from multiple degrees, aged 17-19 years. To do so, we used hierarchical regression analyses to assess the impact of the following factors: External Motivation Conditionals (EMC), Educational Experience Conditionals (EEC) and Personal Motivational Conditionals (PMC). After conducting the analyses, we found that the assessed factors influenced students’ professional identity differently according to their bachelor’s degree and discipline. For example, PMC and EMC positively affected science students, while architecture, law and economics, and engineering students were influenced only by PMC. Based on these influences, we propose three different strategies aimed at enhancing students’ professional identity in the short and long term. These strategies are: enhancing students’ professional identity before entry to university through campus and icebreaker activities; applying recruitment strategies that provide realistic information about the bachelor’s degree; and incorporating different activities, such as in-vitro, in-situ and self-directed activities, aimed at enhancing students’ professional identity longitudinally from within the university. From these results, theoretical contributions and practical implications arise. First, we contribute to the literature by identifying which factors influence students from different bachelor’s degrees, since evidence on this is still lacking. Second, using the obtained results as a benchmark, we contribute from a practical perspective by proposing several alternative strategies to increase students’ professional identity strength, aiming to ease their transition from high school to higher education.
Keywords: professional identity, higher education, educational strategies, students
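The hierarchical regression procedure described above can be illustrated as a blockwise model comparison in which EMC, EEC, and PMC are entered one block at a time and the change in R² is tracked. The column names, the outcome variable (professional identity strength, PIS), and the data file are hypothetical placeholders, not the study's dataset.

```python
# Sketch of a hierarchical (blockwise) regression: predictor blocks are entered one at a
# time and the increment in R-squared is inspected. Column names and the CSV are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("students.csv")  # hypothetical file, one row per student

blocks = [
    "EMC",              # Step 1: external motivation conditionals
    "EMC + EEC",        # Step 2: + educational experience conditionals
    "EMC + EEC + PMC",  # Step 3: + personal motivational conditionals
]

previous_r2 = 0.0
for step, rhs in enumerate(blocks, start=1):
    model = smf.ols(f"PIS ~ {rhs}", data=df).fit()  # PIS = professional identity strength
    print(f"Step {step}: R2 = {model.rsquared:.3f}, delta R2 = {model.rsquared - previous_r2:.3f}")
    previous_r2 = model.rsquared
```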
Procedia PDF Downloads 145
517 Measuring Enterprise Growth: Pitfalls and Implications
Authors: N. Šarlija, S. Pfeifer, M. Jeger, A. Bilandžić
Abstract:
Enterprise growth is generally considered a key driver of competitiveness, employment, economic development and social inclusion. As such, it is perceived to be a highly desirable outcome of entrepreneurship by scholars and decision makers. The extensive academic debate has resulted in a multitude of theoretical frameworks focused on explaining growth stages, determinants and future prospects. It has been widely accepted that enterprise growth is most likely nonlinear, temporal and related to a variety of factors which reflect the individual, firm, organizational, industry or environmental determinants of growth. However, factors that affect growth are not easily captured, instruments to measure those factors are often arbitrary, and causality between variables and growth is elusive, indicating that growth is not easily modeled. Furthermore, in line with the heterogeneous nature of the growth phenomenon, there is a vast number of measurement constructs assessing growth which are used interchangeably. Differences among various growth measures, at the conceptual as well as the operationalization level, can hinder theory development, which emphasizes the need for more empirically robust studies. In line with these highlights, the main purpose of this paper is twofold. Firstly, to compare the structure and performance of three growth prediction models based on the main growth measures: revenues, employment and assets growth. Secondly, to explore the prospects of financial indicators, set as exact, visible, standardized and accessible variables, to serve as determinants of enterprise growth. Finally, to contribute to the understanding of the implications for research results and recommendations for growth caused by different growth measures. The models include a range of financial indicators as lagged determinants of the enterprises’ performance during 2008-2013, extracted from the national register of financial statements of SMEs in Croatia. The design and testing stage of the modeling used logistic regression procedures. Findings confirm that growth prediction models based on different measures of growth have different sets of predictors. Moreover, the relationship between particular predictors and a growth measure is inconsistent, namely, the same predictor positively related to one growth measure may exert a negative effect on a different growth measure. Overall, financial indicators alone can serve as a good proxy of growth and yield adequate predictive power for the models. The paper sheds light on both the methodology and the conceptual framework of enterprise growth by using a range of variables which serve as a proxy for the multitude of internal and external determinants but are, unlike them, accessible, available, exact and free of perceptual nuances in building up the model. Selection of the growth measure seems to have a significant impact on the implications and recommendations related to growth. Furthermore, the paper points out potential pitfalls of measuring and predicting growth. Overall, the results and implications of the study are relevant for advancing academic debates on growth-related methodology and can contribute to evidence-based decisions of policy makers.
Keywords: growth measurement constructs, logistic regression, prediction of growth potential, small and medium-sized enterprises
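The modeling setup described above, one logistic regression per growth measure fed with lagged financial indicators, can be sketched as follows. The indicator names, the binary growth flags, and the data file are illustrative assumptions, not the variables extracted from the Croatian register.

```python
# Sketch: one logistic regression per growth measure (revenues, employment, assets),
# each predicting whether an SME grew, from lagged financial indicators.
# Column names and the CSV are illustrative placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sme_financials.csv")  # one row per firm: lagged indicators + growth flags

predictors = "liquidity_ratio + debt_ratio + roa + asset_turnover + firm_size"
growth_measures = ["revenue_growth", "employment_growth", "asset_growth"]  # binary 0/1 outcomes

for outcome in growth_measures:
    model = smf.logit(f"{outcome} ~ {predictors}", data=df).fit(disp=False)
    print(f"\n=== {outcome} ===")
    print(model.params.round(3))              # predictor sets and signs differ by growth measure
    print(f"pseudo R2: {model.prsquared:.3f}")
```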
Procedia PDF Downloads 253
516 For Whom Is Legal Aid: A Critical Analysis of the State-Funded Legal Aid in Criminal Cases in Tajikistan
Authors: Umeda Junaydova
Abstract:
Legal aid is a key element of access to justice. According to UN Principles and Guidelines on Access to Legal Aid in Criminal Justice Systems, state members bear the obligation to put in place accessible, effective, sustainable, and credible legal aid systems. Regarding this obligation, developing countries, such as Tajikistan, faced challenges in terms of financing this system. Thus, many developed nations have launched rule-of-law programs to support these states and ensure access to justice for all. Following independence from the Soviet Union, Tajikistan committed to introducing the rule of law and providing access to justice. This newly established country was weak, and the sudden outbreak of civil war aggravated the situation even more. The country needed external support and opened its door to attract foreign donors to assist it in its way to development. In 2015, Tajikistan, with the financial support of development partners, was able to establish a state-funded legal aid system that provides legal assistance to vulnerable and marginalized populations, including in criminal cases. In the beginning, almost the whole system was financed from donor funds; by that time, the contribution of the government gradually increased, and currently, it covers 80% of the total budget. All these governments' actions toward ensuring access to criminal legal aid for disadvantaged groups look promising; however, the reality is completely different. Currently, not all disadvantaged people are covered by these services, and their cases are most of the time considered without appropriate defense, which leads to violation of fundamental human rights. This research presents a comprehensive exploration of the interplay between donor assistance and the effectiveness of legal aid services in Tajikistan, with a specific focus on criminal cases involving vulnerable groups, such as women and children. In the context of Tajikistan, this study addresses a pressing concern: despite substantial financial support from international donors, state-funded legal aid services often fall short of meeting the needs of poor and vulnerable populations. The study delves into the underlying complexities of this issue and examines the structural, operational, and systemic challenges faced by legal aid providers, shedding light on the factors contributing to the ineffectiveness of legal aid services. Furthermore, it seeks to identify the root causes of these issues, revealing the barriers that hinder the delivery of adequate legal aid services. The research adopts a socio-legal methodology to ensure an appropriate combination of multiple methodologies. The findings of this research hold significant implications for both policymakers and practitioners, offering insights into the enhancement of legal aid services and access to justice for disadvantaged and marginalized populations in Tajikistan. By addressing these pressing questions, this study aims to fill the gap in legal literature and contribute to the development of a more equitable and efficient legal aid system that better serves the needs of the most vulnerable members of society.Keywords: access to justice, legal aid, rule of law, rights for council
Procedia PDF Downloads 52
515 Climate Change and Rural-Urban Migration in Brazilian Semiarid Region
Authors: Linda Márcia Mendes Delazeri, Dênis Antônio Da Cunha
Abstract:
Over the past few years, the evidence that human activities have altered the concentration of greenhouse gases in the atmosphere has become stronger, indicating that this accumulation is the most likely cause of the climate change observed so far. The risks associated with climate change, although uncertain, have the potential to increase social vulnerability, exacerbating existing socioeconomic challenges. Developing countries are potentially the most affected by climate change, since they have less capacity to adapt and are the most dependent on agricultural activities, one of the sectors in which the major negative impacts are expected. In Brazil, specifically, the localities that form the semiarid region are expected to be among the most affected, due to the existing irregularity in rainfall and high temperatures, in addition to economic and social factors endemic to the region. Given the strategic limitations in handling the environmental shocks caused by climate change, an alternative adopted in response to these shocks is migration. Understanding the specific features of migration flows, such as duration, destination and composition, is essential to understand the impacts of migration on origin and destination locations and to develop appropriate policies. Thus, this study aims to examine whether climatic factors have contributed to rural-urban migration in semiarid municipalities in the recent past and how these migration flows will be affected by future scenarios of climate change. The study was based on the microeconomic theory of utility maximization, in which the individual decides to leave the countryside and move to the urban area in order to maximize his or her utility. Analytically, we estimated an econometric model using fixed-effects modeling, and the results confirmed the expectation that climate drivers are crucial for the occurrence of rural-urban migration. Other drivers of the migration process, such as economic, social and demographic factors, were also important. Additionally, predictions of rural-urban migration motivated by variations in temperature and precipitation in the climate change scenarios RCP 4.5 and 8.5 were made for the periods 2016-2035 and 2046-2065, defined by the Intergovernmental Panel on Climate Change (IPCC). The results indicate that rural-urban migration in the semiarid region will increase in both scenarios and in both periods. In general, the results of this study reinforce the need to formulate public policies that avoid migration for climatic reasons, such as policies that support income-generating productive activities in rural areas. By providing greater incentives for family agriculture and expanding sources of credit, farmers will be in a better position to face climate adversities and to remain in rural areas. Ultimately, if migration becomes necessary, policies must be adopted that seek an organized and planned development of urban areas, considering migration as an adaptation strategy to adverse climate effects. Thus, policies that act to absorb migrants in urban areas and ensure that they have access to the basic services offered to the urban population would contribute to reducing the social costs of climate variability.
Keywords: climate change, migration, rural productivity, semiarid region
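The fixed-effects estimation mentioned above can be sketched as a municipality-year panel regression with entity and time effects. The variable names, the data file, and the use of the linearmodels package are illustrative assumptions rather than the study's actual specification.

```python
# Sketch of a two-way fixed-effects panel model: rural-urban out-migration regressed on
# climate and socioeconomic drivers with municipality and year effects.
# Variable names and the CSV are illustrative placeholders.
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("semiarid_panel.csv")        # municipality-year panel
df = df.set_index(["municipality", "year"])   # MultiIndex (entity, time) required by PanelOLS

formula = (
    "out_migration ~ temperature + precipitation + rural_income"
    " + urban_wage_gap + EntityEffects + TimeEffects"
)
results = PanelOLS.from_formula(formula, data=df).fit(
    cov_type="clustered", cluster_entity=True  # cluster standard errors by municipality
)
print(results.summary)
```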
Procedia PDF Downloads 352
514 Analyzing Data Protection in the Era of Big Data under the Framework of Virtual Property Layer Theory
Authors: Xiaochen Mu
Abstract:
Data rights confirmation, as a key legal issue in the development of the digital economy, is undergoing a transition from a traditional rights paradigm to a more complex private-economic paradigm. In this process, data rights confirmation has evolved from a simple claim of rights to a complex structure encompassing multiple dimensions of personality rights and property rights. Current data rights confirmation practices are primarily reflected in two models: holistic rights confirmation and process rights confirmation. The holistic rights confirmation model continues the traditional "one object, one right" theory, while the process rights confirmation model, through contractual relationships in the data processing process, recognizes rights that are more adaptable to the needs of data circulation and value release. In the design of the data property rights system, there is a hierarchical characteristic aimed at decoupling from raw data to data applications through horizontal stratification and vertical staging. This design not only respects the ownership rights of data originators but also, based on the usufructuary rights of enterprises, constructs a corresponding rights system for different stages of data processing activities. The subjects of data property rights include both data originators, such as users, and data producers, such as enterprises, who enjoy different rights at different stages of data processing. The intellectual property rights system, with the mission of incentivizing innovation and promoting the advancement of science, culture, and the arts, provides a complete set of mechanisms for protecting innovative results. However, unlike traditional private property rights, the granting of intellectual property rights is not an end in itself; the purpose of the intellectual property system is to balance the exclusive rights of the rights holders with the prosperity and long-term development of society's public learning and the entire field of science, culture, and the arts. Therefore, the intellectual property granting mechanism provides both protection and limitations for the rights holder. This perfectly aligns with the dual attributes of data. In terms of achieving the protection of data property rights, the granting of intellectual property rights is an important institutional choice that can enhance the effectiveness of the data property exchange mechanism. Although this is not the only path, the granting of data property rights within the framework of the intellectual property rights system helps to establish fundamental legal relationships and rights confirmation mechanisms and is more compatible with the classification and grading system of data. The modernity of the intellectual property rights system allows it to adapt to the needs of big data technology development through special clauses or industry guidelines, thus promoting the comprehensive advancement of data intellectual property rights legislation. This paper analyzes data protection under the virtual property layer theory and two-fold virtual property rights system. Based on the “bundle of right” theory, this paper establishes specific three-level data rights. This paper analyzes the cases: Google v. Vidal-Hall, Halliday v Creation Consumer Finance, Douglas v Hello Limited, Campbell v MGN and Imerman v Tchenquiz. 
This paper concluded that recognizing property rights over personal data and protecting data under the framework of intellectual property will be beneficial to establish the tort of misuse of personal information.
Keywords: data protection, property rights, intellectual property, Big data
Procedia PDF Downloads 41
513 Comparison of a Capacitive Sensor Functionalized with Natural or Synthetic Receptors Selective towards Benzo(a)Pyrene
Authors: Natalia V. Beloglazova, Pieterjan Lenain, Martin Hedstrom, Dietmar Knopp, Sarah De Saeger
Abstract:
In recent years polycyclic aromatic hydrocarbons (PAHs), which represent a hazard to humans and entire ecosystem, have been receiving an increased interest due to their mutagenic, carcinogenic and endocrine disrupting properties. They are formed in all incomplete combustion processes of organic matter and, as a consequence, ubiquitous in the environment. Benzo(a)pyrene (BaP) is on the priority list published by the Environmental Agency (US EPA) as the first PAH to be identified as a carcinogen and has often been used as a marker for PAHs contamination in general. It can be found in different types of water samples, therefore, the European Commission set up a limit value of 10 ng L–1 (10 ppt) for BAP in water intended for human consumption. Generally, different chromatographic techniques are used for PAHs determination, but these assays require pre-concentration of analyte, create large amounts of solvent waste, and are relatively time consuming and difficult to perform on-site. An alternative robust, stand-alone, and preferably cheap solution is needed. For example, a sensing unit which can be submerged in a river to monitor and continuously sample BaP. An affinity sensor based on capacitive transduction was developed. Natural antibodies or their synthetic analogues can be used as ligands. Ideally the sensor should operate independently over a longer period of time, e.g. several weeks or months, therefore the use of molecularly imprinted polymers (MIPs) was discussed. MIPs are synthetic antibodies which are selective for a chosen target molecule. Their robustness allows application in environments for which biological recognition elements are unsuitable or denature. They can be reused multiple times, which is essential to meet the stand-alone requirement. BaP is a highly lipophilic compound and does not contain any functional groups in its structure, thus excluding non-covalent imprinting methods based on ionic interactions. Instead, the MIPs syntheses were based on non-covalent hydrophobic and π-π interactions. Different polymerization strategies were compared and the best results were demonstrated by the MIPs produced using electropolymerization. 4-vinylpyridin (VP) and divinylbenzene (DVB) were used as monomer and cross-linker in the polymerization reaction. The selectivity and recovery of the MIP were compared to a non-imprinted polymer (NIP). Electrodes were functionalized with natural receptor (monoclonal anti-BaP antibody) and with MIPs selective towards BaP. Different sets of electrodes were evaluated and their properties such as sensitivity, selectivity and linear range were determined and compared. It was found that both receptor can reach the cut-off level comparable to the established ML, and despite the fact that the antibody showed the better cross-reactivity and affinity, MIPs were more convenient receptor due to their ability to regenerate and stability in river till 7 days.Keywords: antibody, benzo(a)pyrene, capacitive sensor, MIPs, river water
Procedia PDF Downloads 304
512 Exploring Accessible Filmmaking and Video for Deafblind Audiences through Multisensory Participatory Design
Authors: Aikaterini Tavoulari, Mike Richardson
Abstract:
Objective: This abstract presents a multisensory participatory design project, inspired by a deafblind PhD student's ambition to climb Mount Everest. The project aims to explore accessible routes for filmmaking and video content creation, catering to the needs of individuals with hearing and sight loss. By engaging participants from the Southwest area of England, recruited through multiple networks, the project seeks to gather qualitative data and insights to inform the development of inclusive media practices. Design: It will be a community-based participatory research design. The workshop will feature various stations that stimulate different senses, such as scent, touch, sight, hearing as well as movement. Participants will have the opportunity to engage with these multisensory experiences, providing valuable feedback on their effectiveness and potential for enhancing accessibility in filmmaking and video content. Methods: Brief semi-structured interviews will be conducted to collect qualitative data, allowing participants to share their perspectives, challenges, and suggestions for improvement. The participatory design approach emphasizes the importance of involving the target audience in the creative process. By actively engaging individuals with hearing and sight loss, the project aims to ensure that their needs and preferences are central to the development of accessible filmmaking techniques and video content. This collaborative effort seeks to bridge the gap between content creators and diverse audiences, fostering a more inclusive media landscape. Results: The findings from this study will contribute to the growing body of research on accessible filmmaking and video content creation. Via inductive thematic analysis of the qualitative data collected through interviews and observations, the researchers aim to identify key themes, challenges, and opportunities for creating engaging and inclusive media experiences for deafblind audiences. The insights will inform the development of best practices and guidelines for accessible filmmaking, empowering content creators to produce more inclusive and immersive video content. Conclusion: The abstract targets the hybrid International Conference for Disability and Diversity in Canada (January 2025), as this platform provides an excellent opportunity to share the outcomes of the project with a global audience of researchers, practitioners, and advocates working towards inclusivity and accessibility in various disability domains. By presenting this research at the conference in person, the authors aim to contribute to the ongoing discourse on disability and diversity, highlighting the importance of multisensory experiences and participatory design in creating accessible media content for the deafblind community and the community with sensory impairments more broadly.Keywords: vision impairment, hearing impairment, deafblindness, accessibility, filmmaking
Procedia PDF Downloads 45
511 Collaborative Procurement in the Pursuit of Net-Zero: A Converging Journey
Authors: Bagireanu Astrid, Bros-Williamson Julio, Duncheva Mila, Currie John
Abstract:
The Architecture, Engineering, and Construction (AEC) sector plays a critical role in the global transition toward sustainable and net-zero built environments. However, the industry faces unique challenges in planning for net-zero while struggling with low productivity, cost overruns and overall resistance to change. Traditional practices fall short due to their inability to meet the requirements for systemic change, especially as governments increasingly demand transformative approaches. Working in silos and rigid hierarchies and a short-term, client-centric approach prioritising immediate gains over long-term benefit stands in stark contrast to the fundamental requirements for the realisation of net-zero objectives. These practices have limited capacity to effectively integrate AEC stakeholders and promote the essential knowledge sharing required to address the multifaceted challenges of achieving net-zero. In the context of built environment, procurement may be described as the method by which a project proceeds from inception to completion. Collaborative procurement methods under the Integrated Practices (IP) umbrella have the potential to align more closely with net-zero objectives. This paper explores the synergies between collaborative procurement principles and the pursuit of net zero in the AEC sector, drawing upon the shared values of cross-disciplinary collaboration, Early Supply Chain involvement (ESI), use of standards and frameworks, digital information management, strategic performance measurement, integrated decision-making principles and contractual alliancing. To investigate the role of collaborative procurement in advancing net-zero objectives, a structured research methodology was employed. First, the study focuses on a systematic review on the application of collaborative procurement principles in the AEC sphere. Next, a comprehensive analysis is conducted to identify common clusters of these principles across multiple procurement methods. An evaluative comparison between traditional procurement methods and collaborative procurement for achieving net-zero objectives is presented. Then, the study identifies the intersection between collaborative procurement principles and the net-zero requirements. Lastly, an exploration of key insights for AEC stakeholders focusing on the implications and practical applications of these findings is made. Directions for future development of this research are recommended. Adopting collaborative procurement principles can serve as a strategic framework for guiding the AEC sector towards realising net-zero. Synergising these approaches overcomes fragmentation, fosters knowledge sharing, and establishes a net-zero-centered ecosystem. In the context of the ongoing efforts to amplify project efficiency within the built environment, a critical realisation of their central role becomes imperative for AEC stakeholders. When effectively leveraged, collaborative procurement emerges as a powerful tool to surmount existing challenges in attaining net-zero objectives.Keywords: collaborative procurement, net-zero, knowledge sharing, architecture, built environment
Procedia PDF Downloads 74
510 Nursing Experience in the Intensive Care of a Lung Cancer Patient with Pulmonary Embolism on Extracorporeal Membrane Oxygenation
Authors: Huang Wei-Yi
Abstract:
Objective: This article explores the intensive care nursing experience of a lung cancer patient with pulmonary embolism who was placed on ECMO. Following a sudden change in the patient’s condition and a consensus reached during a family meeting, the decision was made to withdraw life-sustaining equipment and collaborate with the palliative care team. Methods: The nursing period was from October 20 to October 27, 2023. The author monitored physiological data, observed, provided direct care, conducted interviews, performed physical assessments, and reviewed medical records. Together with the critical care team and bypass personnel, a comprehensive assessment was conducted using Gordon's Eleven Functional Health Patterns to identify the patient’s health issues, which included pain related to lung cancer and invasive devices, fear of death due to sudden deterioration, and altered tissue perfusion related to hemodynamic instability. Results: The patient was admitted with fever, back pain, and painful urination. During hospitalization, the patient experienced sudden discomfort followed by cardiac arrest, requiring multiple CPR attempts and ECMO placement. A subsequent CT angiogram revealed a pulmonary embolism. The patient's condition was further complicated by severe pain due to compression fractures, and a diagnosis of terminal lung cancer was unexpectedly confirmed, leading to emotional distress and uncertainty about future treatment. Throughout the critical care process, ECMO was removed on October 24, stabilizing the patient’s body temperature between 36.5-37°C and maintaining a mean arterial pressure of 60-80 mmHg. Pain management, including Morphine 8mg in 0.9% N/S 100ml IV drip q6h PRN and Ultracet 37.5 mg/325 mg 1# PO q6h, kept the pain level below 3. The patient was transferred to the ward on October 27 and discharged home on October 30. Conclusion: During the care period, collaboration with the medical team and palliative care professionals was crucial. Adjustments to pain medication, symptom management, and lung cancer-targeted therapy improved the patient’s physical discomfort and pain levels. By applying the unique functions of nursing and the four principles of palliative care, positive encouragement was provided. Family members, along with social workers, clergy, psychologists, and nutritionists, participated in cross-disciplinary care, alleviating anxiety and fear. The consensus to withdraw ECMO and life-sustaining equipment enabled the patient and family to receive high-quality care and maintain autonomy in decision-making. A follow-up call on November 1 confirmed that the patient was emotionally stable, pain-free, and continuing with targeted lung cancer therapy.Keywords: intensive care, lung cancer, pulmonary embolism, ECMO
Procedia PDF Downloads 30
509 An Exploratory Study of Changing Organisational Practices of Third-Sector Organisations in Mandated Corporate Social Responsibility in India
Authors: Avadh Bihari
Abstract:
Corporate social responsibility (CSR) has become a global parameter to define corporates' ethical and responsible behaviour. It was a voluntary practice in India till 2013, driven by various guidelines, which has become a mandate since 2014 under the Companies Act, 2013. This has compelled the corporates to redesign their CSR strategies by bringing in structures, planning, accountability, and transparency in their processes with a mandate to 'comply or explain'. Based on the author's M.Phil. dissertation, this paper presents the changes in organisational practices and institutional mechanisms of third-sector organisations (TSOs) with the theoretical frameworks of institutionalism and co-optation. It became an interesting case as India is the only country to have a law on CSR, which is not only mandating the reporting but the spending too. The space of CSR in India is changing rapidly and affecting multiple institutions, in the context of the changing roles of the state, market, and TSOs. Several factors such as stringent regulation on foreign funding, mandatory CSR pushing corporates to look out for NGOs, and dependency of Indian NGOs on CSR funds have come to the fore almost simultaneously, which made it an important area of study. Further, the paper aims at addressing the gap in the literature on the effects of mandated CSR on the functioning of TSOs through the empirical and theoretical findings of this study. The author had adopted an interpretivist position in this study to explore changes in organisational practices from the participants' experiences. Data were collected through in-depth interviews with five corporate officials, eleven officials from six TSOs, and two academicians, located at Mumbai and Delhi, India. The findings of this study show the legislation has institutionalised CSR, and TSOs get co-opted in the process of implementing mandated CSR. Seventy percent of the corporates implement their CSR projects through TSOs in India; this has affected the organisational practices of TSOs to a large extent. They are compelled to recruit expert workforce, create new departments for monitoring & evaluation, communications, and adopt management practices of project implementation from corporates. These are attempts to institutionalise the TSOs so that they can produce calculated results as demanded by corporates. In this process, TSOs get co-opted in a struggle to secure funds and lose their autonomy. The normative, coercive, and mimetic isomorphisms of institutionalism come into play as corporates are mandated to take up CSR, thereby influencing the organisational practices of TSOs. These results suggest that corporates and TSOs require an understanding of each other's work culture to develop mutual respect and work towards the goal of sustainable development of the communities. Further, TSOs need to retain their autonomy and understanding of ground realities without which they become an extension of the corporate-funder. For a successful CSR project, engagement beyond funding is required from corporate, through their involvement and not interference. CSR-led community development can be structured by management practices to an extent, but cannot overshadow the knowledge and experience of TSOs.Keywords: corporate social responsibility, institutionalism, organisational practices, third-sector organisations
Procedia PDF Downloads 116
508 Intervening between Family Functioning and Depressive Symptoms: Effect of Deprivation of Liberty, Self-Efficacy and Differentiation of Self
Authors: Jasna Hrncic
Abstract:
Poor family relations predict depression, but also other mental health issues. The mediating effect of self-efficacy and differentiation of self, and the moderating effect of decreased accessibility and/or success of other adaptive and defensive mechanisms for overcoming social disadvantages, could explain depression as a specific outcome of dysfunctional family relations. The present study analyzes the mediation effect of self-efficacy and differentiation of self between poor family functioning and depressive symptoms, and the moderation effect of deprivation of liberty on this mediation effect. Deprivation of liberty has, as a general consequence, decreased accessibility and/or success of many adaptive and defensive mechanisms. It is hypothesized that: 1) self-efficacy and differentiation of self will mediate between family functioning and depressiveness in the total sample, and 2) deprivation of liberty will moderate the stated relations. A cross-sectional study was conducted among 323 male juveniles in Serbia, divided into three groups: 98 adolescents deprived of their liberty due to antisocial behavior (incarcerated antisocial group, IAG), 121 adolescents with antisocial behavior in their natural setting (antisocial control group, ACG) and 105 adolescents from the general population (general control group, GCG). The ACG was included along with the GCG to control for the possible influence that comorbidity of antisocial behavior and depressiveness could have on the results. The instruments for assessing family relations were: for the whole family of origin, the emotional exchange scale and individuation scale from GRADIR by Knezevic; and for the relationship with the mother, the PCS-YSR and CRPBI by Barber, and the intimacy, rejection, sacrifice, punishment, demands, control and internal control scales by Opacic and Kos. Differentiation of self (DOS) was measured by the emotional self scale (Opacic), self-efficacy (SE) by the general incompetence scale by Bezinovic, and depression by the BDI (Beck), CES-D (Radloff) and D6R (Momirovic). Two-path structural equation modeling based on the most commonly reported fit indices showed that the mediation model had an unfavorable fit to our data for the total sample [χ2(1, N = 324) = 13.73; RMSEA = .20 (90% CI [.12, .30]); CFI = .98; NFI = .97; AIC = 31.73]. The path model provided an adequate fit to the data only for the IAG, and not for the ACG and GCG. SE and DOS mediated the relationship between poor family functioning (PFF) and depressiveness. A test of the indirect effects revealed that 23.85% of the influence of PFF on depressiveness is mediated by these two mediators (quotient of mediated effect = .24). A test of specific indirect effects showed that SE mediates 22.17%, and DOS 1.67%, of the PFF influence on depressiveness. The lack of the expected mediation effect could be explained by the omission of other potential mediators (e.g., relationship with the father, social skills, self-esteem) and by the lower variability of both the predictor and criterion variables due to their low levels in the whole sample and in the control subsamples. The results suggest that the inaccessibility and/or success of other adaptive and defensive mechanisms for overcoming social disadvantages has a strong impact on the mediation effect of self-efficacy and differentiation of self between poor family functioning and depressive symptoms. Further research could include other potential mediators and a sample of clinically depressed people.
Keywords: antisocial behavior, mediating effect, moderating effect, natural setting, incarceration
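A simplified stand-in for the indirect-effect test reported above is a product-of-coefficients mediation analysis with a bootstrap confidence interval, shown below for the self-efficacy path only. The full study used a two-path structural equation model; the column names and the data file here are hypothetical.

```python
# Sketch of a product-of-coefficients mediation test with a bootstrap CI:
# poor family functioning (PFF) -> self-efficacy (SE) -> depressiveness (DEP).
# A simplified stand-in for the reported SEM; column names and the CSV are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("adolescents.csv")  # PFF, SE, DEP scores, one row per participant

def indirect_effect(data):
    a = smf.ols("SE ~ PFF", data=data).fit().params["PFF"]       # path a: PFF -> SE
    b = smf.ols("DEP ~ SE + PFF", data=data).fit().params["SE"]  # path b: SE -> DEP, PFF controlled
    return a * b

rng = np.random.default_rng(0)
boot = np.array([
    indirect_effect(df.sample(len(df), replace=True, random_state=int(rng.integers(1_000_000))))
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(df):.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```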
Procedia PDF Downloads 117
507 Edmonton Urban Growth Model as a Support Tool for the City Plan Growth Scenarios Development
Authors: Sinisa J. Vukicevic
Abstract:
Edmonton is currently one of the youngest North American cities and has achieved significant growth over the past 40 years. Strong urban shift requires a new approach to how the city is envisioned, planned, and built. This approach is evidence-based scenario development, and an urban growth model was a key support tool in framing Edmonton development strategies, developing urban policies, and assessing policy implications. The urban growth model has been developed using the Metronamica software platform. The Metronamica land use model evaluated the dynamic of land use change under the influence of key development drivers (population and employment), zoning, land suitability, and land and activity accessibility. The model was designed following the Big City Moves ideas: become greener as we grow, develop a rebuildable city, ignite a community of communities, foster a healing city, and create a city of convergence. The Big City Moves were converted to three development scenarios: ‘Strong Central City’, ‘Node City’, and ‘Corridor City’. Each scenario has a narrative story that expressed scenario’s high level goal, scenario’s approach to residential and commercial activities, to transportation vision, and employment and environmental principles. Land use demand was calculated for each scenario according to specific density targets. Spatial policies were analyzed according to their level of importance within the policy set definition for the specific scenario, but also through the policy measures. The model was calibrated on the way to reproduce known historical land use pattern. For the calibration, we used 2006 and 2011 land use data. The validation is done independently, which means we used the data we did not use for the calibration. The model was validated with 2016 data. In general, the modeling process contain three main phases: ‘from qualitative storyline to quantitative modelling’, ‘model development and model run’, and ‘from quantitative modelling to qualitative storyline’. The model also incorporates five spatial indicators: distance from residential to work, distance from residential to recreation, distance to river valley, urban expansion and habitat fragmentation. The major finding of this research could be looked at from two perspectives: the planning perspective and technology perspective. The planning perspective evaluates the model as a tool for scenario development. Using the model, we explored the land use dynamic that is influenced by a different set of policies. The model enables a direct comparison between the three scenarios. We explored the similarities and differences of scenarios and their quantitative indicators: land use change, population change (and spatial allocation), job allocation, density (population, employment, and dwelling unit), habitat connectivity, proximity to objects of interest, etc. From the technology perspective, the model showed one very important characteristic: the model flexibility. The direction for policy testing changed many times during the consultation process and model flexibility in applying all these changes was highly appreciated. The model satisfied our needs as scenario development and evaluation tool, but also as a communication tool during the consultation process.Keywords: urban growth model, scenario development, spatial indicators, Metronamica
Procedia PDF Downloads 95
506 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker
Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán
Abstract:
The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, some encoding methods, such as one-hot encoding or k-mers, have been explored. This work proposes additional approaches for encoding DNA sequences in order to compare them with existing techniques and determine whether they can provide improvements or whether current methods offer superior results. Data from the 16S rRNA gene, a universal marker, were used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genera included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in GenBank file format, followed by a syntactic analysis to selectively extract relevant information from each file. For data encoding, a sequence normalization process was carried out as the first step. From approximately 22,000 initial data points, a subset was generated for testing purposes. Specifically, 55 sequences from each bacterial group met the length criteria, resulting in an initial sample of approximately 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, the Fourier transform, and the wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods. The performance of these models was assessed using multiple metrics, including the confusion matrix, ROC curve, and F1 score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracies between encoding methods vary by up to approximately 15%, with the Fourier transform obtaining the best results for the evaluated machine learning algorithms. These findings, supported by the detailed analysis using the confusion matrix, ROC curve, and F1 score, provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies.
Keywords: DNA encoding, machine learning, Fourier transform, Fourier transformation
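Two of the encodings compared in the study, one-hot matrices and k-mer frequency vectors, can be sketched as follows; the example sequence is a made-up fragment rather than real 16S rRNA data.

```python
# Sketch of two DNA sequence encodings: a one-hot matrix and a normalized k-mer
# frequency vector. The example sequence is a made-up fragment, not real 16S data.
from collections import Counter
from itertools import product
import numpy as np

BASES = "ACGT"

def one_hot(seq: str) -> np.ndarray:
    """Encode a DNA string as a (len(seq), 4) binary matrix."""
    index = {base: i for i, base in enumerate(BASES)}
    mat = np.zeros((len(seq), 4), dtype=np.float32)
    for pos, base in enumerate(seq):
        if base in index:               # ambiguous bases (N, ...) stay all-zero
            mat[pos, index[base]] = 1.0
    return mat

def kmer_vector(seq: str, k: int = 3) -> np.ndarray:
    """Encode a DNA string as normalized counts over all 4**k possible k-mers."""
    vocab = ["".join(p) for p in product(BASES, repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts[kmer] for kmer in vocab), 1)
    return np.array([counts[kmer] / total for kmer in vocab], dtype=np.float32)

seq = "ACGTACGGTTCAGCGTTAACG"
print(one_hot(seq).shape)         # (21, 4)
print(kmer_vector(seq, k=3)[:8])  # first 8 of the 64 trinucleotide frequencies
```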
Procedia PDF Downloads 28
505 Validation of Mapping Historical Linked Data to International Committee for Documentation (CIDOC) Conceptual Reference Model Using Shapes Constraint Language
Authors: Ghazal Faraj, András Micsik
Abstract:
Shapes Constraint Language (SHACL), a World Wide Web Consortium (W3C) language, provides well-defined shapes expressed as RDF graphs, named "shape graphs". These shape graphs validate other resource description framework (RDF) graphs, which are called "data graphs". The structural features of SHACL permit generating a variety of conditions to evaluate string matching patterns, value types, and other constraints. Moreover, the SHACL framework supports high-level validation by expressing more complex conditions in languages such as the SPARQL Protocol and RDF Query Language (SPARQL). SHACL consists of two parts: SHACL Core and SHACL-SPARQL. SHACL Core includes the shapes that cover the most frequent constraint components, while SHACL-SPARQL is an extension that allows SHACL to express more complex, customized constraints. Validating the efficacy of dataset mapping is an essential component of reconciled data mechanisms, as the enhancement of linking between different datasets is a continuous process. The conventional validation methods are a semantic reasoner and SPARQL queries. The former checks formalization errors and data type inconsistencies, while the latter detects data contradictions. After executing SPARQL queries, the retrieved information needs to be checked manually by an expert. However, this methodology is time-consuming and inaccurate, as it does not test the mapping model comprehensively. Therefore, there is a serious need for a new methodology that covers all validation aspects of linking and mapping diverse datasets. Our goal is to develop a new approach that achieves optimal validation outcomes. The first step towards this goal is implementing SHACL to validate the mapping between the International Committee for Documentation (CIDOC) conceptual reference model (CRM) and one of its ontologies. To initiate this project successfully, a thorough understanding of both the source and target ontologies was required. Subsequently, the proper environment to run SHACL and its shape graphs was determined. As a case study, we performed SHACL validation over a CIDOC-CRM dataset after running the Pellet reasoner via the Protégé program. The applied validation falls under multiple categories: a) data type validation, which checks whether the source data is mapped to the correct data type, for instance, whether a birthdate is typed as xsd:dateTime and linked to the Person entity via the crm:P82a_begin_of_the_begin property; b) data integrity validation, which detects inconsistent data, for instance, by inspecting whether a person's birthdate occurred before any of the linked event creation dates. The expected results of our work are: 1) highlighting validation techniques and categories, and 2) selecting the most suitable techniques for the various categories of validation tasks. The next step is to establish a comprehensive validation model and generate SHACL shapes automatically.
Keywords: SHACL, CIDOC-CRM, SPARQL, validation of ontology mapping
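A minimal sketch of the data type check described in category (a) can be run with the pySHACL library: a shape targeting crm:E21_Person requires crm:P82a_begin_of_the_begin values to be xsd:dateTime. The tiny inline graphs and the example namespace are illustrative, not the CIDOC-CRM dataset used in the paper.

```python
# Sketch: a SHACL Core datatype check with pySHACL. The inline graphs are illustrative.
from rdflib import Graph
from pyshacl import validate

shapes_ttl = """
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix ex:  <http://example.org/> .

ex:PersonBirthDateShape
    a sh:NodeShape ;
    sh:targetClass crm:E21_Person ;
    sh:property [
        sh:path crm:P82a_begin_of_the_begin ;
        sh:datatype xsd:dateTime ;
    ] .
"""

data_ttl = """
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix ex:  <http://example.org/> .

ex:person1 a crm:E21_Person ;
    crm:P82a_begin_of_the_begin "1890-05-14"^^xsd:date .   # deliberately the wrong datatype
"""

shapes_graph = Graph().parse(data=shapes_ttl, format="turtle")
data_graph = Graph().parse(data=data_ttl, format="turtle")

conforms, _, report_text = validate(data_graph, shacl_graph=shapes_graph)
print("conforms:", conforms)   # False: the literal is xsd:date, not xsd:dateTime
print(report_text)
```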
Procedia PDF Downloads 254
504 Effects of Macro and Micro Nutrients on Growth and Yield Performances of Tomato (Lycopersicon esculentum MILL.)
Authors: K. M. S. Weerasinghe, A. H. K. Balasooriya, S. L. Ransingha, G. D. Krishantha, R. S. Brhakamanagae, L. C. Wijethilke
Abstract:
Tomato (Lycopersicon esculentum Mill.) is a major horticultural crop with an estimated global production of over 120 million metric tons and ranks first as a processing crop. The average tomato productivity in Sri Lanka (11 metric tons/ha) is much lower than the world average (24 metric tons/ha).To meet the tomato demand for the increasing population the productivity has to be intensified through the agronomic-techniques. Nutrition is one of the main factors which govern the growth and yield of tomato and the main nutrient source soil affect the plant growth and quality of the produce. Continuous cropping, improper fertilizer usage etc., cause widespread nutrient deficiencies. Therefore synthetic fertilizers and organic manures were introduced to enhance plant growth and maximize the crop yields. In this study, effects of macro and micronutrient supplementations on improvement of growth and yield of tomato were investigated. Selected tomato variety is Maheshi and plants were grown in Regional Agricultural and Research Centre Makadura under the Department of Agriculture recommended (DOA) macro nutrients and various combination of Ontario recommended dosages of secondary and micro fertilizer supplementations. There were six treatments in this experiment and each treatment was replicated in three times and each replicate consisted of six plants. Other than the DOA recommendation, five combinations of Ontario recommended dosage of secondary and micronutrients for tomato were also used as treatments. The treatments were arranged in a Randomized Complete Block Design. All cultural practices were carried out according to the DOA recommendations. The mean data was subjected to the statistical analysis using SAS package and mean separation (Duncan’s Multiple Range test at 5% probability level) procedures. Secondary and micronutrients containing treatments significantly increased most of the growth parameters. Plant height, plant girth, number of leaves, leaf area index etc. Fruits harvested from pots amended with macro, secondary and micronutrients performed best in terms of total yield; yield quality; to pots amended with DOA recommended dosage of fertilizer for tomato. It could be due to the application of all essential macro and micro nutrients that rise in photosynthetic activity, efficient translocation and utilization of photosynthates causing rapid cell elongation and cell division in actively growing region of the plant leading to stimulation of growth and yield were caused. The experiment revealed and highlighted the requirements of essential macro, secondary and micro nutrient fertilizer supplementations for tomato farming. The study indicated that, macro and micro nutrient supplementation practices can influence growth and yield performances of tomato fruits and it is a promising approach to get potential tomato yields.Keywords: macro and micronutrients, tomato, SAS package, photosynthates
Procedia PDF Downloads 476503 The Impact of Gestational Weight Gain on Subclinical Atherosclerosis, Placental Circulation and Neonatal Complications
Authors: Marina Shargorodsky
Abstract:
Aim: Gestational weight gain (GWG) has been linked to altered future weight-gain trajectories and an increased risk of obesity later in life. Obesity may contribute to vascular atherosclerotic changes as well as the excess cardiovascular morbidity and mortality observed in these patients. Noninvasive arterial testing, such as ultrasonographic measurement of carotid intima-media thickness (IMT), is considered a surrogate for systemic atherosclerotic disease burden and is predictive of cardiovascular events in asymptomatic individuals as well as recurrent events in patients with known cardiovascular disease. Currently, there is no consistent evidence regarding the vascular impact of excessive GWG. The present study was designed to investigate the impact of GWG on early atherosclerotic changes during late pregnancy, using intima-media thickness, as well as on placental vascular circulation, inflammatory lesions and pregnancy outcomes. Methods: The study group consisted of 59 pregnant women who gave birth and underwent a placental histopathological examination at the Department of Obstetrics and Gynecology, Edith Wolfson Medical Center, Israel, in 2019. According to the IOM guidelines, the study group was divided into two groups: Group 1 included 32 women with pregnancy weight gain within the recommended range; Group 2 included 27 women with excessive weight gain during pregnancy. The IMT was measured from non-diseased intimal and medial wall layers of the carotid artery on both sides, visualized by high-resolution 7.5 MHz ultrasound (Apogee CX Color, ATL). Placental histology subdivided placental findings into lesions consistent with maternal vascular and fetal vascular malperfusion, according to the criteria of the Society for Pediatric Pathology, as well as inflammatory responses of maternal and fetal origin. Results: IMT levels differed between groups and were significantly higher in Group 1 compared to Group 2 (0.7+/-0.1 vs 0.6+/-0.1, p=0.028). Multiple linear regression analysis of IMT included variables based on their associations in univariate analyses, with a backward approach. Included in the model were pre-gestational BMI, HDL cholesterol and fasting glucose. The model was significant (p=0.001) and correctly classified 64.7% of study patients. In this model, pre-pregnancy BMI remained a significant independent predictor of subclinical atherosclerosis assessed by IMT (OR 4.314, 95% CI 0.0599-0.674, p=0.044). Among placental lesions related to fetal vascular malperfusion, villous changes consistent with fetal thrombo-occlusive disease (FTOD) were significantly higher in Group 1 than in Group 2 (p=0.034). In conclusion, the present study demonstrated that excessive weight gain during pregnancy is associated with an adverse effect on the early stages of subclinical atherosclerosis, placental vascular circulation and neonatal complications. The precise mechanism for these vascular changes, as well as the overall clinical impact of weight control during pregnancy on IMT, placental vascular circulation and pregnancy outcomes, deserves further investigation.Keywords: obesity, pregnancy, complications, weight gain
Procedia PDF Downloads 54502 Risks beyond Cyber in IoT Infrastructure and Services
Authors: Mattias Bergstrom
Abstract:
Significance of the Study: This research will provide new insights into the risks associated with digital embedded infrastructure. Through this research, we will analyze each risk and its potential mitigation strategies, especially for AI and autonomous automation. Moreover, the analysis presented in this paper will convey valuable information for future research that can create more stable, secure, and efficient autonomous systems. To learn and understand the risks, a large IoT system was envisioned, and risks related to hardware, tampering, and cyberattacks were collected, researched, and evaluated to create a comprehensive understanding of the potential risks. Potential solutions were then evaluated on an open-source IoT hardware setup. The following list shows the identified passive and active risks evaluated in the research. Passive Risks: (1) Hardware failures: critical systems relying on high-rate data and data quality are growing; SCADA systems for infrastructure are good examples of such systems. (2) Hardware delivering erroneous data: sensors break, and when they do, they don't always go silent; they can keep going, except that the data they deliver is garbage, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad hardware injection: erroneously generated sensor data can be pumped into a system by malicious actors with the intent of creating disruptive noise in critical systems. (4) Data gravity: the weight of the data collected will affect data mobility. (5) Cost inhibitors: running services that need huge centralized computing is cost-inhibiting; large, complex AI can be extremely expensive to run. Active Risks: Denial of service: one of the simplest attacks, where an attacker just overloads the system with bogus requests so that valid requests disappear in the noise. Malware: malware can be anything from simple viruses to complex botnets created with specific goals, where the creator steals computing power and bandwidth from you to attack someone else. Ransomware: a kind of malware, but so different in its implementation that it is worth its own mention; the goal of these pieces of software is to encrypt your system so that it can only be unlocked with a key that is held for ransom. DNS spoofing: by spoofing DNS calls, valid requests and data dumps can be sent to bad destinations, where the data can be extracted for extortion or corrupted and re-injected into a running system, creating a data echo noise loop. After testing multiple potential solutions, we found that the most prominent solution to these risks was to use a peer-to-peer consensus algorithm over a blockchain to validate the data and behavior of the devices (sensors, storage, and computing) in the system. With the devices autonomously policing themselves for deviant behavior, all the risks listed above can be mitigated. In conclusion, an Internet middleware that provides these features would be an easy and secure solution for any future autonomous IoT deployments, as it provides separation from the open Internet while remaining accessible via blockchain keys.Keywords: IoT, security, infrastructure, SCADA, blockchain, AI
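The following minimal sketch illustrates only the underlying consensus idea, devices policing one another's readings against the peer consensus, and is not the blockchain-backed middleware proposed in the abstract; the device names, readings and tolerance are hypothetical.

```python
# A minimal sketch of peer-consensus filtering of deviant sensor readings:
# a reading is accepted only if it agrees with the peer median within a tolerance.
from statistics import median

def consensus_filter(readings: dict[str, float], tolerance: float = 3.0) -> dict[str, bool]:
    """Return, per device id, whether its reading is consistent with the peer median."""
    values = list(readings.values())
    center = median(values)
    # median absolute deviation as a robust estimate of spread
    mad = median(abs(v - center) for v in values) or 1e-9
    return {dev: abs(val - center) <= tolerance * mad for dev, val in readings.items()}

readings = {"sensor-a": 21.4, "sensor-b": 21.7, "sensor-c": 98.3, "sensor-d": 21.5}
print(consensus_filter(readings))
# {'sensor-a': True, 'sensor-b': True, 'sensor-c': False, 'sensor-d': True}
```

In a blockchain-backed deployment this kind of check would run on every peer, with the agreed-upon readings and any flagged misbehavior recorded on the shared ledger.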
Procedia PDF Downloads 107501 Optimization of the Performance of a Solar Concentrator System with a Cavity Receiver Using the Genetic Algorithm
Authors: Foozhan Gharehkhani
Abstract:
The use of solar energy as a sustainable renewable energy source has gained significant attention in recent years. Concentrating solar power (CSP) systems, which direct solar radiation onto a receiver, are an effective means of producing high-temperature thermal energy. Cavity receivers, known for their high thermal efficiency and reduced heat losses, are particularly noteworthy in these systems, and optimizing their design can enhance energy efficiency and reduce costs. This study leverages the genetic algorithm, a powerful optimization tool inspired by natural evolution, to optimize the performance of a solar concentrator system with a cavity receiver, aiming for a more efficient and cost-effective design. In this study, a system consisting of a solar concentrator and a cavity receiver was analyzed. The concentrator was designed as a parabolic dish, and the receiver had a cylindrical cavity with a helical structure. The primary parameters were defined as the cavity diameter (D), the receiver height (h), and the helical pipe diameter (d). Initially, the system was optimized to achieve the maximum heat flux, and the optimal parameter values along with the maximum heat flux were obtained. Subsequently, a multi-objective optimization approach was applied, aiming to maximize the heat flux while minimizing the system construction cost. The optimization process was carried out using the genetic algorithm implemented in MATLAB. The results of this study revealed that the optimal dimensions of the receiver, namely the cavity diameter (D), receiver height (h), and helical pipe diameter (d), were 0.142 m, 0.1385 m, and 0.011 m, respectively. This optimization resulted in improvements of 3% in the cavity diameter, 8% in the height, and 5% in the helical pipe diameter. Furthermore, the results indicated that the primary focus of this research was the accurate thermal modeling of the solar collection system. The simulations and the obtained results demonstrated that the optimization applied to this system maximized its thermal performance and raised its energy efficiency to a desirable level. Moreover, this study successfully modeled and controlled temperature variations at different angles of solar irradiation, highlighting significant improvements in system efficiency. The significance of this research lies in leveraging solar energy as one of the prominent renewable energy sources, playing a key role in replacing fossil fuels. Considering the environmental and economic challenges associated with the excessive use of fossil resources, such as increased greenhouse gas emissions, environmental degradation, and the depletion of fossil energy reserves, developing technologies related to renewable energy has become a vital priority. Among these, solar concentrating systems, capable of achieving high temperatures, are particularly important for industrial and heating applications. This research aims to optimize the performance of such systems through precise design and simulation, making a significant contribution to the advancement of advanced technologies and the efficient utilization of solar energy in Iran, thereby addressing the country's future energy needs effectively.Keywords: cavity receiver, genetic algorithm, optimization, solar concentrator system performance
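A minimal genetic-algorithm sketch of the optimization loop is given below. The study used MATLAB's GA together with a detailed thermal model; here the objective function is only a placeholder surrogate, and the parameter bounds for D, h and d are assumptions made for illustration.

```python
# A minimal GA sketch for tuning (D, h, d); the objective is a placeholder surrogate,
# not the study's thermal model, and the bounds are assumed for illustration.
import random

BOUNDS = [(0.05, 0.30), (0.05, 0.30), (0.005, 0.020)]  # assumed ranges for D, h, d (m)

def heat_flux(ind):
    """Placeholder surrogate for the cavity receiver heat flux (replace with the real model)."""
    D, h, d = ind
    return -((D - 0.142) ** 2 + (h - 0.1385) ** 2 + 10 * (d - 0.011) ** 2)

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(ind, rate=0.2):
    return [random.uniform(lo, hi) if random.random() < rate else x
            for x, (lo, hi) in zip(ind, BOUNDS)]

def genetic_algorithm(pop_size=60, generations=200):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=heat_flux, reverse=True)  # maximise the objective
        elite = population[: pop_size // 4]           # keep the best quarter
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        population = elite + children
    return max(population, key=heat_flux)

D, h, d = genetic_algorithm()
print(f"optimal D={D:.4f} m, h={h:.4f} m, d={d:.4f} m")
```

A multi-objective variant would replace the single fitness value with a Pareto ranking over heat flux and construction cost, which is the second stage described in the abstract.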
Procedia PDF Downloads 10500 Evaluating the Effectiveness of Mesotherapy and Topical 2% Minoxidil for Androgenic Alopecia in Females, Using Topical 2% Minoxidil as a Common Treatment
Authors: Hamed Delrobai Ghoochan Atigh
Abstract:
Androgenic alopecia (AGA) is a common form of hair loss, affecting approximately 50% of females, which leads to reduced self-esteem and quality of life. It causes progressive follicular miniaturization in genetically predisposed individuals. Mesotherapy (a minimally invasive procedure), topical 2% minoxidil, and oral finasteride have emerged as popular treatment options in the realm of cosmetics. However, the efficacy of mesotherapy compared to other options remains unclear. This study aims to assess the effectiveness of mesotherapy when added to topical 2% minoxidil treatment for female androgenic alopecia. Mesotherapy, also known as intradermotherapy, is a technique that entails administering multiple intradermal injections of a carefully composed mixture of compounds in low doses, applied at various points in close proximity to or directly over the affected areas. This study involves a randomized controlled trial with 100 female participants diagnosed with androgenic alopecia. The subjects were randomly assigned to two groups: Group A used topical 2% minoxidil twice daily and took an oral finasteride tablet. For Group B, 10 mesotherapy sessions were added to the prior treatment. The injections were administered every week in the first month of treatment, every two weeks in the second month, and thereafter monthly for four consecutive months. The response assessment was made at baseline, at the 4th session, and finally after 6 months, when the treatment was complete. Clinical photographs, a 7-point Likert scale patient self-evaluation, and a 7-point Likert scale assessment tool were used to measure the effectiveness of the treatment. During this evaluation, a significant and visible improvement in hair density and thickness was observed. The study demonstrated a significant increase in treatment efficacy in Group B compared to Group A post-treatment, with no adverse effects. Based on the findings, it appears that mesotherapy offers a significant improvement in female AGA over minoxidil. Hair loss was stopped in Group B after one month, and improvement in hair density and thickness was observed after the third month. The findings from this study provide valuable insights into the efficacy of mesotherapy in treating female androgenic alopecia. Our evaluation offers a detailed assessment of hair growth parameters, enabling a better understanding of the treatment's effectiveness. The potential of this promising technique is significantly enhanced when it is carried out in a medical facility, guided by appropriate indications and skillful execution. An interesting observation in our study is that in areas where the hair had turned grey, the newly regrown hair does not retain its original grey color; instead, it becomes darker. The results contribute to evidence-based decision-making in dermatological practice and offer different insights into the treatment of female pattern hair loss.Keywords: androgenic alopecia, female hair loss, mesotherapy, topical 2% minoxidil
Procedia PDF Downloads 103499 The Role of Intraluminal Endoscopy in the Diagnosis and Treatment of Fluid Collections in Patients With Acute Pancreatitis
Authors: A. Askerov, Y. Teterin, P. Yartcev, S. Novikov
Abstract:
Introduction: Acute pancreatitis (AP) is a socially significant public health problem and continues to be one of the most common causes of hospitalization of patients with pathology of the gastrointestinal tract. It is characterized by high mortality rates, which reach 62-65% in infected pancreatic necrosis. Aims & Methods: The study group included 63 patients who underwent transluminal drainage (TLD) of fluid collections (FC). All patients underwent transabdominal ultrasound, computed tomography of the abdominal cavity and retroperitoneal organs, and endoscopic ultrasound (EUS) of the pancreatobiliary zone. EUS was used as the final diagnostic method to determine the characteristics of the FC. The indications for TLD were: a distance between the wall of the hollow organ and the FC of not more than 1 cm, the absence of large vessels (more than 3 mm) on the puncture trajectory, and a formation size of more than 5 cm. When a homogeneous cavity with clear, even contours was detected, a plastic stent with rounded ends ("double pigtail") was installed. The indication for the installation of a fully covered self-expanding stent was the detection of a nonhomogeneous anechoic FC with hyperechoic inclusions and cloudy purulent contents. In patients with necrotic forms, after drainage of the purulent cavity, a cystonasal drain with a diameter of 7 Fr was installed in its lumen under X-ray control to sanitize the cavity with a 0.05% aqueous solution of chlorhexidine. Endoscopic necrectomy was performed every 24-48 hours. The plastic stent was removed 6 months after the patient was discharged from the hospital, and the fully covered self-expanding stent after 1 month. Results: Endoscopic TLD was performed in 63 patients. FC corresponding to interstitial edematous pancreatitis was detected in 39 (62%) patients, who underwent TLD with the installation of a plastic stent with rounded ends. In 24 (38%) patients with necrotic forms of FC, a fully covered self-expanding stent was placed. Communication with the ductal system of the pancreas was found in 5 (7.9%) patients, who underwent pancreaticoduodenal stenting. A complicated postoperative period was noted in 4 (6.3%) cases and was manifested by bleeding from the zone of pancreatogenic destruction. In 2 (3.1%) cases, this required angiography and endovascular embolization of a. gastroduodenalis; in 1 (1.6%) case, endoscopic hemostasis was performed by filling the cavity with 4 ml of Hemoblock hemostatic solution. The combination of both methods was used in 1 (1.6%) patient. There was no evidence of recurrent bleeding in these patients. A lethal outcome occurred in 4 patients (6.3%). In 3 (4.7%) patients, the cause of death was multiple organ failure; in 1 (1.6%) case, it was severe nosocomial pneumonia that developed on the 32nd day after drainage. Conclusions: 1. EUS is not only the most important method for diagnosing FC in AP, but also allows determination of further tactics for their intraluminal drainage. 2. Endoscopic intraluminal drainage of fluid zones is, in 45.8% of cases, the final minimally invasive method of surgical treatment of large-focal pancreatic necrosis. Disclosure: Nothing to disclose.Keywords: acute pancreatitis, fluid collection, endoscopy surgery, necrectomy, transluminal drainage
Procedia PDF Downloads 111498 Developing Granular Sludge and Maintaining High Nitrite Accumulation for Anammox to Treat Municipal Wastewater High-efficiently in a Flexible Two-stage Process
Authors: Zhihao Peng, Qiong Zhang, Xiyao Li, Yongzhen Peng
Abstract:
Nowadays, the conventional nitrogen removal process (nitrification and denitrification) is adopted in most wastewater treatment plants, but many problems have arisen, such as high aeration energy consumption, extra carbon source dosing, and high sludge treatment costs. The emergence of anammox has brought about a great revolution in nitrogen removal technology: only ammonia and nitrite are required to remove nitrogen autotrophically, with no demand for aeration or sludge treatment. However, many challenges remain in anammox applications: difficulty of biomass retention, insufficiency of the nitrite substrate, damage from complex organics, etc. Much effort has been put into overcoming the above challenges, and it has paid off. It is also imperative to establish an innovative process that can settle the above problems simultaneously, since any of the obstacles mentioned above can cause the collapse of an anammox system. Therefore, in this study, a two-stage process was established in which a sequencing batch reactor (SBR) and an upflow anaerobic sludge blanket (UASB) were used in the pre-stage and post-stage, respectively. Domestic wastewater first entered the SBR and went through an anaerobic/aerobic/anoxic (An/O/A) mode; the effluent drawn at the aerobic end of the SBR was mixed with domestic wastewater, and the mixture then entered the UASB. Organic and nitrogen removal performance was evaluated over long-term operation. Throughout operation, most COD was removed in the pre-stage (COD removal efficiency > 64.1%), including some macromolecular organic matter such as tryptophan, tyrosinase and fulvic acid, which could weaken the damage caused by organic matter to anammox. The An/O/A operating mode of the SBR was also beneficial to the achievement and maintenance of partial nitrification (PN); hence, a sufficient and steady nitrite supply was another favorable condition for anammox enhancement. Besides, the flexible mixing ratio helped to attain a substrate ratio appropriate for anammox (1.32-1.46), which further enhanced the anammox process. Furthermore, the UASB was used and a gas recirculation strategy was adopted in the post-stage, aiming to achieve granulation through selection pressure. As expected, granules formed rapidly within 38 days, increasing from 153.3 to 354.3 μm. Based on bioactivity and gene measurements, the anammox metabolism and abundance levels rose markedly, by 2.35 mgN/gVSS·h and 5.3 x 10^9, respectively. The anammox bacteria were mainly distributed in the large granules (>1000 μm), while the biomass in the flocs (<200 μm) and microgranules (200-500 μm) barely displayed anammox bioactivity. Enhanced anammox promoted advanced autotrophic nitrogen removal, which increased from 71.9% to 93.4%, even when the temperature was only 12.9 ℃. Therefore, it is feasible to enhance anammox under the multiple favorable conditions created; this strategy extends the application of anammox to the full-scale mainstream and deepens the understanding of anammox culturing conditions.Keywords: anammox, granules, nitrite accumulation, nitrogen removal efficiency
Procedia PDF Downloads 49497 Assessing the Efficiency of Pre-Hospital Scoring System with Conventional Coagulation Tests Based Definition of Acute Traumatic Coagulopathy
Authors: Venencia Albert, Arulselvi Subramanian, Hara Prasad Pati, Asok K. Mukhophadhyay
Abstract:
Acute traumatic coagulopathy is an endogenous dysregulation of the intrinsic coagulation system in response to injury; it is associated with a three-fold risk of poor outcome and is more amenable to corrective interventions following early identification and management. Multiple definitions for stratification of patients' risk of early acute coagulopathy have been proposed, with considerable variation in the defining criteria, including several trauma-scoring systems based on prehospital data. We aimed to develop a clinically relevant definition of acute coagulopathy of trauma based on conventional coagulation assays and to assess its efficacy in comparison to recently established prehospital prediction models. Methodology: Retrospective data of all trauma patients (n = 490) presenting to our level I trauma center in 2014 were extracted. Receiver operating characteristic curve analysis was performed to establish cut-offs for conventional coagulation assays for the identification of patients with acute traumatic coagulopathy. Prospective data on adult trauma patients (n = 100) were then collected; the cohort was stratified by the established definition, classified as "coagulopathic" or "non-coagulopathic", and correlated with the Prediction of Acute Coagulopathy of Trauma score and the Trauma-Induced Coagulopathy Clinical Score for identifying trauma coagulopathy and the subsequent risk of mortality. Results: Data of 490 trauma patients (average age 31.85±9.04; 86.7% males) were extracted. 53.3% had head injury, 26.6% had fractures, and 7.5% had chest and abdominal injury. Acute traumatic coagulopathy was defined as international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s. Of the 100 adult trauma patients (average age 36.5±14.2; 94% males), 63% had early coagulopathy based on our conventional coagulation assay definition. The overall Prediction of Acute Coagulopathy of Trauma score was 118.7±58.5, and the Trauma-Induced Coagulopathy Clinical Score was 3 (0-8). Both scores were higher in coagulopathic than in non-coagulopathic patients (Prediction of Acute Coagulopathy of Trauma score 123.2±8.3 vs. 110.9±6.8, p-value = 0.31; Trauma-Induced Coagulopathy Clinical Score 4 (3-8) vs. 3 (0-8), p-value = 0.89), but the differences were not statistically significant. Overall mortality was 41%. The mortality rate was significantly higher in coagulopathic than in non-coagulopathic patients (75.5% vs. 54.2%, p-value = 0.04). A high Prediction of Acute Coagulopathy of Trauma score was also significantly associated with mortality (134.2±9.95 vs. 107.8±6.82, p-value = 0.02), whereas the Trauma-Induced Coagulopathy Clinical Score did not vary between survivors and non-survivors. Conclusion: Early coagulopathy was seen in 63% of trauma patients and was significantly associated with mortality. Acute traumatic coagulopathy defined by conventional coagulation assays (international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s) demonstrated a good ability to identify coagulopathy and subsequent mortality, in comparison to the prehospital parameter-based scoring systems. The Prediction of Acute Coagulopathy of Trauma score may be more suited to predicting mortality rather than early coagulopathy.
In emergency trauma situations, where immediate corrective measures need to be taken, complex multivariable scoring algorithms may cause delay, whereas coagulation parameters and conventional coagulation tests will give highly specific results.Keywords: trauma, coagulopathy, prediction, model
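As a simple illustration of how the derived cut-offs could be applied at the bedside or in a data pipeline, the sketch below classifies a patient as coagulopathic when any conventional coagulation assay meets the study's threshold; the any-one-criterion rule and the parameter names are assumptions made for illustration only.

```python
# A minimal sketch applying the conventional-coagulation-assay cut-offs from the study
# (INR >= 1.19, PT >= 15.5 s, aPTT >= 29 s); the any-one-criterion rule is an assumption.
def acute_traumatic_coagulopathy(inr: float, pt_s: float, aptt_s: float) -> bool:
    """Return True if any conventional coagulation assay meets the study cut-off."""
    return inr >= 1.19 or pt_s >= 15.5 or aptt_s >= 29.0

print(acute_traumatic_coagulopathy(inr=1.25, pt_s=14.8, aptt_s=27.0))  # True (INR criterion)
print(acute_traumatic_coagulopathy(inr=1.05, pt_s=13.2, aptt_s=25.5))  # False
```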
Procedia PDF Downloads 176496 Explaining Motivation in Language Learning: A Framework for Evaluation and Research
Authors: Kim Bower
Abstract:
Evaluating and researching motivation in language learning is a complex and multi-faceted activity. Various models for investigating learner motivation have been proposed in the literature, but no single model supplies a complex and coherent framework for investigating a range of motivational characteristics. Here, such a methodological framework, which includes exemplification of sources of evidence and potential methods of investigation, is proposed. The process model for the investigation of motivation within language learning settings proposed here is based on a complex dynamic systems perspective that takes account of cognition and affect. It focuses on three overarching aspects of motivation: the learning environment, learner engagement and learner identities. Within these categories, subsets are defined: the learning environment incorporates teacher-, course- and group-specific aspects of motivation; learner engagement addresses the principal characteristics of learners' perceived value of activities, their attitudes towards language learning, their perceptions of their learning and their engagement in learning tasks; and within learner identities, the principal characteristics of self-concept and mastery of the language are explored. Exemplifications of potential sources of evidence in the model reflect the multiple influences within and between learner and environmental factors and the possible changes in both that may emerge over time. The model was initially developed as a framework for investigating different models of Content and Language Integrated Learning (CLIL) in contrasting contexts in secondary schools in England. The study, from which examples are drawn to exemplify the model, aimed to address the following three research questions: (1) in what ways does CLIL impact on learner motivation? (2) what are the main elements of CLIL that enhance motivation? and (3) to what extent might these be transferable to other contexts? This new model has been tried and tested in three locations in England and reported as case studies. Following an initial visit to each institution to discuss the qualitative research, instruments were developed according to the proposed model. A questionnaire was drawn up and completed by one group prior to a three-day data collection visit to each institution, during which interviews were held with academic leaders, the head of department, the CLIL teacher(s), and two learner focus groups of six to eight learners. Interviews were recorded and transcribed verbatim. Two to four naturalistic observations of lessons were undertaken in each setting, as appropriate to the context, to provide colour and thereby a richer picture. Findings were subjected to an interpretive analysis using the themes derived from the process model and are reported elsewhere. The model proved to be an effective and coherent framework for planning the research, instrument design, data collection and interpretive analysis of data in these three contrasting settings, in which different models of language learning were in place. It is hoped that the proposed model, reported here together with exemplification and commentary, will enable teachers and researchers in a wide range of language learning contexts to investigate learner motivation in a systematic and in-depth manner.Keywords: investigate, language-learning, learner motivation model, dynamic systems perspective
Procedia PDF Downloads 270495 Time Travel Testing: A Mechanism for Improving Renewal Experience
Authors: Aritra Majumdar
Abstract:
While organizations strive to expand their new customer base, retaining existing relationships is a key aspect of improving overall profitability and of showcasing how successful an organization is at holding on to its customers. It is well established that the lion's share of profit comes from existing customers, so seamless management of renewal journeys across different channels goes a long way towards improving trust in the brand. From a quality assurance standpoint, time travel testing provides an approach for both business and technology teams to enhance the customer experience when customers look to extend their partnership with the organization for a defined period of time. This whitepaper will focus on the key pillars of time travel testing: time travel planning, time travel data preparation, and enterprise automation. It will also call out some best practices and common accelerator implementation ideas that are generic across verticals like healthcare, insurance, etc. In this abstract document, a high-level snapshot of these pillars is provided. Time Travel Planning: The first step in setting up a time travel testing roadmap is appropriate planning. Planning includes identifying the impacted systems that need to be time traveled backward or forward depending on the business requirement, aligning time travel with other releases, deciding the frequency of time travel testing, preparing to handle renewal issues in production after time travel testing is done, and, most importantly, planning for test automation during time travel testing. Time Travel Data Preparation: One of the most complex areas in time travel testing is test data coverage. Aligning test data to cover the required customer segments and narrowing it down to multiple offer sequences based on defined parameters are key to successful time travel testing. Another aspect is the availability of sufficient data for similar combinations to support activities like defect retesting, regression testing, post-production testing (if required), etc. This section will describe the necessary steps for suitable data coverage and sufficient data availability from a time travel testing perspective. Enterprise Automation: Time travel testing is never restricted to a single application. The workflow needs to be validated in the downstream applications to ensure consistency across the board. Along with that, the correctness of offers across different digital channels needs to be checked in order to ensure a smooth customer experience. This section will discuss the focus areas of enterprise automation and how automation testing can be leveraged to improve overall quality without compromising the project schedule. Along with the above-mentioned items, the white paper will elaborate on the best practices that need to be followed during time travel testing and some ideas pertaining to accelerator implementation. To sum up, this paper is based on the author's real-world experience with time travel testing. While actual customer names and program-related details will not be disclosed, the paper will highlight the key learnings that will help other teams implement time travel testing successfully.Keywords: time travel planning, time travel data preparation, enterprise automation, best practices, accelerator implementation ideas
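As a small illustration of the time travel idea at the test-automation level, the sketch below freezes the test clock with the freezegun Python library; the renewal function, dates and offer window are hypothetical, and enterprise time travel as described in the paper typically also shifts system and database dates rather than only the test process clock.

```python
# A minimal sketch of "time traveling" an automated test with freezegun.
# The renewal_offer_available() function, policy dates and offer window are hypothetical.
import datetime
from freezegun import freeze_time

RENEWAL_DATE = datetime.date(2025, 7, 1)
OFFER_WINDOW_DAYS = 60  # offers surface 60 days before renewal

def renewal_offer_available(today: datetime.date | None = None) -> bool:
    today = today or datetime.date.today()
    return 0 <= (RENEWAL_DATE - today).days <= OFFER_WINDOW_DAYS

def test_offer_not_shown_too_early():
    with freeze_time("2025-03-01"):  # travel to a date outside the offer window
        assert not renewal_offer_available()

def test_offer_shown_inside_window():
    with freeze_time("2025-05-15"):  # travel forward into the offer window
        assert renewal_offer_available()
```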
Procedia PDF Downloads 160494 Assessing of Social Comfort of the Russian Population with Big Data
Authors: Marina Shakleina, Konstantin Shaklein, Stanislav Yakiro
Abstract:
The digitalization of modern human life over the last decade has facilitated the acquisition, storage, and processing of data, which are used to detect changes in consumer preferences and to improve the internal efficiency of production processes. This emerging trend has attracted academic interest in the use of big data in research. The study focuses on modeling the social comfort of the Russian population for the period 2010-2021 using big data. Big data provide enormous opportunities for understanding human interactions at the scale of society, with rich spatial and temporal dynamics. One of the most popular big data sources is Google Trends. The methodology for assessing social comfort using big data involves several steps: 1. 574 words were selected based on the Harvard IV-4 Dictionary, adjusted to fit the reality of everyday Russian life. The set of keywords was further cleansed by excluding queries consisting of verbs and words with several lexical meanings. 2. Search queries were processed to ensure comparability of results: transformation of the data to a 10-point scale, elimination of popularity peaks, detrending, and deseasoning. The proposed methodology for keyword search and Google Trends processing was implemented as a script in the Python programming language. 3. Block and summary integral indicators of social comfort were constructed using the first modified principal component, which yields the weighting coefficients of the block components. According to the study, social comfort is described by 12 blocks: 'health', 'education', 'social support', 'financial situation', 'employment', 'housing', 'ethical norms', 'security', 'political stability', 'leisure', 'environment', and 'infrastructure'. According to the model, the summary integral indicator increased by 54% to 4.631 points; the average annual growth rate was 3.6%, which is higher than the rate of economic growth by 2.7 percentage points. The value of the indicator describing social comfort in Russia is determined 26% by 'social support', 24% by 'education', 12% by 'infrastructure', 10% by 'leisure', and the remaining 28% by the other blocks. Among the 25% most popular searches, 85% are negative in nature and are mainly related to the blocks 'security', 'political stability' and 'health', for example, 'crime rate' and 'vulnerability'. Among the 25% least popular queries, 99% were positive and mostly related to the blocks 'ethical norms', 'education' and 'employment', for example, 'social package' and 'recycling'. In conclusion, the introduction of the latent category 'social comfort' into the scientific vocabulary deepens the theory of the quality of life of the population by studying the involvement of the individual in society and by expanding the subjective aspect of the measurement of various indicators. An integral assessment of social comfort demonstrates the overall picture of the development of the phenomenon over time and space and quantitatively evaluates ongoing socio-economic policy. The application of big data to the assessment of latent categories gives stable results, which opens up possibilities for their practical implementation.Keywords: big data, Google trends, integral indicator, social comfort
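A minimal Python sketch of the described pre-processing and weighting steps is given below; it is not the authors' script, and the query names, clipping quantile and seasonal period are assumptions made for illustration.

```python
# A minimal sketch of the described steps: rescale a Google Trends series to a 10-point
# scale, trim popularity peaks, detrend, deseasonalize, and derive block weights from the
# first principal component. Column names and the clipping quantile are assumptions.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

def preprocess(series: pd.Series) -> pd.Series:
    s = series / series.max() * 10                          # transform to a 10-point scale
    s = s.clip(upper=s.quantile(0.95))                      # eliminate popularity peaks
    t = np.arange(len(s))
    trend = np.polyval(np.polyfit(t, s, 1), t)              # linear detrending
    s = s - trend
    seasonal = s.groupby(s.index.month).transform("mean")   # remove monthly seasonality
    return s - seasonal

def block_weights(block: pd.DataFrame) -> pd.Series:
    """Weights of block components from the loadings of the first principal component."""
    pca = PCA(n_components=1)
    pca.fit(block.fillna(block.mean()))
    loadings = np.abs(pca.components_[0])
    return pd.Series(loadings / loadings.sum(), index=block.columns)

# Hypothetical usage: monthly Google Trends interest for queries in one block, e.g. 'health'
idx = pd.date_range("2010-01-01", "2021-12-01", freq="MS")
queries = pd.DataFrame(
    {q: np.random.rand(len(idx)) * 100 for q in ["clinic", "vaccination", "sick leave"]},
    index=idx,
)
processed = queries.apply(preprocess)
print(block_weights(processed))
```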
Procedia PDF Downloads 203493 Parenting Interventions for Refugee Families: A Systematic Scoping Review
Authors: Ripudaman S. Minhas, Pardeep K. Benipal, Aisha K. Yousafzai
Abstract:
Background: Children of refugee or asylum-seeking background have multiple, complex needs (e.g., trauma, mental health concerns, separation, relocation, poverty) that place them at an increased risk of developing learning problems. Families encounter challenges accessing support during resettlement, preventing children from achieving their full developmental potential. There are very few studies in the literature that examine the unique parenting challenges refugee families face. Providing appropriate support services and educational resources that address these distinctive concerns of refugee parents will alleviate these challenges, allowing for better developmental outcomes for children. Objective: To identify the characteristics of effective parenting interventions that address the unique needs of refugee families. Methods: English-language articles published from 1997 onwards were included if they described or evaluated programmes or interventions for parents of refugee or asylum-seeking background, globally. Data were extracted and analyzed according to Arksey and O'Malley's descriptive analysis model for scoping reviews. Results: Seven studies met the criteria and were included, primarily studying families settled in high-income countries. Refugee parents identified parenting as a major concern, citing experiences of alienation and unwelcoming services, language barriers, and lack of familiarity with school and early years services. Services that focused on building the resilience of parents, provided parent education, or delivered services in the family's native language, and that offered families safe spaces to promote parent-child interactions, were most successful. Home-visit and family-centered programs showed particular success, minimizing barriers such as transportation and inflexible work schedules while allowing caregivers to receive feedback from facilitators. The vast majority of studies evaluated programs implementing existing curricula and frameworks. Interventions were designed in a prescriptive manner, without direct participation by family members and without directly addressing accessibility barriers. The studies also did not employ evaluation measures of parenting practices, the caregiving environment, or child development outcomes, focusing primarily on parental perceptions. Conclusion: There is scarce literature describing parenting interventions for refugee families. Successful interventions focused on building parenting resilience and capacity in families' native languages. To date, there are no studies that employ a participatory approach to program design to tailor content or accessibility, and few that employ parenting, developmental, behavioural, or environmental outcome measures.Keywords: asylum-seekers, developmental pediatrics, parenting interventions, refugee families
Procedia PDF Downloads 165492 Retrospective Assessment of the Safety and Efficacy of Percutaneous Microwave Ablation in the Management of Hepatic Lesions
Authors: Suang K. Lau, Ismail Goolam, Rafid Al-Asady
Abstract:
Background: The majority of patients with hepatocellular carcinoma (HCC) are not suitable for curative treatment, in the form of surgical resection or transplantation, due to tumour extent and underlying liver dysfunction. In these non-resectable cases, a variety of non-surgical therapies are available, including microwave ablation (MWA), which has gained increasing popularity due to its low morbidity, low reported complication rate, and the ability to perform multiple ablations simultaneously. Objective: The aim of this study was to evaluate the validity of MWA as a viable treatment option in the management of HCC and hepatic metastatic disease, by assessing its efficacy and complication rate at a tertiary hospital situated in Westmead (Australia). Methods: A retrospective observational study was performed evaluating patients who underwent MWA between 1/1/2017 and 31/12/2018 at Westmead Hospital, NSW, Australia. Outcome measures, including residual disease, recurrence rates, and major and minor complication rates, were retrospectively analysed over a 12-month period following MWA treatment. Excluded patients were those whose lesions were treated on the basis of residual or recurrent disease from treatment that occurred prior to the study window (11 patients) and those who were lost to follow-up (2 patients). Results: Following treatment of 106 new hepatic lesions, the complete response (CR) rate was 86% (91/106) at 12 months of follow-up. Ten patients had residual disease on post-treatment follow-up imaging, corresponding to an incomplete response (ICR) rate of 9.4% (10/106). The local recurrence rate (LRR) was 4.6% (5/106), with a follow-up period of up to 12 months. The minor complication rate was 9.4% (10/106), including asymptomatic pneumothorax (n=2), asymptomatic pleural effusions (n=2), right lower lobe pneumonia (n=3), pain requiring admission (n=1), hypotension (n=1), cellulitis (n=1) and intraparenchymal hematoma (n=1). There was 1 major complication reported, a pleuro-peritoneal fistula causing recurrent large pleural effusion necessitating repeated thoracocentesis (n=1). There was no statistically significant association between tumour size, location or ablation factors and the risk of recurrence or residual disease. A subset analysis identified 6 segment VIII lesions that were treated via a trans-pleural approach. This cohort demonstrated an overall complication rate of 33% (2/6), including 1 minor complication of asymptomatic pneumothorax and 1 major complication of pleuro-peritoneal fistula. Conclusions: Microwave ablation therapy is an effective and safe treatment option in cases of non-resectable hepatocellular carcinoma and liver metastases, with good local tumour control and low complication rates. A trans-pleural approach for high segment VIII lesions is associated with a higher complication rate and warrants greater caution.Keywords: hepatocellular carcinoma, liver metastases, microwave ablation, trans-pleural approach
Procedia PDF Downloads 137491 The Impact of Non State Actor’s to Protect Refugees in Kurdistan Region of Iraq
Authors: Rozh Abdulrahman Kareem
Abstract:
The displacement of individuals has become a matter of common interest for international players. It mostly occurs in Islamic states, as religion is considered the most common cause of this form of displacement. Therefore, this thesis aims to depict the reality of the situation of refugees, particularly in the KRI, illustrating how they are treated and protected and whether that treatment meets the protection standard envisaged in the 1951 Refugee Convention. Overall, the aim is to address the issue of protection of refugees by non-governmental organizations and the government in this region. In light of this, it focuses on the adequate protection of refugees in relation to refugee law. In the Middle East, including Iraq, there have been multiple reports of violations of these refugee laws and human rights. Protection involves providing physical security to the concerned parties, functional administration with legal structures, and an infrastructural setup that helps citizens exercise their rights. The KRI has provided refugees with various benefits, including education, access to residency, and employment. It has also provided transitional support in various social dimensions, such as gender-based violence. The 1951 Convention Relating to the Status of Refugees tried to resolve this problem, whereby the principle of 'nonrefoulement' under Article 33 was adopted. 'Nonrefoulement', an exceptional provision, was enacted to protect refugees from forcible return to their countries of origin. However, the convention never addressed an unusual scenario regarding the application of this principle, 'Extradition Treaties.' Even though some scholarly articles exist regarding the problems of refugees, the interplay between nonrefoulement and extradition treaties has never been explained in detail in the available books on refugee laws and practices. Each year, millions of refugees seek protection from foreign countries for fear of being tortured, victimized, or executed. People seeking international protection are vulnerable and insecure. The main objective of protection is to provide security to people susceptible to inhuman treatment, distress, oppression, or other human rights violations when they are returned to their own countries. The refugee situation may get worse in the near future. Like several nations in the Middle East, Iraq is not a signatory to the globally acknowledged legal framework for the protection of refugees. The first Iraqi law, of 1971, was issued only for military or political causes. This law also establishes benefits such as the right to education and health services and the right to acquire employment, just as for Iraqi nationals. The other legislative instrument, the 21st law from the Ministry of Migration of Iraq, widened the description of an immigrant to incorporate the definition from the refugee resolution. Nonetheless, there is a lack of overall consistency in the protection provided under these pieces of legislation regarding rights and entitlements. A Memorandum of Understanding was signed in October 2016 by the UNHCR and the Iraqi government to develop the protection of refugees. Under the terms of this MoU, the Iraqi government is obligated to provide identity documents to asylum seekers, while the UNHCR provides further guidance.Keywords: law, refugee, protection, Kurdistan
Procedia PDF Downloads 64