Search results for: uncertainty quantification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1419

309 Development of Multi-Leaf Collimator-Based Isocenter Verification Tool Using Electronic Portal Imaging Device for Stereotactic Radiosurgery

Authors: Panatda Intanin, Sangutid Thongsawad, Chirapha Tannanonta, Todsaporn Fuangrod

Abstract:

Stereotactic radiosurgery (SRS) is a high-precision delivery technique that requires comprehensive quality assurance (QA) tests prior to treatment delivery. The isocenter of the delivery beam plays a critical role in treatment accuracy. Isocenter uncertainty is traditionally assessed using circular cone equipment, a Winston-Lutz (WL) phantom, and film; this technique is time-consuming and highly observer-dependent. In this work, a multi-leaf collimator (MLC)-based isocenter verification tool using an electronic portal imaging device (EPID) was developed and evaluated. In the conventional WL test, mechanical isocenter alignment used a 5 mm diameter ball bearing, with a 10 mm diameter circular cone fixed to the gantry head to define the radiation field. This conventional setup was compared with the proposed setup, in which the MLC (10 x 10 mm) defines the radiation field instead of the cone, representing a delivery field more realistic than that of circular cone equipment. Images were acquired with both the EPID and radiographic film in both experiments, at gantry angles of 0°, 90°, 180°, and 270°. A software tool was developed in-house in MATLAB/SIMULINK to determine the centroids of the radiation field and of the WL phantom shadow automatically, offering higher accuracy than manual measurement. The deviations between the centroids were quantified for both the cone-based and MLC-based WL tests. Comparing film and EPID images, the deviation over all gantry angles was 0.26±0.19 mm for the cone-based and 0.43±0.30 mm for the MLC-based WL test. The absolute deviation between the cone-based and MLC-based WL tests was 0.59±0.28 mm on EPID images and 0.14±0.13 mm on film images. Therefore, MLC-based isocenter verification using the EPID provides a highly sensitive tool for SRS QA.
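
For orientation, the automated centroid step can be sketched as an intensity-weighted center of mass on a thresholded portal image. The original tool was written in MATLAB/SIMULINK; the following Python analogue with synthetic data is an illustrative assumption, not the authors' code:

```python
import numpy as np
from scipy.ndimage import center_of_mass

def field_centroid(image: np.ndarray, threshold_frac: float = 0.5):
    """Centroid (row, col) of the radiation field in a portal image.

    Pixels above threshold_frac * max are treated as the field; the
    intensity-weighted center of mass of that region is returned.
    """
    mask = image > threshold_frac * image.max()
    return center_of_mass(image * mask)

# Synthetic EPID frame: a Gaussian "field" centred at (60, 40)
yy, xx = np.mgrid[0:128, 0:128]
frame = np.exp(-(((yy - 60) ** 2 + (xx - 40) ** 2) / (2 * 8.0 ** 2)))
print(field_centroid(frame))   # ~ (60.0, 40.0)

# The WL deviation is the distance between field and phantom centroids
bb_centroid = (60.3, 39.8)     # e.g. ball-bearing shadow centroid (hypothetical)
dev_px = np.hypot(*np.subtract(field_centroid(frame), bb_centroid))
print(f"deviation: {dev_px:.2f} px")
```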

Keywords: isocenter verification, quality assurance, EPID, SRS

Procedia PDF Downloads 112
308 Readiness of Iran’s Insurance Industry Salesforce to Accept Changing to Become Islamic Personal Financial Planners

Authors: Pedram Saadati, Zahra Nazari

Abstract:

Today, the role and importance of financial technology businesses in Iran have increased significantly. In Iran, there is no Islamic or non-Islamic personal financial planning field of study in universities or educational centers, the profession of personal financial planning is not defined, and no advisory software has been introduced for advisors or consumers. The largest sales network of financial services in Iran belongs to the insurance industry, and there is an untapped market for international companies that, by providing training and personal financial advisory software, could reach the industry's 130 thousand representatives and 28 million families. To the best of the authors' knowledge, there are no previous domestic studies in this field; the present study investigates the readiness of the insurance industry salesforce to accept this career and its technology. The statistical population of the research is made up of managers, insurance sales representatives, assistants, and heads of sales departments of insurance companies. An 18-minute video was prepared that introduced and taught the job of Islamic personal financial planning and explained its difference from the non-Islamic model; this video was shown to the respondents. The data collection tool was a researcher-made questionnaire. To investigate the factors affecting technology acceptance and job change, descriptive statistics, independent-samples t-tests, and Pearson correlation were used, and Friedman's test was used to rank the effective factors. The results indicate a very positive perception of, and attitude towards, the usefulness of this job and its technology among insurance industry professionals, and the studied sample confirmed the intention to train in this knowledge. Based on the results, a change in the customer's attitude towards the insurance advisor and the possibility of increased income are the main reasons for acceptance, while restrictions on investment opportunities under Islamic financial services law and uncertainty about the position of the central insurance regulator in this regard are the most important obstacles.
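
The statistical workflow named above maps directly onto standard SciPy routines. The sketch below uses synthetic Likert-style data; all variable names and group splits are illustrative assumptions, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 5-point Likert responses on perceived usefulness
managers = rng.integers(3, 6, size=40)   # managers' scores
agents = rng.integers(2, 6, size=60)     # sales representatives' scores

# Independent-samples t-test: do the two groups differ?
t, p_t = stats.ttest_ind(managers, agents, equal_var=False)

# Pearson correlation: perceived usefulness vs. intention to train
usefulness = rng.integers(1, 6, size=100)
intention = np.clip(usefulness + rng.integers(-1, 2, size=100), 1, 5)
r, p_r = stats.pearsonr(usefulness, intention)

# Friedman test: rank acceptance factors rated by the same respondents
income, attitude, laws = (rng.integers(1, 6, size=100) for _ in range(3))
chi2, p_f = stats.friedmanchisquare(income, attitude, laws)
print(f"t={t:.2f} (p={p_t:.3f}), r={r:.2f} (p={p_r:.3f}), Friedman chi2={chi2:.2f}")
```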

Keywords: fintech, insurance, personal financial planning, wealth management

Procedia PDF Downloads 11
307 Physics Informed Deep Residual Networks Based Type-A Aortic Dissection Prediction

Authors: Joy Cao, Min Zhou

Abstract:

Purpose: Acute type A aortic dissection is a well-known cause of extremely high mortality. A highly accurate and cost-effective non-invasive predictor is critically needed so that patients can be treated at an earlier stage. Although various CFD approaches have been tried to establish prediction frameworks, they are sensitive to uncertainty in both image segmentation and boundary conditions, and tedious pre-processing and demanding calibration requirements further compound the issue, hampering their clinical applicability. Using the latest physics-informed deep learning methods to establish an accurate and cost-effective predictor framework is among the main goals for better type A aortic dissection treatment. Methods: By training a novel physics-informed deep residual network with non-invasive 4D MRI displacement vectors as inputs, the trained model can cost-effectively calculate the relevant biomarkers: aortic blood pressure, wall shear stress (WSS), and oscillatory shear index (OSI), which are used to predict potential type A aortic dissection and thereby avoid high-mortality events down the road. Results: The proposed deep learning method has been successfully trained and tested with both a synthetic 3D aneurysm dataset and a clinical dataset in the aortic dissection context, using the Google Colab environment. In both cases, the model generated aortic blood pressure, WSS, and OSI results matching the expected patient health status. Conclusion: The proposed physics-informed deep residual network shows great potential for a cost-effective, non-invasive predictor framework. A physics-based de-noising algorithm will be added to make the model more robust to clinical data noise, and further studies will be conducted in collaboration with large institutions such as the Cleveland Clinic, using more clinical samples to improve the model's clinical applicability.
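
Since the abstract does not specify the network architecture or loss terms, the following is only a minimal sketch of the general pattern: a residual MLP mapping displacement features to pressure, WSS, and OSI, trained with a data loss plus a placeholder physics penalty standing in for a real PDE residual:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Fully connected residual block: x -> x + f(x)."""
    def __init__(self, width: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, width),
        )
    def forward(self, x):
        return x + self.net(x)

class PIResNet(nn.Module):
    """Maps displacement-vector features to (pressure, WSS, OSI)."""
    def __init__(self, in_dim=3, width=64, depth=4, out_dim=3):
        super().__init__()
        layers = [nn.Linear(in_dim, width)]
        layers += [ResidualBlock(width) for _ in range(depth)]
        layers += [nn.Linear(width, out_dim)]
        self.model = nn.Sequential(*layers)
    def forward(self, x):
        return self.model(x)

model = PIResNet()
x = torch.randn(128, 3, requires_grad=True)   # toy displacement vectors
pred = model(x)                               # columns: pressure, WSS, OSI

# Data loss plus a placeholder "physics" penalty; a real implementation
# would penalise the residual of the governing flow equations instead.
target = torch.zeros_like(pred)
data_loss = nn.functional.mse_loss(pred, target)
grads = torch.autograd.grad(pred[:, 0].sum(), x, create_graph=True)[0]
physics_loss = grads.pow(2).mean()            # stand-in for a PDE residual
loss = data_loss + 0.1 * physics_loss
loss.backward()
```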

Keywords: type-a aortic dissection, deep residual networks, blood flow modeling, data-driven modeling, non-invasive diagnostics, deep learning, artificial intelligence

Procedia PDF Downloads 53
306 Literature Review on the Barriers to Access Credit for Small Agricultural Producers and Policies to Mitigate Them in Developing Countries

Authors: Margarita Gáfaro, Karelys Guzmán, Paola Poveda

Abstract:

This paper establishes the theoretical aspects that explain the barriers to accessing credit for small agricultural producers in developing countries and identifies successful policy experiences in mitigating them. We test two hypotheses. The first is that information asymmetries, high transaction costs, and high risk exposure limit the supply of credit to small agricultural producers in developing countries. The second is that low levels of financial education and productivity and high uncertainty about the returns of agricultural activity limit the demand for credit. To test these hypotheses, a review of the theoretical and empirical literature on access to rural credit in developing countries will be carried out. The first part of this review focuses on theoretical models that incorporate information asymmetries in the credit market and analyzes the interaction between these asymmetries and the characteristics of the agricultural sector in developing countries. Some of the characteristics we focus on are the absence of collateral, the underdevelopment of judicial systems and insurance markets, and the high dependence of production technologies on climatic factors. The second part of this review focuses on the determinants of credit demand by small agricultural producers, including the profitability of productive projects, security conditions, risk and loss aversion, financial education, and cognitive biases, among others. Some policies focus on resolving these supply- and demand-side constraints and have managed to improve access to credit. Therefore, another objective of this paper is to present a review of effective policies that have promoted access to credit for smallholders around the world. For this, information available in policy documents will be collected and complemented by interviews with officials in charge of the design and execution of these policies in a subset of selected countries. The information collected will be analyzed in light of the conceptual framework proposed in the first two parts of the paper, identifying the barriers to credit access that each policy attempts to resolve and the factors that could explain its effectiveness.

Keywords: agricultural economics, credit access, smallholder, developing countries

Procedia PDF Downloads 33
305 Sequential and Combinatorial Pre-Treatment Strategy of Lignocellulose for the Enhanced Enzymatic Hydrolysis of Spent Coffee Waste

Authors: Rajeev Ravindran, Amit K. Jaiswal

Abstract:

Waste from the food-processing industry is produced in large amounts and contains high levels of lignocellulose. Its continuous accumulation in large quantities throughout the year creates a major environmental problem worldwide. The chemical composition of these wastes (polysaccharides contribute up to 75% of the composition) makes them an inexpensive raw material for the production of value-added products such as biofuels, bio-solvents, nanocrystalline cellulose, and enzymes. In order to use lignocellulose as a raw material for microbial fermentation, the substrate is subjected to enzymatic treatment, which releases reducing sugars such as glucose and xylose. However, inherent properties of lignocellulose, such as the presence of lignin, pectin, acetyl groups, and crystalline cellulose, contribute to recalcitrance and lead to poor sugar yields upon enzymatic hydrolysis. A pre-treatment step is therefore generally applied before enzymatic treatment, removing the recalcitrant components of the biomass through structural breakdown. The present study was carried out to find the best pre-treatment method for maximum liberation of reducing sugars from spent coffee waste (SPW). SPW was subjected to a range of physical, chemical, and physico-chemical pre-treatments, followed by a sequential, combinatorial pre-treatment strategy that combined two or more pre-treatments to attain maximum sugar yield. All pre-treated samples were analysed for total reducing sugar, followed by identification and quantification of individual sugars by HPLC coupled with an RI detector. In addition, the generation of inhibitory compounds such as furfural and hydroxymethylfurfural (HMF), which can hinder microbial growth and enzyme activity, was monitored. Results showed that ultrasound treatment (31.06 mg/L) was the best pre-treatment method in terms of total reducing sugar, followed by dilute acid hydrolysis (10.03 mg/L), while galactose was found to be the major monosaccharide in the pre-treated SPW. Finally, the results were used to design a sequential lignocellulose pre-treatment protocol to decrease the formation of enzyme inhibitors and increase the sugar yield on enzymatic hydrolysis with a cellulase-hemicellulase consortium. The sequential, combinatorial treatment performed better in terms of total reducing sugar yield and lower formation of inhibitory compounds, which could be because this mode of pre-treatment combines several mild treatments rather than relying on a single harsh one. It eliminates the need for a detoxification step and has potential application in the valorisation of lignocellulosic food waste.

Keywords: lignocellulose, enzymatic hydrolysis, pre-treatment, ultrasound

Procedia PDF Downloads 338
304 Characteristics of Developing Commercial Employment Sub-Centres and Employment Density in Ahmedabad City

Authors: Bhaumik Patel, Amit Gotecha

Abstract:

Commercial centres of different hierarchies and sizes play a vital role in the growth and development of a city. Economic uncertainty and demand for space lead to more urban sprawl and the emergence of more commercial spaces. The study focused on understanding the various indicators affecting commercial development, such as accessibility, infrastructure, planning and development regulations, and market forces, which can help solve many issues related to commercial urban development and guide future employment growth centre development. The aim of the study was to review the characteristics and identify the employment density of commercial employment sub-centres, by understanding the various employment sub-centres, identifying their characteristics and deriving the behaviour of employment densities, and evaluating and comparing the employment sub-centres of Ahmedabad city. Three commercial employment sub-centres, one in the old city (Kalupur), a second in a highly developed commercial area (C.G. Road-Ashram Road), and a third in the latest developing commercial area (Prahladnagar), were identified by distance from the city centre, land-use diversity, access to major roads and public transport, population density in proximity, complementary land uses in proximity, and land price. Commercial activities were categorised into retail, wholesale, and service sectors and sub-categorised into various activities. The study found that the period of establishment of a unit is a critical parameter for commercial activity, building height, and land-use diversity; employment diversity is another key parameter for a commercial centre. The old city has retail, wholesale, and trading activities and a higher commercial density in terms of both units and employment. The Prahladnagar area functions as a commercial area due to market pressure and has developed more units than required. Employment density is highest at the city centre; as distance from the city centre increases, employment density and unit density decrease. The characteristics influencing employment density and unit density are distance from the city centre, development type, establishment period, building density, unit density, public transport accessibility, and road connectivity.

Keywords: commercial employment sub-centres, employment density, employment diversity, unit density

Procedia PDF Downloads 108
303 Business Model Innovation and Firm Performance: Exploring Moderation Effects

Authors: Mohammad-Ali Latifi, Harry Bouwman

Abstract:

Changes in the business environment have accelerated dramatically over the last decades as a result of changes in technology, regulation, markets, and competitors' behavior. Firms need to change the way they do business in order to survive or maintain their growth. Innovating the business model (BM) can create competitive advantages and enhance firm performance. However, many companies fail to achieve the expected outcomes in practice, mostly due to irreversible fundamental changes in key components of the company's BM, which leads to more ambiguity, uncertainty, and risk associated with business performance. The relationship among business model innovation (BMI), moderating factors, and overall firm performance is by and large ignored in the current literature. In this study, we identified twenty moderating factors from a comprehensive literature review. We categorized these factors based on two criteria: the extent to which they can be controlled and managed by firms, and whether they are generic or specific to the firm. This yields four moderation groups. The first group is BM implementation, which includes management support, employees' commitment, employees' skills, communication, and detailed planning. The second group, BM practices, consists of BM tooling, BM experimentation, scope of change, speed of change, and degree of novelty. The third group, firm characteristics, includes firm size, age, and ownership. The last group, industry characteristics, covers industry sector, competitive intensity, industry life cycle, environmental dynamism, and high-tech versus low-tech industry. Using data collected from 508 European small and medium-sized enterprises (SMEs) and the structural equation modeling technique, the developed moderation model was examined. Results revealed that the factors in all four groups significantly moderate the relation between BMI and firm performance. In particular, factors related to BM implementation and BM practices are more manageable and could potentially improve overall firm performance. We believe this result is important for researchers and practitioners, since the possibility of working on factors in the firm characteristics and industry characteristics groups is limited, and firms can hardly control or manage them to improve the performance of BMI efforts.
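
A common simplification of the moderation analysis described above is an interaction-term regression; a significant interaction coefficient indicates moderation. The sketch below uses simulated data, and the choice of management support as the moderator is purely illustrative:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 508  # sample size matching the study's SME count
df = pd.DataFrame({
    "bmi": rng.normal(size=n),           # business model innovation score
    "mgmt_support": rng.normal(size=n),  # candidate moderator
})
# Simulated outcome: BMI helps more when management support is high
df["performance"] = (0.3 * df.bmi + 0.2 * df.mgmt_support
                     + 0.25 * df.bmi * df.mgmt_support
                     + rng.normal(scale=0.5, size=n))

# "bmi * mgmt_support" expands to both main effects plus the interaction;
# a significant bmi:mgmt_support term indicates moderation
model = smf.ols("performance ~ bmi * mgmt_support", data=df).fit()
print(model.summary().tables[1])
```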

Keywords: business model innovation, firm performance, implementation, moderation

Procedia PDF Downloads 89
302 [Keynote Talk]: Let Us Move to Ethical Finance: A Case Study of Takaful

Authors: Syed Ahmed Salman

Abstract:

Ethicality is essential in our daily activities, both personal and commercial, as evidenced by the historical development of corporate governance and ethical guidelines. The first corporate governance guideline, the Cadbury Report from the U.K., focused the responsibility of board members on shareholders only. Gradually, realising the need to take care of society and the community, business entities became concerned with wider stakeholders, and later codes of corporate governance extended responsibility to other stakeholders in addition to shareholders. One prevailing corporate governance theory, stakeholder theory, has been widely used in research to explore the effects of business entities on society. In addition, the Global Reporting Initiative (GRI) is the leading organisation promoting social responsibility in business for sustainable development. History shows that ethics is key to the long-term success of businesses: many organisations, societies, and regulators give full attention and consideration to ethics, and several countries have introduced ethical codes of conduct to direct trade activities. Similarly, Islam and other religions prohibit the practice of interest, uncertainty, and gambling because of their unethical nature; these prohibited practices are harmful to society, business, and any organisation, being detrimental to the well-being of society. In order to avoid unethical practices in the finance industry, Shari'ah scholars developed the idea of Islamic finance, which is free from the elements prohibited from the Islamic perspective and can also be termed ethical finance. This paper highlights how Takaful, as one of the Islamic finance products, offers fair and just products to the contracting parties and to society. Takaful is framed on ethical guidelines extracted from Shari'ah principles and divine sources such as the Quran and Sunnah. Takaful products are now widely offered all over the world, in both Muslim and non-Muslim countries, and appear to be gaining acceptance regardless of religion, which is evidence that Takaful is being accepted as an ethical financial product.

Keywords: ethics, insurance, Islamic finance, religion and takaful

Procedia PDF Downloads 240
301 Improved Functions for Runoff Coefficients and Smart Design of Ditches and Biofilters for Effective Flow Detention

Authors: Thomas Larm, Anna Wahlsten

Abstract:

An international literature study has been carried out to compare commonly used methods for the dimensioning of transport systems and stormwater facilities for flow detention. The focus of the literature study regarding the calculation of design flow and detention has been the widely used Rational Method and its underlying parameters. The impact of chosen design parameters such as return period, rain intensity, runoff coefficient, and climate factor has been studied. The parameters used in the calculations have been analyzed with regard to how they can be calculated and within what limits they can be used. Data used in different countries have been specified, e.g., recommended rainfall return periods, estimated runoff times, and climate factors used for different cases and time periods. The literature study concluded that the runoff coefficient is the most uncertain parameter and the one that most affects the calculated flow and required detention volume. Proposals have been developed for new runoff coefficients, including a proposed method with equations for calculating runoff coefficients as functions of return period (years) and rain intensity (l/s/ha), respectively. Contrary to what many design manuals recommend, it is suggested that the use of the Rational Method need not be limited to a specific catchment size. The proposed relationships between return period or rain intensity and runoff coefficients need further investigation, including quantification of the uncertainties. Parameters not yet considered include the influence of different design rain durations and of the degree of water saturation of green areas on the runoff coefficients; these will be investigated further. The influence of climate effects and design rain on the dimensioning of grassed ditches and biofilters (bioretention systems) has been studied, focusing on flow detention capacity. We have investigated how the calculated runoff coefficients, taking into account the climate factor and an increased return period, affect the inflow to and the dimensioning of the stormwater facilities. We have developed a smart design of ditches and biofilters that achieves both high treatment and high flow detention effects, and compared these with the performance of dry and wet ponds. Studies of biofilters have generally focused on the treatment of pollutants; their effect on flow volume, and how their flow detention capability can be improved, is rarely studied. For both the new type of stormwater ditch and the biofilter, performance must be simulated in a model under larger design rains and a future climate, as these conditions cannot be tested in the field. The stormwater model StormTac Web has been used in case studies. The results showed that the new smart design of ditches and biofilters had a flow detention capacity similar to that of dry and wet ponds of the same facility area.
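
For orientation, the Rational Method referred to above computes the design flow as Q = c·i·A from the runoff coefficient c, the rain intensity i, and the catchment area A. A minimal sketch with a climate factor applied follows; the coefficient and intensity values are illustrative, since the paper's proposed coefficient equations are not given in the abstract:

```python
def rational_design_flow(c: float, i_lps_ha: float, area_ha: float,
                         climate_factor: float = 1.25) -> float:
    """Design flow Q (l/s) by the Rational Method: Q = c * i * A.

    c: runoff coefficient (-), i: rain intensity (l/s/ha),
    area: catchment area (ha). The climate factor scales the design
    rain to account for future climate; 1.25 is illustrative only.
    """
    return c * (climate_factor * i_lps_ha) * area_ha

# Illustrative comparison: a higher return period means higher intensity,
# and (per the paper's proposal) a higher runoff coefficient too.
for years, intensity, c in [(2, 130, 0.45), (10, 230, 0.55), (100, 400, 0.65)]:
    q = rational_design_flow(c, intensity, area_ha=10)
    print(f"T={years:>3} yr: i={intensity} l/s/ha, c={c} -> Q={q:.0f} l/s")
```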

Keywords: runoff coefficients, flow detention, smart design, biofilter, ditch

Procedia PDF Downloads 58
300 Analysis and Quantification of Historical Drought for Basin Wide Drought Preparedness

Authors: Joo-Heon Lee, Ho-Won Jang, Hyung-Won Cho, Tae-Woong Kim

Abstract:

Drought is a recurrent climatic feature that occurs in virtually every climatic zone around the world. Korea experiences drought almost every year at the regional scale, mainly during the winter and spring seasons; moreover, extremely severe droughts at the national scale have occurred at a frequency of six to seven years. Various drought indices have been developed as tools to quantitatively monitor different types of drought and are utilized in the field of drought analysis. Since drought is closely related to the climatological and topographic characteristics of drought-prone areas, basins where droughts occur frequently need separate drought preparedness and contingency plans. In this study, a statistical analysis was carried out for the historical droughts in the five major river basins of Korea so that drought characteristics could be quantitatively investigated, with the aim of providing information on which differentiated and customized drought preparedness plans can be established at the basin level. Conventional methods quantify drought by applying various drought indices; however, evaluation results for the same drought event differ between analysis techniques, especially depending on how the severity or duration of drought is viewed in the evaluation process. Therefore, a drought history was derived for the five major river basins by investigating a drought magnitude that simultaneously considers severity, duration, and damaged area, applying drought run theory to the Standardized Precipitation Index (SPI), which efficiently quantifies meteorological drought. Further, quantitative analysis of historical extreme droughts from various viewpoints, such as average severity, duration, and magnitude, was attempted. At the same time, historical drought events were analyzed quantitatively by estimating return periods from severity-duration-frequency (SDF) curves derived through parametric regional drought frequency analysis for the five major river basins. The analysis showed extremely severe drought years in 1962, 1988, 1994, and 2014 in the Han River basin; 1982 and 1988 in the Nakdong River basin; 1994 in the Geum River basin; 1988 and 1994 in the Yeongsan River basin; and 1988, 1994, 1995, and 2000 in the Seomjin River basin. At the national level, the extremely severe drought years in the Korean Peninsula were 1988 and 1994. The most damaging droughts were those of 1981-1982 and 1994-1995, which lasted longer than two years. The return period of the most severe drought in each river basin turned out to be 50-100 years.
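
SPI is commonly computed by fitting a gamma distribution to precipitation totals and mapping the cumulative probability to a standard normal deviate; run theory then treats spells below a truncation level as drought runs. A sketch on synthetic monthly data follows (it omits the zero-precipitation handling a production SPI would include):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
precip = rng.gamma(shape=2.0, scale=40.0, size=360)  # 30 yrs of monthly totals (mm)

# Fit a gamma distribution to the precipitation record
a, loc, scale = stats.gamma.fit(precip, floc=0)

# SPI: transform cumulative probability to a standard normal quantile
cdf = stats.gamma.cdf(precip, a, loc=loc, scale=scale)
spi = stats.norm.ppf(cdf)

# Run theory: a drought "run" is a spell with SPI below a truncation level
threshold = -1.0
in_drought = spi < threshold
severity = np.where(in_drought, threshold - spi, 0.0)  # per-month deficit
print(f"drought months: {in_drought.sum()}, total severity: {severity.sum():.1f}")
```

Summing the per-month severity over a consecutive drought run gives the run's magnitude, the quantity from which SDF curves are fitted.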

Keywords: drought magnitude, regional frequency analysis, SPI, SDF (severity-duration-frequency) curve

Procedia PDF Downloads 372
299 Need for Shariah Screening of Companies in Nigeria: Lessons from Other Jurisdictions

Authors: Aishat Abdul-Qadir Zubair

Abstract:

Background: The absence of a Shari'ah screening methodology for companies in Nigeria has deepened the uncertainty surrounding the acceptability of investing in certain companies for people professing the religion of Islam, due to the nature of the activities carried out by these companies. Existing Shari'ah screening indices in other jurisdictions, such as FTSE, DJIM, and Standard & Poor's, provide criteria for checking whether a company or business is Shari'ah-compliant. What these indices do is provide benchmarks to check against before investing in companies that carry out mixed activities, where some are halal and others may be haram. Purpose: There have been numerous studies on the need to adopt certain screening methodologies, as well as calls for new methods of screening companies for Shari'ah compliance to suit the investment needs of Muslims in other jurisdictions. It is, however, unclear how suitable these methodologies would be for Nigeria. This paper therefore addresses this gap by considering an appropriate screening methodology for Nigeria, drawing on the experience of other jurisdictions. Methods: This study employs a triangulation of quantitative and qualitative methods to analyze the need for Shari'ah screening of companies in Nigeria. The qualitative method proceeds by way of ijtihad, applying Islamic principles of maqasid al-shari'ah and qawaid al-fiqhiyyah to analyze the activities of companies and establish whether they are indeed Shari'ah-compliant. In addition, using the quantitative data gathered from the interview survey, the perspective of investors regarding the need for Shari'ah screening of companies in Nigeria is analyzed. Results: The results show a lack of awareness among the large Muslim population in Nigeria of the need for Shari'ah screening of companies. They further show that the peculiar nature of company activities in Nigeria must be taken into account before any particular Shari'ah screening methodology is adopted and the necessary benchmarks are set. Conclusion and Implications: The study concludes that conscientious Muslims in Nigeria need to screen companies for Shari'ah compliance so that they can easily identify companies to invest in. The paper therefore recommends that the Nigerian government develop a screening methodology suited to the peculiar nature of companies in Nigeria. The study thus has direct implications for investment regulatory bodies in Nigeria, such as the Securities and Exchange Commission (SEC) and the Central Bank of Nigeria (CBN), as well as for Muslim investors.
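
To illustrate the kind of benchmark applied by the indices mentioned above, the sketch below implements a DJIM-style screen: a business-activity filter plus financial-ratio caps of roughly one-third of market capitalisation. The sector list and figures are illustrative assumptions, not a statement of any index's full rules:

```python
HARAM_SECTORS = {"alcohol", "gambling", "conventional banking", "tobacco"}

def is_shariah_compliant(sector: str, debt: float, cash_ib: float,
                         receivables: float, market_cap: float) -> bool:
    """DJIM-style screen: activity filter plus ~33% financial-ratio caps."""
    if sector.lower() in HARAM_SECTORS:
        return False
    # Ratios are commonly taken against a trailing average market cap
    return (debt / market_cap < 1 / 3
            and cash_ib / market_cap < 1 / 3
            and receivables / market_cap < 1 / 3)

# Hypothetical company figures (millions)
print(is_shariah_compliant("manufacturing", debt=80, cash_ib=50,
                           receivables=60, market_cap=300))  # True
print(is_shariah_compliant("gambling", 0, 0, 0, 300))        # False
```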

Keywords: Shari'ah screening, Muslims, investors, companies

Procedia PDF Downloads 133
298 A Review Study on the Importance and Correlation of Crisis Literacy and Media Communications for Vulnerable Marginalized People During Crisis

Authors: Maryam Jabeen

Abstract:

In recent times, there has been a notable surge in attention towards diverse literacy concepts such as media literacy, information literacy, and digital literacy. These concepts have garnered escalating interest, spurring the emergence of novel approaches, particularly in the aftermath of the Covid-19 crisis. However, amidst discussions of crises, the domain of crisis literacy remains largely uncharted in academic exploration. Crisis literacy, also referred to as disaster literacy, denotes an individual's aptitude to comprehend and effectively apply information, enabling well-informed decision-making and adherence to instructions about disaster mitigation, preparedness, response, and recovery. This theoretical and descriptive study seeks to move beyond foundational literacy concepts, underscoring the urgency of an in-depth exploration of crisis literacy and its interplay with the realm of media communication. Given the profound impact of the pandemic experience and the looming uncertainty of potential future crises, there is a pressing need to elevate crisis literacy, or disaster literacy, towards heightened autonomy and active involvement within the spheres of critical disaster preparedness, recovery initiatives, and media communication. This paper is part of the author's ongoing Ph.D. research, which explores, on a broader level, the encoding and decoding of media communications in relation to crisis literacy. The primary objective of this paper is to present a descriptive, theoretical research endeavour in this domain. The emphasis lies on the paramount significance of media communications in crisis literacy, with a particular focus on its role in providing information to marginalized populations amidst crises. In conclusion, this research bridges a gap in the exploration of the relationship between crisis literacy and media communications, advocating for a comprehensive understanding of its dynamics. It intends to foster a heightened sense of crisis literacy, particularly within marginalized communities, catalyzing proactive participation in disaster preparedness, recovery processes, and adept media interactions.

Keywords: covid-19, crisis literacy, crisis, marginalized, media and communications, pandemic, vulnerable people

Procedia PDF Downloads 27
297 Implementation of Quality Function Deployment to Incorporate Customer's Value in the Conceptual Design Stage of Construction Projects

Authors: Ayedh Alqahtani

Abstract:

Many construction firms in Saudi Arabia dedicated to building projects agree that the most important factor in the real estate market is the value they can give to their customers. These firms understand the value of their clients in different ways. Value can be defined as the size of the building project in relation to its cost, the design quality of the materials used in finish work, or other features of building rooms such as the bathroom. Value can also be understood as something suitable for the money the client is investing in the new property. A quality tool is required to support companies in achieving a solution for the building project and in understanding and managing the customer's needs. The Quality Function Deployment (QFD) method can play this role, since the main difference between QFD and other conventional quality management tools is that QFD is a valuable and very flexible tool for design that takes the voice of the customer (VOC) into account. Currently, organizations and agencies are seeking suitable models that deal better with uncertainty and that are flexible and easy to use. The primary aim of this research project is to incorporate customers' requirements into the conceptual design of construction projects. Towards this goal, QFD is selected due to its capability to integrate design requirements with customer needs. To develop the QFD, this research focuses on the contribution of the different (significantly weighted) input factors that represent the main variables influencing QFD, and on the subsequent analysis of the techniques used to measure them. First, this research reviews the literature to determine the current practice of QFD in construction projects. Then, the literature is reviewed to define the current customers of residential projects and to gather information on customers' requirements for residential building design. After that, qualitative survey research will be conducted to rank customers' needs and obtain the views of stakeholder practitioners about how these needs affect their satisfaction. Moreover, a qualitative focus group with members of the design team will be conducted to determine the improvement levels and technical details for the design of residential buildings. Finally, the QFD will be developed to establish the degree of significance of the design solutions.
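
The core QFD computation is simple enough to sketch: technical priorities are obtained by multiplying customer-importance weights through the relationship matrix of the house of quality. The requirements, design characteristics, and scores below are hypothetical:

```python
import numpy as np

# Customer requirements and their importance weights (e.g. from a survey)
requirements = ["spacious rooms", "quality finishes", "low cost"]
weights = np.array([5, 4, 3])

# Relationship matrix: rows = requirements, columns = design characteristics,
# using the conventional 9/3/1 strong/moderate/weak scoring
design_chars = ["floor area", "material grade", "modular layout"]
R = np.array([
    [9, 1, 3],   # spacious rooms
    [1, 9, 1],   # quality finishes
    [3, 3, 9],   # low cost
])

priority = weights @ R                 # absolute technical importance
relative = priority / priority.sum()   # normalised priorities
for name, p, rel in zip(design_chars, priority, relative):
    print(f"{name:15s} score={p:3d}  ({rel:.0%})")
```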

Keywords: quality function deployment, construction projects, Saudi Arabia, quality tools

Procedia PDF Downloads 92
296 The Basin Management Methodology for Integrated Water Resources Management and Development

Authors: Julio Jesus Salazar, Max Jesus De Lama

Abstract:

The challenges of water management are aggravated by global change, which implies high complexity and associated uncertainty; water management is difficult because water networks cross domains (natural, societal, and political), scales (space, time, jurisdictional, institutional, knowledge, etc.), and levels (area: from patches to global; knowledge: from specific cases to generalized principles). In this context, we need to apply both natural and non-natural measures to manage water and soil. The Basin Management Methodology considers multifunctional measures of natural water retention, erosion control, and soil formation to protect water resources and address challenges related to the recovery or conservation of the ecosystem and the natural characteristics of water bodies, in order to improve the quantitative status of water bodies and reduce vulnerability to floods and droughts. This method of water management focuses on positive impacts on the chemical and ecological status of water bodies and on restoring the functioning of the ecosystem and its natural services, thus contributing to both adaptation to and mitigation of climate change. The methodology was applied in 7 interventions in the sub-basin of the Shullcas River in Huancayo, Junín, Peru, obtaining great benefits within a framework of stakeholder alliances and integrated planning scenarios. To implement the methodology in the Shullcas sub-basin, a process called Climate Smart Territories (CST) was used, with which the variables were characterized in a highly complex space. The diagnosis was then developed using risk management and adaptation to climate change, and concluded with the selection of alternatives and projects of this type. The CST approach and process thus face the challenges of climate change through integrated, systematic, interdisciplinary, and collective responses at different scales that fit the needs of ecosystems and the services that are vital to human well-being. The methodology is now being replicated at the level of the Mantaro River basin, improving on other initiatives that lead towards the model of a resilient basin.

Keywords: climate-smart territories (CST), climate change, ecosystem services, natural measures

Procedia PDF Downloads 111
295 An Analysis of the Role of Watchdog Civil Society Organisations in Public Governance in Southern Africa: A Study of South Africa and Zimbabwe

Authors: Julieth Gudo

Abstract:

The prevalence of corruption in African countries and the persistent unsatisfactory distribution of state resources among citizens are clear indicators of a festering problem. Civil society organisations (CSOs) in Southern African countries, as citizen representatives, have been involved in challenging the ongoing corruption and poor governance in the public sector that have caused tensions between citizens and their governments. In doing so, CSOs demand accountability, transparency, and citizen participation in public governance. The problem is that the role of CSOs in challenging governments is not clearly defined in either law or the literature. This uncertainty has resulted in an unsatisfactory operating and legal environment for CSOs and a strained relationship between them and their governments. This paper examines the role of civil society organisations in advancing good public governance in South Africa and Zimbabwe. The study will be conducted by means of a literature review and case studies. The state of public governance in Southern Africa will be discussed. The historical role of CSOs in the region will be explored, followed by their role in public governance in contemporary South Africa and Zimbabwe. The relationship between the state and civil society organisations will be examined. Furthermore, the legal frameworks that regulate and authorise CSOs in challenging poor governance in the public sector will be identified and discussed. Loopholes in these provisions will be identified, and the measures that CSOs use to hold those responsible for poor governance accountable will be discussed, thereby closing the existing gap concerning the undefined role of CSOs in public governance in Southern Africa. The research demonstrates the need for an enabling operating environment through better cooperation, communication, and relations between governments and CSOs; the speedy and effective amendment of existing laws; and the introduction of legal provisions that give express authority to CSOs to challenge poor governance on the part of Southern African governments. Also critical is the enforcement of laws, so that those responsible for poor governance and corruption in government are held accountable.

Keywords: civil society organisations, public governance, Southern Africa, South Africa, Zimbabwe

Procedia PDF Downloads 81
294 Standard Essential Patents for Artificial Intelligence Hardware and the Implications for Intellectual Property Rights

Authors: Wendy de Gomez

Abstract:

Standardization is a critical element in a society's ability to reduce uncertainty, subjectivity, misrepresentation, and interpretation while simultaneously contributing to innovation. Technological standardization codifies specific operationalization through legal instruments that provide rules of development, expectation, and use. In the current emerging technology landscape, Artificial Intelligence (AI) hardware, as a general-use technology, has seen incredible growth, as evidenced by AI technology patents filed between 2012 and 2018 in the United States Patent and Trademark Office (USPTO) AI dataset. However, as outlined in the 2023 United States Government National Standards Strategy for Critical and Emerging Technology, the codification of emerging technologies such as AI through standardization has not kept pace with their actual proliferation. This gap has the potential to cause significantly divergent downstream outcomes for AI in both the short and long term. This original empirical research provides an overview of the standardization efforts around AI in different geographies and gives a background to standardization law. It quantifies the longitudinal trend of AI hardware patents in the USPTO AI dataset. It seeks evidence of existing standard essential patents (SEPs) among these AI hardware patents through a text analysis of the statement of patent history and the field of the invention in Patent Vector, and examines their determination as SEPs and their inclusion in existing AI technology standards across the four main AI standards bodies: the European Telecommunications Standards Institute (ETSI); the International Telecommunication Union Telecommunication Standardization Sector (ITU-T); the Institute of Electrical and Electronics Engineers (IEEE); and the International Organization for Standardization (ISO). Once the analysis is complete, the paper discusses both the theoretical and operational implications of FRAND (fair, reasonable, and non-discriminatory) licensing agreements for the owners of these SEPs in the United States court and administrative system. It concludes with an evaluation of how standard setting organizations (SSOs) can work more effectively with SEP owners through various intellectual property mechanisms such as patent pools.
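
As a sketch of the kind of longitudinal count and keyword screen described above (the dataset schema, keywords, and records are hypothetical; the real USPTO AI dataset has its own columns and should be loaded from file):

```python
import pandas as pd

# Hypothetical extract of an AI-patent dataset
patents = pd.DataFrame({
    "patent_id": ["P1", "P2", "P3", "P4"],
    "year": [2013, 2015, 2017, 2018],
    "field_of_invention": [
        "neural network accelerator hardware",
        "image codec",
        "tensor processing interconnect per IEEE standard",
        "AI inference chip compliant with ETSI specification",
    ],
})

# Longitudinal trend: AI hardware patents per year
trend = patents.groupby("year").size()

# Crude SEP screen: does the text declare conformity to a standards body?
sso_terms = ["etsi", "itu", "ieee", "iso"]
patents["sep_candidate"] = patents["field_of_invention"].str.lower().apply(
    lambda text: any(term in text for term in sso_terms))
print(trend)
print(patents[["patent_id", "sep_candidate"]])
```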

Keywords: patents, artificial intelligence, standards, FRAND agreements

Procedia PDF Downloads 33
293 Design, Construction, Validation and Use of a Novel Portable Fire Effluent Sampling Analyser

Authors: Gabrielle Peck, Ryan Hayes

Abstract:

Current large-scale fire tests focus on flammability and heat release measurements. Smoke toxicity is not considered, despite being a leading cause of death and injury in unwanted fires. A key reason could be that the practical difficulties associated with quantifying the individual toxic components present in a fire effluent often require specialist equipment and expertise. Fire effluent contains a mixture of unreactive and reactive gases, water, organic vapours, and particulate matter, which interact with each other; this interferes with the operation of the analytical instrumentation and must be removed without changing the concentration of the target analyte. To mitigate the need for expensive equipment and time-consuming analysis, a portable gas analysis system was designed, constructed, and tested for use in large-scale fire tests as a simpler and more robust alternative to online FTIR measurements. The equipment was designed to be easily portable and able to run on battery or mains electricity; to be calibratable at the test site; to quantify CO, CO2, O2, HCN, HBr, HCl, NOx, and SO2 accurately and reliably; to log data independently; to switch automatically between 7 bubblers, with individual bubbler times pre-settable; to withstand fire effluents; to be simple to operate; and to be controllable remotely. To test the analyser's functionality, it was used alongside the ISO/TS 19700 steady state tube furnace (SSTF). A series of tests was conducted with PMMA and PA 6.6 to assess the validity of the box analyser measurements and the data logging abilities of the apparatus; the data obtained from the bench-scale assessments showed excellent agreement. Following this, the portable analyser was used to monitor gas concentrations during large-scale testing with the ISO 9705 room corner test. The analyser was set up, calibrated, and set to record smoke toxicity measurements in the doorway of the test room. It operated without manual interference and successfully recorded data in all 12 of the ISO room tests conducted. At the end of each test, the analyser created a data file (formatted as .csv) containing the gas concentrations measured throughout the test, which does not require specialist knowledge to interpret. This validated the portable analyser's ability to monitor fire effluent without operator intervention at both bench and large scale. The portable analyser is a validated and significantly more practical alternative to FTIR, proven to work in large-scale fire testing for the quantification of smoke toxicity. It is a cheaper, more accessible option for assessing smoke toxicity, mitigating the need for expensive equipment and specialist operators.
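
A sketch of the automated bubbler-switchover and CSV logging behaviour described above; the timings, the hardware valve call, and the sensor read-out function are hypothetical placeholders:

```python
import csv
import time

BUBBLER_TIMES_S = [300] * 7   # pre-set sampling time per bubbler (illustrative)
GASES = ["CO", "CO2", "O2", "HCN", "HBr", "HCl", "NOx", "SO2"]

def read_gas_concentrations():
    """Placeholder for the analyser's sensor read-out (ppm / vol%)."""
    return {gas: 0.0 for gas in GASES}

def run_test(log_path="test_run.csv", sample_period_s=1.0):
    """Cycle through the bubblers, logging every reading to CSV."""
    with open(log_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["elapsed_s", "bubbler"] + GASES)
        start = time.monotonic()
        for bubbler, duration in enumerate(BUBBLER_TIMES_S, start=1):
            # switch_valve(bubbler)  # hypothetical hardware call
            t_end = time.monotonic() + duration
            while time.monotonic() < t_end:
                reading = read_gas_concentrations()
                elapsed = time.monotonic() - start
                writer.writerow([f"{elapsed:.1f}", bubbler]
                                + [reading[g] for g in GASES])
                time.sleep(sample_period_s)

# run_test()  # would sample 7 bubblers for 5 minutes each
```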

Keywords: smoke toxicity, large-scale tests, ISO 9705, analyser, novel equipment

Procedia PDF Downloads 40
292 What Happens When We Try to Bridge the Science-Practice Gap? An Example from the Brazilian Native Vegetation Protection Law

Authors: Alice Brites, Gerd Sparovek, Jean Paul Metzger, Ricardo Rodrigues

Abstract:

The segregation between science and policy in decision-making processes hinders nature conservation efforts worldwide, and scientists have been criticized for not producing information that leads to effective solutions for environmental problems. In an attempt to bridge this gap between science and practice, we conducted a project aimed at supporting the implementation of the Brazilian Native Vegetation Protection Law (NVPL) in São Paulo State (SP), Brazil. To do so, we held multiple open meetings with the stakeholders involved in this discussion. Throughout this process, we collected stakeholders' demands for scientific information and brought feedback on our findings. However, our main scientific advice was not taken into account during the NVPL implementation in SP. The NVPL has a mechanism that exempts landholders from restoration requirements if they converted native vegetation without offending the legislation in place at the time of the conversion. We found that there were no accurate spatialized data on native vegetation cover before the 1960s; thus, the initial benchmark for applying the mechanism should be the 1965 Brazilian Forest Act. Even so, SP kept the 1934 Brazilian Forest Act as the initial legal benchmark for the law's application. This decision implies the use of a probabilistic native vegetation map, whose intrinsic uncertainty and subjectivity can lead to legal disputes, corruption, and unfair application of the benefit. But why was this decision made even after the scientific advice was widely disseminated? We raise some possible reasons. First, the decision was made during a government transition, showing that circumstantial political events can overshadow scientific arguments. Second, the debate about the NVPL in SP had not been settled, and powerful stakeholders could benefit from the confusion created by the decision. Finally, the native vegetation protection mechanism is a complex issue, with many technical aspects that can be hard to understand for a non-specialized courtroom, such as the one that made the final decision in SP. This example shows that scientists and decision-makers still have a long way to go in improving the way they interact, and that science needs to find a way to be heard above the political buzz.

Keywords: Brazil, forest act, science-based dialogue, science-policy interface

Procedia PDF Downloads 94
291 A Dual-Mode Infinite Horizon Predictive Control Algorithm for Load Tracking in PUSPATI TRIGA Reactor

Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha

Abstract:

The PUSPATI TRIGA Reactor (RTP) in Malaysia reached first criticality on June 28, 1982, with a thermal power capacity of 1 MW. The present power control method, the Feedback Control Algorithm (FCA), is a conventional Proportional-Integral (PI) controller used to control the fission process in the RTP. It is important to ensure that the core power is always stable and follows load tracking within an acceptable steady-state error and with minimum settling time to reach steady-state power. At present, the system could be considered not well-posed in terms of power tracking performance; there is potential to improve on current performance by developing a novel next-generation core power controller. In this paper, the dual-mode predictions proposed in Optimal Model Predictive Control (OMPC) are presented in a state-space model to control the core power. The model for core power control was based on mathematical models of the reactor core, OMPC, and a control rod selection algorithm. The mathematical models of the reactor core comprised neutronic, thermal-hydraulic, and reactivity models. The dual-mode prediction in OMPC, covering transient and terminal modes, was based on the implementation of a Linear Quadratic Regulator (LQR) in the core power control design. The combination of dual-mode prediction with a Lyapunov approach, which handles the summations in the cost function over an infinite horizon, is intended to eliminate some of the fundamental weaknesses of MPC. This paper shows the behaviour of OMPC in dealing with tracking, the regulation problem, disturbance rejection, and parameter uncertainty. The tracking and regulating performance of the conventional controller and of OMPC are compared through numerical simulation. In conclusion, the proposed OMPC shows significant performance in load tracking and in regulating core power for a nuclear reactor, with guaranteed closed-loop stability.
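
The terminal-mode LQR ingredient can be sketched directly: solve the discrete algebraic Riccati equation for a toy state-space model and check closed-loop stability. The matrices below are illustrative, not the RTP core model:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy discrete-time state-space model x[k+1] = A x[k] + B u[k]
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Q = np.diag([10.0, 1.0])   # state weighting (penalise power error)
R = np.array([[1.0]])      # control effort weighting

# Terminal-mode LQR: P solves the discrete algebraic Riccati equation
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # u = -K x

# Stable if all eigenvalues of (A - B K) lie inside the unit circle
eig = np.linalg.eigvals(A - B @ K)
print("gain K =", K, "| closed-loop eigenvalue moduli:", np.abs(eig))
```

In the dual-mode scheme, the same P and K define the terminal cost and the control law assumed beyond the prediction horizon, which is what allows the infinite-horizon cost summation to be bounded.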

Keywords: core power control, dual-mode prediction, load tracking, optimal model predictive control

Procedia PDF Downloads 134
290 Family Medicine Residents in End-of-Life Care

Authors: Goldie Lynn Diaz, Ma. Teresa Tricia G. Bautista, Elisabeth Engeljakob, Mary Glaze Rosal

Abstract:

Introduction: Residents are expected to convey unfavorable news, discuss prognoses, relieve suffering, and address do-not-resuscitate orders, yet some report a lack of competence in providing this type of care. Recognizing this need, Family Medicine residency programs are incorporating end-of-life care, from symptom and pain control to counseling and humanistic qualities, as core proficiencies in training. Objective: This study determined the competency of Family Medicine residents from various institutions in Metro Manila in rendering care for the dying. Materials and Methods: Trainees completed a Palliative Care Evaluation tool to assess their degree of confidence in patient and family interactions, patient management, and attitudes towards hospice care. Results: Remarkably, only a small fraction of participants were confident in independently managing terminal delirium and dyspnea. Fewer than 30% of residents could do the following without supervision: discuss medication effects and patient wishes after death, support coping with pain, vomiting, and constipation, and respond to limited patient decision-making capacity. Half of the respondents were confident in supporting the patient or a family member when they become upset. The majority expressed confidence in many end-of-life care skills provided supervision, coaching, and consultation were available. Most trainees believed that pain medication should be given as needed to terminally ill patients. There was also uncertainty as to the most appropriate person to make end-of-life decisions. These attitudes may be influenced by personal beliefs rooted in cultural upbringing, as well as by personal experiences with death in the family, which may also affect residents' participation and confidence in caring for the dying. Conclusion: Enhancing the quality and quantity of end-of-life care experiences during residency, with sufficient supervision and role modeling, may improve knowledge and skills and help ensure quality of care. Fostering bedside learning opportunities during residency is an appropriate venue for teaching interventions in end-of-life care education.

Keywords: end of life care, geriatrics, palliative care, residency training skill

Procedia PDF Downloads 232
289 Dynamic Network Approach to Air Traffic Management

Authors: Catia S. A. Sima, K. Bousson

Abstract:

Congestion in the Terminal Maneuvering Areas (TMAs) of larger airports impacts all aspects of air traffic flow, not only at the national level, but may also induce arrival delays at the international level. Hence, there is a need to monitor air traffic flow in TMAs appropriately so that efficient decisions can be taken to manage their occupancy rates. It would be desirable to physically increase the existing airspace to accommodate all existing demand, but this is entirely utopian; given this, several studies and analyses have been developed over the past decades to meet the challenges arising from the dizzying expansion of the aeronautical industry. The main objective of the present paper is to propose concepts to manage and reduce the degree of uncertainty in air traffic operations, maximizing the interest of all involved, ensuring a balance between demand and supply, and developing and/or adapting resources that enable a rapid and effective adaptation of measures to the current context and the consequent changes perceived in the aeronautical industry. A central task is to increase air traffic flow management capacity, taking into account not only a wide range of methodologies but also equipment and/or tools already available in the aeronautical industry. The efficient use of these resources is crucial, as human capacity for work is limited and the actors involved in all processes related to air traffic flow management are increasingly overloaded; as a result, operational safety could be compromised. The methodology used to address these issues exploits the advantages of Markov chain principles, which enable the construction of a simplified dynamic network model that describes air traffic flow behavior, anticipating changes and eventual measures that could better address the impact of increased demand. Through this model, the proposed concepts are shown to have the potential to optimize air traffic flow management, combined with the operation of the existing resources at each moment and the circumstances found in each TMA, using historical data from air traffic operations and the specificities of the aeronautical industry, namely in the Portuguese context.
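
The Markov chain idea can be sketched by modelling TMA occupancy as states with a transition matrix estimated from historical data; the stationary distribution then gives the long-run share of time in each congestion state. The matrix below is hypothetical:

```python
import numpy as np

# Hypothetical one-step transition matrix between TMA occupancy states
# (rows: current state; columns: next state), estimated from historical data
states = ["low", "medium", "high"]
P = np.array([
    [0.80, 0.18, 0.02],
    [0.25, 0.60, 0.15],
    [0.05, 0.45, 0.50],
])
assert np.allclose(P.sum(axis=1), 1.0)

# Stationary distribution: left eigenvector of P for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
for s, p in zip(states, pi):
    print(f"long-run fraction in {s:6s} occupancy: {p:.2f}")

# k-step-ahead forecast from a known current state
k = 3
forecast = np.linalg.matrix_power(P, k)[states.index("high")]
print("3-step distribution starting from 'high':", forecast.round(2))
```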

Keywords: air traffic flow, terminal maneuvering area, TMA, air traffic management, ATM, Markov chains

Procedia PDF Downloads 98
288 Lamivudine Continuation/Tenofovir Add-on Adversely Affects Treatment Response among Lamivudine Non-Responder HIV-HBV Co-Infected Patients from Eastern India

Authors: Ananya Pal, Neelakshi Sarkar, Debraj Saha, Dipanwita Das, Subhashish Kamal Guha, Bibhuti Saha, Runu Chakravarty

Abstract:

Presently, tenofovir disoproxil fumarate (TDF) is the most effective antiviral agent for the treatment of hepatitis B virus (HBV) in individuals co-infected with HIV and HBV, as TDF is active against both wild-type and lamivudine (3TC)-resistant HBV. However, suboptimal responses to TDF have recently been reported in HIV-HBV co-infected individuals with prior 3TC therapy in several countries. The incidence of 3TC-resistant HBV strains is quite high in HIV-HBV co-infected patients on long-term antiretroviral therapy (ART) in eastern India. In spite of this risk, most patients with long-term 3TC treatment in this country are continued on the same antiviral agent; only a few have received TDF in addition to 3TC in their ART regimen since TDF became available in India for the treatment of HIV-infected patients in 2012. In this preliminary study, we investigated virologic and biochemical parameters among HIV-HBV co-infected patients who were non-responders to 3TC, during either continuation of 3TC or addition of TDF to 3TC in their ART regimen. Fifteen HIV-HBV co-infected patients with long-term 3TC exposure (mean duration 36.87 ± 24.08 months) were identified with high HBV viremia (> 20,000 IU/ml) or harbouring 3TC-resistant HBV. These patients, receiving ART from the School of Tropical Medicine Kolkata, the main ART centre in eastern India, were followed up semi-annually over the next three visits. Virologic parameters were studied, including quantification of plasma HBV load by real-time PCR, detection of hepatitis B e antigen (HBeAg) by commercial ELISA, and detection of antiviral resistance mutations by sequencing. Across the three follow-ups, 86%, 47%, and 43% of subjects were on 3TC monotherapy (mean treatment durations 41.54±18.84, 49.67±11.67, and 54.17±12.37 months, respectively), whereas 14%, 53%, and 57% received TDF in addition to 3TC (mean treatment durations 4.5±2.12, 16.56±11.06, and 23±4.07 months, respectively). The mean CD4 cell count in patients receiving 3TC tended to be lower at the third follow-up than at the first and second [520.67±380.30 (1st), 454.8±196.90 (2nd), and 397.5±189.24 (3rd) cells/mm3], and a similar trend was seen in patients receiving TDF in addition to 3TC [334.5±330.22 (1st), 476.5±194.25 (2nd), and 461.17±269.89 (3rd) cells/mm3]. Serum HBV load increased over successive follow-ups in patients on 3TC monotherapy. Initiation of TDF lowered serum HBV load among 3TC non-responders at the second visit (< 2,000 IU/ml); interestingly, by the third follow-up, mean HBV viremia had increased by more than 1 log IU/ml (mean 3.56±2.84 log IU/ml). Persistence of 3TC-resistant double and triple mutations was also observed under both treatment regimens. Mean serum alanine aminotransferase remained elevated in these patients throughout the follow-up. Persistence of high HBV viremia and of 3TC-resistant HBV mutations during continuation of 3TC might pose a major public health threat in India. The inclusion of TDF in the ART regimen of 3TC non-responder HIV-HBV co-infected patients showed an adverse treatment response in terms of virologic and biochemical parameters. Therefore, serious attention is necessary for the proper management of long-term 3TC-experienced HIV-HBV co-infected patients with high HBV viremia or 3TC-resistant HBV mutants in India.

Keywords: HBV, HIV, TDF, 3TC-resistant

Procedia PDF Downloads 323
287 Fuzzy Decision Making to the Construction Project Management: Glass Facade Selection

Authors: Katarina Rogulj, Ivana Racetin, Jelena Kilic

Abstract:

In this study, a fuzzy logic approach (FLA) was developed for construction project management (CPM) under uncertainty and duality. The focus was on decision making in selecting the type of glass facade for a residential-commercial building in the main design. The adoption of fuzzy sets is capable of reflecting construction managers' level of confidence in subjective judgments, and thus the robustness of the system can be achieved. An α-cuts method was utilized for discretizing the fuzzy sets in the FLA. This method can carry all uncertain information through the optimization process, taking into account the values of this information. Furthermore, the FLA provides in-depth analyses of diverse policy scenarios related to various levels of economic aspects in valid decision making for construction projects. The developed approach is applied to CPM to demonstrate its applicability. By analyzing glass facade materials, variants were defined. The development of the FLA for CPM involved the relevant construction project stakeholders in defining the criteria used to evaluate each variant. Using the fuzzy Decision-Making Trial and Evaluation Laboratory method (DEMATEL), a comparison of the glass facade variants was conducted. In this way, a ranking of the variants according to their priority for inclusion in the main design is obtained. The concept was tested on a residential-commercial building in the city of Rijeka, Croatia. The newly developed methodology was then compared with the existing one. The aim of the research was to define an approach that will improve current judgments and decisions regarding the material selection of building facades, one of the most important architectural and engineering tasks in the main design. The advantage of the new methodology compared to the old one is that it includes the subjective side of managers' decisions, an inevitable factor in every decision-making process. The proposed approach can help construction project managers identify the desired type of glass facade according to their preferences and practical conditions, as well as facilitate in-depth analyses of trade-offs between economic efficiency and architectural design.
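
As a concrete illustration of the α-cuts discretization mentioned above, the Python sketch below computes the interval of a triangular fuzzy number at several membership levels; the judgment values are hypothetical, not taken from the Rijeka case study.

```python
# A minimal sketch of alpha-cut discretization, assuming expert judgments are
# modeled as triangular fuzzy numbers (a, m, b) with modal value m.
def alpha_cut(a, m, b, alpha):
    """Interval of a triangular fuzzy number at membership level alpha in [0, 1]."""
    lo = a + alpha * (m - a)   # left bound rises toward the modal value m
    hi = b - alpha * (b - m)   # right bound falls toward m
    return lo, hi

# Hypothetical judgment: "cost rating around 7, somewhere between 5 and 9"
for alpha in (0.0, 0.5, 1.0):
    print(alpha, alpha_cut(5, 7, 9, alpha))
# alpha = 1.0 collapses to the crisp value 7; lower alphas admit more uncertainty
```

Evaluating each variant over a grid of α levels carries the full uncertainty of the judgments through to the DEMATEL comparison, rather than collapsing them to crisp scores at the outset.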

Keywords: construction projects management, DEMATEL, fuzzy logic approach, glass facade selection

Procedia PDF Downloads 101
286 Assessment of a Rapid Detection Sensor of Faecal Pollution in Freshwater

Authors: Ciprian Briciu-Burghina, Brendan Heery, Dermot Brabazon, Fiona Regan

Abstract:

Good-quality bathing water is a highly desirable natural resource which can provide major economic, social, and environmental benefits. Both in Ireland and across Europe, such water bodies are managed under the European Directive for the management of bathing water quality (BWD). The BWD aims mainly: (i) to improve health protection for bathers by introducing stricter standards for faecal pollution assessment (E. coli, enterococci), (ii) to establish a more pro-active approach to the assessment of possible pollution risks and the management of bathing waters, and (iii) to increase public involvement and the dissemination of information to the general public. Standard methods for E. coli and enterococci quantification rely on cultivation of the target organism, which requires long incubation periods (from 18 h to a few days). This is not ideal when immediate action is required for risk mitigation. Municipalities that oversee bathing water quality and deploy appropriate signage have to wait for laboratory results; during this time, bathers can be exposed to pollution events and health risks. Although forecasting tools exist, they are site-specific and, as a consequence, require extensive historical data to be effective. Another approach for the early detection of faecal pollution is the use of marker enzymes. β-glucuronidase (GUS) is a widely accepted biomarker for E. coli detection in microbiological water quality control. GUS assays are particularly attractive as they are rapid (less than 4 h), easy to perform, and do not require specialised training. A method for on-site detection of GUS from environmental samples in less than 75 min was previously demonstrated. In this study, the capability of ColiSense as an early-warning system for faecal pollution in freshwater is assessed. The system successfully detected GUS activity in all 45 freshwater samples tested. GUS activity was found to correlate linearly with E. coli (r2=0.53, N=45, p < 0.001) and enterococci (r2=0.66, N=45, p < 0.001). Although GUS is a marker for E. coli, a better correlation was obtained for enterococci. For this study, water samples were collected from five rivers in the Dublin area over one month, which suggests that a high diversity of pollution sources (agricultural, industrial, etc.), both point and diffuse, was captured in the sample set. Such variety in the sources of E. coli can account for different GUS activities per culturable cell and different ratios of viable-but-not-culturable to viable culturable bacteria. A previously developed protocol for the recovery and detection of E. coli was coupled with a miniaturised fluorometer (ColiSense), and the system was assessed for the rapid detection of faecal indicator bacteria (FIB) in freshwater samples. Further work will be carried out to evaluate the system's performance on seawater samples.
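
To illustrate the kind of correlation analysis reported above (r², N, p), the Python sketch below regresses log-transformed culture counts on GUS activity; the measurements are invented placeholders, not the study's data.

```python
# A hedged sketch of the GUS-vs-culture correlation, assuming log10-transformed
# counts; all values below are illustrative, not the study's measurements.
import numpy as np
from scipy.stats import linregress

gus_activity = np.array([0.8, 1.5, 2.1, 3.0, 4.2, 5.1])   # e.g. enzyme activity units
ecoli_counts = np.array([120, 400, 900, 2500, 8000, 15000])  # CFU / 100 ml

res = linregress(gus_activity, np.log10(ecoli_counts))
print(f"r^2 = {res.rvalue**2:.2f}, p = {res.pvalue:.4f}")
```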

Keywords: faecal pollution, β-glucuronidase (GUS), bathing water, E. coli

Procedia PDF Downloads 244
285 Application of Thermal Dimensioning Tools to Consider Different Strategies for the Disposal of High-Heat-Generating Waste

Authors: David Holton, Michelle Dickinson, Giovanni Carta

Abstract:

The principle of geological disposal is to isolate higher-activity radioactive wastes deep inside a suitable rock formation to ensure that no harmful quantities of radioactivity reach the surface environment. To achieve this, wastes will be placed in an engineered underground containment facility – the geological disposal facility (GDF) – which will be designed so that natural and man-made barriers work together to minimise the escape of radioactivity. Internationally, various multi-barrier concepts have been developed for the disposal of higher-activity radioactive wastes. High-heat-generating wastes (HLW, spent fuel and Pu) pose a number of technical challenges different from those associated with the disposal of low-heat-generating waste. Thermal management of the disposal system must be taken into consideration in GDF design; temperature constraints might apply to the wasteform, container, buffer, and host rock. Of these, the temperature limit placed on the buffer component of the engineered barrier system (EBS) can be the most constraining factor. The heat must therefore be managed such that the properties of the buffer are not compromised to the extent that it cannot deliver the required level of safety. The maximum temperature of a buffer surrounding a container at the centre of a fixed array of heat-generating sources arises because heat diffusing from neighbouring heat-generating wastes incrementally contributes to the temperature of the EBS. A range of strategies can be employed for managing heat in a GDF, including the spatial arrangements or patterns of the waste containers; different geometrical configurations can influence the overall thermal density in a disposal facility (or an area within a facility) and therefore the maximum buffer temperature. A semi-analytical thermal dimensioning tool and methodology have been applied at a generic stage to explore a range of strategies for managing the disposal of high-heat-generating waste. A number of examples, including different geometrical layouts and chequer-boarding, are illustrated to demonstrate how these tools can be used to consider safety margins and inform strategic disposal options when faced with uncertainty at a generic stage of the development of a GDF.
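
The superposition argument in the preceding paragraph can be sketched with a simple semi-analytical calculation: summing the temperature rise from an array of constant-power point heat sources in an infinite homogeneous medium. The sketch below assumes illustrative parameter values (thermal conductivity, diffusivity, power, container pitch) rather than real design figures, and ignores the decay of heat output over time.

```python
# Superposition of constant-power point heat sources in an infinite medium;
# all parameter values are illustrative assumptions, not design figures.
import math

def point_source_dT(q_watts, r_m, t_s, k=2.0, alpha=1e-6):
    """Temperature rise: Q / (4*pi*k*r) * erfc(r / (2*sqrt(alpha*t)))."""
    return q_watts / (4 * math.pi * k * r_m) * math.erfc(r_m / (2 * math.sqrt(alpha * t_s)))

# 5 x 5 grid of containers at 20 m pitch; evaluate at the central container's
# buffer, taking a 1 m offset from the central source itself.
pitch = 20.0
t = 100 * 365.25 * 24 * 3600  # 100 years in seconds
dT = 0.0
for i in range(-2, 3):
    for j in range(-2, 3):
        r = max(math.hypot(i * pitch, j * pitch), 1.0)
        dT += point_source_dT(500.0, r, t)
print(f"temperature rise at centre after 100 years: {dT:.1f} K")
```

Re-running such a calculation for different pitches or chequer-board patterns (alternating loaded and empty positions) shows directly how layout choices trade footprint against peak buffer temperature.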

Keywords: buffer, geological disposal facility, high-heat-generating waste, spent fuel

Procedia PDF Downloads 244
284 Covid-19 Associated Stress and Coping Strategies

Authors: Bar Shapira-Youngster, Sima Amram-Vaknin, Yuliya Lipshits-Braziler

Abstract:

The study examined how 811 Israelis experienced and coped with the COVID-19 lockdown. Stress, uncertainty, and loss of control were reported as common emotional experiences. Two main difficulties were reported: loneliness, and health and emotional concerns. Frequent explanations for the virus's emergence drew on either scientific or faith-based reasoning. The most prevalent coping strategies were distraction activities and acceptance. Reducing the use of maladaptive coping strategies has important implications for mental health outcomes. Objectives: COVID-19 has been recognized as a collective, continuous traumatic stressor. The present study examined how individuals experienced, perceived, and coped with this traumatic event during the lockdown in Israel in April 2020. Method: 811 Israelis (71.3% women; mean age 43.7, SD=13.3) completed an online semi-structured questionnaire consisting of two sections. In the first section, participants were asked to report background information. In the second section, they were asked to answer 8 open-ended questions about their experience, perception, and coping with the COVID-19 lockdown. Participation was voluntary, anonymity was assured, and participants were not offered compensation of any kind. The data were subjected to qualitative content analysis, which seeks to classify the participants' answers into an effective number of categories that represent similar meanings. Our content analysis of participants' answers extended far beyond simple word counts; our objective was to identify recurrent categories that characterized participants' responses to each question. We sought to ensure that the categories for the different questions were as mutually exclusive and exhaustive as possible. To ensure robust analysis, the data were initially analyzed by the first author, and a second opinion was then sought from research colleagues. Contribution: The present research expands our knowledge of individuals' experiences, perceptions, and coping mechanisms in continuous traumatic events. Reducing the use of maladaptive coping strategies has important implications for mental health outcomes.

Keywords: Covid-19, emotional distress, coping, continuous traumatic event

Procedia PDF Downloads 97
283 Identifying Factors of Wellbeing in Russian Orphans

Authors: Alexandra Telitsyna, Galina Semya, Elvira Garifulina

Abstract:

Introduction: Since 2012, Russia has pursued a deinstitutionalization policy, and the main indicator of its success is now the number of children living in institutions. The active family-placement process has meant that the residents of institutions now mainly consist of adolescents with behavioral and emotional problems, children with disabilities, and groups of siblings. Purpose: The purpose of this research is to identify factors supporting a child's wellbeing during a temporary stay in an orphanage, together with the children's subjective assessment of their level of well-being (psychological well-being). Methods: The data for this project were collected with a questionnaire of 72 indicators, a tool for monitoring the behavior of children and caregivers, an additional questionnaire for children, and a well-being assessment questionnaire containing 10 scales for three age groups, from preschoolers to older adolescents. In 2016-2018, the research was conducted in 1,873 institutions in 85 regions of Russia. In each region, a team of academics, non-profit specialists, and independent experts was created, and training was conducted for team members through a series of webinars prior to undertaking the assessment. Results: To ensure the well-being of the children, the following conditions are necessary: 1- life in the institution is organised according to the principles of family care (including the creation of conditions for attachment to form); 2- contribution to finding family-based placements for children (including reintegration into the primary family); 3- work with the parents of children placed in an organization at the parents' request; 4- children attend schools according to their needs; 5- training of staff and volunteers; 6- a special environment and services for children with special needs and children with disabilities; 7- cooperation with NGOs; 8- openness and accessibility of the organization. Conclusion: A study of the psychological well-being of the children showed that the most emotionally stressful questions concerned the presence and frequency of contact with relatives, and that the level of well-being is higher in the presence of a trusted adult and respect for the child's rights. The greatest contributors to distress are the length of time the child has spent in the orphanage, the lack of contact with parents and relatives, and uncertainty about the future.

Keywords: identifying factors, orphans, Russia, wellbeing

Procedia PDF Downloads 104
282 A Laser Instrument Rapid-E+ for Real-Time Measurements of Airborne Bioaerosols Such as Bacteria, Fungi, and Pollen

Authors: Minghui Zhang, Sirine Fkaier, Sabri Fernana, Svetlana Kiseleva, Denis Kiselev

Abstract:

The real-time identification of bacteria and fungi is difficult because they emit much weaker signals than pollen. In 2020, Plair developed Rapid-E+, which extends the abilities of Rapid-E to detect smaller bioaerosols such as bacteria and fungal spores with diameters down to 0.3 µm, while keeping similar or even better capability for measurements of large bioaerosols like pollen. Rapid-E+ enables simultaneous measurements of (1) time-resolved, polarization- and angle-dependent Mie scattering patterns, (2) fluorescence spectra resolved in 16 channels, and (3) the fluorescence lifetime of individual particles. Moreover, (4) it provides 2D Mie scattering images which give full information on particle morphology. The parameters of every single bioaerosol aspirated into the instrument are subsequently analysed by machine learning. Firstly, pure species of microbes, e.g., Bacillus subtilis (a species of bacteria) and Penicillium chrysogenum (a species of fungal spore), were aerosolized in a bioaerosol chamber for Rapid-E+ training. Afterwards, we tested the microbes at different concentrations. We used several steps of data analysis to classify and identify the microbes. All single particles were analysed using the parameters of light scattering and fluorescence in the following steps: (1) they were treated with a smart filter block to remove non-microbes; (2) a classification algorithm verified that the filtered particles were microbes, based on the calibration data; (3) a probability threshold step (defined by the user) assigns each particle a probability of being a microbe, ranging from 0 to 100%. We demonstrate how Rapid-E+ simultaneously identified microbes, based on the results for Bacillus subtilis (bacteria) and Penicillium chrysogenum (fungal spores). Using machine learning, Rapid-E+ achieved an identification precision of 99% against the background. Further classification suggests precisions of 87% and 89% for Bacillus subtilis and Penicillium chrysogenum, respectively. The developed algorithm was subsequently used to evaluate the performance of microbe classification and quantification in real time. The bacteria and fungi were aerosolized again in the chamber at different concentrations, and Rapid-E+ was able to classify the different types of microbes and then quantify them in real time. Rapid-E+ can also identify pollen down to the species level, with similar or even better performance than the previous version (Rapid-E). Therefore, Rapid-E+ is an all-in-one instrument which classifies and quantifies not only pollen but also bacteria and fungi. Based on the machine learning platform, users can further develop proprietary algorithms for specific microbes (e.g., virus aerosols) and other aerosols (e.g., combustion-related particles that contain polycyclic aromatic hydrocarbons).
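
A hedged Python sketch of the three-step analysis described above (smart filter, classification against calibration data, user-defined probability threshold) is given below, using a generic scikit-learn classifier in place of Plair's proprietary pipeline; all feature names and values are assumptions.

```python
# Generic stand-in for the per-particle pipeline; not Plair's actual algorithm.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical per-particle features: a scattering size proxy + 16 fluorescence channels
X_train = rng.random((500, 17))
y_train = rng.integers(0, 2, 500)   # 0 = background, 1 = microbe (placeholder labels)

# Step 1: smart filter - drop particles whose size proxy is implausible for microbes
def size_filter(X, lo=0.1, hi=0.9):
    mask = (X[:, 0] > lo) & (X[:, 0] < hi)
    return X[mask]

# Step 2: classification against calibration (training) data
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Step 3: user-defined probability threshold (0-100%)
X_new = size_filter(rng.random((100, 17)))
proba = clf.predict_proba(X_new)[:, 1]
is_microbe = proba >= 0.80          # e.g. an 80% threshold chosen by the user
print(f"{is_microbe.sum()} of {len(X_new)} filtered particles classified as microbes")
```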

Keywords: bioaerosols, laser-induced fluorescence, Mie-scattering, microorganisms

Procedia PDF Downloads 60
281 Testing the Simplification Hypothesis in Constrained Language Use: An Entropy-Based Approach

Authors: Jiaxin Chen

Abstract:

Translations have been described as more simplified than non-translations, featuring less diversified and more frequent lexical items and simpler syntactic structures. Such simplified linguistic features have been identified in other bilingualism-influenced language varieties, including non-native and learner language use. It has therefore been proposed that translation could be studied within the broader framework of constrained language, with simplification being one of the universal features shared by constrained language varieties due to similar cognitive-physiological and social-interactive constraints. Yet contradictory findings have also been presented. To address this issue, this study adopts Shannon's entropy-based measures to quantify complexity in language use. Entropy measures the level of uncertainty or unpredictability in message content, and it has been adapted in linguistic studies to quantify linguistic variance, including morphological diversity and lexical richness. In this study, the complexity of lexical and syntactic choices will be captured by word-form entropy and part-of-speech (POS)-form entropy, and a comparison will be made between constrained and non-constrained language use to test the simplification hypothesis. The entropy-based method is employed because it captures both the frequency of linguistic choices and the evenness of their distribution, neither of which is fully available through traditional indices. Another advantage of the entropy-based measure is that it is reasonably stable across languages and thus allows for reliable comparison among studies of different language pairs. As for the data for the present study, one established corpus (CLOB) and two self-compiled corpora will be used to represent native written English and two constrained varieties (L2 written English and translated English), respectively. Each corpus consists of around 200,000 tokens. Genre (press) and text length (around 2,000 words per text) are comparable across corpora. More specifically, word-form entropy and POS-form entropy will be calculated as indicators of lexical and syntactic complexity, and ANOVA tests will be conducted to explore whether there is any corpus effect. It is hypothesized that both L2 written English and translated English have lower entropy than non-constrained written English. The similarities and divergences between the two constrained varieties may provide indications of the constraints shared by and peculiar to each variety.
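
For clarity, the Python sketch below shows a minimal word-form entropy computation of the kind described above, Shannon entropy H = -Σ p·log2(p) over the relative frequencies of word forms; the toy texts are invented.

```python
# Minimal word-form entropy, assuming pre-tokenized text; POS-form entropy
# would apply the same formula to part-of-speech tags instead of word forms.
from collections import Counter
from math import log2

def word_form_entropy(tokens):
    """Shannon entropy (bits) of the word-form distribution in a token list."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Toy comparison: a repetitive (more 'constrained') text scores lower because
# entropy rewards both diversity and evenness of the distribution.
varied = "the cat sat on a mat while the dog ran in the park".split()
repetitive = "the cat sat on the mat the cat sat on the mat".split()
print(word_form_entropy(varied), ">", word_form_entropy(repetitive))
```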

Keywords: constrained language use, entropy-based measures, lexical simplification, syntactic simplification

Procedia PDF Downloads 61
280 Climate Related Financial Risk on Automobile Industry and the Impact to the Financial Institutions

Authors: Mahalakshmi Vivekanandan S.

Abstract:

Following recent changes in global policy, climate-related changes and the impacts they cause across every sector are viewed as green swan events: in essence, climate-related changes can happen often and lead to risk and considerable uncertainty, but they need to be mitigated rather than treated as black swan events. This raises the question of how this risk can be computed so that financial institutions can plan to mitigate it. Climate-related changes affect all risk types: credit risk, market risk, operational risk, liquidity risk, reputational risk, and others. The models required to compute this risk have to consider the industrial profile of the counterparty, as well as the contributing factors, whether in the form of different risk drivers, different transmission channels, or different approaches and the granularity of available data. This suggests that climate-related change, though it affects Pillar I risks, should be treated as a Pillar II risk. It has to be modeled specifically on the basis of the financial institution's actual exposure to different industries instead of generalizing the risk charge, and it will have to be held as additional capital by the financial institution on top of its Pillar I risks and its existing Pillar II risks. In this paper, the author presents a risk assessment framework to model and assess climate change risks, for both credit and market risk. This framework helps in assessing different scenarios and how the various transition risks affect the risk associated with different counterparties. The paper delves into the increase in greenhouse gas concentrations that in turn causes global warming. It then considers various scenarios in which different risk drivers impact the credit and market risk of an institution, by understanding the transmission channels and also considering transition risk. The paper then focuses on an industry facing rapid disruption: the automobile industry. It uses the framework to show how climate change and changes to the relevant policies impact the financial institution as a whole. Appropriate statistical models for forecasting, anomaly detection, and scenario modeling are built to demonstrate how the framework can be used by the relevant agencies to understand their financial risks. The paper also covers the climate risk calculation for Pillar II capital and why it makes sense for a bank to maintain this in addition to its regular Pillar I and Pillar II capital.
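
As an illustration of the scenario-based approach described above, the Python sketch below stresses the probability of default (PD) of a hypothetical automobile-sector exposure under assumed transition scenarios and sizes a Pillar II add-on as the worst-case increase in expected loss; all figures and scenario multipliers are illustrative assumptions, not the paper's calibration.

```python
# Toy scenario-based credit stress for a single exposure; every number below
# is an illustrative assumption.
exposure = 100_000_000   # exposure at default, currency units (assumed)
lgd = 0.45               # loss given default (assumed)
base_pd = 0.02           # baseline one-year PD (assumed)

# Hypothetical transition scenarios expressed as PD multipliers
scenarios = {
    "orderly transition": 1.2,
    "disorderly transition": 2.5,
    "hot house world": 1.8,
}

base_el = exposure * base_pd * lgd
worst_el = max(exposure * min(base_pd * m, 1.0) * lgd for m in scenarios.values())
pillar2_addon = worst_el - base_el   # capital held beyond the Pillar I charge
print(f"baseline expected loss: {base_el:,.0f}")
print(f"Pillar II climate add-on (worst scenario): {pillar2_addon:,.0f}")
```

A full framework would derive the PD multipliers from transmission-channel models per risk driver and aggregate across the portfolio, but the capital logic per exposure follows this shape.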

Keywords: capital calculation, climate risk, credit risk, Pillar II risk, scenario modeling

Procedia PDF Downloads 99