Search results for: aleatory uncertainty
259 Kriging-Based Global Optimization Method for Bluff Body Drag Reduction
Authors: Bingxi Huang, Yiqing Li, Marek Morzynski, Bernd R. Noack
Abstract:
We propose a Kriging-based global optimization method for active flow control with multiple actuation parameters. The method is designed to converge quickly and to avoid becoming trapped in local minima. We follow the model-free explorative gradient method (EGM) in alternating between explorative and exploitive steps. This facilitates convergence similar to that of a gradient-based method together with parallel exploration of potentially better minima. In contrast to EGM, both kinds of steps are performed with a Kriging surrogate model built from the available data. The explorative step maximizes the expected improvement, i.e., it favors regions of large uncertainty. The exploitive step identifies the best location of the cost function on the Kriging surrogate model for a subsequent weight-biased linear-gradient descent search. To verify the effectiveness and robustness of the improved Kriging-based optimization method, we examined several comparative test problems of varying dimension with limited evaluation budgets. The results show that the proposed algorithm significantly outperforms model-free optimization algorithms such as the genetic algorithm and differential evolution, with quicker convergence for a given budget. We also performed direct numerical simulations of the fluidic pinball (N. Deng et al. 2020 J. Fluid Mech.), three circular cylinders in an equilateral-triangular arrangement immersed in an incoming flow at Re = 100. The optimal cylinder rotations lead to 44.0% net drag power saving, with 85.8% drag reduction at 41.8% actuation power. The optimal actuation for this configuration achieves a boat-tailing mechanism by employing Coanda forcing and wake stabilization by delaying separation and minimizing the wake region.
Keywords: direct numerical simulations, flow control, kriging, stochastic optimization, wake stabilization
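The explorative step above relies on the standard expected-improvement criterion, which can be sketched in a few lines. The candidate (mean, standard deviation) pairs below are invented for illustration; a real implementation would take them from a fitted Kriging/Gaussian-process model.

```python
from math import erf, exp, pi, sqrt

def normal_pdf(z):
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def expected_improvement(mu, sigma, y_best):
    # EI at a candidate point given the Kriging prediction
    # (mean mu, standard deviation sigma) and the best observed
    # cost y_best, using the minimization convention.
    if sigma <= 0.0:
        return 0.0
    z = (y_best - mu) / sigma
    return (y_best - mu) * normal_cdf(z) + sigma * normal_pdf(z)

# Explorative step: pick the candidate with the largest EI.
# Large predictive uncertainty (sigma) raises EI, so the step
# favours under-explored regions, as the abstract describes.
candidates = [(-0.2, 0.01), (0.5, 0.4), (1.3, 0.9)]  # (mu, sigma) pairs
y_best = 0.0
best = max(candidates, key=lambda c: expected_improvement(*c, y_best))
print(best)  # -> (-0.2, 0.01)
```

Note how the third candidate, despite a poor predicted mean, scores higher than the second purely because of its larger uncertainty; only the first, with a mean below the incumbent, wins outright.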
Procedia PDF Downloads 107
258 Prediction of Ionic Liquid Densities Using a Corresponding State Correlation
Authors: Khashayar Nasrifar
Abstract:
Ionic liquids (ILs) exhibit particular properties, exemplified by extremely low vapor pressure and high thermal stability. The properties of ILs can be tailored by proper selection of cations and anions. As such, ILs are appealing as potential substitutes for traditional solvents with high vapor pressure. One of the IL properties required in chemical and process design is density. In developing corresponding-state liquid density correlations, the scaling hypothesis is often used. The hypothesis expresses the temperature dependence of saturated liquid densities near the vapor-liquid critical point as a function of reduced temperature. Extending this temperature dependence, several successful correlations were developed to accurately correlate the densities of normal liquids from the triple point to the critical point. By applying mixing rules, the liquid density correlations are extended to liquid mixtures as well. ILs are not molecular liquids, nor are they classified among normal liquids. Also, ILs are often used under conditions far from equilibrium. Nevertheless, in calculating the properties of ILs, the use of corresponding-state correlations would be useful when no experimental data are available. With the well-known generalized saturated liquid density correlations, the accuracy in predicting the density of ILs is not good; an average error of 4-5% should be expected. In this work, a data bank was compiled. A simplified and concise corresponding-state saturated liquid density correlation is proposed by phenomenologically modifying the reduced temperature, using the temperature dependence of the interaction parameter of the Soave-Redlich-Kwong equation of state. This modification improves the temperature dependence of the developed correlation. Parametrization was then performed to optimize the three global parameters of the correlation. The correlation was applied to the ILs in our data bank with satisfactory predictions.
The correlation was applied to IL densities at 0.1 MPa and tested with an average uncertainty of around 2%. No adjustable parameter was used; only the critical temperature, critical volume, and acentric factor were required. Methods to extend the predictions to higher pressures (up to 200 MPa) were also devised. Compared to other methods, this correlation was found to be more accurate. This work also presents the chronological order of development of such correlations for ILs, with the pros and cons of each.
Keywords: correlation, corresponding state principle, ionic liquid, density
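As a rough sketch of the kind of correlation described: the abstract does not report the functional form or the fitted values of the three global parameters, so every constant below is an assumption. What is grounded is the ingredient it names, the temperature dependence of the SRK alpha-function, here used to modify the reduced temperature.

```python
def srk_m(omega):
    # Soave (1972) slope polynomial of the SRK alpha-function
    return 0.480 + 1.574 * omega - 0.176 * omega ** 2

def modified_tr(T, Tc, omega):
    """Reduced temperature modified through an SRK-style alpha
    function; the exact combination used in the paper is not
    reported, so this form is an assumption."""
    Tr = T / Tc
    alpha = (1.0 + srk_m(omega) * (1.0 - Tr ** 0.5)) ** 2
    return Tr / alpha

def saturated_density(T, Tc, Vc, omega, a=1.0, b=2.0, c=0.35):
    """Corresponding-states saturated liquid density (mol per unit
    volume). a, b, c stand in for the correlation's three global
    parameters, whose fitted values the abstract does not give."""
    tr_star = modified_tr(T, Tc, omega)
    return (a + b * (1.0 - tr_star) ** c) / Vc

# Illustrative call with placeholder critical constants for an IL
# (Tc in K, Vc in m^3/mol); values are not from the study.
Tc, Vc, omega = 708.9, 7.79e-4, 0.75
print(saturated_density(298.15, Tc, Vc, omega))
```

The sketch preserves the qualitative behaviour the paper relies on: the modified reduced temperature is lower than the plain one below the critical point, and the predicted density falls monotonically with temperature.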
Procedia PDF Downloads 128
257 Synergistic Anti-Proliferation Effect of PLK-1 Inhibitor and Livistona Chinensis Fruit Extracts on Lung Adenocarcinoma A549 Cells
Authors: Min-Chien Su, Tzu-Hsuan Hsu, Guan-Xuan Wu, Shyh-Ming Kuo
Abstract:
Lung cancer is one of the most clinically challenging malignant diseases worldwide. For more efficient cancer therapeutics, combination therapy has been developed to achieve better outcomes. PLK-1 is one of the major factors affecting cell mitosis in cancer cells; its inhibitor Bi6727 has proven effective in treating several different cancers, namely oral cancer, colon cancer, and lung cancer. Despite its low toxicity toward normal cells compared to traditional chemotherapy, it has yet to be evaluated in detail. Livistona Chinensis (LC) is a Chinese herb used as a traditional prescription to treat lung cancer. Because of uncertainty about the efficacy of LC, we used a water extraction method to extract Livistona Chinensis and then lyophilized the extract into powder for further study. In this study, we investigated the anti-proliferation activities of Bi6727 and LC extracts (LCE) on A549 non-small cell lung cancer cells. The IC50 values of Bi6727 and LCE on A549 cells were 60 nM and 0.8 mg/mL, respectively. Fluorescent staining images showed nucleolus damage in cells treated with Bi6727 and mitochondrial damage after treatment with LCE. A549 cells treated with Bi6727 and LCE showed increased expression of Bax, Caspase-3, and Caspase-9 proteins in Western blot assays. Cell cycle assays showed that LCE also inhibited A549 cell growth by arresting cells at the G2-M phase. Apoptosis assay results showed that LCE induced late apoptosis of A549 cells, and the JC-1 assay showed mitochondrial damage at an LCE concentration of 0.4 mg/mL. In a preliminary anti-proliferation test of combined LCE and Bi6727 on A549 cells, we found a dramatic decrease in proliferation after treating with LCE for 24 h followed by Bi6727 for a further 24 h. This is an important finding regarding the synergistic anti-proliferation effect of these drugs. However, the dosage, the application sequence of LCE and Bi6727 on A549 cells, and the related mechanisms still need to be evaluated.
In summary, the two drugs exerted anti-proliferation effects on A549 cells independently. We hope that combining these two drugs will bring a different and promising outcome in treating lung cancer.
Keywords: anti-proliferation, A549, Livistona Chinensis fruit extracts, PLK-1 inhibitor
Procedia PDF Downloads 141
256 Development of Multi-Leaf Collimator-Based Isocenter Verification Tool Using Electrical Portal Imaging Device for Stereotactic Radiosurgery
Authors: Panatda Intanin, Sangutid Thongsawad, Chirapha Tannanonta, Todsaporn Fuangrod
Abstract:
Stereotactic radiosurgery (SRS) is a high-precision delivery technique that requires comprehensive quality assurance (QA) tests prior to treatment delivery. The isocenter of the delivery beam plays a critical role that affects treatment accuracy. The uncertainty of the isocenter is traditionally assessed using circular cone equipment, a Winston-Lutz (WL) phantom, and film, a technique that is time consuming and highly dependent on the observer. In this work, a multileaf collimator (MLC)-based isocenter verification tool using an electronic portal imaging device (EPID) was developed and evaluated. The conventional WL test was set up as a mechanical isocenter alignment with a 5 mm diameter ball bearing and a 10 mm diameter circular cone fixed to the gantry head to define the radiation field. This conventional setup was compared to the proposed setup, in which the MLC (10 x 10 mm) defines the radiation field instead of the cone; this represents a more realistic delivery field than circular cone equipment. Images were acquired with the EPID and on radiographic film in both experiments, at gantry angles of 0°, 90°, 180°, and 270°. A software tool was developed in-house using MATLAB/SIMULINK to determine the centroid of the radiation field and the shadow of the WL phantom automatically, which gives higher accuracy than manual measurement. The deviations between the centroids of the cone-based and MLC-based WL tests were quantified. Comparing film and EPID images, the deviation over all gantry angles was 0.26 ± 0.19 mm for the cone-based and 0.43 ± 0.30 mm for the MLC-based WL test. The absolute deviation between the cone- and MLC-based WL tests was 0.59 ± 0.28 mm on EPID images and 0.14 ± 0.13 mm on film images. Therefore, MLC-based isocenter verification using the EPID provides a highly sensitive tool for SRS QA.
Keywords: isocenter verification, quality assurance, EPID, SRS
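The automated centroid step can be illustrated with a minimal sketch: an intensity-weighted centroid of each image, and the WL deviation as the distance between field and ball-bearing centroids. The pixel pitch and toy images below are assumptions for illustration, not values from the study.

```python
import numpy as np

def centroid(image):
    # Intensity-weighted centroid (row, col) of a 2D EPID/film image.
    img = np.asarray(image, dtype=float)
    total = img.sum()
    r = (img.sum(axis=1) * np.arange(img.shape[0])).sum() / total
    c = (img.sum(axis=0) * np.arange(img.shape[1])).sum() / total
    return r, c

def wl_deviation(field_img, bb_img, pixel_mm=0.392):
    """Winston-Lutz deviation in mm between the radiation-field
    centroid and the ball-bearing shadow centroid. The EPID pixel
    pitch of 0.392 mm is an assumed value, not from the abstract."""
    fr, fc = centroid(field_img)
    br, bc = centroid(bb_img)
    return pixel_mm * ((fr - br) ** 2 + (fc - bc) ** 2) ** 0.5

# Toy 5x5 images: a field centred at (2, 2) and a ball-bearing
# shadow offset by one pixel in each direction.
field = np.zeros((5, 5)); field[1:4, 1:4] = 1.0
bb = np.zeros((5, 5)); bb[2:5, 2:5] = 1.0
print(round(wl_deviation(field, bb), 3))  # -> 0.554
```

The same routine applies unchanged to cone-defined and MLC-defined fields, which is what allows the two setups to be compared directly.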
Procedia PDF Downloads 152
255 Readiness of Iran’s Insurance Industry Salesforce to Accept Changing to Become Islamic Personal Financial Planners
Authors: Pedram Saadati, Zahra Nazari
Abstract:
Today, the role and importance of financial technology businesses in Iran have increased significantly. However, in Iran there is no Islamic or non-Islamic personal financial planning field of study in universities or educational centers, the profession of personal financial planning is not defined, and no software has been introduced in this regard for advisors or consumers. The largest sales network of financial services in Iran belongs to the insurance industry, and there is an untapped market for international companies in Iran that could reach 130 thousand insurance industry representatives and 28 million families by providing training and personal financial advisory software. To the best of the authors' knowledge, and despite the lack of previous internal studies in this field, the present study investigates the readiness of the insurance industry salesforce to accept this career and its technology. The statistical population of the research comprises managers, insurance sales representatives, assistants, and heads of sales departments of insurance companies. An 18-minute video was prepared that introduced and taught the job of Islamic personal financial planning and explained how it differs from its non-Islamic model; this video was shown to the respondents. The data collection tool was a researcher-made questionnaire. To investigate the factors affecting technology acceptance and job change, independent t-tests, descriptive statistics, and Pearson correlation were used, and Friedman's test was used to rank the effective factors. The results indicate a very positive perception of, and attitude toward, the usefulness of this job and its technology among insurance industry professionals, and the studied sample confirmed their intention to train in this knowledge.
Based on the research results, the change in the customer's attitude towards the insurance advisor and the possibility of increased income are the main reasons for acceptance. However, restrictions on using investment opportunities due to Islamic financial services laws, and uncertainty about the position of the Central Insurance of Iran in this regard, are the most important obstacles.
Keywords: fintech, insurance, personal financial planning, wealth management
Procedia PDF Downloads 49
254 Physics Informed Deep Residual Networks Based Type-A Aortic Dissection Prediction
Abstract:
Purpose: Acute Type A aortic dissection is a well-known cause of extremely high mortality. A highly accurate and cost-effective non-invasive predictor is critically needed so that patients can be treated at an earlier stage. Although various CFD approaches have been tried to establish prediction frameworks, they are sensitive to uncertainty in both image segmentation and boundary conditions. Requirements for tedious pre-processing and demanding calibration procedures further compound the issue, hampering their clinical applicability. Using the latest physics-informed deep learning methods to establish an accurate and cost-effective predictor framework is among the main goals for better Type A aortic dissection treatment. Methods: By training a novel physics-informed deep residual network with non-invasive 4D MRI displacement vectors as inputs, the trained model can cost-effectively calculate the biomarkers aortic blood pressure, wall shear stress (WSS), and oscillatory shear index (OSI), which are used to predict potential Type A aortic dissection and thereby avoid high-mortality events down the road. Results: The proposed deep learning method has been successfully trained and tested with both a synthetic 3D aneurysm dataset and a clinical dataset in the aortic dissection context using the Google Colab environment. In both cases, the model generated aortic blood pressure, WSS, and OSI results matching the expected patient health status. Conclusion: The proposed novel physics-informed deep residual network shows great potential for creating a cost-effective, non-invasive predictor framework. An additional physics-based de-noising algorithm will be added to make the model more robust to clinical data noise.
Further studies will be conducted in collaboration with large institutions such as the Cleveland Clinic, with more clinical samples, to further improve the model's clinical applicability.
Keywords: type-A aortic dissection, deep residual networks, blood flow modeling, data-driven modeling, non-invasive diagnostics, deep learning, artificial intelligence
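A minimal sketch of the two ingredients named in the abstract, a residual (skip-connection) block and a physics-based penalty term, might look as follows. The layer sizes, activation, and the drastically simplified momentum-balance residual are all assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

class ResidualBlock:
    """One fully connected residual block, y = x + W2 @ relu(W1 @ x).
    A stand-in for the paper's network, whose exact layer sizes
    and activations the abstract does not give."""
    def __init__(self, width):
        scale = width ** -0.5
        self.W1 = rng.normal(scale=scale, size=(width, width))
        self.W2 = rng.normal(scale=scale, size=(width, width))

    def __call__(self, x):
        return x + self.W2 @ relu(self.W1 @ x)  # skip connection

def momentum_residual(pressure_grad, density, acceleration):
    """Toy physics-informed penalty: squared residual of the
    momentum balance rho * a + grad(p) = 0, a drastic
    simplification of the hemodynamics actually modelled."""
    return float(np.sum((density * acceleration + pressure_grad) ** 2))

x = rng.normal(size=8)   # stand-in for a 4D-MRI displacement feature vector
block = ResidualBlock(8)
y = block(x)
print(y.shape, momentum_residual(np.zeros(3), 1.06e3, np.zeros(3)))
```

The skip connection is what lets many such blocks be stacked into a deep residual network, and the physics penalty is added to the data loss so that predicted pressures stay consistent with the governing equations.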
Procedia PDF Downloads 89
253 Literature Review on the Barriers to Access Credit for Small Agricultural Producers and Policies to Mitigate Them in Developing Countries
Authors: Margarita Gáfaro, Karelys Guzmán, Paola Poveda
Abstract:
This paper establishes the theoretical aspects that explain the barriers to accessing credit for small agricultural producers in developing countries and identifies successful policy experiences to mitigate them. We will test two hypotheses. The first is that information asymmetries, high transaction costs, and high risk exposure limit the supply of credit to small agricultural producers in developing countries. The second is that low levels of financial education and productivity and high uncertainty about the returns of agricultural activity limit the demand for credit. To test these hypotheses, a review of the theoretical and empirical literature on access to rural credit in developing countries will be carried out. The first part of this review focuses on theoretical models that incorporate information asymmetries in the credit market and analyzes the interaction between these asymmetries and the characteristics of the agricultural sector in developing countries. Among the characteristics we will focus on are the absence of collateral, the underdevelopment of judicial systems and insurance markets, and the high dependence of production technologies on climatic factors. The second part of this review focuses on the determinants of credit demand by small agricultural producers, including the profitability of productive projects, security conditions, risk and loss aversion, financial education, and cognitive biases, among others. Some policies focus on resolving these supply and demand constraints and manage to improve credit access. Therefore, another objective of this paper is to present a review of effective policies that have promoted access to credit for smallholders around the world. For this, information available in policy documents will be collected and complemented by interviews with officials in charge of the design and execution of these policies in a subset of selected countries.
The information collected will be analyzed in light of the conceptual framework proposed in the first two parts of the paper. The barriers to credit access that each policy attempts to resolve, and the factors that could explain its effectiveness, will be identified.
Keywords: agricultural economics, credit access, smallholder, developing countries
Procedia PDF Downloads 69
252 Characters of Developing Commercial Employment Sub-Centres and Employment Density in Ahmedabad City
Authors: Bhaumik Patel, Amit Gotecha
Abstract:
Commercial centres of different hierarchies and sizes play a vital role in the growth and development of a city. Economic uncertainty and demand for space lead to more urban sprawl and more emerging commercial spaces. The study focused on understanding the various indicators affecting commercial development that can help solve many issues of commercial urban development and can guide future employment growth centre development: accessibility, infrastructure, planning and development regulations, and market forces. The aim of the study was to review the characteristics and identify the employment density of commercial employment sub-centres through three objectives: understanding various employment sub-centres, identifying characteristics and deriving the behaviour of employment densities, and evaluating and comparing employment sub-centres for the city of Ahmedabad. Three commercial employment sub-centres, one in the old city (Kalupur), a second in a highly developed commercial area (C.G. Road-Ashram Road), and a third in the latest developing commercial area (Prahladnagar), were identified by distance from the city centre, land-use diversity, access to major roads and public transport, population density in proximity, complementary land uses in proximity, and land price. Commercial activities were categorised into retail, wholesale, and service sectors and subcategorised into various activities. From the study, the period of establishment of a unit is a critical parameter for commercial activity, building height, and land-use diversity; employment diversity is another parameter for the commercial centre. The old city has retail, wholesale, and trading activities and a higher commercial density in terms of both units and employment. The Prahladnagar area came to function commercially due to market pressure and developed more units than were required.
Employment density is highest in the centre of the city; as distance from the city centre increases, employment density and unit density decrease. The factors influencing employment density and unit density are distance from the city centre, development type, establishment period, building density, unit density, public transport accessibility, and road connectivity.
Keywords: commercial employment sub-centres, employment density, employment diversity, unit density
Procedia PDF Downloads 142
251 Business Model Innovation and Firm Performance: Exploring Moderation Effects
Authors: Mohammad-Ali Latifi, Harry Bouwman
Abstract:
Changes in the business environment have accelerated dramatically over the last decades as a result of changes in technology, regulation, markets, and competitors' behavior. Firms need to change the way they do business in order to survive or maintain their growth. Innovating the business model (BM) can create competitive advantages and enhance firm performance. However, many companies fail to achieve the expected outcomes in practice, mostly due to irreversible fundamental changes in key components of the company's BM, which leads to more ambiguity, uncertainty, and risk associated with business performance. Moreover, the relationship among BM innovation (BMI), moderating factors, and the firm's overall performance is by and large ignored in the current literature. In this study, we identified twenty moderating factors from a comprehensive literature review. We categorized these factors based on two criteria: the extent to which the moderating factors can be controlled and managed by firms, and whether they represent generic or firm-specific changes. This leads to four moderation groups. The first group is BM implementation, which includes management support, employees' commitment, employees' skills, communication, and a detailed plan. The second group, BM practices, consists of BM tooling, BM experimentation, scope of change, speed of change, and degree of novelty. The third group, firm characteristics, includes firm size, age, and ownership. The last group, industry characteristics, considers the industry sector, competitive intensity, industry life cycle, environmental dynamism, and high-tech vs. low-tech industry. The developed moderation model was examined by collecting data from 508 European small and medium-sized enterprises (SMEs) and using the structural equation modeling technique. Results revealed that the factors in all four groups significantly moderate the relation between BMI and firm performance.
In particular, factors related to BM implementation and BM practices are more manageable and could potentially improve overall firm performance. We believe this result is especially important for researchers and practitioners, since the possibility of working on factors in the firm characteristics and industry characteristics groups is limited, and a firm can hardly control or manage them to improve the performance of its BMI efforts.
Keywords: business model innovation, firm performance, implementation, moderation
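The moderation idea being tested can be illustrated with a toy moderated regression on simulated data: a moderator shifts the strength of the BMI-performance relation, which shows up as the coefficient on an interaction term. The variable names and effect sizes below are invented; the study itself used structural equation modeling, of which this is only the simplest single-equation analogue.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 508  # matches the SME sample size reported in the abstract

# Simulated (purely illustrative) data: a BMI-effort score, one
# moderator (e.g. management support), and firm performance with
# a true interaction effect of 0.4 built in.
bmi = rng.normal(size=n)
moderator = rng.normal(size=n)
performance = (0.3 * bmi + 0.2 * moderator + 0.4 * bmi * moderator
               + rng.normal(scale=0.1, size=n))

# Moderated regression: the coefficient on the product term
# bmi * moderator is the moderation effect being tested.
X = np.column_stack([np.ones(n), bmi, moderator, bmi * moderator])
beta, *_ = np.linalg.lstsq(X, performance, rcond=None)
print(np.round(beta, 2))
```

With this sample size the fitted coefficients recover the planted values closely, and a significantly non-zero interaction coefficient is exactly what "factor X moderates the BMI-performance relation" means operationally.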
Procedia PDF Downloads 120
250 [Keynote Talk]: Let Us Move to Ethical Finance: A Case Study of Takaful
Authors: Syed Ahmed Salman
Abstract:
Ethicality is essential in our daily activities, both personal and commercial. This is evidenced by the historical development of corporate governance and ethical guidelines. The first corporate governance guideline, the Cadbury Report from the U.K., focused on the responsibility of board members towards shareholders only. Gradually, realising the need to take care of society and the community, business entities became concerned with their stakeholders, and later codes of corporate governance extended responsibility to other stakeholders in addition to shareholders. One prevailing corporate governance theory, stakeholder theory, has been widely used in research to explore the effects of business entities on society. In addition, the Global Reporting Initiative (GRI) is the leading organisation promoting social care from businesses for sustainable development. History shows that ethics is key to the long-term success of businesses, and many organisations, societies, and regulators give full attention and consideration to ethics. Several countries have introduced ethical codes of conduct to direct trade activities. Similarly, Islam and other religions prohibit the practice of interest, uncertainty, and gambling because of its unethical nature; these prohibited practices are not good for society, business, or any organisation, as they are detrimental to the well-being of society. In order to avoid unethical practice in the finance industry, Shari'ah scholars developed the idea of Islamic finance, which is free from the elements prohibited from the Islamic perspective and can also be termed ethical finance. This paper highlights how Takaful, as one of the Islamic finance products, offers fair and just products to the contracting parties and to society.
Takaful is framed on ethical guidelines extracted from Shari'ah principles and divine sources such as the Quran and Sunnah. Takaful products are now widely offered all over the world, in both Muslim and non-Muslim countries, and appear to be gaining acceptance regardless of religion. This is evidence that Takaful is being accepted as an ethical financial product.
Keywords: ethics, insurance, Islamic finance, religion and takaful
Procedia PDF Downloads 273
249 Need for Shariah Screening of Companies in Nigeria: Lessons from Other Jurisdictions
Authors: Aishat Abdul-Qadir Zubair
Abstract:
Background: The absence of a Shari'ah screening methodology for companies in Nigeria has deepened the uncertainty surrounding the acceptability of investing in certain companies by people professing the religion of Islam, owing to the nature of the activities carried out by these companies. Existing Shari'ah screening indices in other jurisdictions, such as FTSE, DJIM, and Standard & Poor's, provide criteria that can be used to check whether a company or business is Shari'ah-compliant. What these indices have done is ensure that there are benchmarks to check against before investing in companies that carry out mixed activities, some halal and some possibly haram. Purpose: There have been numerous studies on the need to adopt certain screening methodologies, as well as calls for new methods of screening companies for Shari'ah compliance to suit the investment needs of Muslims in other jurisdictions. It is, however, unclear how suitable these methodologies would be for Nigeria. This paper therefore seeks to address this gap by considering an appropriate screening methodology for Nigeria, drawing from the experience of other jurisdictions. Methods: This study employs a triangulation of quantitative and qualitative methods to analyze the need for Shari'ah screening of companies in Nigeria. The qualitative method proceeds by way of ijtihad, applying the Islamic principles of Maqasid al-Shari'ah and Qawaid al-Fiqhiyyah to the activities of companies in order to assess whether they are indeed Shari'ah-compliant. In addition, using the quantitative data gathered from the interview survey, the perspective of investors with regard to the need for Shari'ah screening of companies in Nigeria is analyzed.
Results: The results show a lack of awareness among the large Muslim population in Nigeria of the need for Shari'ah screening of companies. They further show that the peculiar nature of company activities in Nigeria must be taken into account before any particular Shari'ah screening methodology is adopted and the necessary benchmarks are set. Conclusion and Implications: The study concludes that conscious Muslims in Nigeria need to screen companies for Shari'ah compliance so that they can easily identify the companies to invest in. The paper therefore recommends that the Nigerian government develop a screening methodology suited to the peculiar nature of companies in Nigeria. The study thus has direct implications for investment regulatory bodies in Nigeria, such as the Securities and Exchange Commission (SEC) and the Central Bank of Nigeria (CBN), as well as for Muslim investors.
Keywords: Shari'ah screening, Muslims, investors, companies
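For illustration, a benchmark screen of the kind the indices named above apply can be sketched as follows. The 33% financial-ratio and 5% impermissible-income thresholds are the levels commonly cited for major indices; the company figures are hypothetical, and, as the paper argues, a Nigerian methodology might calibrate the thresholds differently.

```python
def shariah_screen(debt, cash_plus_interest_securities, receivables,
                   avg_market_cap, impermissible_income, total_income,
                   financial_cap=0.33, income_cap=0.05):
    """DJIM-style financial-ratio screen. Each balance-sheet item is
    compared to average market capitalisation; impermissible income
    is compared to total income. Thresholds are illustrative."""
    ratios = (debt / avg_market_cap,
              cash_plus_interest_securities / avg_market_cap,
              receivables / avg_market_cap)
    income_ratio = impermissible_income / total_income
    return all(r < financial_cap for r in ratios) and income_ratio < income_cap

# Hypothetical company figures (all in millions of naira).
print(shariah_screen(debt=200, cash_plus_interest_securities=100,
                     receivables=150, avg_market_cap=1000,
                     impermissible_income=2, total_income=100))  # -> True
```

Doubling the debt in this example (a 0.40 ratio) would fail the screen, which is the kind of benchmark decision the paper wants tailored to Nigerian company structures.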
Procedia PDF Downloads 167
248 A Review Study on the Importance and Correlation of Crisis Literacy and Media Communications for Vulnerable Marginalized People During Crisis
Authors: Maryam Jabeen
Abstract:
In recent times, there has been a notable surge in attention towards diverse literacy concepts such as media literacy, information literacy, and digital literacy. These concepts have garnered escalating interest, spurring the emergence of novel approaches, particularly in the aftermath of the Covid-19 crisis. However, amidst discussions of crises, the domain of crisis literacy remains largely uncharted in academic exploration. Crisis literacy, also referred to as disaster literacy, denotes an individual's aptitude not only to comprehend but also to effectively apply information, enabling well-informed decision-making and adherence to instructions about disaster mitigation, preparedness, response, and recovery. This theoretical and descriptive study seeks to go beyond foundational literacy concepts, underscoring the urgency of an in-depth exploration of crisis literacy and its interplay with media communication. Given the profound impact of the pandemic experience and the looming uncertainty of potential future crises, there is a pressing need to develop crisis literacy, or disaster literacy, towards heightened autonomy and active involvement in critical disaster preparedness, recovery initiatives, and media communication. This paper is part of my ongoing Ph.D. research, which explores, on a broader level, the encoding and decoding of media communications in relation to crisis literacy. Its primary objective is to expound a descriptive, theoretical research endeavour in this domain, emphasising the paramount significance of media communications in crisis literacy, with a particular focus on their role in providing information to marginalized populations amidst crises.
In conclusion, this research bridges the gap in exploring the correlation between crisis literacy and media communications, advocating a comprehensive understanding of its dynamics and its symbiotic relationship with media communications. It intends to foster a heightened sense of crisis literacy, particularly within marginalized communities, catalyzing proactive participation in disaster preparedness, recovery processes, and adept media interactions.
Keywords: covid-19, crisis literacy, crisis, marginalized, media and communications, pandemic, vulnerable people
Procedia PDF Downloads 62
247 Implementation of Quality Function Development to Incorporate Customer’s Value in the Conceptual Design Stage of a Construction Projects
Authors: Ayedh Alqahtani
Abstract:
Many construction firms in Saudi Arabia dedicated to building projects agree that the most important factor in the real estate market is the value they can give their customers. These firms understand the value of their clients in different ways. Value can be defined as the size of the building project in relation to its cost, the design quality of the materials used in finish work, or other features of the building's rooms, such as the bathroom. Value can also be understood as something suitable for the money the client is investing in the new property. A quality tool is required to support companies in achieving a solution for the building project and in understanding and managing the customer's needs. The Quality Function Development (QFD) method can play this role, since the main difference between QFD and other conventional quality management tools is that QFD is a valuable and very flexible tool for design that takes the voice of the customer (VOC) into account. Currently, organizations and agencies are seeking suitable models that deal better with uncertainty and are flexible and easy to use. The primary aim of this research project is to incorporate customers' requirements into the conceptual design of construction projects. Towards this goal, QFD is selected due to its capability to integrate the design requirements needed to meet customers' needs. To develop the QFD, this research focuses on the contribution of the different (significantly weighted) input factors that represent the main variables influencing QFD, and the subsequent analysis of the techniques used to measure them. First, this research will review the literature to determine the current practice of QFD in construction projects. The researcher will then review the literature to define the current customers of residential projects and gather information on customers' requirements for the design of residential buildings.
After that, qualitative survey research will be conducted to rank customers' needs and to capture stakeholder practitioners' views on how these needs affect their satisfaction. Moreover, a qualitative focus group with members of the design team will be conducted to determine the improvement levels and technical details for the design of residential buildings. Finally, the QFD will be developed to establish the degree of significance of the design solutions. Keywords: quality function development, construction projects, Saudi Arabia, quality tools
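The weighted-input idea behind QFD can be sketched numerically. The following is a minimal, hypothetical "house of quality" calculation, not the authors' model: the customer needs, importance weights, and relationship strengths (on the conventional 9/3/1 scale) are all invented for illustration. The technical priority of each design characteristic is the sum, over all customer needs, of the need's importance weight times the strength of its relationship to that characteristic.

```python
# Hypothetical QFD prioritization sketch: all needs, weights, and
# relationship strengths below are invented for illustration.

def qfd_priorities(weights, relationships):
    """weights: {need: importance}; relationships: {need: {characteristic: strength}}.
    Returns {characteristic: weighted priority score}."""
    scores = {}
    for need, importance in weights.items():
        for characteristic, strength in relationships.get(need, {}).items():
            scores[characteristic] = scores.get(characteristic, 0) + importance * strength
    return scores

# Invented customer needs for a residential building
weights = {"spacious rooms": 5, "quality finishes": 4, "value for money": 3}
relationships = {
    "spacious rooms":   {"floor area": 9, "layout design": 3},
    "quality finishes": {"material grade": 9, "layout design": 1},
    "value for money":  {"material grade": 3, "floor area": 1},
}
scores = qfd_priorities(weights, relationships)
```

Design characteristics with the highest scores would then receive the most attention in the conceptual design.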
246 The Basin Management Methodology for Integrated Water Resources Management and Development
Authors: Julio Jesus Salazar, Max Jesus De Lama
Abstract:
The challenges of water management are aggravated by global change, which implies high complexity and associated uncertainty; water management is difficult because water networks cross domains (natural, societal, and political), scales (space, time, jurisdictional, institutional, knowledge, etc.), and levels (area: from patches to the globe; knowledge: from a specific case to generalized principles). In this context, both natural and engineered measures are needed to manage water and soil. The Basin Management Methodology considers multifunctional measures of natural water retention, erosion control, and soil formation to protect water resources and to address the challenges of recovering or conserving ecosystems and the natural characteristics of water bodies, improving the quantitative status of water bodies, and reducing vulnerability to floods and droughts. This method of water management focuses on the positive impacts on the chemical and ecological status of water bodies and on restoring the functioning of ecosystems and their natural services, thus contributing to both adaptation to and mitigation of climate change. The methodology was applied in seven interventions in the sub-basin of the Shullcas River in Huancayo, Junín, Peru, yielding substantial benefits within a framework of stakeholder alliances and integrated planning scenarios. To implement the methodology in the Shullcas sub-basin, a process called Climate Smart Territories (CST) was used, with which the variables were characterized in a highly complex space. The diagnosis was then developed using risk management and adaptation to climate change, and the work concluded with the selection of alternatives and projects of this type.
Therefore, the CST approach and process face the challenges of climate change through integrated, systematic, interdisciplinary, and collective responses at different scales that fit the needs of ecosystems and the services they provide that are vital to human well-being. This methodology is now being replicated at the level of the Mantaro River basin, improving on other initiatives that lead to the model of a resilient basin. Keywords: climate-smart territories, climate change, ecosystem services, natural measures, CST approach
245 An Analysis of the Role of Watchdog Civil Society Organisations in Public Governance in Southern Africa: A Study of South Africa and Zimbabwe
Authors: Julieth Gudo
Abstract:
The prevalence of corruption in African countries and the persistently unsatisfactory distribution of state resources by governments among their citizens are clear indicators of a festering problem. Civil society organisations (CSOs) in Southern African countries, as citizen representatives, have been involved in challenging the ongoing corruption and poor governance in the public sector that have caused tensions between citizens and their governments. In doing so, civil society organisations demand accountability, transparency, and citizen participation in public governance. The problem is that CSOs' role in challenging governments is not clearly defined in either law or the literature. This uncertainty has resulted in an unsatisfactory operating and legal environment for CSOs and a strained relationship between them and their governments. This paper examines civil society organisations' role in advancing good public governance in South Africa and Zimbabwe. The study will be conducted by means of a literature review and case studies. The state of public governance in Southern Africa will be discussed. The historical role of CSOs in the Southern African region will be explored, followed by their role in public governance in contemporary South Africa and Zimbabwe. The relationship between the state and civil society organisations will be examined. Furthermore, the legal frameworks that regulate and authorise CSOs in their part in challenging poor governance in the public sector will be identified and discussed. Loopholes in such provisions will be identified, and measures that CSOs use to hold those responsible for poor governance accountable will be discussed, consequently closing the existing gap regarding the undefined role of CSOs in public governance in Southern Africa.
The research demonstrates the need for an enabling operating environment through better cooperation, communication, and relationships between governments and CSOs; the speedy and effective amendment of existing laws; and the introduction of legal provisions that give express authority to CSOs to challenge poor governance on the part of Southern African governments. Also critical is the enforcement of laws, so that those responsible for poor governance and corruption in government are held accountable. Keywords: civil society organisations, public governance, Southern Africa, South Africa, Zimbabwe
244 Uncertainty Quantification of Corrosion Anomaly Length of Oil and Gas Steel Pipelines Based on Inline Inspection and Field Data
Authors: Tammeen Siraj, Wenxing Zhou, Terry Huang, Mohammad Al-Amin
Abstract:
The high-resolution inline inspection (ILI) tool is used extensively in the pipeline industry to identify, locate, and measure metal-loss corrosion anomalies on buried oil and gas steel pipelines. Corrosion anomalies may occur singly (i.e., individual anomalies) or as clusters (i.e., a colony of corrosion anomalies). Although ILI technology has advanced immensely, there are measurement errors associated with the sizes of corrosion anomalies reported by ILI tools due to limitations of the tools and associated sizing algorithms, and due to the detection threshold of the tools (i.e., the minimum detectable feature dimension). Quantifying the measurement error in the ILI data is crucial for corrosion management and for developing maintenance strategies that satisfy safety and economic constraints. Studies on the measurement error associated with the length of corrosion anomalies (in the longitudinal direction of the pipeline) have been scarcely reported in the literature; this error is investigated in the present study. Limitations in the ILI tool and the clustering process can sometimes cause clustering error, defined as the error introduced during the clustering process by including or excluding a single anomaly or a group of anomalies in or from a cluster. Clustering error has been found to be one of the biggest contributors to the relatively high uncertainties associated with ILI-reported anomaly length. As such, this study focuses on developing a consistent and comprehensive framework to quantify the measurement errors in the ILI-reported anomaly length by comparing the ILI data and corresponding field measurements for individual and clustered corrosion anomalies. The analysis carried out in this study is based on the ILI and field measurement data for a set of anomalies collected from two segments of a buried natural gas pipeline currently in service in Alberta, Canada.
Data analyses showed that the measurement error associated with the ILI-reported length of anomalies without clustering error, denoted as Type I anomalies, is markedly smaller than that for anomalies with clustering error, denoted as Type II anomalies. A methodology employing data mining techniques is further proposed to classify Type I and Type II anomalies based on the ILI-reported corrosion anomaly information. Keywords: clustered corrosion anomaly, corrosion anomaly assessment, corrosion anomaly length, individual corrosion anomaly, metal-loss corrosion, oil and gas steel pipeline
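A first step in such a framework, comparing ILI-reported lengths against field measurements, can be sketched as follows. The anomaly data below are invented for illustration only (they are not from the Alberta pipeline); the field measurement is taken as ground truth, and the mean and standard deviation of the ILI-minus-field differences estimate the bias and scatter of the ILI-reported length.

```python
import statistics

def length_error_stats(ili_lengths, field_lengths):
    """Mean (bias) and sample standard deviation (scatter) of the
    ILI-minus-field length errors, in the same units as the inputs."""
    errors = [ili - field for ili, field in zip(ili_lengths, field_lengths)]
    return statistics.mean(errors), statistics.stdev(errors)

# Invented example data (mm), field measurement taken as ground truth
ili_lengths   = [52.0, 48.0, 60.0, 45.0, 70.0]
field_lengths = [50.0, 47.0, 55.0, 46.0, 66.0]
bias, scatter = length_error_stats(ili_lengths, field_lengths)
```

A large scatter relative to the bias would be one symptom of the clustering error the study attributes to Type II anomalies.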
243 Standard Essential Patents for Artificial Intelligence Hardware and the Implications for Intellectual Property Rights
Authors: Wendy de Gomez
Abstract:
Standardization is a critical element in a society's ability to reduce uncertainty, subjectivity, misrepresentation, and interpretation while simultaneously contributing to innovation. Technological standardization codifies specific operationalization through legal instruments that provide rules of development, expectation, and use. In the current emerging-technology landscape, Artificial Intelligence (AI) hardware as a general-use technology has seen incredible growth, as evidenced by AI technology patents filed between 2012 and 2018 in the United States Patent and Trademark Office (USPTO) AI dataset. However, as outlined in the 2023 United States Government National Standards Strategy for Critical and Emerging Technology, the codification of emerging technologies such as AI through standardization has not kept pace with their actual technological proliferation. This gap has the potential to cause significantly divergent downstream outcomes of AI in both the short and long term. This original empirical research provides an overview of standardization efforts around AI in different geographies and provides a background on standardization law. It quantifies the longitudinal trend of AI hardware patents through the USPTO AI dataset. It seeks evidence of existing Standard Essential Patents (SEPs) among these AI hardware patents through a text analysis of the statement of patent history and the field of the invention in Patent Vector, and it examines their determination as SEPs and their inclusion in existing AI technology standards across the four main AI standards bodies: the European Telecommunications Standards Institute (ETSI); the International Telecommunication Union Telecommunication Standardization Sector (ITU-T); the Institute of Electrical and Electronics Engineers (IEEE); and the International Organization for Standardization (ISO).
Once the analysis is complete, the paper will discuss both the theoretical and operational implications of FRAND (fair, reasonable, and non-discriminatory) licensing agreements for the owners of these Standard Essential Patents in the United States court and administrative system. It will conclude with an evaluation of how Standard Setting Organizations (SSOs) can work with SEP owners more effectively through various intellectual property mechanisms, such as patent pools. Keywords: patents, artificial intelligence, standards, FRAND agreements
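A text-analysis pass of the kind described, searching patent text fields for indications of standard essentiality, might look like the sketch below. This is not the authors' Patent Vector pipeline: the flagging rule, the function name, and the sample text are illustrative assumptions only, showing how mentions of the four standards bodies could be combined with essentiality language into a candidate filter.

```python
import re

# The four standards bodies named in the abstract
STANDARD_BODIES = ["ETSI", "ITU-T", "IEEE", "ISO"]

def flag_candidate_sep(text):
    """Crude illustrative filter: True if the text mentions a standards
    body AND contains the word 'essential' (case-insensitive)."""
    mentions_body = any(re.search(rf"\b{re.escape(body)}\b", text)
                        for body in STANDARD_BODIES)
    return mentions_body and "essential" in text.lower()

# Invented sample field-of-invention text
doc = "The claimed accelerator is declared essential to an IEEE standard."
```

In practice such keyword hits would only shortlist patents for manual review against actual SSO declaration databases.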
242 What Happens When We Try to Bridge the Science-Practice Gap? An Example from the Brazilian Native Vegetation Protection Law
Authors: Alice Brites, Gerd Sparovek, Jean Paul Metzger, Ricardo Rodrigues
Abstract:
The segregation between science and policy in the decision-making process hinders nature conservation efforts worldwide. Scientists have been criticized for not producing information that leads to effective solutions for environmental problems. In an attempt to bridge this gap between science and practice, we conducted a project aimed at supporting the implementation of the Brazilian Native Vegetation Protection Law (NVPL) in São Paulo State (SP), Brazil. To do so, we held multiple open meetings with the stakeholders involved in this discussion. Throughout this process, we collected stakeholders' demands for scientific information and brought back feedback about our findings. However, our main scientific advice was not taken into account during the NVPL implementation in SP. The NVPL has a mechanism that exempts landholders who converted native vegetation without offending the legislation in place at the time of the conversion from restoration requirements. We found that there were no accurate spatialized data on native vegetation cover before the 1960s. Thus, the initial benchmark for applying the mechanism should be the 1965 Brazilian Forest Act. Even so, SP kept the 1934 Brazilian Forest Act as the initial legal benchmark for applying the law. This decision implies the use of a probabilistic native vegetation map whose intrinsic characteristics are uncertainty and subjectivity; its use can therefore lead to legal queries, corruption, and unfair application of the benefit. But why was this decision made even after the scientific advice had been widely disseminated? We raise some possible explanations. First, the decision was made during a government transition, showing that circumstantial political events can overshadow scientific arguments. Second, the debate about the NVPL in SP was not settled, and powerful stakeholders could benefit from the confusion created by this decision.
Finally, the native vegetation protection mechanism is a complex issue, with many technical aspects that can be hard to understand for a non-specialized courtroom, such as the one that made the final decision in SP. This example shows that scientists and decision-makers still have a long way to go to improve how they interact, and that science needs to find a way to be heard above the political buzz. Keywords: Brazil, forest act, science-based dialogue, science-policy interface
241 A Dual-Mode Infinite Horizon Predictive Control Algorithm for Load Tracking in PUSPATI TRIGA Reactor
Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha
Abstract:
The PUSPATI TRIGA Reactor (RTP) in Malaysia reached its first criticality on June 28, 1982, with a thermal power capacity of 1 MW. The Feedback Control Algorithm (FCA), a conventional Proportional-Integral (PI) controller, is the present power control method used to control the fission process in RTP. It is important to ensure that the core power is always stable and follows load tracking within an acceptable steady-state error and with minimum settling time to reach steady-state power. At present, the system's power tracking performance could be considered not well posed; however, there is still potential to improve on the current performance by developing a next-generation, novel nuclear core power control design. In this paper, the dual-mode predictions proposed in Optimal Model Predictive Control (OMPC) modelling are presented in a state-space model to control the core power. The model for core power control was based on mathematical models of the reactor core, OMPC, and a control-rod selection algorithm. The mathematical models of the reactor core were based on neutronic, thermal-hydraulic, and reactivity models. The dual-mode prediction in OMPC for the transient and terminal modes was based on the implementation of a Linear Quadratic Regulator (LQR) in designing the core power control. The combination of dual-mode prediction and a Lyapunov approach, which handles the summations in the cost function over an infinite horizon, is intended to eliminate some of the fundamental weaknesses of MPC. This paper shows how OMPC deals with tracking, the regulation problem, disturbance rejection, and parameter uncertainty. The tracking and regulating performance of the conventional controller and of OMPC are compared by numerical simulations.
In conclusion, the proposed OMPC has shown significant performance in load tracking and in regulating core power for a nuclear reactor, with guaranteed closed-loop stability. Keywords: core power control, dual-mode prediction, load tracking, optimal model predictive control
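The LQR component used for the terminal mode can be sketched in a few lines. The two-state model below is an invented toy (it is not the RTP neutronic or thermal-hydraulic model); the stabilizing gain is obtained by iterating the standard discrete-time Riccati recursion to a fixed point.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain K and cost matrix P via fixed-point
    iteration of the Riccati equation P = Q + A'P(A - BK)."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

# Toy (invented) second-order model: state = [power error, error rate]
A = np.array([[1.0, 0.1],
              [0.0, 0.95]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)            # penalize tracking error
R = np.array([[1.0]])    # penalize control-rod effort
K, P = dlqr(A, B, Q, R)  # u = -K x stabilizes the toy model
```

In a dual-mode OMPC scheme, this terminal gain governs the predictions beyond the finite horizon, which is what allows the infinite-horizon cost to be summed via the Lyapunov/Riccati matrix P.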
240 Family Medicine Residents in End-of-Life Care
Authors: Goldie Lynn Diaz, Ma. Teresa Tricia G. Bautista, Elisabeth Engeljakob, Mary Glaze Rosal
Abstract:
Introduction: Residents are expected to convey unfavorable news, discuss prognoses, relieve suffering, and address do-not-resuscitate orders, yet some report a lack of competence in providing this type of care. Recognizing this need, Family Medicine residency programs are incorporating end-of-life care, from symptom and pain control to counseling and humanistic qualities, as core proficiencies in training. Objective: This study determined the competency of Family Medicine residents from various institutions in Metro Manila in rendering care for the dying. Materials and Methods: Trainees completed a Palliative Care Evaluation tool to assess their degree of confidence in patient and family interactions, patient management, and attitudes towards hospice care. Results: Remarkably, only a small fraction of participants were confident in independently managing terminal delirium and dyspnea. Fewer than 30% of residents could do the following without supervision: discuss medication effects and patient wishes after death, help patients cope with pain, vomiting, and constipation, and respond to limited patient decision-making capacity. Half of the respondents were confident in supporting the patient or a family member when they become upset. The majority expressed confidence in many end-of-life care skills provided supervision, coaching, and consultation were available. Most trainees believed that pain medication should be given as needed to terminally ill patients. There was also uncertainty as to the most appropriate person to make end-of-life decisions. These attitudes may be influenced by personal beliefs rooted in cultural upbringing as well as by personal experiences with death in the family, which may also affect residents' participation and confidence in caring for the dying.
Conclusion: Enhancing the quality and quantity of end-of-life care experiences during residency, with sufficient supervision and role modeling, may lead to improvements in knowledge and skill that ensure quality of care. Fostering bedside learning opportunities during residency is an appropriate venue for teaching interventions in end-of-life care education. Keywords: end-of-life care, geriatrics, palliative care, residency training skills
239 Dynamic Network Approach to Air Traffic Management
Authors: Catia S. A. Sima, K. Bousson
Abstract:
Congestion in the Terminal Maneuvering Areas (TMAs) of larger airports impacts all aspects of air traffic flow, not only at the national level: it may also induce arrival delays at the international level. Hence, there is a need to monitor the air traffic flow in TMAs appropriately so that efficient decisions may be taken to manage their occupancy rates. It would be desirable to physically increase the existing airspace to accommodate all existing demand, but this is entirely utopian; consequently, several studies and analyses have been developed over the past decades to meet the challenges that have arisen from the dizzying expansion of the aeronautical industry. The main objective of the present paper is to propose concepts to manage and reduce the degree of uncertainty in air traffic operations, maximizing the interest of all involved, ensuring a balance between demand and supply, and developing and/or adapting resources that enable a rapid and effective adaptation of measures to the current context and to the changes perceived in the aeronautical industry. A central task is to increase air traffic flow management capacity, taking into account not only a wide range of methodologies but also equipment and/or tools already available in the aeronautical industry. The efficient use of these resources is crucial, as the human capacity for work is limited and the actors involved in all processes related to air traffic flow management are increasingly overloaded; as a result, operational safety could be compromised. The methodology used to address the issues listed above is based on the advantages of applying Markov chain principles, which enable the construction of a simplified model of a dynamic network that describes air traffic flow behavior, anticipating changes and eventual measures that could better address the impact of increased demand.
Through this model, the proposed concepts are shown to have the potential to optimize air traffic flow management, combined with the operation of the existing resources at each moment and the circumstances found in each TMA, using historical data from air traffic operations and specificities of the aeronautical industry, namely in the Portuguese context. Keywords: air traffic flow, terminal maneuvering area (TMA), air traffic management (ATM), Markov chains
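A minimal version of such a Markov model can be sketched as follows. The three occupancy states and the transition probabilities below are invented for illustration; in the study itself they would be estimated from historical traffic data. The long-run (stationary) distribution, obtained here by power iteration, is the kind of quantity that would inform expected TMA occupancy rates.

```python
import numpy as np

# Hypothetical three-state model of TMA occupancy: low / medium / high.
# Rows are "from" states; each row sums to 1 (row-stochastic matrix).
P = np.array([
    [0.70, 0.25, 0.05],   # from low occupancy
    [0.20, 0.60, 0.20],   # from medium occupancy
    [0.05, 0.35, 0.60],   # from high occupancy
])

def stationary(P, steps=200):
    """Long-run occupancy distribution pi with pi = pi @ P, by power iteration
    from the uniform distribution."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(steps):
        pi = pi @ P
    return pi

pi = stationary(P)  # fraction of time expected in each occupancy state
```

Comparing stationary distributions under alternative transition matrices (e.g., with and without a flow-management measure) would quantify the measure's effect on congestion.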
238 Fuzzy Decision Making in Construction Project Management: Glass Facade Selection
Authors: Katarina Rogulj, Ivana Racetin, Jelena Kilic
Abstract:
In this study, a fuzzy logic approach (FLA) was developed for construction project management (CPM) under uncertainty and duality. The focus was on decision making in selecting the type of glass facade for a residential-commercial building in the main design. The adoption of fuzzy sets was capable of reflecting construction managers' reliability level over subjective judgments, and thus the robustness of the system could be achieved. An α-cut method was utilized for discretizing the fuzzy sets in the FLA. This method can communicate all uncertain information in the optimization process, taking into account the values of this information. Furthermore, the FLA provides in-depth analyses of diverse policy scenarios related to various levels of economic aspects of valid decision making in construction projects. The developed approach is applied to CPM to demonstrate its applicability. By analyzing the materials of glass facades, variants were defined. The development of the FLA for CPM included the relevant construction project stakeholders, who were involved in defining the criteria used to evaluate each variant. Using the fuzzy Decision-Making Trial and Evaluation Laboratory (DEMATEL) method, a comparison of the glass facades was conducted. In this way, a ranking of the variants, according to their priority for inclusion in the main design, is obtained. The concept was tested on a residential-commercial building in the city of Rijeka, Croatia. The newly developed methodology was then compared with the existing one. The aim of the research was to define an approach that will improve current judgments and decisions in the material selection of a building's facade, one of the most important architectural and engineering tasks in the main design. The advantage of the new methodology over the old one is that it includes the subjective side of managers' decisions, an inevitable factor in every decision-making process.
The proposed approach can help construction project managers identify the desired type of glass facade according to their preferences and practical conditions, as well as facilitate in-depth analyses of trade-offs between economic efficiency and architectural design. Keywords: construction project management, DEMATEL, fuzzy logic approach, glass facade selection
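The DEMATEL core of such a comparison can be sketched in crisp form; here the fuzzy influence scores are assumed to have already been defuzzified (e.g., via α-cuts), and the 3×3 direct-influence matrix among facade criteria is invented for illustration, not taken from the Rijeka case study. The total relation matrix T = N(I − N)⁻¹ yields each criterion's prominence (D + R) and its net cause/effect role (D − R).

```python
import numpy as np

# Invented direct-influence matrix Z among three facade criteria
# (e.g., cost, thermal performance, aesthetics), scored 0-4 and
# assumed already defuzzified from fuzzy expert judgments.
Z = np.array([
    [0.0, 3.0, 2.0],
    [1.0, 0.0, 3.0],
    [2.0, 1.0, 0.0],
])

def dematel(Z):
    """Crisp DEMATEL: normalize Z, compute the total relation matrix
    T = N (I - N)^-1, and return (prominence D+R, net relation D-R)."""
    s = max(Z.sum(axis=1).max(), Z.sum(axis=0).max())
    N = Z / s                                   # normalized direct-relation matrix
    T = N @ np.linalg.inv(np.eye(len(Z)) - N)   # total relation matrix
    D, R = T.sum(axis=1), T.sum(axis=0)         # dispatched / received influence
    return D + R, D - R

prominence, relation = dematel(Z)
```

Criteria with positive D − R act as causes and would deserve priority when weighting the glass facade variants.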
237 Application of Thermal Dimensioning Tools to Consider Different Strategies for the Disposal of High-Heat-Generating Waste
Authors: David Holton, Michelle Dickinson, Giovanni Carta
Abstract:
The principle of geological disposal is to isolate higher-activity radioactive wastes deep inside a suitable rock formation to ensure that no harmful quantities of radioactivity reach the surface environment. To achieve this, wastes will be placed in an engineered underground containment facility, the geological disposal facility (GDF), which will be designed so that natural and man-made barriers work together to minimise the escape of radioactivity. Internationally, various multi-barrier concepts have been developed for the disposal of higher-activity radioactive wastes. High-heat-generating wastes (HLW, spent fuel, and Pu) pose a number of technical challenges different from those associated with the disposal of low-heat-generating waste. Thermal management of the disposal system must be taken into consideration in GDF design; temperature constraints might apply to the wasteform, container, buffer, and host rock. Of these, the temperature limit placed on the buffer component of the engineered barrier system (EBS) can be the most constraining factor. The heat must therefore be managed such that the properties of the buffer are not compromised to the extent that it cannot deliver the required level of safety. The maximum temperature of the buffer surrounding a container at the centre of a fixed array of heat-generating sources arises from heat diffusing from the neighbouring heat-generating wastes, which incrementally contributes to the temperature of the EBS. A range of strategies can be employed for managing heat in a GDF, including the spatial arrangements or patterns of the containers; different geometrical configurations can influence the overall thermal density in a disposal facility (or an area within a facility) and therefore the maximum buffer temperature. A semi-analytical thermal dimensioning tool and methodology have been applied at a generic stage to explore a range of strategies for managing the disposal of high-heat-generating waste.
A number of examples, including different geometrical layouts and chequer-boarding, are illustrated to demonstrate how these tools can be used to consider safety margins and to inform strategic disposal options in the face of uncertainty at a generic stage of the development of a GDF. Keywords: buffer, geological disposal facility, high-heat-generating waste, spent fuel
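The superposition idea behind such tools can be illustrated with a deliberately simplified sketch. Everything below is an invented placeholder rather than the actual semi-analytical tool: a steady-state point-source conduction formula, ΔT = Q/(4πkr), is summed over neighbouring containers (real assessments use transient, decaying heat sources and finite geometry). The point is only that a chequer-board layout, which removes alternate containers, lowers the peak temperature rise contributed to the central buffer.

```python
import math

def buffer_temp_rise(positions, target, Q=1500.0, k=1.2):
    """Steady-state temperature rise at `target` from point sources of
    output Q (W) in rock of conductivity k (W/m.K): sum of Q/(4*pi*k*r)
    over all sources other than the one at the target itself."""
    dT = 0.0
    for p in positions:
        r = math.dist(p, target)
        if r > 0:  # skip the container located at the target point
            dT += Q / (4 * math.pi * k * r)
    return dT

def grid(pitch, chequer=False):
    """5x5 array of container positions; chequer keeps alternate cells only."""
    return [(i * pitch, j * pitch)
            for i in range(5) for j in range(5)
            if not chequer or (i + j) % 2 == 0]

centre = (2 * 20.0, 2 * 20.0)                       # central container
dense  = buffer_temp_rise(grid(20.0), centre)        # full array
sparse = buffer_temp_rise(grid(20.0, True), centre)  # chequer-board array
```

The same comparison, with the transient source terms of the real tool, is what allows spacing and layout to be traded off against buffer temperature limits.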
236 COVID-19 Associated Stress and Coping Strategies
Authors: Bar Shapira-Youngster, Sima Amram-Vaknin, Yuliya Lipshits-Braziler
Abstract:
The study examined how 811 Israelis experienced and coped with the COVID-19 lockdown. Stress, uncertainty, and loss of control were reported as common emotional experiences. Two main difficulties were reported: loneliness, and health and emotional concerns. Frequent explanations for the virus's emergence were scientific or faith-based reasoning. The most prevalent coping strategies were distraction activities and acceptance. Reducing the use of maladaptive coping strategies has important implications for mental health outcomes. Objectives: COVID-19 has been recognized as a collective, continuous traumatic stressor. The present study examined how individuals experienced, perceived, and coped with this traumatic event during the lockdown in Israel in April 2020. Method: 811 Israelis (71.3% women; mean age 43.7, SD = 13.3) completed an online semi-structured questionnaire consisting of two sections. In the first section, participants were asked to report background information. In the second section, they were asked to answer 8 open-ended questions about their experience of, perception of, and coping with the COVID-19 lockdown. Participation was voluntary, anonymity was assured, and participants were not offered compensation of any kind. The data were subjected to qualitative content analysis, which seeks to classify the participants' answers into an effective number of categories that represent similar meanings. Our content analysis of participants' answers extended far beyond simple word counts; our objective was to identify recurrent categories that characterized participants' responses to each question. We sought to ensure that the categories for the different questions were as mutually exclusive and exhaustive as possible. To ensure robust analysis, the data were initially analyzed by the first author, and a second opinion was then sought from research colleagues.
Contribution: The present research expands our knowledge of individuals' experiences, perceptions, and coping mechanisms in continuous traumatic events. Reducing the use of maladaptive coping strategies has important implications for mental health outcomes. Keywords: COVID-19, emotional distress, coping, continuous traumatic event
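Where two coders classify the same open-ended answers, the chance-corrected agreement between them can be checked with Cohen's kappa. The sketch below uses invented category labels echoing the study's themes, not the actual coded data; it only illustrates the computation of observed agreement, expected chance agreement, and kappa.

```python
def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: chance-corrected agreement between two coders'
    category assignments over the same items."""
    n = len(codes_a)
    p_observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    categories = set(codes_a) | set(codes_b)
    p_expected = sum((codes_a.count(c) / n) * (codes_b.count(c) / n)
                     for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

# Invented codings of six answers by two hypothetical coders
coder_a = ["loneliness", "health", "loneliness", "faith", "health", "loneliness"]
coder_b = ["loneliness", "health", "health",     "faith", "health", "loneliness"]
kappa = cohens_kappa(coder_a, coder_b)
```

Values well above chance (commonly interpreted as roughly 0.6-0.8 for substantial agreement) would support the robustness of the category scheme.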
235 Identifying Factors of Wellbeing in Russian Orphans
Authors: Alexandra Telitsyna, Galina Semya, Elvira Garifulina
Abstract:
Introduction: Starting in 2012, Russia has pursued a deinstitutionalization policy, and the main indicator of success is now the number of children living in institutions. The active family placement process has meant that residents of institutions now mainly consist of adolescents with behavioral and emotional problems, children with disabilities, and groups of siblings. Purpose: The purpose of this research is to identify factors of a child's wellbeing during a temporary stay in an orphanage, together with the children's subjective assessment of their level of well-being (psychological well-being). Methods: The data for this project were collected using a questionnaire of 72 indicators, a tool for monitoring the behavior of children and caregivers, an additional questionnaire for children, and a well-being assessment questionnaire containing 10 scales for three age groups, from preschoolers to older adolescents. In 2016-2018, the research was conducted in 1,873 institutions across 85 regions of Russia. In each region, a team of academics, specialists from non-profits, and independent experts was created, and training was conducted for team members through a series of webinars prior to undertaking the assessment. Results: To ensure the well-being of the children, the following conditions are necessary: 1- life in the institution is organised according to the principles of family care (including creating the conditions for attachment to form); 2- contributing to finding family-based placements for children (including reintegration into the birth family); 3- work with the parents of children placed in an organization at the parents' request; 4- children attend schools according to their needs; 5- training of staff and volunteers; 6- a special environment and services for children with special needs and children with disabilities; 7- cooperation with NGOs; 8- openness and accessibility of the organization.
Conclusion: The study of the psychological well-being of the children showed that the most emotionally stressful questions for children concerned the presence of relatives and the frequency of contact with them, and that the level of well-being is higher in the presence of a trusted adult and respect for the child's rights. The greatest contributors to distress are the time the child has spent in the orphanage, the lack of contact with parents and relatives, and the uncertainty of the future. Keywords: identifying factors, orphans, Russia, wellbeing
234 Testing the Simplification Hypothesis in Constrained Language Use: An Entropy-Based Approach
Authors: Jiaxin Chen
Abstract:
Translations have been labeled as more simplified than non-translations, featuring less diversified and more frequent lexical items and simpler syntactic structures. Such simplified linguistic features have been identified in other bilingualism-influenced language varieties, including non-native and learner language use. It has therefore been proposed that translation could be studied within the broader framework of constrained language, with simplification as one of the universal features shared by constrained language varieties due to similar cognitive-physiological and social-interactive constraints. Yet contradictory findings have also been presented. To address this issue, this study adopts Shannon's entropy-based measures to quantify complexity in language use. Entropy measures the level of uncertainty or unpredictability in message content, and it has been adapted in linguistic studies to quantify linguistic variance, including morphological diversity and lexical richness. In this study, the complexity of lexical and syntactic choices is captured by word-form entropy and part-of-speech (POS) entropy, and a comparison is made between constrained and non-constrained language use to test the simplification hypothesis. The entropy-based method is employed because it captures both the frequency of linguistic choices and the evenness of their distribution, which are unavailable with traditional indices. Another advantage of the entropy-based measure is that it is reasonably stable across languages and thus allows for a reliable comparison among studies on different language pairs. As for the data for the present study, one established corpus (CLOB) and two self-compiled corpora are used to represent native written English and two constrained varieties (L2 written English and translated English), respectively. Each corpus consists of around 200,000 tokens. Genre (press) and text length (around 2,000 words per text) are comparable across the corpora.
More specifically, word-form entropy and POS-form entropy will be calculated as indicators of lexical and syntactic complexity, and ANOVA tests will be conducted to explore whether there is any corpus effect. It is hypothesized that both L2 written English and translated English have lower entropy than non-constrained written English. The similarities and divergences between the two constrained varieties may provide indications of the constraints shared by and peculiar to each variety.
Keywords: constrained language use, entropy-based measures, lexical simplification, syntactical simplification
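The word-form entropy described above can be sketched in a few lines: Shannon entropy over the relative frequencies of surface word forms, which jointly reflects how many distinct forms occur and how evenly they are distributed. The toy sentences below are illustrative only, and a naive whitespace tokenizer stands in for whatever tokenization the study actually uses; POS-form entropy would apply the same function to a sequence of POS tags instead of word forms.

```python
import math
from collections import Counter

def shannon_entropy(tokens):
    """Shannon entropy in bits over the frequency distribution of tokens."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Word-form entropy: higher when lexical choices are more varied and even.
varied = "the cat sat on the mat and the dog slept near the door".split()
repetitive = "the cat sat on the mat and the cat sat on the mat".split()

print(round(shannon_entropy(varied), 3))      # more diverse vocabulary
print(round(shannon_entropy(repetitive), 3))  # fewer, more frequent forms
```

Under the simplification hypothesis, the constrained corpora (translated and L2 English) would pattern with the second, lower-entropy case relative to native writing.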
Procedia PDF Downloads 94
233 Climate Related Financial Risk on Automobile Industry and the Impact to the Financial Institutions
Authors: Mahalakshmi Vivekanandan S.
Abstract:
As per recent changes in global policy, climate-related changes and the impacts they cause across every sector are viewed as green swan events: in essence, climate-related changes can happen often and lead to risk and considerable uncertainty, but they need to be mitigated rather than treated as black swan events. This raises the question of how such risk can be computed so that financial institutions can plan to mitigate it. Climate-related changes impact all risk types: credit risk, market risk, operational risk, liquidity risk, reputational risk and others. The models required to compute this have to consider the different industrial needs of the counterparty, as well as the contributing factors, whether in the form of different risk drivers, different transmission channels, different approaches, or the granularity of available data. This suggests that climate-related changes, though they affect Pillar I risks, are best treated as a Pillar II risk. This has to be modeled specifically on the financial institution's actual exposure to different industries instead of generalizing the risk charge, and it will have to be held as additional capital on top of the institution's Pillar I risks and its existing Pillar II risks. In this paper, the author presents a risk assessment framework to model and assess climate change risks for both credit and market risk. The framework helps in assessing different scenarios and how the different transition risks affect the risk associated with the different parties. The paper first delves into the increase in the concentration of greenhouse gases that in turn causes global warming.
It then considers various scenarios in which different risk drivers impact the credit and market risk of an institution, by understanding the transmission channels and also considering transition risk. The paper then focuses on an industry that is fast seeing disruption: the automobile industry. It uses the framework to show how climate change and the associated policy changes have impacted the entire financial institution. Appropriate statistical models for forecasting, anomaly detection and scenario modeling are built to demonstrate how the framework can be used by the relevant agencies to understand their financial risks. The paper also covers the climate risk component of the Pillar II capital calculation and why it makes sense for a bank to maintain this in addition to its regular Pillar I and Pillar II capital.
Keywords: capital calculation, climate risk, credit risk, pillar ii risk, scenario modeling
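The scenario-based Pillar II add-on described above can be illustrated with a deliberately minimal expected-loss sketch. All numbers, scenario names, weights and PD stress multipliers below are invented for illustration; they are not taken from the paper's framework, which models exposures per industry and per transmission channel in far more detail.

```python
# Hypothetical scenario-weighted climate add-on for a credit portfolio.
# Scenario weights and PD stress multipliers are illustrative assumptions.
scenarios = {
    # name: (scenario probability weight, PD stress multiplier)
    "orderly_transition":    (0.50, 1.0),
    "disorderly_transition": (0.35, 1.8),
    "hot_house_world":       (0.15, 3.0),
}

baseline_pd = 0.02    # baseline probability of default (e.g. auto-sector book)
lgd = 0.45            # loss given default
ead = 100_000_000     # exposure at default

baseline_el = baseline_pd * lgd * ead
stressed_el = sum(weight * min(baseline_pd * mult, 1.0) * lgd * ead
                  for weight, mult in scenarios.values())

# Candidate Pillar II climate capital add-on: stressed minus baseline loss.
climate_addon = stressed_el - baseline_el
print(f"Climate add-on: {climate_addon:,.0f}")
```

The point of the sketch is structural: the add-on sits on top of the Pillar I charge (which already covers the baseline expected loss) and scales with the institution's actual sector exposure rather than a generalized risk charge.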
Procedia PDF Downloads 140
232 Tonal Pitch Structure as a Tool of Social Consolidation
Authors: Piotr Podlipniak
Abstract:
Social consolidation has often been indicated as an adaptive function of music that led to the evolution of the music faculty. According to many scholars, this function is possible thanks to musical rhythm, which enables sensorimotor synchronization to a musical beat. The ability to synchronize to music allows music to be performed collectively, which enhances social cohesion. However, the collective performance of music also involves spectral synchronization, which depends on musical pitch structure. Like rhythmic synchronization, spectral synchronization is a result of 'brain state alignment' between people who collectively listen to or perform music. In order to successfully synchronize pitches, performers have to adequately anticipate the pitch structure. The most common form of music, predominant across all human societies, is tonal music. In fact, tonality, understood in the broadest sense as an organization of musical pitches in which some pitches are more important than others, is the only kind of musical pitch structure observed in all currently known musical cultures. The perception of such a pitch structure elicits specific emotional reactions, often described as tensions and relaxations. These facts provoke some important questions. What is the evolutionary reason that people use pitch structure as a form of vocal communication? Why do different pitch structures elicit different emotional states independent of extra-musical context? It is proposed in the current presentation that in the course of evolution pitch structure became a human-specific tool of communication whose function is to induce emotional states such as uncertainty and cohesion. By eliciting these emotions during collective music performance, people are able to unconsciously give cues concerning social acceptance. This is probably one of the reasons why in all cultures people collectively perform tonal music.
It is also suggested that tonal pitch structure had been invented socially before it became an evolutionary innovation of Homo sapiens. That is, a predisposition to organize pitches tonally evolved by means of the 'Baldwin effect', a process in which natural selection transforms the learned response of an organism into an instinctive response. A hypothetical evolutionary scenario for the emergence of tonal pitch structure will be proposed; in this scenario, social forces such as the need for closer cooperation play the crucial role.
Keywords: emotion, evolution, tonality, social consolidation
Procedia PDF Downloads 324
231 Comprehensive Multilevel Practical Condition Monitoring Guidelines for Power Cables in Industries: Case Study of Mobarakeh Steel Company in Iran
Authors: S. Mani, M. Kafil, E. Asadi
Abstract:
Condition Monitoring (CM) of electrical equipment has gained remarkable importance in recent years, owing to huge production losses, substantial imposed costs, and increased vulnerability, risk and uncertainty. Power cables feed numerous pieces of electrical equipment such as transformers, motors, and electric furnaces; their condition assessment is therefore of great importance. This paper investigates electrical, structural and environmental failure sources, all of which influence cable performance and limit uptime, and provides a comprehensive framework of practical CM guidelines for the maintenance of cables in industry. The multilevel CM framework presented in this study covers performance-indicative features of power cables, with a focus on both online and offline diagnosis and test scenarios, and addresses short-term and long-term threats to the operation and longevity of power cables. After concisely reviewing the concept of CM, the study thoroughly investigates five major areas: power quality; the insulation-quality features of partial discharge, tan delta and voltage withstand capability; sheath faults; shield currents; and the environmental features of temperature and humidity. It elaborates the interconnections and mutual impacts between these areas using mathematical formulation and practical guidelines. Detection, location, and severity identification methods for every threat or fault source are also elaborated. Finally, the comprehensive, practical guidelines presented in the study are applied to the specific case of Electric Arc Furnace (EAF) feeder MV power cables in Mobarakeh Steel Company (MSC), the largest steel company in the MENA region, in Iran.
The specific technical and industrial characteristics and limitations of a harsh industrial environment like the MSC EAF feeder cable tunnels are imposed on the presented framework, making the suggested package more practical and tangible.
Keywords: condition monitoring, diagnostics, insulation, maintenance, partial discharge, power cables, power quality
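A multilevel framework of the kind described above ultimately reduces, at its simplest level, to combining several monitored features into a condition grade. The sketch below shows that idea only; the feature set loosely mirrors the areas named in the abstract, but every threshold and the grading rule are invented for illustration and are not the paper's values.

```python
# Hypothetical coarse condition grading for an MV power cable from four
# monitored features. All thresholds are illustrative assumptions.
def cable_condition(tan_delta, pd_level_pc, sheath_current_a, temp_c):
    """Count threshold exceedances and map the count to a condition grade."""
    alarms = 0
    if tan_delta > 0.004:      alarms += 1  # dielectric loss factor
    if pd_level_pc > 500:      alarms += 1  # partial discharge magnitude (pC)
    if sheath_current_a > 10:  alarms += 1  # sheath/shield current (A)
    if temp_c > 90:            alarms += 1  # conductor temperature (deg C)
    return {0: "healthy", 1: "watch"}.get(alarms, "maintenance required")

print(cable_condition(0.002, 120, 3, 65))   # no exceedances
print(cable_condition(0.006, 650, 4, 80))   # two exceedances
```

In practice, as the abstract notes, such features interact (e.g. power quality and temperature both influence insulation ageing), so a real framework trends each feature over time and weighs them jointly rather than counting independent alarms.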
Procedia PDF Downloads 228
230 Towards an Equitable Proprietary Regime: Property Rights Over Human Genes as a Case Study
Authors: Aileen Editha
Abstract:
The legal recognition of property rights over human genes is a divisive topic on which there is no resolution. Scholars and practitioners frequently highlight the inadequacies of a proprietary regime, but little has been said about the nature of human genetic materials (HGMs) themselves. This paper approaches the issue of property over HGMs from an alternative perspective that looks at the personal and social value and valuation of HGMs. It highlights how the unique and unresolved status of HGMs is incompatible with the main tenets of property and, consequently, contributes to legal ambiguity and uncertainty in the regulation of property rights over human genes. HGMs are perceived as part of nature and a free-for-all while also lying within an individual's private sphere; they are further considered to occupy a unique 'neither-private-nor-public' status. This limbo-like position clashes with property's fundamental reliance on a clear public/private dichotomy. Moreover, as property is intrinsically linked to the legal recognition of one's personhood, this irresolution benefits some while disadvantaging others. In particular, it demands the publicization of once-private genes for the 'common good' but subsequently encourages the privatization (through labor) of these now-public genes. This results in gains for some (already privileged) individuals while enabling the disenfranchisement of members of minority groups, such as Indigenous communities. This paper will discuss real and intellectual property rights over human genes, such as the right to income or patent rights, in Canada and the US. It advocates a sui generis approach to governing rights and interests over human genes that would not rely on a strict public/private dichotomy.
Not only would this improve legal certainty and clarity, but it would also alleviate, or at the very least minimize, the role that the current law plays in further entrenching existing systemic inequalities. Despite the specificity of this topic, the paper argues that there are broader lessons to be learned: the issue is an insightful case study of the interconnection of various principles in law, society, and property, and of what must be done when discordance between those principles has detrimental societal outcomes. Ultimately, it must be remembered that property is an adaptable and malleable instrument that can be developed to ensure it contributes to equity and flourishing.
Keywords: property rights, human genetic materials, critical legal scholarship, systemic inequalities
Procedia PDF Downloads 80