Search results for: average information ratio
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18465

6435 Research on Urban Thermal Environment Climate Map Based on GIS: Taking Shapingba District, Chongqing as an Example

Authors: Zhao Haoyue

Abstract:

Due to the combined effects of climate change, urban expansion, and population growth, various environmental issues arise, such as urban heat islands and pollution. Reliable information on the urban environmental climate is therefore needed to address and mitigate these negative effects. The emergence of urban climate maps provides a practical basis for urban climate regulation and improvement. This article takes Shapingba District, Chongqing City, as an example to study the construction of urban thermal environment climate maps based on GIS spatial analysis technology. Thermal load and ventilation potential analysis maps, together with a comprehensive thermal environment analysis map, were produced. Based on the classification criteria obtained from the climate map, corresponding protection and planning mitigation measures are proposed.

Keywords: urban climate, GIS, heat island analysis, urban thermal environment

Procedia PDF Downloads 90
6434 Building an Opinion Dynamics Model from Experimental Data

Authors: Dino Carpentras, Paul J. Maher, Caoimhe O'Reilly, Michael Quayle

Abstract:

Opinion dynamics is a sub-field of agent-based modeling that focuses on people’s opinions and their evolution over time. Despite the rapid increase in the number of publications in this field, it is still not clear how to apply these models to real-world scenarios. Indeed, there is no agreement on how people update their opinions while interacting. Furthermore, it is not clear whether different topics show the same dynamics (e.g., more polarized topics may behave differently). These problems are mostly due to the lack of experimental validation of the models. Some previous studies started bridging this gap by directly measuring people’s opinions before and after an interaction. However, these experiments force people to express their opinion as a number instead of using natural language (which is then, eventually, encoded as numbers). This is not the way people normally interact, and it may strongly alter the measured dynamics. Another limitation of these studies is that they usually average all topics together, without checking whether different topics show different dynamics. In our work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language (“agree” or “disagree”). We also measured the certainty of their answer, expressed as a number between 1 and 10. However, this value was not shown to other participants, to keep the interaction based on natural language. We then showed the opinion (but not the certainty) of another participant and, after a distraction task, repeated the measurement. To make the data compatible with opinion dynamics models, we multiplied opinion and certainty to obtain a new parameter (here called “continuous opinion”) ranging from -10 to +10 (using agree = 1 and disagree = -1). We first checked the 5 topics individually, finding that all of them behaved in a similar way despite having different initial opinion distributions.
This suggested that the same model could be applied to different unpolarized topics. We also observed that people tend to maintain similar levels of certainty, even when they change their opinion. This strongly violates what common models suggest, in which a person starting at, for example, +8 will first move towards 0 instead of jumping directly to -8. We also observed social influence: people exposed to “agree” were more likely to move to higher levels of continuous opinion, while people exposed to “disagree” were more likely to move to lower levels. However, the effect of influence was smaller than the effect of random fluctuations. This configuration also differs from standard models, in which noise, when present, is usually much smaller than the effect of social influence. Starting from this, we built an opinion dynamics model that explains more than 80% of the data variance. The model was also able to show the natural emergence of polarization from unpolarized states. This experimental approach offers a new way to build models grounded in experimental data. Furthermore, the model offers new insight into the fundamental terms of opinion dynamics models.
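
The encoding and the qualitative update rule described above can be sketched as follows. The influence and noise magnitudes are hypothetical placeholders (the abstract only states that fluctuations outweigh influence), not the fitted parameters of the authors' model:

```python
import random

def continuous_opinion(agrees, certainty):
    """Encode a natural-language answer plus certainty (1-10) as the
    'continuous opinion' in [-10, +10]: agree = +1, disagree = -1."""
    return (1 if agrees else -1) * certainty

def update(opinion, partner_agrees, influence=0.5, noise_sd=1.5):
    """Illustrative update: a small pull toward the partner's stated side
    plus a larger random fluctuation, clipped to the measurement range.
    The parameter values here are assumptions, not the paper's estimates."""
    pull = influence if partner_agrees else -influence
    new = opinion + pull + random.gauss(0.0, noise_sd)
    return max(-10.0, min(10.0, new))
```

Note that, unlike classic bounded-confidence models, the noise term here dominates the influence term, matching the paper's qualitative finding.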

Keywords: experimental validation, micro-dynamics rule, opinion dynamics, update rule

Procedia PDF Downloads 102
6433 Marriage, Foundation of Family Strength and the Best Opportunity for Human Existence and Relationships

Authors: Tamriko Pavliashvili

Abstract:

Marriage is an important institution of family law and an indicator of the development of society. Although a family can be created by the birth of a child to an unmarried couple, marriage is still the main basis for the creation of a family, and the rights and duties it imposes require legal regulation. At present, under conditions of globalization, there are different types of marriage, although in most countries it is still a union of a woman and a man that involves voluntary cohabitation and the assumption and fulfillment of norms and responsibilities established by law. Modern society is at a stage where there is a need to create families, and marriage therefore provides the best opportunity for relationships and coexistence between people. This paper on the state institution of marriage provides information about the customs and legal norms in Georgia from ancient times to the modern period and, through comparison, shows the differences and commonalities that existed and exist between the marriage law of Georgia and that of other countries of the world.

Keywords: marriage, family law, the union of man and woman, church law

Procedia PDF Downloads 57
6432 Drug Delivery Cationic Nano-Containers Based on Pseudo-Proteins

Authors: Sophio Kobauri, Temur Kantaria, Nina Kulikova, David Tugushi, Ramaz Katsarava

Abstract:

The elaboration of effective drug delivery vehicles remains topical, since targeted drug delivery is one of the most important challenges of modern nanomedicine. The last decade has witnessed enormous research focused on synthetic cationic polymers (CPs) owing to their flexible properties, facile synthesis, robustness, lack of oncogenicity, and proven gene delivery efficiency, in particular as non-viral gene delivery systems. However, their toxicity is still an obstacle to application in pharmacotherapy. To overcome this problem, the creation of new cationic compounds, including polymeric nano-sized particles (nano-containers, NCs) loaded with different pharmaceuticals and biologicals, is still relevant. In this regard, a variety of NC-based drug delivery systems have been developed. We have found that amino acid-based biodegradable polymers called pseudo-proteins (PPs), which can be cleared from the body after fulfilling their function, are highly suitable for designing pharmaceutical NCs. Among the most promising are NCs made of biodegradable cationic PPs (CPPs). To prepare new cationic NCs (CNCs), we used CPPs composed of the positively charged amino acid L-arginine (R). The CNCs were fabricated by two approaches, using: (1) R-based homo-CPPs; (2) blends of R-based CPPs with regular (neutral) PPs. In the first approach, NCs were prepared from the CPPs 8R3 (composed of R, sebacic acid, and 1,3-propanediol) and 8R6 (composed of R, sebacic acid, and 1,6-hexanediol). The NCs prepared from these CPPs were 72-101 nm in size, with zeta potentials within +30 to +35 mV at a concentration of 6 mg/mL. In the second approach, the CPP 8R6 was blended in the organic phase with the neutral PP 8L6 (composed of leucine, sebacic acid, and 1,6-hexanediol). The NCs prepared from the blends were 130-140 nm in size, with zeta potentials within +20 to +28 mV depending on the 8R6/8L6 ratio.
Stability studies of the fabricated NCs showed no substantial change in particle size or distribution and no formation of large particles after three months of storage. An in vitro biocompatibility study of the obtained NCs with four different stable cell lines, A549 (human), U-937 (human), RAW264.7 (murine), and Hepa 1-6 (murine), showed that both types of cationic NCs are biocompatible. These data allow the conclusion that the obtained CNCs are promising as biodegradable drug delivery vehicles. This work was supported by the joint grant from the Science and Technology Center in Ukraine and the Shota Rustaveli National Science Foundation of Georgia #6298 'New biodegradable cationic polymers composed of arginine and spermine-versatile biomaterials for various biomedical applications'.

Keywords: biodegradable polymers, cationic pseudo-proteins, nano-containers, drug delivery vehicles

Procedia PDF Downloads 144
6431 Towards an Environmental Knowledge System in Water Management

Authors: Mareike Dornhoefer, Madjid Fathi

Abstract:

Water supply and water quality are key problems for mankind today and, due to the increasing population, in the future. Management disciplines such as water, environment, and quality management therefore need to interact closely to establish a high level of water quality and to guarantee water supply in all parts of the world. Groundwater remediation is one aspect of this process. From a knowledge management perspective, complex ecological or environmental problems can only be solved if the different factors, the expert knowledge of various stakeholders, and the formal regulations on water, waste, or chemical management are interconnected in the form of a knowledge base. In general, knowledge management focuses on the processes of gathering and representing existing and new knowledge in a way that allows for the inference or deduction of knowledge, e.g., in a situation where a problem solution or decision support is required. A knowledge base is not merely a data repository but a key element of a knowledge-based system, providing inference mechanisms to deduce further knowledge from existing facts. In consequence, this knowledge provides decision support. This paper introduces an environmental knowledge system for water management. The proposed system is part of a research concept called Green Knowledge Management. It applies semantic technologies and concepts such as ontologies and linked open data to interconnect different data and information sources about environmental aspects, in this case water quality, as well as background material enriching an established knowledge base. Examples of the aforementioned ecological or environmental factors threatening water quality include industrial pollution (e.g., leakage of chemicals), environmental changes (e.g., rise in temperature), and floods, where all kinds of waste are merged and transferred into natural water environments.
Water quality is usually determined by measuring different indicators (e.g., chemical or biological), which are gathered through laboratory testing, continuous monitoring equipment, or other measuring processes. During all of these processes, data are gathered and stored in different databases. The knowledge base is then established by interconnecting the data of these different sources and enriching their semantics. Experts may add their knowledge or experience of previous incidents or influencing factors. Querying and inference mechanisms are then applied to deduce coherence between indicators, predictive developments, or environmental threats. Relevant processes or steps of action may be modeled in the form of a rule-based approach. Overall, the environmental knowledge system supports the interconnection of information and the addition of semantics to create environmental knowledge about the water environment, supply chain, and quality. The proposed concept is a holistic approach that links to associated disciplines such as environmental and quality management. Quality indicators and quality management steps need to be considered, e.g., for the process and inference layers of the environmental knowledge system, thus integrating the aforementioned management disciplines in one water management application.
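
As a minimal sketch of the rule-based deduction layer described above (the indicator names, thresholds, and threat labels are illustrative assumptions, not values from any regulation or from the paper), a forward-chaining loop can fire rules until no new environmental threat is deduced:

```python
def deduce_threats(facts):
    """Toy forward-chaining inference over water-quality facts.
    Each rule maps a condition on the fact base to a deduced threat;
    rules may build on previously deduced threats (chained inference)."""
    rules = [
        (lambda f: f.get("nitrate_mg_l", 0) > 50, "nitrate_contamination"),
        (lambda f: f.get("temperature_c", 0) > 25, "thermal_stress"),
        (lambda f: "nitrate_contamination" in f["threats"]
                   and f.get("source") == "groundwater",
         "remediation_required"),
    ]
    f = dict(facts, threats=set())
    changed = True
    while changed:                       # iterate until a fixed point
        changed = False
        for condition, threat in rules:
            if threat not in f["threats"] and condition(f):
                f["threats"].add(threat)
                changed = True
    return f["threats"]
```

Here the third rule only fires after the first has deduced contamination, which is the kind of chained deduction over a fact base that the proposed knowledge-based system would perform at much larger scale over ontologies and linked data.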

Keywords: water quality, environmental knowledge system, green knowledge management, semantic technologies, quality management

Procedia PDF Downloads 210
6430 Psychometric Validation of Czech Version of Spiritual Needs Assessment for Patients: The First Part of Research

Authors: Lucie Mrackova, Helena Kisvetrova

Abstract:

Spirituality is an integral part of human life. In a secular environment, spiritual needs are often overlooked, especially in acute nursing care. The Spiritual Needs Assessment for Patients (SNAP), which also exists in a Czech version (SNAP-CZ), can be used for objective evaluation. The aim of this study was to measure the psychometric properties of SNAP-CZ and to find correlations between SNAP-CZ and sociodemographic and clinical variables. A cross-sectional study was performed with tools assessing spiritual needs (SNAP-CZ), anxiety (Beck Anxiety Inventory; BAI), depression (Beck Depression Inventory; BDI), pain (Visual Analogue Scale; VAS), self-sufficiency (Barthel Index; BI), and cognitive function (Montreal Cognitive Assessment; MoCA), together with selected sociodemographic data. The psychometric properties of SNAP-CZ were tested using factor analysis, reliability and validity tests, and correlations between the questionnaire and sociodemographic and clinical variables. Internal consistency was established with Cronbach's alpha for the overall score, the respective domains, and the individual items. Test-retest reliability was assessed with the intraclass correlation coefficient (ICC). Correlation analyses used Pearson's correlation coefficient. The study included 172 trauma patients (mean age 40.6 ± 12.1 years) who had experienced polytrauma or severe monotrauma. A total of 106 (61.6%) were male, and 140 (81.4%) respondents identified themselves as non-believers. The full-scale Cronbach's alpha was 0.907. Test-retest ICCs for the individual domains ranged from 0.924 to 0.960. Factor analysis resulted in a three-factor solution: psychosocial needs (alpha = 0.788), spiritual needs (alpha = 0.886), and religious needs (alpha = 0.841). Correlation analysis showed that the domain of psychosocial needs significantly correlated only with gender (r = 0.178, p = 0.020).
Males had a statistically significantly lower average value in this domain (mean = 12.5) than females (mean = 13.8). The domain of spiritual needs significantly correlated with gender (r = 0.199, p = 0.009), social status (r = 0.156, p = 0.043), faith (r = -0.250, p = 0.001), anxiety (r = 0.194, p = 0.011), and depression (r = 0.155, p = 0.044). The domain of religious needs significantly correlated with age (r = 0.208, p = 0.007), education (r = -0.161, p = 0.035), faith (r = -0.575, p < 0.0001), and depression (r = 0.179, p = 0.019). Overall, the whole SNAP scale significantly correlated with gender (r = 0.219, p = 0.004), social status (r = 0.175, p = 0.023), faith (r = -0.334, p < 0.0001), anxiety (r = 0.177, p = 0.022), and depression (r = 0.173, p = 0.025). The results of this study corroborate the reliability of SNAP-CZ and support its future use in the nursing care of trauma patients in a secular society. Acknowledgment: The study was supported by grant no. IGA_FZV_2020_003.
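
For reference, the internal-consistency statistic reported above can be computed from an item-score matrix as follows. This is a generic sketch of the standard formula, not the study's code, and the example data are invented:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the sum score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)
```

With perfectly correlated items the statistic reaches its ceiling of 1; the 0.907 full-scale value reported above indicates high but not degenerate internal consistency.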

Keywords: acute nursing care, assessment of spiritual needs, patient, psychometric validation, spirituality

Procedia PDF Downloads 91
6429 Investigating Learners’ Online Learning Experiences in a Blended-Learning School Environment

Authors: Abraham Ampong

Abstract:

BACKGROUND AND SIGNIFICANCE OF THE STUDY: The development of information technology, and its influence, is today inevitable in the world of education. The development of information and communication technology (ICT) has an impact on the use of teaching aids such as computers and the Internet, for example, e-learning. E-learning is a learning process attained through electronic means. But learning is not merely about technology, because learning is essentially about the process of interaction between teacher, student, and study resources. The main purpose of the study is to investigate learners' online learning experiences in a blended learning approach, evaluate how learners' experience of an online learning environment affects the blended learning approach, and examine the future of online learning in a blended learning environment. Blended learning pedagogies have been recognized as a path to improving teachers' instructional strategies for teaching with technology. Blended learning is perceived to have many advantages for teachers and students, including any-time learning, anywhere access, self-paced learning, inquiry-led learning, and collaborative learning; this helps institutions create desired instructional skills, such as critical thinking, in the process of learning. Blended learning as an approach has gained momentum because of its widespread integration into educational organizations. METHODOLOGY: Based on the research objectives and questions, the study will use a qualitative research approach. The rationale for this choice is that participants are able to make sense of their situations and articulate their construction of knowledge and understanding, because qualitative methods focus on how people understand and interpret their experiences. A case study research design is adopted to explore the situation under investigation.
The target population will consist of selected students from selected universities. A simple random sampling technique will be used to select the target population. The data collection instrument will be a set of questions serving as an interview guide, i.e., the questions an interviewer asks when interviewing respondents. Responses from the in-depth interviews will be transcribed and analyzed thematically. Ethical issues to be catered for include the right to privacy, voluntary participation, no harm to participants, and confidentiality. INDICATORS OF THE MAJOR FINDINGS: The study expects to find that online learning encourages timely feedback from teachers and that online learning tools can be used without issues; most communication with the teacher can be done through emails and text messages. Sampled respondents are expected to prefer online learning because there are few or no distractions, learners have access to technology for other activities that support their learning, and enough enhanced learning materials are available online. CONCLUSION: Unlike previous research focusing on the strengths and weaknesses of blended learning, the present study aims at the respective roles of its two modalities, as well as their interdependencies.

Keywords: online learning, blended learning, technologies, teaching methods

Procedia PDF Downloads 77
6428 Cell Adhesion, Morphology and Cytokine Expression of Synoviocytes Can Be Altered on Different Nano-Topographic Oxidized Silicon Nanosponges

Authors: Hung-Chih Hsu, Pey-Jium Chang, Ching-Hsein Chen, Jer-Liang Andrew Yeh

Abstract:

Osteoarthritis (OA) is a common disorder in rehabilitation clinics. Its main characteristics include joint pain, localized tenderness and enlargement, joint effusion, cartilage destruction, loss of adhesion of the perichondrium, and synovial hyperplasia. Synoviocyte inflammation may be a cause of local tenderness and effusion. Inflammatory cytokines may also play an important role in joint pain, cartilage destruction, and decreased adhesion of the perichondrium to the bone. Treatments of osteoarthritis include non-steroidal anti-inflammatory drugs (NSAIDs), glucosamine supplementation, hyaluronic acid, arthroscopic debridement, and total joint replacement. Total joint replacement is commonly used in patients with severe OA who fail to respond to pharmacological treatment. However, some patients who received surgery had serious adverse events, including instability of the implants due to insufficient adhesion to the adjacent bony tissue or synovial inflammation. We developed nano-topographic oxidized silicon nanosponges, treated with various chemicals to produce nanometer-scale thickness differences, in order to study cell-environment interactions in vitro: the alterations of cell adhesion, morphology, and extracellular matrix secretion in the pathogenesis of osteoarthritis. Cytokines and mediators such as growth factors, reactive oxygen species, reactive inflammatory mediators (like nitric oxide and prostaglandin E2), extracellular matrix (ECM) degradation enzymes, and collagen synthesis will also be observed and discussed. Extracellular and intracellular expression of transforming growth factor beta (TGF-β) will be studied by reverse transcription-polymerase chain reaction (RT-PCR). The degradation of the ECM will be observed via the bioactivity ratio of matrix metalloproteinases (MMPs) to tissue inhibitors of metalloproteinases, measured by ELISA (enzyme-linked immunosorbent assay).
When rabbit synoviocytes were cultured on these nano-topographic structures, they demonstrated a better cell adhesion rate, decreased expression of MMP-2, MMP-9, and PGE2, and increased expression of TGF-β compared with culture on planar oxidized silicon. These results show that cell behavior and cytokine production can be influenced by the physical characteristics of different nano-topographic structures. Our study demonstrates the possibility of manipulating cell behavior with these nano-topographic biomaterials.

Keywords: osteoarthritis, synoviocyte, oxidized silicon surfaces, reactive oxygen species

Procedia PDF Downloads 375
6427 Data Envelopment Analysis of Allocative Efficiency among Small-Scale Tuber Crop Farmers in North-Central, Nigeria

Authors: Akindele Ojo, Olanike Ojo, Agatha Oseghale

Abstract:

This empirical study examined the allocative efficiency of smallholder tuber crop farmers in North-Central Nigeria. Data were obtained from a primary source using a multi-stage sampling technique, with structured questionnaires administered to 300 randomly selected tuber crop farmers in the study area. Descriptive statistics, data envelopment analysis (DEA), and a Tobit regression model were used to analyze the data. The DEA classification of the farmers into efficient and inefficient showed that 17.67% of the sampled tuber crop farmers were operating at the frontier and optimum level of production, with a mean allocative efficiency of 1.00. This shows that 82.33% of the farmers in the study area can still improve their efficiency through better utilization of available resources, given the current state of technology. The results of the Tobit model for factors influencing allocative inefficiency showed that as years of farming experience, level of education, cooperative society membership, extension contacts, credit access, and farm size increased, the allocative inefficiency of the farmers decreased. The results on the effects of the significant determinants of allocative inefficiency at various distribution levels revealed that allocative efficiency increased from 22% to 34% as the farmer acquired more farming experience. The allocative efficiency index of farmers who belonged to a cooperative society was 0.23, while their counterparts without cooperative membership had an index value of 0.21. The results also showed that allocative efficiency was 0.43 for farmers with higher formal education and decreased to 0.16 for farmers with non-formal education.
The efficiency of resource allocation also increased with extension services: the allocative efficiency index rose from 0.16 to 0.31 as the frequency of extension contact increased from zero to a maximum of twenty contacts per annum. These results confirm that increases in years of farming experience, level of education, cooperative society membership, extension contacts, credit access, and farm size lead to increased efficiency. The results further show that the age of the farmers contributed 32% to efficiency, reducing to an average of 15% as the farmer grows older. It is therefore recommended that enhanced research, extension delivery, and farm advisory services be put in place so that farmers who did not attain the optimum frontier level can learn how to attain the remaining 74.39% of allocative efficiency through better production practices from the robustly efficient farms. This would go a long way toward increasing the efficiency level of the farmers in the study area.
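
The DEA classification above solves one small linear program per farm. A sketch of the input-oriented, constant-returns (CCR) envelopment model is given below; the data in the usage example are invented, and the study's actual specification (orientation, returns to scale, and the allocative vs. technical decomposition) may well differ:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency score of unit j0.
    X is (n_units, n_inputs), Y is (n_units, n_outputs).
    Decision variables are [theta, lambda_1..lambda_n]: minimise theta
    subject to X'lambda <= theta * x_j0 and Y'lambda >= y_j0, lambda >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # objective: minimise theta
    # input constraints:  sum_j lambda_j x_ij - theta * x_i,j0 <= 0
    A_in = np.hstack([-X[j0].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # output constraints: -sum_j lambda_j y_rj <= -y_r,j0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[j0]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n,
                  method="highs")
    return res.x[0]
```

A farm on the frontier scores 1.00 (the 17.67% above); a score of, say, 0.5 means the same output could in principle be produced with half the inputs.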

Keywords: allocative efficiency, DEA, Tobit regression, tuber crop

Procedia PDF Downloads 274
6426 Impact of Normative Institutional Factors on Sustainability Reporting

Authors: Lina Dagilienė

Abstract:

The article explores the impact of normative institutional factors on the development of sustainability reporting. The vast majority of research in the scientific literature focuses on mandatory institutional factors, i.e., how public institutions and market regulators affect sustainability reporting. Meanwhile, there is a lack of empirical data on the impact of normative institutional factors. The analysis of normative factors in this paper is based on the role of non-governmental organizations (NGOs) and institutional theory. The case of a Global Compact Local Network in a developing country was examined. The research results revealed that, in the absence of regulatory factors, companies were not active with regard to social disclosures; they presented non-systematized social information of a descriptive nature. Only 10% of sustainability reports were prepared using the GRI methodology. None of the reports were assured by third parties.

Keywords: institutional theory, normative, sustainability reporting, Global Compact Local Network

Procedia PDF Downloads 370
6425 Creating Growth and Reducing Inequality in Developing Countries

Authors: Rob Waddle

Abstract:

We study an economy with weak justice and security systems, weak public policy and regulation (or little capacity to implement them), and high barriers to entry into profitable sectors. We look at growth and development opportunities based on derived demand. We show that there is hope for such an economy to grow and to generate a win-win situation for all stakeholders if the derived demand is supplied. We then investigate conditions that could stimulate the supply of this derived demand, and we show that even limited knowledge of public, private, and international expenditures in the economy, together with academic tools, is enough to trigger it. Our model can serve as guidance for donors and NGOs working in developing countries, and it shows the media that the best way to help is to share information about existing and accessible opportunities. It can also provide direction to vocational schools and universities, which could focus more on providing tools to seize existing opportunities.

Keywords: growth, development, monopoly, oligopoly, inequality

Procedia PDF Downloads 325
6424 Effect of Incentives on Knowledge Sharing and Learning: Evidence from the Indian IT Sector

Authors: Asish O. Mathew, Lewlyn L. R. Rodrigues

Abstract:

Organizations in the knowledge economy era have recognized the importance of building knowledge assets for sustainable growth and development. In comparison to other industries, information technology (IT) enterprises hold an edge in developing an effective knowledge management (KM) program, thanks to their in-house technological abilities. This paper studies various knowledge-based incentive programs and their effect on knowledge sharing and learning in the context of the Indian IT sector. A conceptual model is developed linking KM incentives, knowledge sharing, and learning. A questionnaire study was conducted to collect primary data from knowledge workers of IT organizations located in India. The data were analyzed using structural equation modeling with the partial least squares (PLS) method. The results show a strong influence of knowledge management incentives on knowledge sharing and an indirect influence on learning.

Keywords: knowledge management, knowledge management incentives, knowledge sharing, learning

Procedia PDF Downloads 461
6423 Variations in Breast Aesthetic Reconstruction Rates between Asian and Caucasian Patients Post Mastectomy in a UK Tertiary Breast Referral Centre: A Five-Year Institutional Review

Authors: Wisam Ismail, Chole Wright, Elizabeth Baker, Cathy Tait, Mohamed Salhab, Richard Linforth

Abstract:

Background: Post-mastectomy breast reconstruction is an important treatment option for women with breast cancer, with psychosocial, emotional, and quality-of-life benefits. Despite this, Asian patients are one-fifth as likely as Caucasian patients to undergo reconstruction after mastectomy. Aim: This study aimed to assess the difference in breast reconstruction rates between Asian and Caucasian patients treated at Bradford Teaching Hospitals between May 2011 and December 2015. The long-term goal is to equip healthcare professionals to improve breast cancer treatment outcomes by increasing breast reconstruction rates in this sub-population. Methods: All patients undergoing mastectomy were identified using a prospectively collected departmental database. Further data were obtained via retrospective electronic case note review. Bradford's population was about 530,000 at the end of 2015, of which 67.44% belonged to White ethnic groups and 26.83% to Asian ethnic groups (UK population census). The majority of the Asian population speaks Urdu, hence an Urdu-speaking breast care nurse was appointed throughout this period to facilitate communication, support patient decision-making, and deliver a better understanding of the reconstruction options and pathways. Statistical analysis was undertaken using the SAS program. Patients were stratified by age, self-reported ethnicity, axillary surgery, and reconstruction. Relative odds were calculated using univariate and multivariate logistic regression analyses with adjustment for known confounders. Results: 506 patients underwent mastectomy over 5 years: 72 (14%) Asian v. 434 (85%) Caucasian. The overall median age was 64 years (SD 1.1); the Asian median age was 62 (SD 0.9) versus 65 (SD 1.2) for Caucasian patients. The total axillary clearance rate was 30% (42% Asian v. 30% Caucasian).
The overall reconstruction rate was 126 patients (28.9%). Only 6 of 72 Asian patients (8.3%) underwent breast reconstruction, versus 121 of 434 Caucasian patients (28%) (p < 0.04); odds ratio 0.68 (95% confidence interval 0.57-0.79). Conclusions: There is a significant difference in post-mastectomy reconstruction rates between Asian and Caucasian patients. This difference is likely to be multi-factorial. Higher rates of axillary clearance in Asian patients might suggest later disease presentation and/or higher rates of subsequent adjuvant therapy, both of which can impact the suitability of breast reconstruction. Strategies aimed at reducing racial disparities in breast reconstruction should include symptom awareness to enable earlier presentation, and facilitated communication to ensure informed decision-making.
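
For illustration, an odds ratio and Wald confidence interval for a 2x2 reconstruction-by-ethnicity table can be computed as below. This is a generic sketch with made-up counts in the usage example, not a re-derivation of the adjusted 0.68 figure reported above (which came from logistic regression with confounder adjustment):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for the 2x2 table [[a, b], [c, d]] (event / no event by
    group), with a Wald 95% CI on the log-odds scale:
    SE(log OR) = sqrt(1/a + 1/b + 1/c + 1/d)."""
    odds_ratio = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - z * se)
    upper = math.exp(math.log(odds_ratio) + z * se)
    return odds_ratio, lower, upper
```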

Keywords: aesthetic, Asian, breast, reconstruction

Procedia PDF Downloads 267
6422 Immediate Geometric Solution of Irregular Quadrilaterals: A Digital Tool Applied to Topography

Authors: Miguel Mariano Rivera Galvan

Abstract:

The purpose of this research was to create a digital tool with which users can obtain an immediate and accurate solution of the angular characteristics of an irregular quadrilateral. The development of this project arose because of the frequent absence of a polygon's geometric information in land ownership accreditation documents. The researcher created a mathematical model using a linear-approximation iterative method, employing various disciplines and techniques including trigonometry, geometry, algebra, and topography. This mathematical model uses as input data the surface area of the quadrilateral, as well as the lengths of its sides, to obtain its interior angles, making it possible to represent the polygon in a coordinate system. The results are as accurate and reliable as the user requires, offering the possibility of using this tool as a support to develop future engineering and architecture projects quickly and reliably.
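The paper's own model uses an iterative linear-approximation method; as a rough illustration of the same input/output relationship (sides plus area in, interior angles out), one can instead search for the diagonal that splits the quadrilateral into two triangles whose Heron areas sum to the target, then recover the angles with the law of cosines. All names below are hypothetical, and the sketch assumes the target area lies on the lower branch of the area-versus-diagonal curve.

```python
import math

def _heron(a, b, c):
    # Triangle area from its three side lengths (0 for a degenerate triple).
    s = 0.5 * (a + b + c)
    v = s * (s - a) * (s - b) * (s - c)
    return math.sqrt(v) if v > 0 else 0.0

def quad_interior_angles(a, b, c, d, area):
    """Interior angles (degrees) at A, B, C, D of quadrilateral ABCD with
    sides AB=a, BC=b, CD=c, DA=d and the given area. Diagonal AC=p splits
    it into triangles (a, b, p) and (c, d, p); search for the p whose two
    triangle areas sum to the target, then apply the law of cosines."""
    lo = max(abs(a - b), abs(c - d)) + 1e-12
    hi = min(a + b, c + d) - 1e-12
    f = lambda p: _heron(a, b, p) + _heron(c, d, p) - area

    # Ternary search for the diagonal that maximises the enclosed area.
    l, r = lo, hi
    for _ in range(200):
        m1, m2 = l + (r - l) / 3, r - (r - l) / 3
        if f(m1) < f(m2):
            l = m1
        else:
            r = m2
    p_peak = 0.5 * (l + r)
    if f(p_peak) < 0:
        raise ValueError("no quadrilateral with these sides encloses this area")

    # Bisect on [lo, p_peak]: f(lo) <= 0 <= f(p_peak) (lower-branch assumption).
    l, r = lo, p_peak
    for _ in range(100):
        m = 0.5 * (l + r)
        if f(m) < 0:
            l = m
        else:
            r = m
    p = 0.5 * (l + r)

    def ang(x, y, z):
        # Angle (degrees) opposite side z in a triangle with adjacent sides x, y.
        cos_v = (x * x + y * y - z * z) / (2 * x * y)
        return math.degrees(math.acos(max(-1.0, min(1.0, cos_v))))

    angle_a = ang(a, p, b) + ang(d, p, c)  # at A, split by the diagonal
    angle_b = ang(a, b, p)
    angle_c = ang(b, p, a) + ang(c, p, d)  # at C, split by the diagonal
    angle_d = ang(c, d, p)
    return angle_a, angle_b, angle_c, angle_d
```

For a unit rhombus with area 0.5, this recovers the 30/150-degree angle pair; the interior angles always sum to 360 degrees, which is a useful self-check.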

Keywords: digital tool, geometry, mathematical model, quadrilateral, solution

Procedia PDF Downloads 135
6421 Tracing a Timber Breakthrough: A Qualitative Study of the Introduction of Cross-Laminated-Timber to the Student Housing Market in Norway

Authors: Marius Nygaard, Ona Flindall

Abstract:

The Palisaden student housing project was completed in August 2013 and was, with its eight floors, Norway’s tallest timber building at the time of completion. It was the first time cross-laminated-timber (CLT) was utilized at this scale in Norway. The project was the result of a concerted effort by a newly formed management company to establish CLT as a sustainable and financially competitive alternative to conventional steel and concrete systems. The introduction of CLT onto the student housing market proved so successful that by 2017 more than 4000 individual student residences will have been built using the same model of development and construction. The aim of this paper is to identify the key factors that enabled this breakthrough for CLT. It is based on an in-depth study of a series of housing projects and the role of the management company that both instigated and enabled this shift of CLT from the margin to the mainstream. Specifically, it will look at how a new building system was integrated into a marketing strategy that identified a market potential within the existing structure of the construction industry and within the economic restrictions inherent to student housing in Norway. It will show how a key player established a project model that changed both the patterns of cooperation and the information basis for decisions. Based on qualitative semi-structured interviews with managers, contractors and the interdisciplinary teams of consultants (architects, structural engineers, acoustical experts etc.), this paper will trace the introduction, expansion and evolution of CLT-based building systems in the student housing market. It will show how the project management firm’s position in the value chain enabled them to function both as a liaison between contractor and client, and between contractor and producer, a position that allowed them to improve the flow of information.
This ensured that CLT was handled on equal terms to other structural solutions in the project specifications, enabling realistic pricing and risk evaluation. Secondly, this paper will describe and discuss how the project management firm established and interacted with a growing network of contractors, architects and engineers to pool expertise and broaden the knowledge base across Norway’s regional markets. Finally, it will examine the role of the client, the building typology, and the industrial and technological factors in achieving this breakthrough for CLT in the construction industry. This paper gives an in-depth view of the progression of a single case rather than a broad description of the state of the art of large-scale timber building in Norway. However, this type of study may offer insights that are important to the understanding not only of specific markets but also of how new technologies should be introduced in big and well-established industries.

Keywords: cross-laminated-timber (CLT), industry breakthrough, student housing, timber market

Procedia PDF Downloads 208
6420 Thermal Method Production of the Hydroxyapatite from Bone By-Products from Meat Industry

Authors: Agnieszka Sobczak-Kupiec, Dagmara Malina, Klaudia Pluta, Wioletta Florkiewicz, Bozena Tyliszczak

Abstract:

Introduction: Demand for phosphorus compounds grows continuously; thus, alternative sources of this element are being sought. One of these sources could be by-products from the meat industry, which contain a prominent quantity of phosphorus compounds. Hydroxyapatite, a natural component of animal and human bones, is the leading material applied in bone surgery and also in stomatology. It is a biocompatible, bioactive and osteoinductive material. Methodology: Hydroxyapatite preparation: The raw material was deproteinized and defatted bone pulp, called bone sludge, which was formed as waste in the deproteinization of bones, in which a protein hydrolysate was the main product. Hydroxyapatite was obtained by calcination in a chamber kiln with electric heating in an air atmosphere in two stages. In the first stage, the material was calcined at 600°C for 3 hours. In the next stage, the unified material was calcined at three different temperatures (750°C, 850°C and 950°C), keeping the material at the maximum temperature for 3.0 hours. Bone sludge: Pork bones coming from the partition of meat were used as the raw material for the production of the protein hydrolysate. After disintegration, a mixture of bone pulp and water with a small amount of lactic acid was boiled at a temperature of 130-135°C and under a pressure of 4 bar. After 3-3.5 hours, the boiled-out bones were separated on a sieve, and the solution of protein-fat hydrolysate passed into a decanter, where bone sludge was separated from it. Results of the study: The phase composition was analyzed by the roentgenographic method. Hydroxyapatite was the only crystalline phase observed in all the calcination products. XRD investigation showed that the crystallization degree of hydroxyapatite increased with calcination temperature.
Conclusion: The research showed that the phosphorus content is around 12%, whereas the calcium content amounts to 28% on average. The research on bone-waste calcination at temperatures of 750-950°C confirmed that thermal utilization of deproteinized bone waste is possible. X-ray investigations confirmed that hydroxyapatite is the main component of the calcination products and that its crystallization degree increased with calcination temperature. Contents of calcium and phosphorus distinctly increased with calcination temperature, whereas the content of phosphorus soluble in acids decreased. This could be connected with the higher crystallization degree of material obtained at higher temperatures and its stable structure. Acknowledgements: "The authors would like to thank The National Centre for Research and Development (Grant no: LIDER//037/481/L-5/13/NCBR/2014) for providing financial support to this project".

Keywords: bone by-products, bone sludge, calcination, hydroxyapatite

Procedia PDF Downloads 275
6419 Elaboration and Characterization of CdxZn1-XS Thin Films Deposed by Chemical Bath Deposition

Authors: Zellagui Rahima, Chaumont Denis, Boughelout Abderrahman, Adnane Mohamed

Abstract:

Thin films of CdxZn1-xS were deposited by chemical bath deposition on glass substrates for photovoltaic applications. The CdZnS thin films were synthesized by chemical bath deposition (CBD) with different deposition protocols to optimize deposition parameters such as temperature, deposition time, ion concentrations and pH. The surface morphology, optical properties and chemical composition of the CdZnS thin films were investigated by SEM, EDAX and spectrophotometry. The transmittance is 80% in the 300 nm - 1000 nm region; the grain size in the films is between 50 nm and 100 nm as measured from SEM images, and we also note that the particle shape changes with concentration. These results favor the application of these films in solar cells; chemical analysis with EDAX confirms the presence of the Cd, Zn and S elements and allows the stoichiometry to be investigated.

Keywords: thin film, solar cells, transmission, CdZnS

Procedia PDF Downloads 251
6418 The Effect of Feature Selection on Pattern Classification

Authors: Chih-Fong Tsai, Ya-Han Hu

Abstract:

The aim of feature selection (or dimensionality reduction) is to filter out unrepresentative features (or variables) so that the classifier performs better than one trained without feature selection. Although there are many well-known feature selection algorithms, and different classifiers based on different selection results may perform differently, very few studies have examined the effect of performing different feature selection algorithms on the classification performances of different classifiers over different types of datasets. In this paper, two widely used algorithms, the genetic algorithm (GA) and information gain (IG), are used to perform feature selection. In addition, three well-known classifiers are constructed: the CART decision tree (DT), the multi-layer perceptron (MLP) neural network, and the support vector machine (SVM). Based on 14 different types of datasets, the experimental results show that in most cases IG is a better feature selection algorithm than GA. In addition, the combinations of IG with DT and IG with SVM perform best and second best for small and large scale datasets.
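The information-gain criterion used for feature selection here can be sketched in a few lines for discrete features: IG(Y; X) = H(Y) - H(Y|X), i.e. the reduction in label entropy once the feature value is known. This is a generic illustration, not the authors' code; the function names are ours.

```python
import math
from collections import Counter, defaultdict

def entropy(labels):
    """Shannon entropy H(Y) in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(Y; X) = H(Y) - H(Y|X) for one discrete feature column."""
    groups = defaultdict(list)
    for x, y in zip(feature, labels):
        groups[x].append(y)
    n = len(labels)
    h_cond = sum(len(ys) / n * entropy(ys) for ys in groups.values())
    return entropy(labels) - h_cond

def rank_features(columns, labels):
    """Feature indices sorted by decreasing information gain."""
    return sorted(range(len(columns)),
                  key=lambda i: information_gain(columns[i], labels),
                  reverse=True)
```

A feature that determines the label exactly scores the full label entropy; a constant feature scores zero, so ranking by this quantity and keeping the top-k columns is the filter step the paper applies before training DT, MLP or SVM classifiers.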

Keywords: data mining, feature selection, pattern classification, dimensionality reduction

Procedia PDF Downloads 656
6417 Analysis of Reflection of Elastic Waves in Three Dimensional Model Comprised with Viscoelastic Anisotropic Medium

Authors: Amares Chattopadhyay, Akanksha Srivastava

Abstract:

A unified approach is presented to study the reflection of a plane wave in a three-dimensional model comprised of a triclinic viscoelastic medium. The phase velocities of the reflected qP, qSV and qSH waves have been calculated for the concerned medium by using the eigenvalue approach. The generalized method has been implemented to compute the complex form of the amplitude ratios. Further, we discuss the nature of the reflection coefficients of the qP, qSV and qSH waves. The amplitude ratios are found to be strongly influenced by the viscoelastic parameter, polar angle and azimuthal angle. The article particularly focuses on the effect of viscoelasticity associated with highly anisotropic media, which yields notable information about the reflection coefficients of the qP, qSV, and qSH waves. The outcomes may be further useful for better exploration of all types of hydrocarbon reservoirs and for advances in the field of reflection seismology.

Keywords: amplitude ratios, three dimensional, triclinic, viscoelastic

Procedia PDF Downloads 217
6416 Evotrader: Bitcoin Trading Using Evolutionary Algorithms on Technical Analysis and Social Sentiment Data

Authors: Martin Pellon Consunji

Abstract:

Due to the rise in popularity of Bitcoin and other crypto assets as a store of wealth and speculative investment, there is an ever-growing demand for automated trading tools, such as bots, in order to gain an advantage over the market. Traditionally, trading in the stock market was done by professionals with years of training who understood patterns and exploited market opportunities in order to gain a profit. Nowadays, however, a larger portion of market participants are at minimum aided by market-data processing bots, which can generally generate more stable signals than the average human trader. The rise in trading bot usage can be attributed to the inherent advantages that bots have over humans in terms of processing large amounts of data, lack of emotions such as fear or greed, and predicting market prices using past data and artificial intelligence; hence, a growing number of approaches have been brought forward to tackle this task. However, the general limitation of these approaches comes down to the fact that limited historical data does not always determine the future, and that many market participants are still human, emotion-driven traders. Moreover, developing markets such as the cryptocurrency space have even less historical data to interpret than most other well-established markets. Because of this, some human traders have gone back to tried-and-tested traditional technical analysis tools for exploiting market patterns and simplifying the broader spectrum of data involved in making market predictions. This paper proposes a method which uses neuro-evolution techniques on both sentiment data and the more traditionally human-consumed technical analysis data in order to gain a more accurate forecast of future market behavior and account for the way both automated bots and human traders affect the market prices of Bitcoin and other cryptocurrencies.
This study’s approach uses evolutionary algorithms to automatically develop increasingly improved populations of bots which, by using the latest inflows of market analysis and sentiment data, evolve to efficiently predict future market price movements. The effectiveness of the approach is validated by testing the system in a simulated historical trading scenario, in a real Bitcoin market live trading scenario, and by testing its robustness in other cryptocurrency and stock market scenarios. Experimental results during a 30-day period show that this method outperformed the buy-and-hold strategy by over 260% in terms of net profits, even when taking into consideration standard trading fees.
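The evolve-evaluate-select loop described above can be sketched generically. This toy version evolves a parameter vector against an arbitrary fitness function (here a synthetic one, not real market or sentiment data); the population size, mutation scheme and fitness are illustrative choices, not the paper's actual configuration.

```python
import random

def evolve(fitness, dim, pop_size=30, generations=60, sigma=0.3, seed=42):
    """Elitist evolutionary loop: keep the best half, refill with mutated copies."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)       # evaluate and rank
        parents = pop[:pop_size // 2]             # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            child = list(rng.choice(parents))     # copy a surviving parent
            child[rng.randrange(dim)] += rng.gauss(0, sigma)  # mutate one gene
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: negative squared distance to a hidden 'ideal' strategy vector.
target = [0.5, -0.2, 0.1]
fit = lambda w: -sum((wi - ti) ** 2 for wi, ti in zip(w, target))
best = evolve(fit, dim=3)
```

In the paper's setting the fitness would instead be simulated trading profit over a window of market and sentiment signals, and the genome would parameterise a bot's decision rule.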

Keywords: neuro-evolution, Bitcoin, trading bots, artificial neural networks, technical analysis, evolutionary algorithms

Procedia PDF Downloads 108
6415 Uneven Development: Structural Changes and Income Outcomes across States in Malaysia

Authors: Siti Aiysyah Tumin

Abstract:

This paper looks at the nature of structural changes—the transition of employment from agriculture to manufacturing and then to different types of services—in different states in Malaysia and links it to income outcomes for households and workers. Specifically, this paper investigates the conditional association between the concentration of different economic activities and income outcomes (household incomes and employee wages) over almost four decades. Using publicly available state-level employment and income data, we found that a significant wage premium was associated with “modern” services (finance, real estate, professional, information and communication), which are urban-based services sectors that employ a larger proportion of skilled and educated workers. However, employment in manufacturing and other services subsectors was significantly associated with lower income dispersion and inequality, alluding to their importance in welfare improvements.

Keywords: employment, labor market, structural change, wage

Procedia PDF Downloads 158
6414 Assessment of Rainfall Erosivity, Comparison among Methods: Case of Kakheti, Georgia

Authors: Mariam Tsitsagi, Ana Berdzenishvili

Abstract:

Rainfall intensity change is one of the main indicators of climate change. It has a great influence on agriculture as one of the main factors causing soil erosion. Splash and sheet erosion are among the most prevalent and harmful types for agriculture: invisible to the eye at first, the process gradually develops into stream-cutting erosion. Our study provides an assessment of rainfall erosivity potential with the use of modern research methods in the Kakheti region. The region is the major provider of wheat and wine in the country. Kakheti is located in the eastern part of Georgia and is characterized by quite a variety of natural conditions. The climate is dry subtropical. Assessing the exact rate of rainfall erosivity requires several years of rainfall data recorded at short intervals. Unfortunately, of the 250 meteorological stations active during the Soviet period, only 55 remain active now, 5 of them in the Kakheti region. Rainfall intensity data for this region exist since 1936, and rainfall erosivity potential has been assessed in some older papers, but since 1990 there are no data on this factor, which in turn is a necessary parameter for determining rainfall erosivity potential. On the other hand, researchers and local communities suppose that rainfall intensity has been changing and that the number of hail days has been increasing. It is therefore very important to find a method that allows us to determine rainfall erosivity potential as accurately as possible in the Kakheti region. The study period was divided into three sections: 1936-1963, 1963-1990 and 1990-2015. Rainfall erosivity potential was determined from the scientific literature and old meteorological stations' data for the first two periods. It is known that in eastern Georgia, at the boundary between the steppe and forest zones, rainfall erosivity in 1963-1990 was 20-75% higher than in 1936-1963.
For the third period (1990-2015), no rainfall intensity data are available. A variety of studies discuss alternative ways of calculating rainfall erosivity potential when data are lacking, e.g. based on daily rainfall data, average annual rainfall data and the elevation of the area. It should be noted that these methods give totally different results under different climatic conditions, and sometimes huge errors in some cases. Three of the most common methods were selected for our research. Each of them was tested on the first two sections of the study period. Based on the outcomes, the method most suitable for the regional climatic conditions was selected, after which we determined the rainfall erosivity potential for the third section of the study period using the most successful method. Outcome data such as attribute tables and graphs were linked to the database of Kakheti, and appropriate thematic maps were created. The results allowed us to analyze rainfall erosivity potential changes from 1936 to the present and to project future trends. We have successfully implemented a method which can also be used for other regions of Georgia.
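One widely used proxy for rainfall erosivity when intensity records are missing (as for the 1990-2015 section) is the Modified Fournier Index, computed from monthly precipitation totals. We cannot say whether it is among the three methods the authors tested, so treat this as a generic illustration.

```python
def modified_fournier_index(monthly_precip_mm):
    """MFI = sum(p_i^2) / P, where p_i are the twelve monthly totals (mm)
    and P is the annual total. A larger MFI indicates more concentrated,
    and hence more erosive, rainfall."""
    if len(monthly_precip_mm) != 12:
        raise ValueError("expected twelve monthly totals")
    annual = sum(monthly_precip_mm)
    if annual == 0:
        return 0.0
    return sum(p * p for p in monthly_precip_mm) / annual
```

For perfectly even rainfall the index equals P/12; concentrating the same annual total into fewer months raises it, which is why the index can discriminate erosive regimes using only monthly station data.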

Keywords: erosivity potential, Georgia, GIS, Kakheti, rainfall

Procedia PDF Downloads 211
6413 The Relationship of Television Viewers with Brand Awareness and Brand Loyalty: A Case Study of Bangkok, Thailand

Authors: Natnicha Hasoontree

Abstract:

The purpose of this research was to study the relationship of television viewing with brand awareness and brand loyalty from the perspective of customers in Bangkok. A probability random sample of 482 television viewers was utilized. A five-point Likert-scale questionnaire was designed to collect the data, and short in-depth interviews were also used to obtain their opinions. The findings revealed that the majority of respondents reported a positive relationship between television viewing time and brand awareness and brand loyalty. The more they watched the advertisement of a particular brand, the more positively the information was perceived, thereby increasing brand loyalty. Finally, the findings from the in-depth interviews with a small group of television producers revealed that they are convinced that advertising exposure has a positive impact on brand awareness and brand loyalty.

Keywords: brand awareness, brand loyalty, television viewers, advertisement

Procedia PDF Downloads 297
6412 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties

Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier

Abstract:

The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA approaches that use catalogues to develop area or smoothed-seismicity sources are limited by the data available to constrain future earthquake activity rates. The integration of faults in PSHA can at least partially address long-term deformation. However, careful treatment of fault sources is required, particularly in low strain rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied; for low strain rate regions, where such data are scarce, this is especially challenging. Integrating faults in PSHA requires conversion of the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled using a truncated approach, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, with a rate defined by the rate in the earthquake catalogue, whereas magnitudes higher than the threshold are located on the fault, with a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes stronger than the selected threshold may potentially occur in the background and not only on the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, can rupture during a single fault-to-fault rupture.
It is then essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur randomly in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology to calculate earthquake rates in a fault system in which the slip-rate budget of each fault is converted into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model, to analyse the impact on the seismic hazard and, through sensitivity studies, to better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected in an area of moderate to high seismicity (southeast of France), where the fault is assumed to have a low strain rate.
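The simplest slip-rate-to-rate conversion mentioned above, a characteristic-earthquake end member rather than the SHERIFS algorithm itself, balances the fault's annual moment accumulation against the moment released by one event of magnitude M, using the standard Hanks-Kanamori relation M0 = 10^(1.5 M + 9.1) N·m. The fault dimensions below are illustrative, not taken from the paper's case study.

```python
def characteristic_event_rate(shear_modulus_pa, fault_area_m2,
                              slip_rate_m_per_yr, magnitude):
    """Annual rate of magnitude-`magnitude` events that exactly consumes the
    fault's moment budget: (mu * A * s) / M0(magnitude)."""
    moment_rate = shear_modulus_pa * fault_area_m2 * slip_rate_m_per_yr  # N*m/yr
    m0 = 10 ** (1.5 * magnitude + 9.1)  # Hanks-Kanamori seismic moment, N*m
    return moment_rate / m0

# Illustrative fault: 20 km x 10 km plane, 1 mm/yr slip rate, mu = 30 GPa.
rate = characteristic_event_rate(3.0e10, 20e3 * 10e3, 1e-3, 7.0)
print(f"Mw 7.0 rate: {rate:.2e} per year (recurrence ~{1 / rate:.0f} yr)")
```

SHERIFS generalises this budget idea by distributing each fault section's moment rate over many single-fault and fault-to-fault ruptures instead of a single characteristic magnitude.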

Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA

Procedia PDF Downloads 44
6411 Relationship between Different Heart Rate Control Levels and Risk of Heart Failure Rehospitalization in Patients with Persistent Atrial Fibrillation: A Retrospective Cohort Study

Authors: Yongrong Liu, Xin Tang

Abstract:

Background: Persistent atrial fibrillation is a common arrhythmia closely related to heart failure. Heart rate control is an essential strategy for treating persistent atrial fibrillation. Still, the understanding of the relationship between different heart rate control levels and the risk of heart failure rehospitalization is limited. Objective: The objective of the study is to determine the relationship between different levels of heart rate control in patients with persistent atrial fibrillation and the risk of readmission for heart failure. Methods: We conducted a retrospective dual-centre cohort study, collecting data from patients with persistent atrial fibrillation who received outpatient treatment at two tertiary hospitals in central and western China from March 2019 to March 2020. The collected data included age, gender, body mass index (BMI), medical history, and hospitalization frequency due to heart failure. Patients were divided into three groups based on their heart rate control levels: Group I with a resting heart rate of less than 80 beats per minute, Group II with a resting heart rate between 80 and 100 beats per minute, and Group III with a resting heart rate greater than 100 beats per minute. The readmission rates due to heart failure within one year after discharge were statistically analyzed using propensity score matching in a 1:1 ratio. Differences in readmission rates among the different groups were compared using one-way ANOVA. The impact of varying levels of heart rate control on the risk of readmission for heart failure was assessed using the Cox proportional hazards model. Binary logistic regression analysis was employed to control for potential confounding factors. Results: We enrolled a total of 1136 patients with persistent atrial fibrillation. The results of the one-way ANOVA showed that there were differences in readmission rates among groups exposed to different levels of heart rate control. 
The readmission rates due to heart failure for each group were as follows: Group I (n=432): 31 (7.17%); Group II (n=387): 43 (11.11%); Group III (n=317): 90 (28.50%) (F=54.3, P<0.001). After performing 1:1 propensity score matching for the different groups, 223 pairs were obtained. Analysis using the Cox proportional hazards model showed that, compared to Group I, the risk of readmission for Group II was 1.372 (95% CI: 1.125-1.682, P<0.001) and for Group III was 2.053 (95% CI: 1.006-5.437, P<0.001). Furthermore, binary logistic regression analysis, including variables such as digoxin use, hypertension, smoking, coronary heart disease, and chronic obstructive pulmonary disease as independent variables, revealed that coronary heart disease and COPD also had a significant impact on readmission due to heart failure (p<0.001). Conclusion: The heart rate control level of patients with persistent atrial fibrillation is positively correlated with the risk of heart failure rehospitalization. Reasonable heart rate control may significantly reduce the risk of heart failure rehospitalization.
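The 1:1 propensity score matching step can be sketched as a greedy nearest-neighbour pass over precomputed propensity scores. In practice the scores would come from a logistic regression on the confounders (age, sex, BMI, medical history); the caliper value and the names below are illustrative, not the study's.

```python
def greedy_match(treated_scores, control_scores, caliper=0.05):
    """Pair each treated unit with its nearest unused control within the
    caliper; returns (treated_index, control_index) pairs."""
    used = set()
    pairs = []
    for t, ts in enumerate(treated_scores):
        best, best_d = None, caliper
        for c, cs in enumerate(control_scores):
            if c in used:
                continue
            d = abs(cs - ts)
            if d <= best_d:
                best, best_d = c, d
        if best is not None:
            pairs.append((t, best))
            used.add(best)
    return pairs
```

Each matched pair then contributes to the outcome comparison (here, readmission within one year), with unmatched units discarded, which is how the study arrives at its 223 pairs.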

Keywords: heart rate control levels, heart failure rehospitalization, persistent atrial fibrillation, retrospective cohort study

Procedia PDF Downloads 59
6410 Assessing the Values and Destruction Degree of Archaeological Sites in Taiwan

Authors: Yung-Chung Chuang

Abstract:

The current situation and accumulated development of archaeological sites strongly affect their preservation value. This research set three archaeological sites in Taiwan as study areas. Assessment of the degree of destruction of cultural layers due to land use change and geomorphological change was conducted with aerial photographs (1976-1978; 2016-2017) and digital aerial survey technology on 2D and 3D geographic information system platforms. The results showed that the archaeological sites were all seriously affected due to the high land use intensity between 1976 and 2017. Geomorphological changes caused by human cultivation and engineering construction were the main causes of site destruction, especially on private lands. Therefore, urban planning methods for land acquisition or land regulation are necessary.

Keywords: archaeological sites, accumulated development, destruction of cultural layers, geomorphological changes

Procedia PDF Downloads 198
6409 Challenges of Blockchain Applications in the Supply Chain Industry: A Regulatory Perspective

Authors: Pardis Moslemzadeh Tehrani

Abstract:

Due to the emergence of blockchain technology and the benefits of cryptocurrencies, intelligent or smart contracts are gaining traction. Artificial intelligence (AI) is transforming our lives, and it is being embraced by a wide range of sectors. Smart contracts, which are at the heart of blockchains, incorporate AI characteristics. Such contracts are referred to as "smart" contracts because of the underlying technology that allows contracting parties to agree on terms expressed in computer code that defines machine-readable instructions for computers to follow in specific situations. Execution happens automatically if the conditions are met. Initially utilised for financial transactions, blockchain applications have since expanded to include the financial, insurance, and medical sectors, as well as supply networks. Raw material acquisition by suppliers, design and fabrication by manufacturers, delivery of final products to consumers, and even post-sales logistics assistance are all part of supply chains. Many issues linked with managing supply chains from the planning and coordination stages onwards can, due to their complexity, be addressed in a smart contract on a blockchain. Manufacturing delays and limited visibility into third-party product components have raised concerns about the integrity and accountability of supply chains for food and pharmaceutical items. Other concerns include regulatory compliance in multiple jurisdictions and transportation circumstances (for instance, many products must be kept in temperature-controlled environments to ensure their effectiveness). Products are handled by several providers before reaching customers in modern economic systems. Information is sent between suppliers, shippers, distributors, and retailers at every stage of the production and distribution process. Information travels more effectively when intermediaries are eliminated from the equation.
The usage of blockchain technology could be a viable solution to these coordination issues. In blockchains, smart contracts allow for the rapid transmission of production data, logistical data, inventory levels, and sales data. This research investigates the legal and technical advantages and disadvantages of AI-blockchain technology in the supply chain business. It aims to uncover the applicable legal problems and barriers to the use of AI-blockchain technology to supply chains, particularly in the food industry. It also discusses the essential legal and technological issues and impediments to supply chain implementation for stakeholders, as well as methods for overcoming them before releasing the technology to clients. Because there has been little research done on this topic, it is difficult for industrial stakeholders to grasp how blockchain technology could be used in their respective operations. As a result, the focus of this research will be on building advanced and complex contractual terms in supply chain smart contracts on blockchains to cover all unforeseen supply chain challenges.

Keywords: blockchain, supply chain, IoT, smart contract

Procedia PDF Downloads 108
6408 Assesment of Financial Performance: An Empirical Study of Crude Oil and Natural Gas Companies in India

Authors: Palash Bandyopadhyay

Abstract:

Background and significance of the study: Crude oil and natural gas are of crucial importance to India owing to their rising demand, driven by changing lifestyles over time. Because India utilizes its oil production capacity poorly, imports have risen steadily, straining the country's foreign exchange reserves and weighing on the Indian economy. The financial performance of Indian crude oil and natural gas companies has deteriorated year after year because of underutilized production capacity, growing demand, lifestyle changes, a rising import bill, and outflows of foreign currency. Against this background, the current study measures the financial performance of Indian crude oil and natural gas companies in the post-liberalization period, assessing liquidity management, solvency, efficiency, financial stability, and profitability of the companies under study.

Methodology: This research uses yearly ratio data collected from the Centre for Monitoring Indian Economy (CMIE) Prowess database for the period 1993-94 to 2012-13 (20 observations), covering liquidity, solvency, efficiency, profitability, and financial stability indicators of all major crude oil and natural gas companies in India. In the course of analysis, descriptive statistics, correlation statistics, and linear regression tests are employed.

Major findings: Descriptive statistics indicate that the liquidity position is satisfactory for three of the selected companies (Oil and Natural Gas Corporation (ONGC) Videsh Limited, Oil India Limited, and Selan Exploration and Transportation Limited), while the solvency position is satisfactory for only one (ONGC Videsh Limited). Efficiency analysis shows that ONGC Videsh Limited manages inventory, receivables, and payables effectively, but its overall liquidity management is weak. The profitability position is satisfactory for all companies except Tata Petrodyne Limited, although profitability management is unsatisfactory for all companies under study. Financial stability analysis shows that all companies depend heavily on debt capital, which carries financial risk. Correlation and regression results illustrate that profitability is associated, positively in some cases and negatively in others, with the liquidity, solvency, efficiency, and financial stability indicators.

Concluding statement: Liquidity and profitability management of crude oil and natural gas companies in India should be improved by controlling unnecessary imports, despite the heavy domestic demand for crude oil and natural gas, and by proper utilization of domestic oil reserves. At the same time, the Indian government must address rupee depreciation and interest rates.
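The methodology described above, regressing a profitability indicator on liquidity, solvency, and efficiency ratios over 20 annual observations, can be sketched as follows. This is a minimal illustration with synthetic data; the variable names, coefficient values, and mixed-sign associations are assumptions for demonstration, not figures from the study.

```python
import numpy as np

# Synthetic stand-in for 20 annual observations (1993-94 to 2012-13)
# of company financial ratios; none of these numbers come from CMIE Prowess.
rng = np.random.default_rng(0)
n = 20
current_ratio  = rng.uniform(0.8, 2.5, n)   # liquidity indicator
debt_equity    = rng.uniform(0.5, 3.0, n)   # solvency indicator
asset_turnover = rng.uniform(0.4, 1.6, n)   # efficiency indicator

# Hypothetical profitability (return on assets) with both positive and
# negative associations, mirroring the abstract's mixed-sign finding.
roa = (0.04 * current_ratio - 0.02 * debt_equity + 0.05 * asset_turnover
       + rng.normal(0, 0.01, n))

# Ordinary least squares via the normal equations (numpy.linalg.lstsq).
X = np.column_stack([np.ones(n), current_ratio, debt_equity, asset_turnover])
beta, *_ = np.linalg.lstsq(X, roa, rcond=None)
print(dict(zip(["intercept", "liquidity", "solvency", "efficiency"],
               np.round(beta, 3))))
```

The sign of each fitted coefficient indicates whether profitability moves with or against that indicator, which is the kind of positive/negative association the abstract reports.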

Keywords: financial performance, crude oil and natural gas companies, India, linear regression

Procedia PDF Downloads 310
6407 Evaluation of Parameters of Subject Models and Their Mutual Effects

Authors: A. G. Kovalenko, Y. N. Amirgaliyev, A. U. Kalizhanova, L. S. Balgabayeva, A. H. Kozbakova, Z. S. Aitkulov

Abstract:

It is known that statistical information on the operation of a compound multisite system is often far from describing the actual state of the system and does not allow drawing conclusions about the correctness of its operation. For example, world practice in operating water supply and water disposal systems shows that total metered volumes at consumers and at suppliers differ by 40-60%. This is due to measurement inaccuracy as well as inefficient operation of the corresponding systems. Analysis is even more difficult for widely distributed systems in which subjects that make decisions autonomously interact economically through production, purchase and sale, resale, and consumption. This work analyzes mathematical models of sellers, consumers, and arbitragers, and the models of their interaction in a dispersed single-product market under perfect competition. On the basis of these models, methods are given that allow estimating the operating options of each subject and of the system as a whole.
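The interaction of sellers and consumers in a single-product market under perfect competition can be illustrated with a toy clearing-price computation. This sketch is not the authors' model; the linear supply and demand curves and all parameter values are assumptions chosen for illustration.

```python
# Toy single-product market: aggregate supply q_s = a * p rises with price,
# aggregate demand q_d = b - c * p falls with price. Under perfect
# competition the market clears at the price where q_s == q_d.
def clearing_price(a: float, b: float, c: float) -> float:
    # a * p = b - c * p  =>  p = b / (a + c)
    return b / (a + c)

p = clearing_price(a=2.0, b=10.0, c=3.0)
q_supplied = 2.0 * p
q_demanded = 10.0 - 3.0 * p
print(p, q_supplied, q_demanded)
```

At the clearing price, the quantity sellers offer equals the quantity consumers take, which is the equilibrium condition such subject models must jointly satisfy.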

Keywords: dispersed systems, models, hydraulic network, algorithms

Procedia PDF Downloads 276
6406 Reinforcement Learning the Born Rule from Photon Detection

Authors: Rodrigo S. Piera, Jailson Sales Araújo, Gabriela B. Lemos, Matthew B. Weiss, John B. DeBrota, Gabriel H. Aguilar, Jacques L. Pienaar

Abstract:

The Born rule was historically viewed as an independent axiom of quantum mechanics until Gleason derived it in 1957 by assuming the Hilbert space structure of quantum measurements [1]. In subsequent decades there have been diverse proposals to derive the Born rule starting from even more basic assumptions [2]. In this work, we demonstrate that a simple reinforcement-learning algorithm, having no pre-programmed assumptions about quantum theory, will nevertheless converge to a behaviour pattern that accords with the Born rule, when tasked with predicting the output of a quantum optical implementation of a symmetric informationally-complete measurement (SIC). Our findings support a hypothesis due to QBism (the subjective Bayesian approach to quantum theory), which states that the Born rule can be thought of as a normative rule for making decisions in a quantum world [3].
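The core idea, an agent with no quantum assumptions converging to Born-rule outcome frequencies, can be sketched with a toy learner. This is not the authors' reinforcement-learning algorithm or their SIC measurement; it is a minimal two-outcome stand-in in which simulated photon detections follow the Born rule and the agent learns the frequency by incremental updates.

```python
import math
import random

# State |psi> = cos(theta)|0> + sin(theta)|1>; the Born rule gives the
# probability of detecting outcome 0 as |<0|psi>|^2 = cos^2(theta).
theta = math.pi / 6
p0 = math.cos(theta) ** 2

random.seed(42)
estimate = 0.5  # agent's initial guess, encoding no quantum prior
for step in range(1, 20001):
    # Simulated photon detection sampled according to the Born rule.
    outcome = 1.0 if random.random() < p0 else 0.0
    # Robbins-Monro style running-average update toward observed outcomes.
    estimate += (outcome - estimate) / step

print(estimate, p0)
```

After many detections the agent's estimate tracks the Born-rule probability, a behaviour pattern "that accords with the Born rule" in the abstract's sense, though the paper's agent faces the richer four-outcome SIC setting.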

Keywords: quantum Bayesianism, quantum theory, quantum information, quantum measurement

Procedia PDF Downloads 89