Search results for: aggregate variable profitability
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2966

806 Self-Organizing Maps for Credit Card Fraud Detection

Authors: Chun-Yi Peng, Wei-Hsuan Cheng, Shyh-Kuang Ueng

Abstract:

This study focuses on the application of self-organizing maps (SOM) technology in analyzing credit card transaction data, aiming to enhance the accuracy and efficiency of fraud detection. The SOM, as an artificial neural network, is particularly suited for pattern recognition and data classification, making it highly effective for the complex and variable nature of credit card transaction data. By analyzing transaction characteristics with SOM, the research identifies abnormal transaction patterns that could indicate potentially fraudulent activities. Moreover, this study has developed a specialized visualization tool to intuitively present the relationships between SOM analysis outcomes and transaction data, aiding financial institution personnel in quickly identifying and responding to potential fraud, thereby reducing financial losses. Additionally, the research explores the integration of SOM technology with composite intelligent system technologies (including finite state machines, fuzzy logic, and decision trees) to further improve fraud detection accuracy. This multimodal approach provides a comprehensive perspective for identifying and understanding various types of fraud within credit card transactions. In summary, by integrating SOM technology with visualization tools and composite intelligent system technologies, this research offers a more effective method of fraud detection for the financial industry, not only enhancing detection accuracy but also deepening the overall understanding of fraudulent activities.
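The SOM training and anomaly-scoring loop described above can be sketched in pure Python as follows; the 5x5 grid, the decay schedules, and the two made-up transaction features (scaled amount and hour of day) are illustrative assumptions, not the authors' configuration:

```python
import math
import random

def train_som(data, rows=5, cols=5, epochs=20, lr0=0.5, seed=0):
    """Fit a small self-organizing map to a list of feature vectors."""
    rng = random.Random(seed)
    # Seed each grid node with a copy of a random training vector, so all
    # prototypes start inside the data cloud.
    nodes = {(r, c): list(rng.choice(data))
             for r in range(rows) for c in range(cols)}
    radius0 = max(rows, cols) / 2.0
    total = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in data:
            # Best-matching unit (BMU): the node whose prototype is nearest to x.
            bmu = min(nodes, key=lambda k: sum((w - v) ** 2
                                               for w, v in zip(nodes[k], x)))
            frac = 1.0 - t / total
            lr, radius = lr0 * frac, max(radius0 * frac, 1.0)
            for (r, c), w in nodes.items():
                d2 = (r - bmu[0]) ** 2 + (c - bmu[1]) ** 2
                if d2 <= radius ** 2:  # pull the BMU's grid neighbourhood toward x
                    h = math.exp(-d2 / (2 * radius ** 2))
                    nodes[(r, c)] = [wi + lr * h * (xi - wi)
                                     for wi, xi in zip(w, x)]
            t += 1
    return nodes

def quantization_error(nodes, x):
    """Distance from x to its best-matching unit; large values look anomalous."""
    return min(math.sqrt(sum((w - v) ** 2 for w, v in zip(proto, x)))
               for proto in nodes.values())

# Hypothetical transactions as [scaled amount, scaled hour-of-day].
rng = random.Random(42)
normal = [[rng.uniform(0.0, 0.2), rng.uniform(0.3, 0.7)] for _ in range(200)]
som = train_som(normal)
# A transaction far from every learned prototype is flagged for review.
print(quantization_error(som, normal[0]), quantization_error(som, [0.95, 0.05]))
```

Because the prototypes are learned from normal transactions only, the quantization error acts as the anomaly score: an outlying transaction sits far from every prototype.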

Keywords: self-organizing map technology, fraud detection, information visualization, data analysis, composite intelligent system technologies, decision support technologies

Procedia PDF Downloads 49
805 Bio-Hub Ecosystems: Investment Risk Analysis Using Monte Carlo Techno-Economic Analysis

Authors: Kimberly Samaha

Abstract:

In order to attract new types of investors into the emerging Bio-Economy, new methodologies to analyze investment risk are needed. The Bio-Hub Ecosystem model was developed to address a critical area of concern within the global energy market regarding the use of biomass as a feedstock for power plants. This study looked at repurposing existing biomass-energy plants into Circular Zero-Waste Bio-Hub Ecosystems. The Bio-Hub model first targets a ‘whole-tree’ approach and then examines the circular economics of co-hosting diverse industries (wood processing, aquaculture, agriculture) in the vicinity of biomass power plant facilities. This study modeled the economics and risk strategies of cradle-to-cradle linkages to incorporate the value-chain effects on capital/operational expenditures and investment risk reductions, using a proprietary techno-economic model that incorporates investment risk scenarios via the Monte Carlo methodology. The study calculated the sequential increases in profitability for each additional co-host on an operating forestry-based biomass energy plant in West Enfield, Maine. Phase I starts with the baseline of forestry biomass to electricity only and was built up in stages to include co-hosts of a greenhouse and a land-based shrimp farm. Phase I incorporates CO2 and heat waste streams from the operating power plant in an analysis of lowering and stabilizing the operating costs of the agriculture and aquaculture co-hosts. The Phase II analysis incorporated a jet-fuel biorefinery and its secondary slip-stream of biochar, which would be developed into two additional bio-products: 1) a soil amendment compost for agriculture and 2) a biochar effluent filter for the aquaculture. The second part of the study applied the Monte Carlo risk methodology to illustrate how co-location derisks investment in an integrated Bio-Hub versus individual investments in stand-alone projects of energy, agriculture or aquaculture.
The analyzed scenarios compared reductions in both Capital and Operating Expenditures, which stabilizes profits and reduces the investment risk associated with projects in energy, agriculture, and aquaculture. The major findings of this techno-economic modeling using the Monte Carlo technique resulted in the masterplan for the first Bio-Hub to be built in West Enfield, Maine. In 2018, the site was designated as an economic opportunity zone as part of a Federal Program, which allows for Capital Gains tax benefits for investments on the site. Bioenergy facilities are currently at a critical juncture where they have an opportunity to be repurposed into efficient, profitable and socially responsible investments, or be idled and scrapped. The Bio-hub Ecosystems techno-economic analysis model is a critical model to expedite new standards for investments in circular zero-waste projects. Profitable projects will expedite adoption and advance the critical transition from the current ‘take-make-dispose’ paradigm inherent in the energy, forestry and food industries to a more sustainable Bio-Economy paradigm that supports local and rural communities.
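The risk-reduction argument above can be illustrated with a toy Monte Carlo comparison; the revenue and operating-cost distributions below are invented placeholders, not the study's proprietary inputs:

```python
import random
import statistics

def simulate_profit(revenue_mu, opex_mu, opex_sd, n=10_000, seed=1):
    """Draw n annual-profit scenarios with normally distributed operating costs."""
    rng = random.Random(seed)
    return [revenue_mu - rng.gauss(opex_mu, opex_sd) for _ in range(n)]

# Illustrative figures (in $M/yr), chosen only to show the mechanism.
standalone = simulate_profit(revenue_mu=30.0, opex_mu=24.0, opex_sd=4.0, seed=1)
# Co-hosting reuses the plant's waste CO2 and heat, which the study argues
# both lowers and stabilises the co-hosts' operating costs.
integrated = simulate_profit(revenue_mu=30.0, opex_mu=21.0, opex_sd=2.0, seed=2)

for name, sims in (("standalone", standalone), ("integrated", integrated)):
    mean, sd = statistics.fmean(sims), statistics.stdev(sims)
    p_loss = sum(p < 0 for p in sims) / len(sims)   # downside (loss) risk
    print(f"{name}: mean={mean:.1f} sd={sd:.1f} P(loss)={p_loss:.3f}")
```

The integrated scenario shows a higher mean profit, a smaller spread, and a lower probability of loss, which is exactly the de-risking pattern the abstract describes.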

Keywords: bio-economy, investment risk, circular design, economic modelling

Procedia PDF Downloads 99
804 Histological Changes of Mice Lungs After Daily Exposure to Different Concentrations of Incense Smoke

Authors: Samar Omar A. Rabah, Sahar Ragab El Hadad, Fatmah Albani

Abstract:

Since the discovery of agarwood (the Incense tree), many studies have reported its characteristic effects and variable benefits, whether for producing Arabian Incense or as a traditional medicine against many diseases. Laboratory experiments were carried out on the effect of inhaling different concentrations of Incense smoke on lung weight and tissue in female mice. This research derives its importance from the fact that Incense is heavily used in Saudi Arabia in the absence of thorough studies of its effects on health. Eighty animals were used in this study, divided into four groups of 20 animals each. Three groups were exposed daily to different concentrations (2, 4 and 6 gm) of Incense smoke for three months, and the fourth group served as the control. At the end of each month, five animals from each group were dissected. The obtained data showed a non-significant increase in animal body and lung weight, attributable to the normal growth of the animals. Light microscopy revealed some changes in the lung tissue, such as focal emphysema, rupture of the alveolar walls, hemorrhage, congestion, edema and a few peri-bronchial lymphoid cells. After continuous exposure to Incense smoke, focal necrosis and degradation were observed in some epithelial cells of the bronchioles. Also, peri-bronchial fibrosis, thickening of the alveolar walls and aggregation of lymphoid cells were demonstrated in some lung sections. According to these manifestations, it can be concluded that exposure to Incense smoke causes harmful pulmonary effects. We therefore recommend that Incense be used only in open places to reduce its harms.

Keywords: incense smoke, lungs, histological changes of lungs, agarwood

Procedia PDF Downloads 489
803 Impact of Displacement Durations and Monetary Costs on the Labour Market within a City Consisting of Four Areas: A Theoretical Approach

Authors: Aboulkacem El Mehdi

Abstract:

We develop a theoretical model at the crossroads of labour and urban economics, used for explaining the mechanism through which the duration of home-workplace trips and their monetary costs impact labour demand and supply in a spatially scattered labour market, and how they are impacted by a change in passenger transport infrastructures and services. The spatial disconnection between home and job opportunities is referred to as the spatial mismatch hypothesis (SMH). Its harmful impact on employment has been the subject of numerous theoretical propositions. However, all the theoretical models proposed so far are patterned around the American context, which is particular in that it is marked by racial discrimination against Blacks in the housing and labour markets. Therefore, it is only natural that most of these models are developed to reproduce a steady state in which agents carry out their economic activities in a mono-centric city where most unskilled jobs are created in the suburbs, far from the Black population dwelling in the city centre, generating high unemployment rates for Blacks, while the White population resides in the suburbs and has a low unemployment rate. Our model does not rely on any racial discrimination and does not aim at reproducing a steady state in which these stylized facts are replicated; it takes the main principle of the SMH, the spatial disconnection between homes and workplaces, as a starting point. One of the innovative aspects of the model consists in dealing with an SMH-related issue at an aggregate level. We link the parameters of the passenger transport system to employment in the whole area of a city. We consider here a city that consists of four areas: two of them are residential areas with unemployed workers, and the other two host firms looking for labour force.
The workers compare the indirect utility of working in each area with the utility of unemployment and choose between submitting an application for the job that generates the highest indirect utility or not submitting. This arbitration takes account of the monetary and time expenditures generated by the trips between the residential areas and the working areas. Each of these expenditures is clearly and explicitly formulated so that the impact of each can be studied separately from the other. The first findings show that unemployed workers living in an area benefiting from good transport infrastructures and services have a better chance of preferring activity to unemployment and are more likely to supply a higher 'quantity' of labour than those who live in an area where the transport infrastructures and services are poorer. We also show that the firms located in the most accessible area receive many more applications and are more likely to hire the workers who provide the highest quantity of labour than the firms located in the less accessible area. Currently, we are working on the matching process between firms and job seekers and on how the equilibrium between labour demand and supply occurs.
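The worker's arbitration described above can be illustrated with a toy computation; all wages, fares, travel times, the value of time, and the reservation utility are hypothetical numbers, not parameters from the model:

```python
# Hypothetical parameters: daily wages in the two employment areas (E1, E2),
# and round-trip fares and travel times from the two residential areas (R1, R2).
WAGE = {"E1": 120.0, "E2": 110.0}
FARE = {("R1", "E1"): 4.0, ("R1", "E2"): 12.0,
        ("R2", "E1"): 14.0, ("R2", "E2"): 5.0}
TIME = {("R1", "E1"): 0.5, ("R1", "E2"): 1.5,
        ("R2", "E1"): 1.8, ("R2", "E2"): 0.6}
VALUE_OF_TIME = 15.0          # how a worker monetises an hour of commuting
RESERVATION_UTILITY = 80.0    # utility of staying unemployed

def indirect_utility(home, job):
    """Wage net of the monetary and time expenditures of the commute,
    each entering the arbitration as a separate, explicit term."""
    return WAGE[job] - FARE[(home, job)] - VALUE_OF_TIME * TIME[(home, job)]

def labour_supply_choice(home):
    """Apply for the job with the highest indirect utility, if it beats unemployment."""
    best = max(WAGE, key=lambda job: indirect_utility(home, job))
    u = indirect_utility(home, best)
    return (best, u) if u > RESERVATION_UTILITY else (None, RESERVATION_UTILITY)

for home in ("R1", "R2"):
    print(home, labour_supply_choice(home))
```

With these numbers, the worker in the well-connected area R1 enjoys a higher net utility from working than the worker in R2, reproducing the accessibility effect the abstract reports.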

Keywords: labour market, passenger transport infrastructure, spatial mismatch hypothesis, urban economics

Procedia PDF Downloads 288
802 Theoretical Study on the Visible-Light-Induced Radical Coupling Reactions Mediated by Charge Transfer Complex

Authors: Lishuang Ma

Abstract:

The charge transfer (CT) complex, also known as the electron donor-acceptor (EDA) complex, has received increasing attention in the synthetic chemistry community because CT complexes can absorb visible light through intermolecular charge-transfer excited states, enabling various catalyst-free photochemical transformations under mild visible-light conditions. However, a number of fundamental questions remain ambiguous, such as the origin of the visible-light absorption, the photochemical and photophysical properties of the CT complex, and the detailed mechanism of the radical coupling pathways mediated by the CT complex. These are critical factors for the target-specific design and synthesis of new types of CT complexes. To this end, theoretical investigations were performed in our group to answer these questions based on multiconfigurational perturbation theory. The photo-induced fluoroalkylation reactions mediated by CT complexes, which are formed by the association of an acceptor perfluoroalkyl halide RF−X (X = Br, I) and a suitable donor molecule such as the β-naphtholate anion, were chosen as a paradigm example in this work. First, spectrum simulations were carried out by both CASPT2//CASSCF/PCM and TD-DFT/PCM methods. The computational results showed that the broad spectra in the visible range (360-550 nm) of the CT complexes originate from the 1(σπ*) excitation, accompanied by intermolecular electron transfer, which was also found to be closely related to the aggregate states of the donor and acceptor. Moreover, charge-translocation analysis showed that a CT complex with larger charge transfer in the ground state exhibits smaller charge transfer in the 1(σπ*) excited state, causing a relative blue shift. Then, the excited-state potential energy surface (PES) was calculated at the CASPT2//CASSCF(12,10)/PCM level of theory to explore the photophysical properties of the CT complexes.
The photo-induced C-X (X = I, Br) bond cleavage was found to occur in the triplet state, which is accessible through a fast intersystem crossing (ISC) process controlled by the strong spin-orbit coupling resulting from the heavy iodine and bromine atoms. Importantly, this rapid fragmentation can compete with and suppress the backward electron transfer (BET) event, facilitating the subsequent photochemical transformations. Finally, the radical coupling pathways were also inspected, showing that radical chain propagation proceeds easily, with a small energy barrier of no more than 3.0 kcal/mol, which is the key factor promoting the efficiency of the photochemical reactions induced by CT complexes. In conclusion, theoretical investigations were performed to explore the photophysical and photochemical properties of CT complexes, as well as the mechanism of the radical coupling reactions they mediate. The computational results and findings in this work can provide critical insights into the mechanism-based design of new types of EDA complexes.

Keywords: charge transfer complex, electron transfer, multiconfigurational perturbation theory, radical coupling

Procedia PDF Downloads 139
801 Analysis of the Influence of Frequency Variation on the Characterization of Nano-Particles in the Pretreatment of Bioethanol from Oil Palm Stem (Elaeis guineensis Jacq) Using the Sonication Method with Alkaline Peroxide Activators for the Improvement of Cellulose

Authors: Luristya Nur Mahfut, Nada Mawarda Rilek, Ameiga Cautsarina Putri, Mujaroh Khotimah

Abstract:

The use of bioethanol from lignocellulosic material has begun to be developed. In Indonesia, the most abundant lignocellulosic material is the oil palm stem, which contains 32.22% cellulose; Indonesia produces approximately 300,375,000 tons of palm stems each year. To produce bioethanol from lignocellulosic material, the first process is pretreatment. Until now, however, lignocellulosic pretreatment methods have been ineffective. This is related to particle sizes and pretreatment methods that are less than optimal, leading to an insufficient breakdown of lignin; consequently, the increase in cellulose content was not significant, resulting in a low bioethanol yield. To solve this problem, this research applied an ultrasonication pretreatment method in order to produce higher pulp with nano-sized particles and thereby obtain a higher ethanol yield from palm stems. The research used a randomized block design (RAK) composed of one factor, the ultrasonic wave frequency, with three variants (30 kHz, 40 kHz and 50 kHz), with the NaOH concentration as the constant variable. The analysis conducted in this research covered the influence of the wave frequency on the increase in cellulose content and the change of particle size at the nanometre scale during pretreatment, using the PSA (Particle Size Analyzer) and Chesson methods. The results, data and best treatment were analysed using ANOVA and the LSD (BNT) test with a 5% confidence interval. The best treatment was obtained with combination X3 (sonication frequency of 50 kHz), giving lignin (19.6%), cellulose (59.49%) and hemicellulose (11.8%) with a particle size of 385.2 nm (18.8%).

Keywords: bioethanol, pretreatment, palm stem, cellulose

Procedia PDF Downloads 324
800 Reactivity of Clay Minerals in Hydrocarbon Reservoir Rocks and the Effect of Zeolites on the Operating and Production Costs Borne by the Global Oil Industry

Authors: Carlos Alberto Ríos Reyes

Abstract:

Traditionally, clays have been considered one of the main problems for the flow of fluids in hydrocarbon reservoirs. However, the significance of zeolites formed from the reactivity of clays, and their effect not only on the costs of operations carried out by the oil industry worldwide but also on production, is not known. The present work focused on understanding the interaction between clay minerals and the brines and alkaline solutions used in the oil industry. For this, a comparative study was conducted in which the reaction of sedimentary rocks under laboratory conditions was examined. Original and treated rocks were examined by X-ray powder diffraction (XRPD) and scanning electron microscopy (SEM) to determine the changes these rocks underwent upon contact with fluids of variable chemical composition. As a result, zeolite Linde Type A (LTA), sodalite (SOD), and cancrinite (CAN) can form after the experimental work, coinciding with the dissolution of kaolinite and smectite. The results reveal that the oil industry should invest effort and focus its attention on understanding, at the pore scale, the problems that could arise as a consequence of clay-fluid interaction in hydrocarbon reservoir rocks, due to the presence of clays in their porous systems, as well as the formation of zeolites, which are better hydrocarbon absorbents. These issues could be generating losses in world production. We conclude that there is a critical situation that may be occurring in the stimulation of hydrocarbon reservoirs, where real solutions are necessary not only for the formulation of more efficient and effective injection fluids but also to contribute to the improvement of production and avoid considerable losses in operating costs.

Keywords: clay minerals, zeolites, rock-fluid interaction, experimental work, reactivity

Procedia PDF Downloads 77
799 An Estimating Equation for Survival Data with Possibly Time-Varying Covariates under Semiparametric Transformation Models

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

An estimating equation technique is an alternative to the widely used maximum likelihood methods, enabling us to ease some of the complexity arising from the complex characteristics of time-varying covariates. When both time-varying covariates and left-truncation are considered in the model, maximum likelihood estimation procedures become much more burdensome and complex. To ease this complexity, this study proposes modified estimating equations, which have received considerable attention from many researchers, under a semiparametric transformation model. The purpose of this article was to develop the modified estimating equation under a flexible and general class of semiparametric transformation models for left-truncated and right-censored survival data with time-varying covariates. Besides the commonly applied Cox proportional hazards model, such problems can also be analyzed with a general class of semiparametric transformation models to estimate the effect of treatment, given possibly time-varying covariates, on the survival time. The consistency and asymptotic properties of the estimators were derived via the expectation-maximization (EM) algorithm. The finite-sample performance of the estimators for the proposed model was illustrated via simulation studies and the Stanford heart transplant real-data example. To sum up, the bias for covariates was adjusted by estimating the density function of the truncation time variable. Then the effect of possibly time-varying covariates was evaluated in some special semiparametric transformation models.

Keywords: EM algorithm, estimating equation, semiparametric transformation models, time-to-event outcomes, time-varying covariates

Procedia PDF Downloads 149
798 Evolution of Predator-prey Body-size Ratio: Spatial Dimensions of Foraging Space

Authors: Xin Chen

Abstract:

It has been widely observed that marine food webs have significantly larger predator–prey body-size ratios compared with their terrestrial counterparts. A number of hypotheses have been proposed to account for such difference on the basis of primary productivity, trophic structure, biophysics, bioenergetics, habitat features, energy efficiency, etc. In this study, an alternative explanation is suggested based on the difference in the spatial dimensions of foraging arenas: terrestrial animals primarily forage in two dimensional arenas, while marine animals mostly forage in three dimensional arenas. Using 2-dimensional and 3-dimensional random walk simulations, it is shown that marine predators with 3-dimensional foraging would normally have a greater foraging efficiency than terrestrial predators with 2-dimensional foraging. Marine prey with 3-dimensional dispersion usually has greater swarms or aggregations than terrestrial prey with 2-dimensional dispersion, which again favours a greater predator foraging efficiency in marine animals. As an analytical tool, a Lotka-Volterra based adaptive dynamical model is developed with the predator-prey ratio embedded as an adaptive variable. The model predicts that high predator foraging efficiency and high prey conversion rate will dynamically lead to the evolution of a greater predator-prey ratio. Therefore, marine food webs with 3-dimensional foraging space, which generally have higher predator foraging efficiency, will evolve a greater predator-prey ratio than terrestrial food webs.
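A minimal version of the 2-dimensional versus 3-dimensional foraging comparison can be sketched as follows; the lattice walk, step counts, and seeds are illustrative assumptions rather than the authors' simulation design:

```python
import random

def distinct_sites(dim, steps, seed):
    """Simple lattice random walk; returns how many distinct sites it visits."""
    rng = random.Random(seed)
    pos = [0] * dim
    seen = {tuple(pos)}
    for _ in range(steps):
        axis = rng.randrange(dim)          # pick a coordinate axis...
        pos[axis] += rng.choice((-1, 1))   # ...and step one unit along it
        seen.add(tuple(pos))
    return len(seen)

# A 3-D forager revisits old ground less often than a 2-D one, so it scans
# more fresh territory (candidate prey locations) for the same effort.
steps, trials = 20_000, 10
cover2 = sum(distinct_sites(2, steps, s) for s in range(trials)) / trials
cover3 = sum(distinct_sites(3, steps, s) for s in range(trials)) / trials
print(f"distinct sites visited: 2-D {cover2:.0f} vs 3-D {cover3:.0f}")
```

The larger fresh-territory coverage of the 3-D walker is one simple proxy for the higher foraging efficiency that the abstract attributes to 3-dimensional foraging arenas.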

Keywords: predator-prey, body size, Lotka-Volterra, random walk, foraging efficiency

Procedia PDF Downloads 72
797 Audit Committee Characteristics and Earnings Quality of Listed Food and Beverages Firms in Nigeria

Authors: Hussaini Bala

Abstract:

There are different opinions in the literature on the relationship between audit committee characteristics and earnings management, and this mix of opinions makes the direction of their relationship ambiguous. This study investigated the relationship between audit committee characteristics and earnings management of listed food and beverages firms in Nigeria. The study covered a period of six years, from 2007 to 2012. Data for the study were extracted from the firms’ annual reports and accounts. After running the OLS regression, a robustness test was conducted to validate the statistical inferences. The dependent variable was generated using a two-step regression in order to determine the discretionary accruals of the sampled firms. Multiple regression was employed to analyse the data using the random-effects model. The results revealed a significant association between audit committee characteristics and earnings management of the firms. While audit committee size and the committee’s financial expertise showed an inverse relationship with earnings management, the committee’s independence and the frequency of meetings are positively and significantly related to earnings management. In line with the findings, the study recommended, among others, that listed food and beverages firms in Nigeria should strictly comply with the provisions of the Companies and Allied Matters Act (CAMA) and the SEC Code of Corporate Governance on issues regarding audit committees. Regulators such as the SEC should increase the minimum number of audit committee members with financial expertise and also take a statutory position on the maximum number of audit committee meetings, which should not be greater than four in a year, as the SEC Code of Corporate Governance is silent on this.
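The two-step procedure for generating the dependent variable, regressing total accruals on their normal determinants and treating the residual as the discretionary accrual, can be sketched as follows; the Jones-model regressors, the coefficients, and the simulated firm-years are illustrative assumptions, not the study's data:

```python
import random

def ols(X, y):
    """Least squares via the normal equations (X'X)b = X'y, solved by
    Gaussian elimination (no pivoting; X'X is positive definite here)."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            A[j] = [a - f * ai for a, ai in zip(A[j], A[i])]
            b[j] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, k))) / A[i][i]
    return beta

# Step 1: regress total accruals on hypothetical Jones-model determinants
# (intercept, change in revenue, gross PPE), here on simulated firm-years.
rng = random.Random(7)
X, y = [], []
for _ in range(60):
    d_rev, ppe = rng.uniform(-0.2, 0.4), rng.uniform(0.1, 0.8)
    managed = rng.choice((0.0, 0.05))          # an earnings-management shock
    X.append([1.0, d_rev, ppe])
    y.append(0.02 + 0.3 * d_rev - 0.1 * ppe + managed + rng.gauss(0, 0.01))

beta = ols(X, y)
# Step 2: the residual (actual minus fitted accruals) is the firm's
# discretionary accrual, the usual proxy for earnings management.
resid = [yi - sum(bj * xj for bj, xj in zip(beta, x)) for x, yi in zip(X, y)]
print([round(v, 3) for v in beta])
```

Firms with the injected management shock end up with systematically higher residuals, which is what the discretionary-accrual proxy is designed to pick up.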

Keywords: audit committee, earnings management, listed food and beverages firms, size, leverage, Nigeria

Procedia PDF Downloads 262
796 Self-Organizing Maps for Credit Card Fraud Detection and Visualization

Authors: Peng Chun-Yi, Chen Wei-Hsuan, Ueng Shyh-Kuang

Abstract:

This study focuses on the application of self-organizing maps (SOM) technology in analyzing credit card transaction data, aiming to enhance the accuracy and efficiency of fraud detection. The SOM, as an artificial neural network, is particularly suited for pattern recognition and data classification, making it highly effective for the complex and variable nature of credit card transaction data. By analyzing transaction characteristics with SOM, the research identifies abnormal transaction patterns that could indicate potentially fraudulent activities. Moreover, this study has developed a specialized visualization tool to intuitively present the relationships between SOM analysis outcomes and transaction data, aiding financial institution personnel in quickly identifying and responding to potential fraud, thereby reducing financial losses. Additionally, the research explores the integration of SOM technology with composite intelligent system technologies (including finite state machines, fuzzy logic, and decision trees) to further improve fraud detection accuracy. This multimodal approach provides a comprehensive perspective for identifying and understanding various types of fraud within credit card transactions. In summary, by integrating SOM technology with visualization tools and composite intelligent system technologies, this research offers a more effective method of fraud detection for the financial industry, not only enhancing detection accuracy but also deepening the overall understanding of fraudulent activities.

Keywords: self-organizing map technology, fraud detection, information visualization, data analysis, composite intelligent system technologies, decision support technologies

Procedia PDF Downloads 54
795 Agroforestry Systems and Practices and Their Adoption in the Kilombero Cluster of SAGCOT, Tanzania

Authors: Lazaro E. Nnko, Japhet J. Kashaigili, Gerald C. Monela, Pantaleo K. T. Munishi

Abstract:

Agroforestry systems and practices are perceived to improve livelihoods and the sustainable management of natural resources. However, their adoption in various regions differs with biophysical conditions and societal characteristics. This study was conducted in Kilombero District to investigate the factors influencing the adoption of different agroforestry systems and practices in agro-ecosystems and farming systems. A household survey, key informant interviews, and focus group discussions were used for data collection in three villages. Descriptive statistics and multinomial logistic regression in SPSS were applied for analysis. Results show that home garden practices dominated in Igima and Ngajengwa villages (63.3% and 66.7%, respectively), while mixed intercropping dominated in Mbingu village (56.67%). Agrosilvopasture systems were dominant in Igima and Ngajengwa villages (56.7% and 66.7%, respectively), while in Mbingu village the dominant system was agrosilviculture (66.7%). The multinomial logistic regression results show that different explanatory variables were statistically significant predictors of the adoption of agroforestry systems and practices. Residence type and sex were the dominant factors influencing the adoption of agroforestry systems; duration of stay in the village, availability of extension education, residence type, and sex were the dominant factors influencing the adoption of agroforestry practices. The most important and statistically significant factors among these were residence type and sex. The study concludes that agroforestry will be more successful if local priorities, including the socio-economic needs and characteristics of the society, are considered in designing systems and practices. The socio-economic needs of the community should be addressed in the process of expanding the adoption of agroforestry systems and practices.
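The multinomial-logistic analysis used above can be sketched in miniature; the synthetic households, the binary coding of residence type and sex, and the effect sizes are all invented for illustration and bear no relation to the study's SPSS results:

```python
import math
import random

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def fit_mnl(X, y, classes=3, lr=0.5, epochs=300):
    """Multinomial logit fitted by batch gradient descent on cross-entropy."""
    k = len(X[0])
    W = [[0.0] * k for _ in range(classes)]
    for _ in range(epochs):
        grad = [[0.0] * k for _ in range(classes)]
        for x, yi in zip(X, y):
            p = softmax([sum(w * v for w, v in zip(W[c], x))
                         for c in range(classes)])
            for c in range(classes):
                err = p[c] - (1.0 if c == yi else 0.0)
                for j in range(k):
                    grad[c][j] += err * x[j]
        for c in range(classes):
            for j in range(k):
                W[c][j] -= lr * grad[c][j] / len(X)
    return W

def predict(W, x):
    scores = [sum(w * v for w, v in zip(row, x)) for row in W]
    return scores.index(max(scores))

# Synthetic households: x = [intercept, residence type (0/1), sex (0/1)];
# y = adopted option, e.g. 0 home garden, 1 mixed intercropping, 2 other.
rng = random.Random(3)
X, y = [], []
for _ in range(300):
    res, sex = rng.randint(0, 1), rng.randint(0, 1)
    # Residence type and sex tilt the adoption odds (invented effect sizes).
    probs = softmax([1.5 * res, 1.5 * sex, 1.0 - res - sex])
    X.append([1.0, float(res), float(sex)])
    y.append(rng.choices([0, 1, 2], weights=probs)[0])

W = fit_mnl(X, y)
acc = sum(predict(W, x) == yi for x, yi in zip(X, y)) / len(y)
print(f"in-sample accuracy: {acc:.2f}")
```

The fitted coefficient signs on the residence and sex columns indicate which option each factor tilts adoption toward, which is the kind of inference the study draws from its significant predictors.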

Keywords: agroforestry adoption, agroforestry systems, agroforestry practices, agroforestry, Kilombero

Procedia PDF Downloads 110
794 Engineering Photodynamic with Radioactive Therapeutic Systems for Sustainable Molecular Polarity: Autopoiesis Systems

Authors: Moustafa Osman Mohammed

Abstract:

This paper introduces Luhmann’s autopoietic social systems, starting with the original concept of autopoiesis by biologists and scientists, including the modification of general systems based on socialized medicine. A specific type of autopoietic system is explained in the three existing groups of ecological phenomena: the interaction, social and medical sciences. This hypothetical model, nevertheless, has a nonlinear interaction with its natural environment, an ‘interactional cycle’ for the exchange of photon energy with molecules without any changes in topology. The external forces in the system's environment might be concomitant with the influence of natural fluctuations (e.g. radioactive radiation, electromagnetic waves). The cantilever sensor provides insights for a future chip processor for the prevention of social metabolic systems. Thus, circuits with resonant electric and optical properties are prototyped on board as an intra-chip/inter-chip transmission, producing electromagnetic energy of approximately 1.7 mA at 3.3 V, to serve detection in locomotion with the least significant power losses. Nowadays, therapeutic systems assimilate materials from embryonic stem cells to aggregate the multiple functions of the vessels' natural de-cellular structure for replenishment. Meanwhile, the interior actuators deploy the base-pair complementarity of nucleotides for the symmetric arrangement, in particular bacterial nanonetworks of the sequence cycle, creating double-stranded DNA strings. The DNA strands must be sequenced, assembled, and decoded in order to reconstruct the original source reliably.
The design of the exterior actuators has the ability to sense variations in the corresponding patterns of beat-to-beat heart rate variability (HRV) for the spatial autocorrelation of molecular communication, which consists of human electromagnetic, piezoelectric, electrostatic and electrothermal energy, in order to monitor and transfer the dynamic changes of all the cantilevers simultaneously in a real-time workspace with high precision. A prototype-enabled dynamic energy sensor has been investigated in the laboratory for the inclusion of nanoscale devices in the architecture, with fuzzy logic control for the detection of thermal and electrostatic changes and optoelectronic devices to interpret the uncertainty associated with signal interference. Ultimately, the controversial molecular frictional properties are adjusted to each other and form their unique spatial structure modules, providing the environment's mutual contribution to the investigation of mass temperature changes due to the pathogenic archival architecture of clusters.

Keywords: autopoiesis, nanoparticles, quantum photonics, portable energy, photonic structure, photodynamic therapeutic system

Procedia PDF Downloads 117
793 Surgical Outcomes of Lung Cancer Surgery in Tasmania

Authors: Ayeshmanthe Rathnayake, Ashutosh Hardikar

Abstract:

Introduction: Lung cancer is the most common cause of cancer death in Australia, with more than 13,000 cases per year. Until now, there has been a major deficiency of national comprehensive thoracic surgery data. The thoracic workload for surgeons, as well as the caseload per unit, is highly variable, with some centres performing fewer than 15 cases per annum, thus raising concerns about optimal care at low-volume sites. This is an attempt to review the outcomes of lung cancer surgery in Tasmania. Method: The objective of this study is to determine the surgical outcomes of lung cancer surgery at the Royal Hobart Hospital (RHH), with the primary outcome of surgical mortality. Four hundred fifty-one cases were analysed retrospectively from 2010 to May 2022. Results: A total of 451 patients underwent thoracic surgery with a primary diagnosis of lung cancer. The primary outcome of 30-day mortality was <0.5%. The mean age was 65.3 years, with male predominance and a 4.2% prevalence of Indigenous Australians. The mean length of stay was 7.5 days. The surgical approach was either VATS (50.3%) or thoracotomy (49.7%), with a trend towards the former in recent years: the proportion of VATS in complex resections increased from 18.2% to 51% (p<0.05) since 2019. A corresponding reduction in the conversion rate to open surgery was observed (18% vs. 5.5%), and there were no deaths within this subgroup. Lung resections were divided into lobectomy (55.4%), wedge resection (36.8%), segmentectomy (2.9%) and pneumonectomy (4.9%). The RHH demonstrates good surgical outcomes for lung cancer and provides a sustainable service for Tasmania. Conclusion: This retrospective study reports the surgical outcomes of lung cancer surgery at the Royal Hobart Hospital, thereby providing insight into the surgical management of lung cancer in the state thus far. The state has been slow to catch up on the minimally invasive program, but the overall results have been comparable to most peers.

Keywords: lung cancer, thoracic surgery, lung resection, surgical outcomes

Procedia PDF Downloads 88
792 Opportunity Cost of Producing Sugarcane, Sweet Orange and Soybean in Sri Lankan Context: An Economic Analysis

Authors: Tharsinithevy Kirupananthan

Abstract:

This study analyzed the decision on growing three different crops suited to the dry zone of Sri Lanka using the opportunity cost concept in economics. The variable costs of production of sugarcane, sweet orange, and soybean were LKR 112,418.76, LKR 13,463.00, and LKR 10,928.08 per acre in the dry zone of Sri Lanka. The yields of sugarcane, sweet orange, and soybean were 49.33 tons, 25,595 fruits, and 1,032 kg per acre, respectively. The market prices of sugarcane, sweet orange, and soybean were LKR 4,200 per ton, LKR 14.66 per fruit, and LKR 89.69 per kg. The market values, or total incomes, of sugarcane, sweet orange, and soybean were LKR 207,194.40, LKR 283,090.74, and LKR 92,560.08. The accounting profits of sugarcane, sweet orange, and soybean were LKR 94,775.64, LKR 269,627.74, and LKR 81,632 per acre. Therefore, the opportunity cost of sugarcane per acre in terms of accounting profit was LKR 269,627.74 from sweet orange and LKR 81,632 from soybean. The highest opportunity cost per acre in terms of accounting profit was found when soybean was produced instead of sweet orange. The opportunity cost compared among the crops in terms of market value for sugarcane per acre was LKR 283,090.74 of sweet orange and LKR 92,560.08 of soybean. The highest opportunity cost, both in terms of accounting profit and market value, was found when soybean was grown instead of sweet orange using the resources of one acre of land. The economic profit of sugarcane production in place of sweet orange was LKR -188,315.10 per acre. The highest economic profit, LKR 177,067.66, was found when sweet orange was produced in place of soybean. A positive economic profit was found in all combinations of sweet orange production, without considering the first-harvest duration of the crop.
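The profit comparisons above can be reproduced directly from the reported per-acre figures. The sketch below assumes, as the reported numbers imply, that economic profit is computed as the accounting profit of the chosen crop minus the market value (total income) of the forgone alternative:

```python
# Worked check of the abstract's per-acre figures (all values in LKR).
variable_cost = {"sugarcane": 112_418.76, "sweet_orange": 13_463.00, "soybean": 10_928.08}
income = {"sugarcane": 207_194.40, "sweet_orange": 283_090.74, "soybean": 92_560.08}

# Accounting profit = total income minus variable cost of production.
accounting_profit = {c: income[c] - variable_cost[c] for c in income}

def economic_profit(chosen: str, forgone: str) -> float:
    """Accounting profit of the chosen crop minus the market value
    of the forgone alternative (the definition implied by the
    abstract's reported values)."""
    return accounting_profit[chosen] - income[forgone]

print(round(accounting_profit["sugarcane"], 2))                # 94775.64
print(round(economic_profit("sugarcane", "sweet_orange"), 2))  # -188315.1
print(round(economic_profit("sweet_orange", "soybean"), 2))    # 177067.66
```

Running the check reproduces the reported accounting profit of sugarcane and the two economic-profit figures exactly.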

Keywords: agricultural economics, crop, opportunity cost, Sri Lanka

Procedia PDF Downloads 338
791 Online Teacher Professional Development: An Extension of the Unified Theory of Acceptance and Use of Technology Model

Authors: Lovemore Motsi

Abstract:

The rapid pace of technological innovation, along with a global fascination with the internet, continues to result in a dominating call to integrate internet technologies in institutions of learning. However, the pressing question remains: how can online in-service training for teachers support quality and success in professional development programmes? The aim of this study was to examine an integrated model that extended the Unified Theory of Acceptance and Use of Technology (UTAUT) with additional constructs, including attitude and behavioural intention, adopted from the Theory of Planned Behaviour (TPB), to answer this question. Data were collected from secondary school teachers at 10 selected schools in the Tshwane South district and analysed quantitatively using the Statistical Package for the Social Sciences (SPSS v23.0). The findings are congruent with model testing under conditions of volitional usage behaviour. In this regard, the role of the facilitating-conditions variable is insignificant as a determinant of usage behaviour. The social norm variable also proved to be a weak determinant of behavioural intentions. The findings demonstrate that effort expectancy is the key determinant of online INSET usage. Based on these findings, the variables social influence and facilitating conditions are important factors to address in ensuring the acceptance of online INSET among teachers in selected secondary schools in the Tshwane South district.

Keywords: unified theory of acceptance and use of technology (UTAUT), teacher professional development, secondary schools, online INSET

Procedia PDF Downloads 213
790 A Multi-Objective Reliable Location-Inventory Capacitated Disruption Facility Problem with Penalty Cost Solved with Efficient Metaheuristic Algorithms

Authors: Elham Taghizadeh, Mostafa Abedzadeh, Mostafa Setak

Abstract:

In a logistics network, opened facilities are expected to operate continuously over a long time horizon without any failure; but in real-world problems, facilities may face disruptions. This paper studies a reliable joint inventory-location problem to optimize the costs of facility locations, customers' assignment, and inventory management decisions when facilities face failure risks and stop working. In our model, we assume that when a facility is out of work, its customers may be reassigned to other operational facilities; otherwise, they must endure high penalty costs associated with losing service. To bring the model closer to real-world problems, it is formulated based on the p-median problem, and the facilities are considered to have limited capacities. We define a new binary variable (Z_is) to indicate that customers are not assigned to any facility. Our problem involves a bi-objective model: the first objective minimizes the sum of facility construction costs and expected inventory holding costs, and the second minimizes the maximum expected customer costs under normal and failure scenarios. To solve this model, the NSGA-II and MOSS algorithms were applied to find the Pareto-archive solutions. Response Surface Methodology (RSM) was also applied to optimize the NSGA-II algorithm parameters. We compared the performance of the two algorithms using three metrics, and the results show that NSGA-II is more suitable for our model.
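As a minimal illustration of the Pareto-archive idea behind NSGA-II and MOSS, the sketch below keeps only the non-dominated solutions of a bi-objective minimization; the candidate (cost, max-customer-cost) pairs are illustrative placeholders, not instances from the paper:

```python
# Pareto dominance and archiving for a bi-objective minimization problem.
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_archive(points):
    """Keep only the non-dominated objective vectors (the Pareto archive)."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (total cost, max expected customer cost) pairs.
candidates = [(120, 40), (100, 55), (110, 45), (130, 35), (125, 50)]
print(sorted(pareto_archive(candidates)))
# (125, 50) is dominated by (120, 40); the remaining four trade cost against service.
```

In NSGA-II this non-dominated filter is applied repeatedly (with crowding distance as a tie-breaker) to maintain the archive across generations.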

Keywords: joint inventory-location problem, facility location, NSGAII, MOSS

Procedia PDF Downloads 521
789 Overview Studies of High Strength Self-Consolidating Concrete

Authors: Raya Harkouss, Bilal Hamad

Abstract:

Self-Consolidating Concrete (SCC) is considered a relatively new technology, created as an effective solution to problems associated with low-quality consolidation. A SCC mix is defined as successful if it flows freely and cohesively without the intervention of mechanical compaction. The construction industry shows a high tendency to use SCC in many contemporary projects to benefit from the various advantages offered by this technology. At this point, a main question is raised regarding the effect of the enhanced fluidity of SCC on the structural behavior of high-strength self-consolidating reinforced concrete. A three-phase research program was conducted at the American University of Beirut (AUB) to address this concern. The first two phases consisted of comparative studies conducted on concrete and mortar mixes prepared with a second-generation Sulphonated Naphthalene-based superplasticizer (SNF) or a third-generation Polycarboxylate Ether-based superplasticizer (PCE). The third phase of the research program investigates and compares the structural performance of high-strength reinforced concrete beam specimens prepared with the two different generations of superplasticizers, which formed the only variable between the concrete mixes. The beams were designed to test and exhibit flexure, shear, or bond-splitting failure. The outcomes of the experimental work revealed comparable resistance of beam specimens cast using self-consolidating concrete and conventional vibrated concrete. The dissimilarities in the experimental values between the SCC and the control VC beams were minimal, leading to the conclusion that the high consistency of SCC has little effect on the flexural, shear, and bond strengths of concrete members.

Keywords: self-consolidating concrete (SCC), high-strength concrete, concrete admixtures, mechanical properties of hardened SCC, structural behavior of reinforced concrete beams

Procedia PDF Downloads 248
788 Two-Sided Information Dissemination in Takeovers: Disclosure and Media

Authors: Eda Orhun

Abstract:

Purpose: This paper analyzes a target firm’s decision to voluntarily disclose information during a takeover event and the effect of such disclosures on the outcome of the takeover. Such voluntary disclosures, especially in the form of earnings forecasts made around takeover events, may affect shareholders’ decisions about the target firm’s value and, in turn, the takeover outcome. This study aims to shed light on this question. Design/methodology/approach: The paper seeks to understand, both theoretically and empirically, the role of voluntary disclosures by target firms during a takeover event in the likelihood of takeover success. A game-theoretical model is set up to analyze the voluntary disclosure decision of a target firm to inform the shareholders about its real worth. The empirical implication of the model is tested by employing binary outcome models, where the disclosure variable is obtained by identifying the target firms in the sample that provide positive news by issuing increasing management earnings forecasts. Findings: The model predicts that a voluntary disclosure of positive information by the target decreases the likelihood that the takeover succeeds. The empirical analysis confirms this prediction by showing that positive earnings forecasts by target firms during takeover events increase the probability of takeover failure. Overall, it is shown that information dissemination through voluntary disclosures by target firms is an important factor affecting takeover outcomes. Originality/Value: This study is, to the author's knowledge, the first to examine the impact of voluntary disclosures by the target firm during a takeover event on the likelihood of takeover success. The results contribute to the information economics, corporate finance, and M&A literatures.

Keywords: takeovers, target firm, voluntary disclosures, earnings forecasts, takeover success

Procedia PDF Downloads 313
787 Design and Evaluation of a Corrective Knee Orthosis for Hyperextension

Authors: Valentina Narvaez Gaitan, Paula K. Rodriguez Ramirez, Derian D. Espinosa

Abstract:

Corrective orthoses are of great importance in orthopedic treatment, assisting in improving mobility and stability in order to improve patients' quality of life. The corrective orthosis studied in this article can correct deformities, reduce pain, and improve the ability to perform daily activities. This work describes the design and evaluation of a corrective orthosis for knee hyperextension. The orthosis is capable of generating a progressive and variable alignment of the joint, limiting the range of motion according to medical criteria. The main objective was to design a corrective knee orthosis capable of correcting knee hyperextension progressively, returning the knee to its natural angle, with greater economic affordability and adjustable sizing. The limiting mechanism is based on a goniometer to set the desired angles. The orthosis was made of acrylic to reduce cost and maintenance; neoprene was used to provide comfortable contact, and Velcro was used to adjust the orthosis to various sizes. Simulations of static and fatigue analysis of the mechanism were performed to verify its resistance and durability under normal conditions. A biomechanical gait study was carried out on 10 healthy subjects, first without the orthosis and then with the orthosis limiting their knee extension capacity during a normal gait cycle, to observe the efficiency of the proposed system. In the results obtained, the knee angle curves show that the maximum extension angle matched the angle established by the orthosis, demonstrating the efficiency of the proposed design for different leg sizes.

Keywords: biomechanical study, corrective orthosis, efficiency, goniometer, knee hyperextension

Procedia PDF Downloads 73
786 Devulcanization of Waste Rubber Tyre Utilizing Deep Eutectic Solvents and Ultrasonic Energy

Authors: Ricky Saputra, Rashmi Walvekar, Mohammad Khalid, Kaveh Shahbaz, Suganti Ramarad

Abstract:

This study examines the effect of coupling ultrasonic treatment with eutectic solvents in the devulcanization of waste rubber tyre. Specifically, three different Deep Eutectic Solvents (DESs) were utilized, namely ChCl:urea (1:2), ChCl:ZnCl₂ (1:2), and ZnCl₂:urea (2:7), whose physicochemical properties were analysed and shown to have a permissible water content of less than 3.0 wt%, a degradation temperature below 200ᵒC, and a freezing point below 60ᵒC. The mass ratio of rubber to DES was varied from 1:20 to 1:40, sonicated for 1 hour at 37 kHz, and heated for 5-30 min at 180ᵒC. Energy-dispersive X-ray (EDX) results revealed that the first two DESs gave the highest degrees of sulphur removal, at 74.44% and 76.69% respectively, with an optimum heating time of 15 minutes, beyond which reformation of the crosslink network occurs. This is supported by both FTIR and FESEM results: the disulphide peak reappears at 30 minutes, and the morphological structure changes from smooth with high voidage at 15 minutes to rigid with low voidage at 30 minutes. Furthermore, the TGA curve reveals a similar phenomenon: at 15 minutes, the thermal decomposition temperature is lowest, owing to the decrease in molecular weight resulting from sulphur removal, but it increases again at 30 minutes. The type of bond change was also analysed, and it was found that only the disulphide bonds were cleaved, which indicates partial devulcanization. Overall, the results show that DESs have great potential for use as devulcanizing solvents.

Keywords: crosslink network, devulcanization, eutectic solvents, reformation, ultrasonic

Procedia PDF Downloads 169
785 Associations between Parental Divorce Process Variables and Parent-Child Relationships Quality in Young Adulthood

Authors: Klara Smith-Etxeberria

Abstract:

The main goal of this study was to analyze the predictive ability of some variables associated with the parental divorce process, alongside attachment history with parents, on both mother-child and father-child relationship quality. Our sample consisted of 173 undergraduate and vocational school students from the Autonomous Community of the Basque Country, all of whom belonged to a divorced family. Results showed that adequate maternal strategies during the divorce process (e.g., a stable, continuous, and positive role as a mother) was the variable with the greatest predictive ability on mother-child relationship quality. In addition, a secure attachment history with the mother also predicted positive mother-child relationships. On the other hand, father-child relationship quality was predicted by adequate paternal strategies during the divorce process, such as a stable, continuous, and positive role as a father, along with not badmouthing the mother and promoting good mother-child relationships. Furthermore, the father's negative emotional state due to divorce was positively associated with father-child relationship quality, and both attachment history with the mother and attachment history with the father predicted father-child relationship quality. In conclusion, our data indicate that both paternal and maternal strategies for children's adequate adjustment during the divorce process influence mother-child and father-child relationship quality. However, these results suggest that paternal strategies during the divorce process have greater predictive ability on father-child relationship quality, whereas positive maternal strategies during divorce determine positive mother-child relationships among young adults.

Keywords: father-child relationships quality, mother-child relationships quality, parental divorce process, young adulthood

Procedia PDF Downloads 251
784 Relationship between Gully Development and Characteristics of Drainage Area in Semi-Arid Region, NW Iran

Authors: Ali Reza Vaezi, Ouldouz Bakhshi Rad

Abstract:

Gully erosion is a widespread and often dramatic form of soil erosion caused by water during and immediately after heavy rainfall. It occurs when flowing surface water is channelled across unprotected land and washes away the soil along the drainage lines. The formation of gullies is influenced by various factors, including climate, drainage surface area, slope gradient, vegetation cover, land use, and soil properties. It is a very important problem in semi-arid regions, where soils have lower organic matter and are weakly aggregated. Intensive agriculture and tillage along the slope can accelerate soil erosion by water in the region. There is little information on the development of gully erosion in rainfed agricultural areas. Therefore, this study was carried out to investigate the relationship between gully erosion and the morphometric characteristics of the drainage area, and the effects of soil properties and soil management factors (land use and tillage method) on gully development. A field study was done in a 900 km² agricultural area in Hashtroud township, located in the south of East Azarbaijan province, NW Iran. To this end, 222 gullies created in rainfed lands were found in the area. Some properties of the gullies, consisting of length, width, depth, height difference, cross-sectional area, and volume, were determined. The drainage area of each gully or group of gullies was determined and its boundary drawn. Additionally, the surface area of each drainage, land use, tillage direction, and the soil properties that may affect gully formation were determined. The soil erodibility factor (K) defined in the Universal Soil Loss Equation (USLE) was estimated based on five soil properties (silt and very fine sand, coarse sand, organic matter, soil structure code, and soil permeability). Gully development in each drainage area was quantified using its volume and soil loss. The dependency of gully development on drainage area characteristics (surface area, land use, tillage direction, and soil properties) was determined using correlation matrix analysis. Based on the results, gully length was the most important morphometric characteristic indicating the development of gully erosion in these lands. Gully development in the area was related to slope gradient (r = -0.26), surface area (r = 0.71), the area of rainfed lands (r = 0.23), and the area of rainfed land tilled along the slope (r = 0.24). Nevertheless, its correlation with the area of pasture and with the soil erodibility factor (K) was not significant. Among the characteristics of the drainage area, surface area is the major factor controlling gully volume in the agricultural land. No significant correlation was found between gully erosion and the soil erodibility factor (K) estimated by the USLE. It seems the estimated soil erodibility cannot describe the susceptibility of the study soils to the gully erosion process. In these soils, aggregate stability and soil permeability are the two soil physical properties that affect the actual soil erodibility, and in consequence, these soil properties can control gully erosion in the rainfed lands.
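For reference, the USLE erodibility factor K estimated from the five soil properties listed above is commonly approximated with the Wischmeier-Smith nomograph equation. A hedged sketch follows; the input values are illustrative, not measurements from the study area:

```python
# Wischmeier-Smith nomograph approximation of the USLE K factor
# (US customary units). Inputs: M-parameter components, organic matter (%),
# structure code (1-4), and permeability class (1-6).
def usle_k(silt_vfs_pct, clay_pct, om_pct, structure_code, permeability_class):
    """Approximate soil erodibility K from the five nomograph properties."""
    m = silt_vfs_pct * (100.0 - clay_pct)  # particle-size parameter
    return (2.1e-4 * m**1.14 * (12.0 - om_pct)
            + 3.25 * (structure_code - 2)
            + 2.5 * (permeability_class - 3)) / 100.0

# Illustrative semi-arid loam: 35% silt + very fine sand, 20% clay,
# 2% organic matter, medium structure, slow permeability.
k = usle_k(silt_vfs_pct=35.0, clay_pct=20.0, om_pct=2.0,
           structure_code=3, permeability_class=4)
print(round(k, 3))
```

The low organic matter typical of semi-arid soils raises K through the (12 − OM) term, which is consistent with the abstract's remark on weakly aggregated soils.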

Keywords: agricultural area, gully properties, soil structure, USLE

Procedia PDF Downloads 71
783 Complex Decision Rules in Quality Assurance Processes for Quick Service Restaurant Industry: Human Factors Determining Acceptability

Authors: Brandon Takahashi, Marielle Hanley, Gerry Hanley

Abstract:

The large-scale quick-service restaurant industry is a complex business to manage optimally. With over 40 suppliers providing different ingredients for food preparation and thousands of restaurants serving over 50 unique food offerings across a wide range of regions, the company must implement a quality assurance process. Businesses want to deliver, efficiently, reliably, and successfully, quality food that the public wants to buy at a low cost. They also want to make sure that their food offerings are never unsafe to eat or of poor quality. A good reputation (and a profitable business) developed over the years can be gone in an instant if customers fall ill after eating the food. Poor quality also results in food waste, and the cost of corrective actions is compounded by the reduction in revenue. Product compliance evaluation assesses whether a supplier's ingredients comply with the specifications of several attributes (physical, chemical, organoleptic) that the company tests to ensure that quality, safe-to-eat food is given to the consumer and delivers the same eating experience in all parts of the country. The technical component of the evaluation includes the chemical and physical tests that produce numerical results relating to shelf life, food safety, and organoleptic qualities. The psychological component of the evaluation includes the organoleptic, that is, acting on or involving the use of the sense organs. The rubric for product compliance evaluation has four levels: (1) Ideal: meeting or exceeding all technical (physical and chemical), organoleptic, and psychological specifications. (2) Deviation from ideal but no impact on quality: not meeting or exceeding some technical and organoleptic/psychological specifications, without impact on consumer quality, and meeting all food safety requirements. (3) Acceptable: not meeting or exceeding some technical and organoleptic/psychological specifications, resulting in a reduction of consumer quality, but not enough to lessen demand, and meeting all food safety requirements. (4) Unacceptable: not meeting food safety requirements, independent of meeting technical and organoleptic specifications; or meeting all food safety requirements, but product quality results in consumer rejection of the food offering. Sampling of products and consumer tastings within the distribution network are a second critical element of the quality assurance process and are the data sources for the statistical analyses. Each finding is not independently assessed with the rubric; for example, the chemical data will be used to support any inferences about the sensory profiles of the ingredients. Certain flavor profiles may not be as apparent when mixed with other ingredients, which leads to weighing specifications differentially in the acceptability decision. Quality assurance processes are essential to achieve that balance of quality and profitability by making sure the food is safe and tastes good, while identifying and remediating product quality issues before they hit the stores. Comprehensive quality assurance procedures implement human factors methodologies, and this report provides recommendations for the systemic application of quality assurance processes for quick-service restaurant services. This case study reviews the complex decision rubric and evaluates processes to ensure the right balance of cost, quality, and safety is achieved.
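The four-level rubric reads as a decision rule. A minimal sketch reducing it to boolean inputs follows; the real evaluation weighs specifications differentially rather than treating them as simple flags, as the abstract notes:

```python
# Simplified encoding of the four-level product compliance rubric.
def compliance_level(meets_food_safety: bool, meets_all_specs: bool,
                     quality_reduced: bool, consumer_rejects: bool) -> int:
    """Map evaluation outcomes onto rubric levels 1-4."""
    if not meets_food_safety or consumer_rejects:
        return 4  # Unacceptable: safety failure or consumer rejection
    if meets_all_specs:
        return 1  # Ideal: all specifications met or exceeded
    if not quality_reduced:
        return 2  # Deviation from ideal, no impact on consumer quality
    return 3      # Acceptable: quality reduced, but demand and safety intact

print(compliance_level(True, True, False, False))   # 1
print(compliance_level(True, False, False, False))  # 2
print(compliance_level(True, False, True, False))   # 3
print(compliance_level(False, True, False, False))  # 4
```

Note that food safety dominates the decision: no combination of passing technical or organoleptic results can lift a safety failure above level 4.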

Keywords: decision making, food safety, organoleptics, product compliance, quality assurance

Procedia PDF Downloads 185
782 The Effects of Passive and Active Recoveries on Responses of Platelet Indices and Hemodynamic Variables to Resistance Exercise

Authors: Mohammad Soltani, Sajad Ahmadizad, Fatemeh Hoseinzadeh, Atefe Sarvestan

Abstract:

Exercise recovery is an important variable in designing resistance exercise training. This study determined the effects of passive and active recovery on the responses of platelet indices and hemodynamic variables to resistance exercise. Twelve healthy subjects (six men and six women; age 25.4 ± 2.5 yrs) performed two resistance exercise protocols (six exercises covering upper- and lower-body parts) in two separate sessions one week apart. The first resistance protocol included three sets of six repetitions at 80% of 1RM, with 2 min of passive rest between sets and exercises, while the second protocol included three sets of six repetitions at 60% of 1RM, each followed by active recovery comprising six repetitions of the same exercise at 20% of 1RM. The exercise volume was equalized. Three blood samples were taken before exercise, immediately after exercise, and after 1 hour of recovery, and analyzed for fibrinogen and platelet indices. Blood pressure (BP), heart rate (HR), and rate pressure product (RPP) were measured before exercise, immediately after exercise, and every 5 minutes during recovery. Data analyses showed a significant increase in systolic blood pressure (SBP), HR, RPP, and platelet count (PLT) in response to resistance exercise (P<0.05), and the changes in HR and RPP were significantly different between the two protocols (P<0.05). Furthermore, MPV and P-LCR did not change in response to resistance exercise, though significant reductions were observed after 1 h of recovery compared to before and after exercise (P<0.05). No significant changes in fibrinogen and PDW were observed following the two resistance exercise protocols (P>0.05). Likewise, no significant differences in platelet indices were found between the two protocols (P>0.05). Resistance exercise induces changes in platelet indices and hemodynamic variables; these changes are not related to the type of recovery and returned to normal levels after 1 h of recovery.
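For context, the rate pressure product tracked above is conventionally the product of heart rate and systolic blood pressure, a rough index of myocardial workload; the sample values below are illustrative, not the study's measurements:

```python
# Rate pressure product: RPP = HR (beats/min) x SBP (mmHg).
def rate_pressure_product(hr_bpm: float, sbp_mmhg: float) -> float:
    """Conventional double-product index of myocardial oxygen demand."""
    return hr_bpm * sbp_mmhg

print(rate_pressure_product(70, 120))   # 8400, typical resting value
print(rate_pressure_product(150, 160))  # 24000, immediately after a heavy set
```

Because RPP multiplies HR and SBP, it rises sharply with resistance exercise even when each component increases only moderately, which is why it is tracked alongside the individual hemodynamic variables.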

Keywords: hemodynamic variables, platelet indices, resistance exercise, recovery intensity

Procedia PDF Downloads 133
781 Relationship between Perceived Level of Emotional Intelligence and Organizational Role Stress of Fire Fighters in Mumbai

Authors: Payal Maheshwari, Bansari Shah

Abstract:

The research aimed to study the levels of emotional intelligence (EI) and organizational role stress (ORS) of firefighters and the relationship between the two variables. One hundred and twenty firefighters were selected from different fire stations of Mumbai by purposive sampling. Firefighters who had completed basic training, had a minimum of 2 years of experience, and had been on the field during a crisis situation were selected for the study. The firefighters selected ranged from 23 to 58 years of age, and their years of experience ranged from 2 to 33. The findings of the study revealed that the majority of the firefighters perceived themselves to be at an above-average (57) or high (58) level of EI (M=429.35, SD=38.712). Domain-wise analysis disclosed that, compared to self-awareness (92) and relationship management (93), more participants perceived themselves in the high category in the domains of self-management (108) and social management (106). Further, examination of the subdomain scores conveyed that a large number of participants rated themselves at an average level in the skills of accurate self-assessment (50), emotional self-control (50), adaptability (56), initiative (41), influence (66), change catalyst (53), and conflict management (50). With respect to the stress variable, it was found that almost half of the participants (59) rated themselves as having an average level of stress (M=137.44, SD=28.800). In most of the domains, the majority of the participants perceived themselves as having an average level of stress, while in the domains of role isolation, self-role distance, and role ambiguity, the majority of the firefighters rated themselves as having a low level of stress. A strong negative correlation (r=-.360**, p=.000) was found between EI and ORS. This study is a contribution to the literature and has implications for firefighters at the personal level, for policymakers, and for the fire department.

Keywords: emotional intelligence, organizational role stress, firefighters, relationship

Procedia PDF Downloads 108
780 Flow and Heat Transfer Analysis of Copper-Water Nanofluid with Temperature Dependent Viscosity past a Riga Plate

Authors: Fahad Abbasi

Abstract:

The flow of electrically conducting nanofluids is of pivotal importance in countless industrial and medical applications. Fluctuations in the thermophysical properties of such fluids due to variations in temperature have not received due attention in the available literature. The present investigation aims to fill this void by analyzing the flow of a copper-water nanofluid with temperature-dependent viscosity past a Riga plate. Strong wall suction and viscous dissipation have also been taken into account. Numerical solutions for the resulting nonlinear system have been obtained. Results are presented in graphical and tabular format in order to facilitate the physical analysis. Estimated expressions for the skin friction coefficient and the Nusselt number are obtained by performing linear regression on numerical data for the embedded parameters. Results indicate that the temperature-dependent viscosity alters the velocity, as well as the temperature, of the nanofluid and is of considerable importance in processes where high accuracy is desired. The addition of copper nanoparticles makes the momentum boundary layer thinner, whereas the viscosity parameter does not affect the boundary layer thickness. Moreover, the regression expressions indicate that the magnitude of the rate of change in the effective skin friction coefficient and Nusselt number with respect to nanoparticle volume fraction is prominent compared with the rate of change with the variable viscosity parameter and the modified Hartmann number.

Keywords: heat transfer, peristaltic flows, radially varying magnetic field, curved channel

Procedia PDF Downloads 162
779 Development of a Smart System for Measuring Strain Levels of Natural Gas and Petroleum Pipelines on Earthquake Fault Lines in Turkiye

Authors: Ahmet Yetik, Seyit Ali Kara, Cevat Özarpa

Abstract:

Load changes occur on natural gas and oil pipelines due to natural disasters. The displacement of the soil around natural gas and oil pipes in situations that cause erosion, such as earthquakes, landslides, and floods, is the source of this load change. The exposure of natural gas and oil pipes to variable loads causes deformation, cracks, and breaks in these pipes, and cracks and breaks in turn cause harm to people and the environment through events such as explosions. Examinations made after natural disasters readily reveal which of the monitored pipes have suffered more damage. It has been determined that the earthquakes in Turkey caused permanent damage to pipelines. This project was designed and realized because cracks and gas leaks were found in the insulation gaskets placed in the pipelines, especially at the junction points. In this study, a new SCADA (Supervisory Control and Data Acquisition) application has been developed to monitor load changes caused by natural disasters. The newly developed SCADA application monitors the changes along the x, y, and z axes of the stresses occurring in the pipes with the help of strain gauge sensors placed on the pipes. For the developed SCADA system, test setups in accordance with the relevant standards were created during the fieldwork. The test setups were integrated into the SCADA system, and the system was monitored. Thanks to the SCADA system developed with the field application, the load changes that occur on natural gas and oil pipes are monitored instantly, accumulations that may load the pipes and their surroundings are intervened in immediately, and new risks that may arise are prevented. The system contributes to energy supply security, asset management, holistic pipeline management, and sustainability.
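A hedged sketch of how a strain-gauge reading is commonly converted to stress in a monitoring system of this kind: strain is derived from the relative resistance change via the gauge factor, then uniaxial stress via Hooke's law. The gauge factor and Young's modulus below are typical textbook values (metallic foil gauge, steel pipe), not the project's calibration data:

```python
# Strain-gauge reading -> strain -> uniaxial stress (elastic range only).
GAUGE_FACTOR = 2.0   # typical metallic foil strain gauge
E_STEEL = 200e9      # Young's modulus of pipeline steel, Pa (assumed)

def strain_from_resistance(delta_r: float, r0: float) -> float:
    """Strain = (dR / R0) / gauge factor."""
    return (delta_r / r0) / GAUGE_FACTOR

def uniaxial_stress(strain: float) -> float:
    """Hooke's law: sigma = E * epsilon."""
    return E_STEEL * strain

# A 0.12-ohm change on a 120-ohm gauge:
eps = strain_from_resistance(delta_r=0.12, r0=120.0)
print(round(eps, 6))                         # 0.0005 (500 microstrain)
print(round(uniaxial_stress(eps) / 1e6, 2))  # 100.0 MPa
```

A full x/y/z picture like the one described would use a rosette of gauges per measurement point, one such conversion per axis.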

Keywords: earthquake, natural gas pipes, oil pipes, strain measurement, stress measurement, landslide

Procedia PDF Downloads 68
778 Analyzing the Results of Buildings Energy Audit by Using Grey Set Theory

Authors: Tooraj Karimi, Mohammadreza Sadeghi Moghadam

Abstract:

Grey set theory has the advantage of using fewer data to analyze many factors, and it is therefore more appropriate for system study than traditional statistical regression, which requires massive data, a normal distribution in the data, and few variant factors. In this paper, grey clustering and the entropy of the coefficient vector of grey evaluations are used to analyze energy consumption in buildings of the Oil Ministry in Tehran. In fact, this article intends to analyze the results of energy audit reports, and to define the most favorable characteristic of the system, which is the energy consumption of buildings, and the most favorable factors affecting this characteristic, in order to modify and improve them. According to the results of the model, 'the real Building Load Coefficient' has been selected as the most important system characteristic, and 'uncontrolled area of the building' has been diagnosed as the most favorable factor, with the greatest effect on the energy consumption of buildings. Grey clustering in this study has been used for two purposes: first, all the building variables related to the energy audit are clustered into two main groups of indicators, reducing the number of variables. Second, grey clustering with variable weights has been used to classify all buildings into three categories, named 'no standard deviation', 'low standard deviation', and 'non-standard'. The entropy of the coefficient vector of grey evaluations is calculated to investigate the greyness of the results. It shows that among the 38 buildings surveyed in terms of energy consumption, 3 cases are in the standard group, 24 cases are in the 'low standard deviation' group, and 11 buildings are completely non-standard. In addition, the clustering greyness of 13 buildings is less than 0.5, and the average uncertainty of the clustering results is 66%.
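The greyness figures above can be illustrated with a normalized Shannon entropy of a grey evaluation coefficient vector, a common way to express how evenly a building's membership is spread across clusters; the vectors below are illustrative, not the paper's data:

```python
import math

# Normalized Shannon entropy of a grey evaluation coefficient vector:
# 1.0 = maximally grey (uniform membership), 0.0 = crisp assignment.
def greyness(coefficients):
    s = sum(coefficients)
    p = [c / s for c in coefficients]          # normalize to a distribution
    h = -sum(pi * math.log(pi) for pi in p if pi > 0)
    return h / math.log(len(p))                # scale by max entropy

print(round(greyness([0.9, 0.05, 0.05]), 3))   # nearly crisp: well below 0.5
print(round(greyness([1/3, 1/3, 1/3]), 3))     # 1.0 for a uniform vector
```

Under this reading, the 13 buildings with greyness below 0.5 have comparatively crisp cluster memberships, while the 66% average uncertainty reflects how grey the classification is overall.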

Keywords: energy audit, grey set theory, grey incidence matrixes, grey clustering, Iran oil ministry

Procedia PDF Downloads 370
777 Neuroecological Approach for Anthropological Studies in Archaeology

Authors: Kalangi Rodrigo

Abstract:

The term neuroecology denotes the study of adaptive variation in cognition and the brain. The field emerged in the 1980s, when researchers began to apply the methods of comparative evolutionary biology to cognitive processes and the underlying neural mechanisms of cognition. In archaeology and anthropology, behaviors such as social learning, innovative feeding and foraging, tool use, and social manipulation are observed in order to infer the cognitive processes of ancient humans. In such comparative studies, brainstem size is used as a control variable, and phylogeny is controlled using independent contrasts. Both disciplines need to be enriched with comparative literature and with neurological, experimental, and behavioral studies among tribal peoples as well as primate groups, which will carry the research toward its full potential. Neuroecology examines the relations between ecological selection pressures and species or sex differences in cognition and the brain. The goal of neuroecology is to understand how natural selection acts on cognition and its neural apparatus. Furthermore, neuroecology will eventually lead both principal disciplines toward ethology, in which human behavior and social organization are studied from a biological perspective; such work can be either ethnoarchaeological or prehistoric. Archaeology should adopt the general approach of neuroecology: phylogenetic comparative methods can be applied in the field, yielding new findings on the cognitive mechanisms and brain structures involved in mating systems, social organization, communication, and foraging. The contribution of neuroecology to archaeology and anthropology is the information it provides on the selective pressures that have influenced the evolution of human cognition and brain structure. It will shed new light on the path of evolutionary studies, including behavioral ecology, primate archaeology, and cognitive archaeology.

Keywords: neuroecology, archaeology, brain evolution, cognitive archaeology

Procedia PDF Downloads 117