Search results for: reactive power cost
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11930

830 An Anthropometric Index Capable of Differentiating Morbid Obesity from Obesity and Metabolic Syndrome in Children

Authors: Mustafa Metin Donma

Abstract:

Circumference measurements are important because they are easily obtained values for identifying weight gain without determining body fat. They may give meaningful information about the varying stages of obesity. Besides, some formulas may be derived from a number of body circumference measurements to estimate body fat. Waist (WC), hip (HC) and neck (NC) circumferences are currently the most frequently used measurements. The aim of this study was to develop a formula derived from these three anthropometric measurements, each giving valuable information independently, to examine whether their combined power within a formula could be helpful for the differential diagnosis of morbid obesity without metabolic syndrome (MetS) from MetS. One hundred and eighty-seven children were recruited from the pediatrics outpatient clinic of Tekirdag Namik Kemal University Faculty of Medicine. The parents of the participants were informed and asked to fill in and sign the consent forms. The study was carried out according to the Helsinki Declaration. The study protocol was approved by the institutional non-interventional ethics committee. The study population was divided into four groups: normal body mass index (N-BMI), obese (OB), morbid obese (MO) and MetS, composed of 35, 44, 75 and 33 children, respectively. Age- and gender-adjusted BMI percentile values were used for the classification of groups. The children in the MetS group were selected based upon the components described as MetS criteria. Anthropometric measurements, laboratory analysis and statistical evaluation of the study population were performed. Body mass index values were calculated. A circumference index, the advanced Donma circumference index (ADCI), was introduced as WC*HC/NC. Statistical significance was defined as a p value smaller than 0.05.
Body mass index values were 17.7±2.8, 24.5±3.3, 28.8±5.7, and 31.4±8.0 kg/m2 for the N-BMI, OB, MO, and MetS groups, respectively. The corresponding values for ADCI were 165±35, 240±42, 270±55, and 298±62. Significant differences were obtained between BMI values of the N-BMI group and the OB, MO, and MetS groups (p=0.001). Obese group BMI values also differed from MO group BMI values (p=0.001). However, the increase in the MetS group compared to the MO group was not significant (p=0.091). For the new index, significant differences were obtained between the N-BMI group and the OB, MO, and MetS groups (p=0.001). Obese group ADCI values also differed from MO group ADCI values (p=0.015). A significant difference between the MO and MetS groups was detected (p=0.043). Considering all participants, the correlation between BMI and ADCI was r=0.0883 with p=0.001. In conclusion, in spite of the strong correlation between BMI and ADCI values obtained when all groups were considered, ADCI, but not BMI, was the index capable of differentiating cases of morbid obesity from cases of morbid obesity with MetS.
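As an illustration, the index defined in the abstract (ADCI = WC*HC/NC) and BMI can be computed directly; the measurement values below are hypothetical and only demonstrate the arithmetic, not data from the study:

```python
def adci(waist_cm, hip_cm, neck_cm):
    """Advanced Donma circumference index: WC * HC / NC."""
    if neck_cm <= 0:
        raise ValueError("neck circumference must be positive")
    return waist_cm * hip_cm / neck_cm

def bmi(weight_kg, height_m):
    """Body mass index: weight / height^2 (kg/m^2)."""
    return weight_kg / height_m ** 2

# Hypothetical example measurements, for illustration only
print(round(adci(70.0, 85.0, 30.0), 1))  # 198.3
print(round(bmi(45.0, 1.55), 1))         # 18.7
```

Note that ADCI carries units of length (cm), so, like BMI percentiles, it would be compared against group reference values rather than interpreted in absolute terms.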

Keywords: anthropometry, body mass index, child, circumference, metabolic syndrome, obesity

Procedia PDF Downloads 50
829 Barriers to Entry: The Pitfall of Charter School Accountability

Authors: Ian Kingsbury

Abstract:

The rapid expansion of charter schools (public schools that receive government funding but do not face the same regulations as traditional public schools) over the preceding two decades has raised concerns over the potential for graft and fraud. These concerns are largely justified: incidents of financial crime and mismanagement are not unheard of, and the charter sector has become a darling of hedge fund managers. In response, several states have strengthened their charter school regulatory regimes. Imposing regulations and attempting to increase accountability seem like sensible measures, and perhaps they are necessary. However, increased regulation may come at the cost of imposing barriers to entry. Specifically, increased regulation often demands evidence of a high likelihood of fiscal solvency. That theoretically entails access to capital in the short term, which may systematically preclude Black or Hispanic applicants from opening charter schools. Moreover, increased regulation necessarily entails more red tape. The institutional wherewithal and the number of hours required to complete an application to open a charter school might favor those who have partnered with an education service provider, specifically a charter management organization (CMO) or education management organization (EMO). These potential barriers to entry pose a significant policy concern. Just as policymakers hope to increase the share of minority teachers and principals, they should sensibly care whether individuals who open charter schools look like the students in those schools. Moreover, they might be concerned if successful applications in states with stringent regulations are overwhelmingly affiliated with education service providers. One of the original missions of charter schools was to serve as a laboratory of innovation.
Approving only those applications affiliated with education service providers (and in effect establishing a parallel network of schools rather than a diverse marketplace of schools) undermines that mission. Data and methods: The analysis examines more than 2,000 charter school applications from 15 states. It compares the outcomes of applications from states with a strong regulatory environment (those with high scores from NACSA, the National Association of Charter School Authorizers) to applications from states with a weak regulatory environment (those with a low NACSA score). If the hypothesis is correct, applicants not affiliated with an education service provider (ESP) are more likely to be rejected in high-regulation states than those affiliated with an ESP, and minority candidates not affiliated with an ESP are particularly likely to be rejected. Initial returns indicate that the hypothesis holds. More applications in low-NACSA-scoring Arizona come from individuals not associated with an ESP, and those individuals are as likely to be accepted as those affiliated with an ESP. On the other hand, applicants in high-NACSA-scoring Indiana and Ohio are more than 20 percentage points more likely to be accepted if they are affiliated with an ESP, and the effect is particularly pronounced for minority candidates. These findings should spur policymakers to consider the drawbacks of charter school accountability and to consider accountability regimes that do not impose barriers to entry.

Keywords: accountability, barriers to entry, charter schools, choice

Procedia PDF Downloads 132
828 Robotic Process Automation in Accounting and Finance Processes: An Impact Assessment of Benefits

Authors: Rafał Szmajser, Katarzyna Świetla, Mariusz Andrzejewski

Abstract:

Robotic process automation (RPA) is a technology in which repeatable business processes are performed by computer programs, robots that simulate the work of a human being. This approach assumes replacing an existing employee with dedicated software (software robots) to support activities that are primarily repetitive and uncomplicated, characterized by a low number of exceptions. RPA application is widespread in modern business services, particularly in the areas of finance, accounting and human resources management. By utilizing this technology, the effectiveness of operations increases while reducing workload, minimizing possible errors in the process and, as a result, bringing a measurable decrease in the cost of providing services. Regardless of how the use of modern information technology is assessed, there are also some doubts as to whether human activities should be replaced when automating business processes. After the initial awe for the new technological concept, a reflection arises: to what extent does the implementation of RPA increase the efficiency of operations, and is there a business case for implementing it? If the business case is beneficial, in which business processes is the greatest potential for RPA? A closer look at these issues was provided in this research, during which the respondents' views of the perceived advantages resulting from the use of robotization and automation in financial and accounting processes were verified. As a result of an online survey addressed to over 500 respondents from international companies, 162 complete answers were returned from the most important types of organizations in the modern business services industry, i.e. Business or IT Process Outsourcing (BPO/ITO), Shared Service Centers (SSC), Consulting/Advisory and their customers. Answers were provided by representatives holding the following positions in their organizations: Members of the Board, Directors, Managers and Experts/Specialists.
The structure of the survey allowed the respondents to supplement it with additional comments and observations. The results formed the basis for the creation of a business case calculating the tangible benefits associated with the implementation of automation in selected financial processes. The statistical analyses carried out with regard to revenue growth confirmed the hypothesis that there is a correlation between job position and the perception of the impact of RPA implementation on individual benefits. The second hypothesis (H2), that there is a relationship between the kind of company in the business services industry and the perception of the impact of RPA on individual benefits, was not confirmed. Based on the results of the survey, the authors performed a business case simulation for the implementation of RPA in selected finance and accounting processes. The calculated payback periods differed widely, ranging from 2 months for the Accounts Payable process, with 75% savings, to the extreme case of the Taxes process, where implementation and maintenance costs exceed the savings resulting from the use of the robot.
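As a sketch of the kind of payback calculation such a business case involves (the figures below are invented for illustration, not taken from the survey), the payback period is the one-off implementation cost divided by the net monthly saving, with no payback when the robot's running costs eat up the savings, as in the Taxes case described above:

```python
def payback_months(implementation_cost, monthly_cost_before, savings_rate, monthly_run_cost):
    """Months until cumulative net savings cover the one-off implementation cost.

    Returns None when running costs exceed gross savings (no payback).
    """
    monthly_saving = monthly_cost_before * savings_rate - monthly_run_cost
    if monthly_saving <= 0:
        return None  # costs exceed savings: no payback
    return implementation_cost / monthly_saving

# Hypothetical figures, loosely echoing the Accounts Payable case (75% savings)
print(payback_months(30000, 20000, 0.75, 0))      # 2.0
print(payback_months(10000, 5000, 0.20, 2000))    # None
```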

Keywords: automation, outsourcing, business process automation, process automation, robotic process automation, RPA, RPA business case, RPA benefits

Procedia PDF Downloads 117
827 A Fermatean Fuzzy MAIRCA Approach for Maintenance Strategy Selection of Process Plant Gearbox Using Sustainability Criteria

Authors: Soumava Boral, Sanjay K. Chaturvedi, Ian Howard, Kristoffer McKee, V. N. A. Naikan

Abstract:

Due to strict government regulations to enhance the adoption of sustainability practices in industries, and noting the advances in sustainable manufacturing, it is necessary that the associated processes are also sustainable. Maintenance of large-scale and complex machines is a pivotal task in maintaining the uninterrupted flow of manufacturing processes. Appropriate maintenance practices can prolong the lifetime of machines and prevent breakdowns, which subsequently reduces different cost heads. Selecting the best maintenance strategy for such machines is considered a burdensome task, as it requires the consideration of multiple technical criteria, complex mathematical calculations, previous fault data, maintenance records, etc. In the era of the fourth industrial revolution, organizations are rapidly changing their way of doing business, and they are giving utmost importance to sensor technologies, artificial intelligence, data analytics, automation, etc. In this work, the effectiveness of several maintenance strategies (e.g., preventive, failure-based, reliability-centered, condition-based, total productive maintenance, etc.) for a large-scale and complex gearbox operating in a steel processing plant is evaluated in terms of economic, social, environmental and technical criteria. As it is not possible to obtain or describe some criteria by exact numerical values, these criteria are evaluated linguistically by cross-functional experts. Fuzzy sets are a powerful soft-computing technique that has been useful for dealing with linguistic data and providing inferences in many complex situations. To prioritize different maintenance practices based on the identified sustainability criteria, multi-criteria decision making (MCDM) approaches can be considered as potential tools.
Multi-Attributive Ideal Real Comparative Analysis (MAIRCA) is a recent addition to the MCDM family and has proven its superiority over some well-known MCDM approaches, like TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and ELECTRE (ELimination Et Choix Traduisant la REalité). It has a simple but robust mathematical approach, which is easy to comprehend. On the other hand, due to some inherent drawbacks of Intuitionistic Fuzzy Sets (IFS) and Pythagorean Fuzzy Sets (PFS), the use of Fermatean Fuzzy Sets (FFSs) has recently been proposed. In this work, we propose the novel concept of FF-MAIRCA. We obtain the weights of the criteria by experts' evaluation and use them to prioritize the different maintenance practices according to their suitability with the FF-MAIRCA approach. Finally, a sensitivity analysis is carried out to highlight the robustness of the approach.
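For orientation, a Fermatean fuzzy pair of membership and non-membership grades (μ, ν) only needs to satisfy μ³ + ν³ ≤ 1, a weaker constraint than IFS (μ + ν ≤ 1) or PFS (μ² + ν² ≤ 1), which is why FFSs can encode more hesitant expert judgments. A minimal sketch of the validity check and the commonly used score function (not the paper's implementation):

```python
def is_fermatean(mu, nu):
    """A Fermatean fuzzy pair (mu, nu) must satisfy mu^3 + nu^3 <= 1."""
    return 0 <= mu <= 1 and 0 <= nu <= 1 and mu**3 + nu**3 <= 1

def score(mu, nu):
    """Score function commonly used to rank Fermatean fuzzy numbers."""
    return mu**3 - nu**3

# (0.9, 0.6) is admissible for FFS but not for PFS,
# since 0.9^2 + 0.6^2 = 1.17 > 1 while 0.9^3 + 0.6^3 = 0.945 <= 1
assert is_fermatean(0.9, 0.6)
print(round(score(0.9, 0.6), 3))  # 0.513
```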

Keywords: Fermatean fuzzy sets, Fermatean fuzzy MAIRCA, maintenance strategy selection, sustainable manufacturing, MCDM

Procedia PDF Downloads 125
826 Foundations for Global Interactions: The Theoretical Underpinnings of Understanding Others

Authors: Randall E. Osborne

Abstract:

In a course on International Psychology, 8 theoretical perspectives (Critical Psychology, Liberation Psychology, Post-Modernism, Social Constructivism, Social Identity Theory, Social Reduction Theory, Symbolic Interactionism, and Vygotsky’s Sociocultural Theory) are used as a framework for getting students to understand the concept of and need for Globalization. One of critical psychology's main criticisms of conventional psychology is that it fails to consider or deliberately ignores the way power differences between social classes and groups can impact the mental and physical well-being of individuals or groups of people. Liberation psychology, also known as liberation social psychology or psicología social de la liberación, is an approach to psychological science that aims to understand the psychology of oppressed and impoverished communities by addressing the oppressive sociopolitical structure in which they exist. Postmodernism is largely a reaction to the assumed certainty of scientific, or objective, efforts to explain reality. It stems from a recognition that reality is not simply mirrored in human understanding of it, but rather, is constructed as the mind tries to understand its own particular and personal reality. Lev Vygotsky argued that all cognitive functions originate in, and must therefore be explained as products of social interactions and that learning was not simply the assimilation and accommodation of new knowledge by learners. Social Identity Theory discusses the implications of social identity for human interactions with and assumptions about other people. Social Identification Theory suggests people: (1) categorize—people find it helpful (humans might be perceived as having a need) to place people and objects into categories, (2) identify—people align themselves with groups and gain identity and self-esteem from it, and (3) compare—people compare self to others. 
Social reductionism argues that all behavior and experiences can be explained simply by the effect of groups on the individual. Symbolic interaction theory focuses attention on the way that people interact through symbols: words, gestures, rules, and roles. Meaning evolves from humans' interactions with their environment and with other people. Vygotsky's sociocultural theory of human learning describes learning as a social process and locates the origination of human intelligence in society or culture. The major theme of Vygotsky's theoretical framework is that social interaction plays a fundamental role in the development of cognition. This presentation will discuss how these theoretical perspectives are incorporated into a course on International Psychology, a course on the Politics of Hate, and a course on the Psychology of Prejudice, Discrimination and Hate to promote student thinking in a more ‘global’ manner.

Keywords: globalization, international psychology, society and culture, teaching interculturally

Procedia PDF Downloads 227
825 A Novel Harmonic Compensation Algorithm for High Speed Drives

Authors: Lakdar Sadi-Haddad

Abstract:

The past few years have seen a resurgence of interest in the study of very high-speed electrical drives, as an inventory of the scientific papers and patents dealing with the subject makes clear. In fact, the democratization of magnetic bearing technology is at the origin of recent developments in high-speed applications. These machines have as their main advantage a much higher power density than the state of the art. Nevertheless, particular attention should be paid to the design of the inverter as well as to control and command. The surface-mounted permanent magnet synchronous machine is the most appropriate technology to address high-speed issues. However, it has the drawback of using a carbon sleeve to contain the magnets, which could tear because of the centrifugal forces generated at the rotor periphery. Carbon fiber is well known for its mechanical properties, but it has poor heat conduction, which results in very poor evacuation of the eddy current losses induced in the magnets by time and space stator harmonics. The three-phase inverter is the main harmonic source causing eddy currents in the magnets. In high-speed applications such harmonics are harmful because, on the one hand, the characteristic impedance is very low and, on the other hand, the ratio between the switching frequency and that of the fundamental is much lower than in the state of the art. To minimize the impact of these harmonics, a first lever is to use a modulation strategy producing low harmonic distortion, while a second is to introduce a sinus filter between the inverter and the machine to smooth the voltage and current waveforms applied to the machine. Nevertheless, in a very high-speed machine the interaction of the processes mentioned above may introduce particular harmonics that can irreversibly damage the system: harmonics at the resonant frequency, harmonics at the shaft mode frequency, subharmonics, etc.
Some studies address these issues but treat the phenomena with separate solutions (a specific modulation strategy, active damping methods, etc.). The purpose of this paper is to present a completely new active harmonic compensation algorithm, based on an improvement of standard vector control, as a global solution to all these issues. The presentation will be based on a complete theoretical analysis of the processes leading to the generation of such undesired harmonics. A state of the art of available solutions will then be provided before developing the content of the new active harmonic compensation algorithm. The study will be completed by a validation using simulations and a practical case on a high-speed machine.
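As a rough illustration of how the harmonic content of an inverter output can be quantified (a generic numpy sketch, not the compensation algorithm proposed in the paper), the total harmonic distortion of a sampled waveform can be estimated from its FFT:

```python
import numpy as np

def thd(signal, fs, f0):
    """Total harmonic distortion via FFT: RMS of harmonics 2..N
    relative to the fundamental amplitude."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal) * 2
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    fund = spectrum[np.argmin(np.abs(freqs - f0))]
    harmonics = [spectrum[np.argmin(np.abs(freqs - k * f0))]
                 for k in range(2, int(fs / 2 / f0))]
    return np.sqrt(np.sum(np.square(harmonics))) / fund

# Synthetic test: fundamental at 1 kHz plus a 5th harmonic at 10% amplitude
fs, f0 = 100_000, 1_000
t = np.arange(0, 0.01, 1 / fs)  # exactly 10 fundamental periods (no leakage)
x = np.sin(2 * np.pi * f0 * t) + 0.1 * np.sin(2 * np.pi * 5 * f0 * t)
print(round(thd(x, fs, f0), 3))  # 0.1
```

In practice the measured spectrum would also be inspected for the non-integer components the abstract warns about (resonant-frequency harmonics, shaft-mode harmonics, subharmonics), which a plain THD figure does not capture.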

Keywords: active harmonic compensation, eddy current losses, high speed machine

Procedia PDF Downloads 376
824 Green Extraction Technologies of Flavonoids Containing Pharmaceuticals

Authors: Lamzira Ebralidze, Aleksandre Tsertsvadze, Dali Berashvili, Aliosha Bakuridze

Abstract:

Nowadays, there is an increasing demand for biologically active substances from vegetable, animal, and mineral resources, and the pharmaceutical, cosmetic, and nutrition industries have a strong interest in the use of natural compounds. The biggest drawback of conventional extraction methods is the need to use a large volume of organic extragents. The removal of the organic solvent is a multi-stage process, its absolute removal cannot be achieved, and solvent residues appear in the final product as impurities. A large amount of waste containing organic solvent not only damages human health but also harms the environment. Accordingly, researchers are focused on improving extraction methods with the aim of minimizing the use of organic solvents and energy sources, using alternative solvents and renewable raw materials. In this context, the principles of green extraction were formed. Green extraction is a need of today's environment: it is a concept that corresponds directly to the challenges of the 21st century. The extraction of biologically active compounds based on green extraction principles is vital from the point of view of preserving and maintaining biodiversity. Novel green extraction technologies are known as "cold methods" because the temperature during the extraction process is relatively low and does not have a negative impact on the stability of plant compounds. Novel technologies provide great opportunities to reduce or replace the use of toxic organic solvents, improve the efficiency of the process, enhance extraction yield, and improve the quality of the final product. The objective of the research is the development of green technologies for flavonoid-containing preparations. Methodology: At the first stage of the research, flavonoid-containing preparations (Tincture Herba Leonuri, flamine, rutine) were prepared based on conventional extraction methods: maceration, bismaceration, percolation, and repercolation.
At the same time, the same preparations were prepared based on green technologies: microwave-assisted and UV extraction methods. Product quality characteristics were evaluated by pharmacopoeia methods. At the next stage of the research, the technological and economic characteristics and cost efficiency of products prepared based on conventional and novel technologies were determined. For the extraction of flavonoids, water is used as the extragent. Surface-active substances are used as co-solvents to reduce surface tension, which significantly increases the solubility of polyphenols in water. Different concentrations of a water-glycerol mixture, cyclodextrin, and an ionic solvent were used for the extraction process. In vitro antioxidant activity will be studied by the spectrophotometric method, using DPPH (2,2-diphenyl-1-picrylhydrazyl) as an antioxidant assay. An advantage of green extraction methods is also the possibility of obtaining higher yields at low temperature while limiting the extraction of undesirable compounds. That is especially important for the extraction of thermosensitive compounds and maintaining their stability.
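The DPPH assay mentioned above is commonly evaluated as the percent inhibition of the radical's absorbance (typically read around 517 nm): %I = (A_control − A_sample) / A_control × 100. A minimal sketch with hypothetical absorbance readings, for illustration only:

```python
def dpph_inhibition(a_control, a_sample):
    """Percent inhibition of the DPPH radical:
    %I = (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100.0

# Hypothetical absorbance readings at 517 nm
print(round(dpph_inhibition(0.80, 0.32), 1))  # 60.0
```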

Keywords: extraction, green technologies, natural resources, flavonoids

Procedia PDF Downloads 110
823 Experimental and Numerical Investigation on the Torque in a Small Gap Taylor-Couette Flow with Smooth and Grooved Surface

Authors: L. Joseph, B. Farid, F. Ravelet

Abstract:

Fundamental studies have been performed on bifurcation, instabilities and turbulence in Taylor-Couette flow and applied to many engineering applications, like astrophysical models of accretion disks, shrouded fans, and electric motors. Understanding the performance of such rotating machinery requires a better knowledge of the fluid flow distribution in order to quantify the power losses and the heat transfer distribution. The present investigation is focused on Taylor-Couette flow with a high gap ratio and high rotational speeds, for smooth and grooved surfaces. So far, few works have been done in a very narrow gap and with very high rotation rates and, to the best of our knowledge, none with this combination together with a grooved surface. We study numerically the turbulent flow between two coaxial cylinders, where R1 and R2 are the inner and outer radii respectively and only the inner cylinder rotates. The gap between the rotor and the stator varies between 0.5 and 2 mm, which corresponds to a radius ratio η = R1/R2 between 0.96 and 0.99 and an aspect ratio Γ = L/d between 50 and 200, where L is the length of the rotor and d the gap between the two cylinders. The scaling of the torque with the Reynolds number is determined at different gaps for different smooth and grooved surfaces (and also with different numbers of grooves). The fluid in the gap is air. Re varies between 8000 and 30000. Another dimensionless parameter that plays an important role in distinguishing the flow regime is the Taylor number, which corresponds to the ratio between the centrifugal forces and the viscous forces (from 6.7 × 10⁵ to 4.2 × 10⁷). The torque is first evaluated with RANS and U-RANS models and compared to empirical models and experimental results. A mesh convergence study has been done for each rotor-stator combination, comparing the torque results across different meshes in two dimensions.
For the smooth surfaces, the models used overestimate the torque compared to the empirical equations that exist in the literature; the models closest to the empirical ones are those solving the equations near the wall. The greatest torque was achieved with the grooved surface. The tangential velocity was always higher in the gap between the rotor and the stator than at the rotor wall, and it was greatest in the recirculation zones inside the grooves. In order to avoid endwall effects, long cylinders (100 mm) are used in our setup, and the torque is measured by a co-rotating torquemeter. The rotor is driven by the air turbine of an automotive turbo-compressor to reach high angular velocities, and experimental measurements are made at rotational speeds of up to 50,000 rpm. The first experimental results are in agreement with the numerical ones. Currently, a quantitative study is being performed on the grooved surface to determine the effect of the number of grooves on the torque, both experimentally and numerically.
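For orientation, the dimensionless parameters quoted in the abstract can be computed from the geometry and rotation rate. The sketch below uses one common Taylor-number definition, Ta = Ω²R₁d³/ν², and hypothetical dimensions chosen to fall within the ranges stated above (it is not the authors' exact setup):

```python
import math

def couette_numbers(r1, r2, length, omega, nu):
    """Geometric and flow parameters for a Taylor-Couette configuration.

    Re is based on the gap d = r2 - r1; Ta uses one common definition,
    Ta = omega^2 * r1 * d^3 / nu^2.
    """
    d = r2 - r1
    eta = r1 / r2           # radius ratio
    gamma = length / d      # aspect ratio
    re = omega * r1 * d / nu
    ta = omega**2 * r1 * d**3 / nu**2
    return eta, gamma, re, ta

# Hypothetical: inner radius 49 mm, 1 mm gap, 100 mm rotor, air, 50,000 rpm
eta, gamma, re, ta = couette_numbers(
    0.049, 0.050, 0.100, 50000 * 2 * math.pi / 60, 1.5e-5)
print(round(eta, 2), round(gamma), round(re))  # 0.98 100 17104
```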

Keywords: Taylor-Couette flow, high gap ratio, grooved surface, high speed

Procedia PDF Downloads 382
822 Determination of Physical Properties of Crude Oil Distillates by Near-Infrared Spectroscopy and Multivariate Calibration

Authors: Ayten Ekin Meşe, Selahattin Şentürk, Melike Duvanoğlu

Abstract:

Petroleum refineries are a highly complex process industry with continuous production and high operating costs. Physical separation of crude oil starts with the crude oil distillation unit, continues with various conversion and purification units, and passes through many stages until the final product is obtained. To meet the desired product specifications, process parameters are strictly followed. To ensure the quality of distillates, routine analyses are performed in quality control laboratories based on appropriate international standards, such as American Society for Testing and Materials (ASTM) standard methods and European Standard (EN) methods. The cut point of distillates in the crude distillation unit is crucial for the efficiency of the downstream processes. In order to maximize process efficiency, the determination of distillate quality should be as fast as possible, reliable, and cost-effective. In this sense, an alternative study was carried out on the crude oil distillation unit that serves the entire refinery process. In this work, studies were conducted with three different crude oil distillates: Light Straight Run Naphtha (LSRN), Heavy Straight Run Naphtha (HSRN), and Kerosene. These products are separated according to the number of carbon atoms they contain: LSRN consists of hydrocarbons with five to six carbon atoms, HSRN with six to ten, and kerosene with sixteen to twenty-two. Physical properties of the three crude distillation unit products (LSRN, HSRN, and Kerosene) were determined using near-infrared spectroscopy with multivariate calibration. The absorbance spectra of the petroleum samples were obtained in the range from 10000 cm⁻¹ to 4000 cm⁻¹, employing a quartz transmittance flow-through cell with a 2 mm light path and a resolution of 2 cm⁻¹. A total of 400 samples were collected for each petroleum product over almost four years.
Several different crude oil grades were processed during the sample collection period. Extended Multiplicative Signal Correction (EMSC) and Savitzky-Golay (SG) preprocessing techniques were applied to the FT-NIR spectra of the samples to eliminate baseline shifts and suppress unwanted variation. Two different multivariate calibration approaches (Partial Least Squares Regression, PLS, and Genetic Inverse Least Squares, GILS) and an ensemble model were applied to the preprocessed FT-NIR spectra. The predictive performance of each multivariate calibration and preprocessing technique was compared, and the best models were chosen according to the reproducibility of the ASTM reference methods. This work demonstrates that the developed models can be used for routine analysis instead of conventional analytical methods, with over 90% accuracy.
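As a hedged sketch of the Savitzky-Golay step (a generic scipy implementation, not the authors' exact preprocessing pipeline), a first-derivative SG filter suppresses baseline shifts by turning constant offsets into zero and linear drift into a small constant:

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess(spectra, window=11, poly=2, deriv=1):
    """Savitzky-Golay derivative along the wavenumber axis, a common
    way to suppress baseline shifts before PLS calibration."""
    return savgol_filter(spectra, window_length=window, polyorder=poly,
                         deriv=deriv, axis=-1)

# Hypothetical spectrum: a Gaussian peak on a linearly drifting baseline
wn = np.linspace(4000, 10000, 601)          # wavenumbers, cm^-1
baseline = 1e-4 * (wn - 4000)               # linear drift
peak = np.exp(-((wn - 7000) / 200) ** 2)
spec = baseline + peak
d1 = preprocess(spec[None, :])              # derivative flattens the drift
print(d1.shape)  # (1, 601)
```

In the full workflow the derivative spectra (here per sample step of 10 cm⁻¹) would then feed the PLS or GILS regression against the ASTM reference values.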

Keywords: crude distillation unit, multivariate calibration, near infrared spectroscopy, data preprocessing, refinery

Procedia PDF Downloads 100
821 Application of Deep Learning Algorithms in Agriculture: Early Detection of Crop Diseases

Authors: Manaranjan Pradhan, Shailaja Grover, U. Dinesh Kumar

Abstract:

The farming community in India, as well as in other parts of the world, is one of the most highly stressed communities due to reasons such as increasing input costs (seeds, fertilizers, pesticides), droughts, and reduced revenue, in some cases leading to farmer suicides. The lack of an integrated farm advisory system in India adds to the farmers' problems. Farmers need the right information during the early stages of a crop's lifecycle to prevent damage and loss of revenue. In this paper, we use deep learning techniques to develop an early warning system for the detection of crop diseases using images taken by farmers with their smartphones. The research work leads to building a smart assistant, using analytics and big data, which could help farmers with early diagnosis of crop diseases and corrective actions. The classical approach to crop disease management has been to identify diseases at the crop level. Recently, ImageNet classification using convolutional neural networks (CNN) has been successfully used to identify diseases at the individual plant level. Our model uses convolution filters, max pooling, dense layers and dropouts (to avoid overfitting). The models are built for binary classification (healthy or not healthy) and multi-class classification (identifying which disease). Transfer learning is used to modify the weights of parameters learnt from the ImageNet dataset and apply them to crop diseases, which reduces the number of epochs needed to learn. One-shot learning is used to learn from very few images, while data augmentation techniques such as rotation, zoom, shift and blurring are used to improve accuracy with images taken from farms. Models built using a combination of these techniques are more robust for deployment in the real world. Our model is validated using the tomato crop. In India, tomato is affected by 10 different diseases, and our model achieves an accuracy of more than 95% in correctly classifying them.
The main contribution of our research is to create a personal assistant for farmers for managing plant disease; although the model was validated using the tomato crop, it can easily be extended to other crops. The advancement of computing technology and the availability of large datasets have made possible the success of deep learning applications in computer vision, natural language processing, image recognition, etc. With these robust models and high smartphone penetration, the feasibility of implementation is high, resulting in timely advice to farmers and thus increasing farmers' income and reducing input costs.
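The augmentation techniques mentioned above (rotation, zoom, shift, blur) are usually applied through library generators; a minimal numpy-only sketch of flip-and-shift augmentation, purely illustrative of the idea rather than the authors' pipeline:

```python
import numpy as np

def augment(image, rng):
    """Simple augmentation of an H x W x C image array: random
    horizontal flip plus a small random shift (illustrative sketch)."""
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1, :]                # horizontal flip
    dy, dx = rng.integers(-3, 4, size=2)     # shift by up to 3 pixels
    out = np.roll(out, shift=(dy, dx), axis=(0, 1))
    return out

rng = np.random.default_rng(0)
img = np.zeros((64, 64, 3), dtype=np.uint8)
img[10, 10] = 255                            # a single bright pixel
aug = augment(img, rng)
# Flips and wrapped shifts rearrange pixels without changing totals
print(aug.shape, aug.sum() == img.sum())  # (64, 64, 3) True
```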

Keywords: analytics in agriculture, CNN, crop disease detection, data augmentation, image recognition, one shot learning, transfer learning

Procedia PDF Downloads 103
820 SAFECARE: Integrated Cyber-Physical Security Solution for Healthcare Critical Infrastructure

Authors: Francesco Lubrano, Fabrizio Bertone, Federico Stirano

Abstract:

Modern societies strongly depend on Critical Infrastructures (CI). Hospitals, power supplies, water supplies, and telecommunications are just a few examples of CIs that provide vital functions to societies. CIs like hospitals are very complex environments, characterized by a huge number of cyber and physical systems that are becoming increasingly integrated. Ensuring a high level of security within such critical infrastructure requires a deep knowledge of vulnerabilities, threats, and potential attacks, as well as defence, prevention, and mitigation strategies. The possibility of remotely monitoring and controlling almost everything is pushing the adoption of network-connected devices. This implicitly introduces new threats and potential vulnerabilities, posing a risk especially to those devices connected to the Internet. Modern medical devices used in hospitals are no exception and are increasingly being connected to enhance their functionality and ease their management. Moreover, hospitals are environments with high flows of people who are difficult to monitor and who can rather easily access the same places used by the staff, potentially causing damage. It is therefore clear that physical and cyber threats should be considered, analysed, and treated together as cyber-physical threats, which requires an integrated approach. SAFECARE, an integrated cyber-physical security solution, responds to these issues within healthcare infrastructures. The challenge is to bring together the most advanced technologies from the physical and cyber security spheres to achieve a global optimum for systemic security and for the management of combined cyber and physical threats and incidents and their interconnections. Moreover, potential impacts and cascading effects are evaluated through impact propagation models that rely on modular ontologies and a rule-based engine.
Indeed, the SAFECARE architecture foresees: i) a cyber security macroblock, where innovative tools are deployed to monitor network traffic, systems, and medical devices; ii) a physical security macroblock, where video management systems are coupled with access control management, building management systems, and innovative AI algorithms that detect behavioral anomalies; and iii) an integration system that collects all incoming incidents, simulates their potential cascading effects, and provides alerts and updated information on asset availability.

Keywords: cyber security, defence strategies, impact propagation, integrated security, physical security

Procedia PDF Downloads 145
819 Impact of Financial Factors on Total Factor Productivity: Evidence from Indian Manufacturing Sector

Authors: Lopamudra D. Satpathy, Bani Chatterjee, Jitendra Mahakud

Abstract:

Rapid economic growth in terms of output and investment necessitates substantial growth in the Total Factor Productivity (TFP) of firms, an indicator of an economy's technological change. The strong empirical relationship between financial sector development and economic growth clearly indicates that firms' financing decisions affect their levels of output via their investment decisions, establishing a linkage between financial factors and the productivity growth of firms. To achieve smooth and continuous economic growth over time, it is imperative to understand the financial channel, which serves as one of the vital channels. The theoretical argument behind this linkage is that when internal financial capital is not sufficient for investment, firms rely upon external sources of finance. But due to frictions and information asymmetry, it is costlier for firms to raise external capital from the market, which in turn affects their investment sentiment and productivity. This kind of financial position puts heavy pressure on firms' productive activities. Against this theoretical background, the present study analyzes the role of both external and internal financial factors (leverage, cash flow, and liquidity) in determining the total factor productivity of firms in the manufacturing industry and its sub-industries, with a set of firm-specific variables as controls (size, age, and disembodied technological intensity). Total factor productivity of the Indian manufacturing industry and its sub-industries is estimated using a semi-parametric approach, the Levinsohn-Petrin method. The relationship between financial factors and productivity growth is then established for 652 firms using a dynamic panel GMM method covering the period from 1997-98 to 2012-13.
The econometric analyses show that internal cash flow has a positive and significant impact on the productivity of the overall manufacturing sector. The other financial factors, leverage and liquidity, also play a significant role in determining the total factor productivity of the Indian manufacturing sector. The significant role of internal cash flow in determining firm-level productivity suggests that external finance is not easily available to Indian companies. Further, the negative impact of leverage on productivity could be due to the less developed bond market in India. These findings imply that policy makers should undertake reforms to develop the external bond market and ease access to finance, so that financially constrained companies can raise capital in a cost-effective manner and channel their investments into highly productive activities, which would help accelerate economic growth.
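As a rough illustration of the idea of TFP as the part of output not explained by measured inputs, the sketch below recovers log TFP as the residual of an OLS Cobb-Douglas regression on synthetic data. This is a deliberate simplification: the study itself uses the Levinsohn-Petrin estimator, which corrects for the simultaneity between input choices and unobserved productivity that plain OLS ignores, and all figures here are invented:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
log_l = rng.normal(4.0, 0.5, n)        # log labour input
log_k = rng.normal(6.0, 0.8, n)        # log capital input
true_tfp = rng.normal(0.0, 0.2, n)     # unobserved (log) productivity
log_y = 1.0 + 0.6 * log_l + 0.3 * log_k + true_tfp  # Cobb-Douglas in logs

# OLS: regress log output on log inputs; the residual proxies log TFP
X = np.column_stack([np.ones(n), log_l, log_k])
beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)
log_tfp_hat = log_y - X @ beta
```

In the simulated data the residual tracks the true (unobserved) productivity closely; with real firm data the OLS input elasticities would be biased, which is precisely why the semi-parametric Levinsohn-Petrin correction is used in the study.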

Keywords: dynamic panel, financial factors, manufacturing sector, total factor productivity

Procedia PDF Downloads 313
818 Significant Growth in Expected Muslim Inbound Tourists in Japan Towards 2020 Tokyo Olympic and Still Incipient Stage of Current Halal Implementations in Hiroshima

Authors: Kyoko Monden

Abstract:

Tourism has moved to the forefront of national attention in Japan since September 2013, when Tokyo won its bid to host the 2020 Summer Olympics. The number of foreign tourists has continued to break records, reaching 13.4 million in 2014, and is now expected to hit 20 million sooner than the initially targeted year of 2020, due to government stimulus promotions, an increase in low-cost carriers, the weakening of the Japanese yen, and strong economic growth in Asia. The tourism industry can be an effective trigger for Japan's economic recovery, as foreign tourists spent two trillion yen (about $16.6 billion) in Japan in 2014. In addition, 81% of them came from Asian countries, and it is essential to know that 68.9% of the world's Muslims, about a billion people, live in South and Southeast Asia. An important question is, 'Do Muslim tourists feel comfortable traveling in Japan?' This research was initiated by an encounter with Muslim visitors in Hiroshima, a popular international tourist destination, who said they had found very few suitable restaurants there. The purpose of this research is to examine halal implementation in Hiroshima and suggest the next steps to be taken to improve current efforts. The goal is to provide everyone, Muslims included, with first-class hospitality in preparation for the massive influx of foreign tourists in 2020. The methods of this research were questionnaires, face-to-face interviews, phone interviews, and internet research. First, this research addresses the significance of growing inbound tourism in Japan, especially the expected growth in Muslim tourists, as well as the strong popularity of Japanese food in Muslim countries in Asia; eating Japanese food is ranked as the no. 1 thing foreign tourists want to do in Japan.
Secondly, the incipient stage of Hiroshima's current halal implementation at hotels, restaurants, and major public places is described, and the existing action plans of the Hiroshima Prefecture Government are presented. Furthermore, two surveys were conducted: one to gauge basic halal awareness among local residents in Hiroshima, and one to identify the inconveniences faced by Muslims living in Hiroshima. Thirdly, the reasons for this lapse are examined and, drawing on benchmarking data from other major tourist sites, halal implementation plans for Hiroshima are proposed. The conclusion is that, despite increasing demand for and interest in halal-friendly businesses, halal measures have barely been applied in Hiroshima; 76% of Hiroshima residents had no idea what halal (halaal) means. It is essential to increase awareness of halal and its importance to the economy, and to launch further actions to make Muslim tourists feel welcome in Hiroshima and the entire country.

Keywords: halaal, halal implementation, Hiroshima, inbound tourists in Japan

Procedia PDF Downloads 201
817 Hospital Malnutrition and its Impact on 30-day Mortality in Hospitalized General Medicine Patients in a Tertiary Hospital in South India

Authors: Vineet Agrawal, Deepanjali S., Medha R., Subitha L.

Abstract:

Background. Hospital malnutrition is a highly prevalent issue and is known to increase morbidity, mortality, length of hospital stay, and cost of care. In India, studies on hospital malnutrition have been restricted to ICU, post-surgical, and cancer patients. We designed this study to assess the impact of hospital malnutrition on in-hospital and 30-day post-discharge mortality in patients admitted to the general medicine department, irrespective of diagnosis. Methodology. All patients aged above 18 years admitted to the medicine wards, excluding medico-legal cases, were enrolled in the study. Nutritional assessment was done within 72 h of admission using Subjective Global Assessment (SGA), which classifies patients into three categories: severely malnourished, mildly/moderately malnourished, and normal/well-nourished. Anthropometric measurements such as Body Mass Index (BMI), triceps skin-fold thickness (TSF), and mid-upper arm circumference (MUAC) were also performed. Patients were followed up during the hospital stay and 30 days after discharge through telephonic interview, and their final diagnosis, comorbidities, and cause of death were noted. Multivariate logistic regression and a Cox regression model were used to determine whether nutritional status at admission independently impacted mortality at one month. Results. The prevalence of malnourishment by SGA in our study was 67.3% among 395 hospitalized patients, of whom 155 (39.2%) were moderately malnourished and 111 (28.1%) were severely malnourished. Of the 395 patients, 61 (15.4%) died: 30 in the hospital and 31 within 1 month of discharge. On univariate analysis, malnourished patients had significantly higher mortality (24.3% in the 111 Category C patients) than well-nourished patients (10.1% in the 129 Category A patients), with OR 9.17, p-value 0.007.
On multivariate logistic regression, age and higher Charlson Comorbidity Index (CCI) were independently associated with mortality. Higher CCI indicates higher burden of comorbidities on admission, and the CCI in the expired patient group (mean=4.38) was significantly higher than that of the alive cohort (mean=2.85). Though malnutrition significantly contributed to higher mortality on univariate analysis, it was not an independent predictor of outcome on multivariate logistic regression. Length of hospitalisation was also longer in the malnourished group (mean= 9.4 d) compared to the well-nourished group (mean= 8.03 d) with a trend towards significance (p=0.061). None of the anthropometric measurements like BMI, MUAC, or TSF showed any association with mortality or length of hospitalisation. Inference. The results of our study highlight the issue of hospital malnutrition in medicine wards and reiterate that malnutrition contributes significantly to patient outcomes. We found that SGA performs better than anthropometric measurements in assessing under-nutrition. We are of the opinion that the heterogeneity of the study population by diagnosis was probably the primary reason why malnutrition by SGA was not found to be an independent risk factor for mortality. Strategies to identify high-risk patients at admission and treat malnutrition in the hospital and post-discharge are needed.
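To illustrate the kind of multivariate logistic regression used above, where a univariate association can vanish once age and comorbidity burden are adjusted for, here is a minimal NumPy sketch on synthetic data; the cohort, coefficients, and variable effects are invented and do not reproduce the study's dataset:

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, steps=5000):
    """Gradient-descent logistic regression; returns per-unit coefficients."""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Xs = np.column_stack([np.ones(len(X)), (X - mu) / sd])  # standardize for stable GD
    w = np.zeros(Xs.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xs @ w))
        w += lr * Xs.T @ (y - p) / len(y)
    return w[1:] / sd  # rescale back to per-unit coefficients

rng = np.random.default_rng(1)
n = 500
age = rng.normal(60, 12, n)
cci = rng.poisson(3, n).astype(float)             # Charlson Comorbidity Index
malnourished = rng.integers(0, 2, n).astype(float)
# Hypothetical truth: age and CCI drive mortality; malnutrition has no independent effect
logit = -7.0 + 0.08 * age + 0.5 * cci
death = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

beta = fit_logistic(np.column_stack([age, cci, malnourished]), death)
odds_ratios = np.exp(beta)   # adjusted ORs for age, CCI, malnutrition
```

In this simulated setup the adjusted odds ratio for malnutrition hovers near 1 while those for age and CCI stay above 1, mirroring how a univariate association can disappear in a multivariate model.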

Keywords: hospitalization outcome, length of hospital stay, mortality, malnutrition, subjective global assessment (SGA)

Procedia PDF Downloads 130
816 The Effect of Acute Consumption of a Nutritional Supplement Derived from Vegetable Extracts Rich in Nitrate on Athletic Performance

Authors: Giannis Arnaoutis, Dimitra Efthymiopoulou, Maria-Foivi Nikolopoulou, Yannis Manios

Abstract:

AIM: Nitrate-containing supplements have been used extensively as ergogenic aids in many sports. However, extract fractions from plant-based nutritional sources high in nitrate, and their effect on athletic performance, have not been systematically investigated. The purpose of the present study was to examine the possible effect of acute consumption of a 'smart mixture' of beetroot and rocket extracts on exercise capacity. MATERIAL & METHODS: 12 healthy, nonsmoking, recreationally active males (age: 25±4 years, body fat: 15.5±5.7%, fat-free mass: 65.8±5.6 kg, VO₂max: 45.4±6.1 mL·kg⁻¹·min⁻¹) participated in a double-blind, placebo-controlled trial in a randomized and counterbalanced order. Eligibility criteria included a normal physical examination and the absence of any metabolic, cardiovascular, or renal disease. All participants completed a time-to-exhaustion cycling test at 75% of their maximum power output, twice. The subjects consumed either capsules containing 360 mg of nitrate in total or placebo capsules, in the morning, in a fasted state. After 3 h of passive recovery, the performance test followed. Blood samples were collected upon arrival of the participants and 3 hours after consumption of the corresponding capsules. Time until exhaustion, pre- and post-test lactate concentrations, and rating of perceived exertion at the same time points were assessed. RESULTS: Paired-sample t-test analysis found a significant difference in time to exhaustion between the nitrate trial and placebo (16.1±3.0 vs. 13.5±2.6 min, p=0.04). No significant differences were observed in lactic acid concentrations or Borg scale values between the two trials (p>0.05). CONCLUSIONS: Based on the results of the present study, it appears that a nutritional supplement derived from vegetable extracts rich in nitrate improves athletic performance in recreationally active young males.
However, the precise mechanism is not clear and future studies are needed. Acknowledgment: This research has been co‐financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH – CREATE – INNOVATE (project code:T2EDK-00843).
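The paired-sample t-test used above can be computed directly from within-subject differences; the per-subject times below are invented (only scaled so the group means approximate the reported 16.1 and 13.5 min), since individual data are not given in the abstract:

```python
import numpy as np

# Invented per-subject times to exhaustion (min), n = 12, paired design
nitrate = np.array([16.5, 14.2, 18.9, 13.1, 17.0, 15.8,
                    19.4, 12.7, 16.2, 14.9, 18.1, 15.4])
placebo = np.array([13.9, 12.5, 15.8, 11.2, 14.1, 13.6,
                    16.0, 11.0, 13.5, 12.8, 15.2, 13.1])

d = nitrate - placebo                                  # within-subject differences
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))  # paired-sample t statistic
# significant at alpha = 0.05 (two-sided) if |t_stat| > 2.201, the t critical value for df = 11
```

Pairing each subject with themselves removes between-subject variability, which is why the crossover design can detect a difference with only 12 participants.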

Keywords: sports performance, ergogenic supplements, nitrate, extract fractions

Procedia PDF Downloads 48
815 Variability of the X-Ray Sun during Descending Period of Solar Cycle 23

Authors: Zavkiddin Mirtoshev, Mirabbos Mirkamalov

Abstract:

We have analyzed time series of full-disk integrated soft X-ray (SXR) and hard X-ray (HXR) emission from the solar corona from 2004 January 1 to 2009 December 31, covering the descending phase of solar cycle 23. We employed the daily X-ray index (DXI) derived from X-ray observations of the Solar X-ray Spectrometer (SOXS) mission in four energy bands: 4-5.5 and 5.5-7.5 keV (SXR), and 15-20 and 20-25 keV (HXR). Application of the Lomb-Scargle periodogram technique to the DXI time series observed by the silicon detector in these energy bands reveals several short and intermediate periodicities of the X-ray corona. The DXI explicitly shows periods of 13.6 days, 26.7 days, 128.5 days, 151 days, 180 days, 220 days, 270 days, 1.24 years and 1.54 years in the SXR as well as the HXR energy bands. Although all periods are above the 70% confidence level in all energy bands, they show stronger power in HXR emission than in SXR emission. These periods are distinctly clear in three bands but not unambiguously clear in the 5.5-7.5 keV band. This might be due to the presence of Fe and Fe/Ni line features, which frequently vary with small-scale flares such as micro-flares. The regular 27-day rotation and the 13.5-day period of sunspots on the invisible side of the Sun are found to be stronger in the HXR band than in the SXR band. The Rieger flare-activity periods (150 and 180 days) and the near-Rieger 220-day period are very strong in HXR emission, which is very much expected. On the other hand, our study reveals a strong 270-day periodicity in SXR emission, which may be connected with the tachocline, similar to a fundamental rotation period of the Sun. The 1.24-year and 1.54-year periodicities found in the present work are well observable in both the SXR and HXR channels.
These long-term periodicities must also be connected with the tachocline and should be regarded as a consequence of variation in rotational modulation over long time scales. The 1.24-year and 1.54-year periods are also considered significant for the formation and evolution of life on Earth, and therefore have astrobiological importance. We gratefully acknowledge support by the Indian Centre for Space Science and Technology Education in Asia and the Pacific (CSSTEAP; the Centre is affiliated to the United Nations) and the Physical Research Laboratory (PRL), Ahmedabad, India. This work was done under the supervision of Prof. Rajmal Jain and comprises material from the pilot project and the research part of the M.Tech program carried out during the Space and Atmospheric Science Course.
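The Lomb-Scargle periodogram handles the unevenly sampled time series typical of such observations. Below is a minimal NumPy implementation of the classic formula, recovering an invented 27-day rotation signal from irregularly sampled synthetic data (the SOXS DXI data themselves are not reproduced here):

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    """Classic Lomb-Scargle periodogram for unevenly sampled data."""
    y = y - y.mean()
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2 * np.pi * f
        tau = np.arctan2(np.sum(np.sin(2 * w * t)), np.sum(np.cos(2 * w * t))) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        power[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power

rng = np.random.default_rng(7)
t = np.sort(rng.uniform(0, 2000, 900))           # ~6 years of irregular daily sampling
y = np.sin(2 * np.pi * t / 27.0) + 0.5 * rng.normal(size=len(t))  # 27-day signal + noise
periods = np.linspace(5, 300, 2000)              # trial periods in days
power = lomb_scargle(t, y, 1.0 / periods)
best_period = periods[np.argmax(power)]          # peak should fall near 27 days
```

Production analyses would normally use a library implementation (e.g. `scipy.signal.lombscargle` or astropy's `LombScargle`), which also provide normalization options and false-alarm probabilities for the confidence levels quoted above.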

Keywords: corona, flares, solar activity, X-ray emission

Procedia PDF Downloads 327
814 Advancing the Analysis of Physical Activity Behaviour in Diverse, Rapidly Evolving Populations: Using Unsupervised Machine Learning to Segment and Cluster Accelerometer Data

Authors: Christopher Thornton, Niina Kolehmainen, Kianoush Nazarpour

Abstract:

Background: Accelerometers are widely used to measure physical activity behavior, including in children. The traditional method for processing acceleration data uses cut points, relying on calibration studies that relate the quantity of acceleration to energy expenditure. As these relationships do not generalise across diverse populations, they must be parametrised for each subpopulation, including different age groups, which is costly and makes studies across diverse populations difficult. A data-driven approach that allows physical activity intensity states to emerge from the data under study without relying on parameters derived from external populations offers a new perspective on this problem and potentially improved results. We evaluated the data-driven approach in a diverse population with a range of rapidly evolving physical and mental capabilities, namely very young children (9-38 months old), where this new approach may be particularly appropriate. Methods: We applied an unsupervised machine learning approach (a hidden semi-Markov model - HSMM) to segment and cluster the accelerometer data recorded from 275 children with a diverse range of physical and cognitive abilities. The HSMM was configured to identify a maximum of six physical activity intensity states and the output of the model was the time spent by each child in each of the states. For comparison, we also processed the accelerometer data using published cut points with available thresholds for the population. This provided us with time estimates for each child’s sedentary (SED), light physical activity (LPA), and moderate-to-vigorous physical activity (MVPA). Data on the children’s physical and cognitive abilities were collected using the Paediatric Evaluation of Disability Inventory (PEDI-CAT). 
Results: The HSMM identified two inactive states (INS, comparable to SED), two lightly active long-duration states (LAS, comparable to LPA), and two short-duration high-intensity states (HIS, comparable to MVPA). Overall, the children spent on average 237/392 minutes per day in INS/SED, 211/129 minutes per day in LAS/LPA, and 178/168 minutes in HIS/MVPA. We found that INS overlapped with 53% of SED, LAS overlapped with 37% of LPA, and HIS overlapped with 60% of MVPA. We also looked at the correlation between the time spent by a child in either HIS or MVPA and their physical and cognitive abilities. We found that HIS was more strongly correlated with physical mobility (R² = 0.50 for HIS vs. 0.28 for MVPA), cognitive ability (R² = 0.31 vs. 0.15), and age (R² = 0.15 vs. 0.09), indicating increased sensitivity to key attributes associated with a child's mobility. Conclusion: An unsupervised machine learning technique can segment and cluster accelerometer data according to the intensity of movement at a given time. It provides a potentially more sensitive, appropriate, and cost-effective approach to analysing physical activity behavior in diverse populations compared to the current cut-points approach. This, in turn, supports research that is more inclusive across diverse populations.
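As a much-simplified illustration of letting intensity states emerge from the data rather than from external cut points, the sketch below clusters synthetic acceleration magnitudes with 1-D k-means. This is an assumption-laden stand-in: the study's HSMM additionally models state durations and temporal structure, which k-means ignores, and the three clusters and their values here are invented:

```python
import numpy as np

def kmeans_1d(x, k, iters=50, seed=0):
    """Tiny 1-D k-means: cluster samples into k intensity levels."""
    rng = np.random.default_rng(seed)
    centers = np.sort(rng.choice(x, size=k, replace=False))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):                 # keep old center if a cluster empties
                centers[j] = x[labels == j].mean()
    order = np.argsort(centers)                     # relabel states by increasing intensity
    centers = centers[order]
    labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
    return centers, labels

rng = np.random.default_rng(3)
magnitude = np.concatenate([                        # invented acceleration magnitudes (g)
    rng.normal(0.05, 0.02, 400),                    # inactive epochs
    rng.normal(0.50, 0.10, 300),                    # lightly active epochs
    rng.normal(1.50, 0.20, 100),                    # high-intensity epochs
])
centers, labels = kmeans_1d(magnitude, 3)
epochs_per_state = np.bincount(labels, minlength=3) # time spent in each emergent state
```

Because the state boundaries come from the recorded data themselves, no population-specific calibration thresholds are needed, which is the core advantage claimed for the data-driven approach.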

Keywords: physical activity, machine learning, under 5s, disability, accelerometer

Procedia PDF Downloads 186
813 Demographic Shrinkage and Reshaping Regional Policy of Lithuania in Economic Geographic Context

Authors: Eduardas Spiriajevas

Abstract:

Since the end of the 20th century, when Lithuania regained its independence, a process of demographic shrinkage has been underway. Today it affects the efficiency of regional development policy and the geographic scope of value added created in the regions. The demographic structure of human resources is reflected in the regions and their economic geographic environment. The reshaping of the economy and state reforms restructuring economic branches such as agriculture and industry have increased the economic significance of the services sector. These processes influence the competitiveness of the labor market and its demographic characteristics. The consequences are visible in the structure of human migration, which has driven the demographic ageing of human resources in the regions, especially peripheral ones. These phenomena induce the demographic shrinkage of society and shape its economic geographic characteristics, with consequences for regional development actions and regional policy. Internal and external migration has produced numerous regional economic disparities, influenced the territorial density and concentration of the population, and created spatial unevenness even in such a small, geographically compact country as Lithuania. The territorial reshaping of the distribution of population creates new regions and economic environments that do not correspond to the main principles of regional policy or to its capacity to create well-being and promote attractiveness for economic development. These are new challenges for national regional policy, and they should be researched systematically, taking into consideration the analytical approaches of regional economics in the context of economic geographic research methods.
A comparative territorial analysis according to the administrative division of Lithuania, combining a retrospective approach with the method of location quotients, gives results of an economic geographic character, with cartographic representations using the spatial analysis tools of Geographic Information Systems. This set of research methods provides new, spatially evidenced results that must be taken into consideration when reshaping national regional policy in an economic geographic context. Due to demographic shrinkage and the increasing differentiation of economic development within the regions, an economic geographic dimension is indispensable. In order to sustain territorially balanced economic development, the roles of regional centers (towns) need to be strengthened and empowered with new economic functions to revitalize peripheral regions and to increase their economic competitiveness and social capacities on a national scale.
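The location quotient mentioned above compares a sector's share of regional employment with its share of national employment; values above 1 indicate regional specialisation. A minimal sketch with invented figures:

```python
# Location quotient: a region's employment share in a sector divided by the
# national share; LQ > 1 signals regional specialisation. All figures are invented.
def location_quotient(region_sector, region_total, nation_sector, nation_total):
    return (region_sector / region_total) / (nation_sector / nation_total)

# Hypothetical example: agriculture in a peripheral region
lq_agriculture = location_quotient(12_000, 60_000, 150_000, 1_400_000)
```

Here 20% of the hypothetical region's employment is agricultural against roughly 10.7% nationally, giving an LQ of about 1.87, i.e. marked agricultural specialisation of the kind that identifies peripheral regions in the analysis.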

Keywords: demographic shrinkage, economic geography, Lithuania, regions

Procedia PDF Downloads 142
812 A Sustainability Benchmarking Framework Based on the Life Cycle Sustainability Assessment: The Case of the Italian Ceramic District

Authors: A. M. Ferrari, L. Volpi, M. Pini, C. Siligardi, F. E. Garcia Muina, D. Settembre Blundo

Abstract:

A long tradition of ceramic manufacturing since the 18th century, primarily due to the availability of raw materials and an efficient transport system, led to the birth and development of the Italian ceramic tile district, which nowadays represents a reference point for this sector even at the global level. This economic growth has been coupled with attention to environmental sustainability issues through various initiatives undertaken over the years at the level of the production sector, such as certification activities and sustainability policies. Starting from an evaluation of sustainability in all its aspects, the present work aims to develop a benchmark that helps both producers and consumers. In the present study, through the Life Cycle Sustainability Assessment (LCSA) framework, sustainability has been assessed in all its dimensions: environmental with Life Cycle Assessment (LCA), economic with Life Cycle Costing (LCC), and social with Social Life Cycle Assessment (S-LCA). The annual district production of stoneware tiles in the 2016 reference year has been taken as the reference flow for all three assessments, and the system boundaries cover the entire life cycle of the tiles, except for the LCC, for which only production costs have been considered so far. In addition, a preliminary method for the evaluation of local and indoor emissions has been introduced in order to assess the impact of atmospheric emissions on both workers and people living in the area surrounding the factories. The Life Cycle Assessment results, obtained with a modified IMPACT 2002+ assessment method, highlight that the manufacturing process is responsible for the main impact, especially because of atmospheric emissions at a local scale, followed by distribution to end users, installation, and ordinary maintenance of the tiles.
With regard to the economic evaluation, both internal and external costs have been considered. For the LCC, primary data from the analysis of the financial statements of Italian ceramic companies show that the largest cost items are expenses for goods and services and the cost of human resources. The analysis of externalities with the EPS 2015dx method attributes the main damages to the distribution and installation of the tiles. The social dimension has been investigated with a preliminary approach using the Social Hotspots Database, and the results indicate that the most affected damage categories are health and safety, and labor rights and decent work. This study shows the potential of the LCSA framework applied to an industrial sector; in particular, it can be a useful tool for building a comprehensive sustainability benchmark for the ceramic industry, and it can help companies actively integrate sustainability principles into their business models.

Keywords: benchmarking, Italian ceramic industry, life cycle sustainability assessment, porcelain stoneware tiles

Procedia PDF Downloads 104
811 Continuous Glucose Monitoring Systems and the Improvement in Hypoglycemic Awareness Post-Islet Transplantation: A Single-Centre Cohort Study

Authors: Clare Flood, Shareen Forbes

Abstract:

Background: Type 1 diabetes mellitus (T1DM) is an autoimmune disorder affecting >400,000 people in the UK alone, with the global prevalence expected to double in the next decade. Islet transplant offers a minimally-invasive procedure with very low morbidity and almost no mortality, and is now as effective as whole pancreas transplant. The procedure was introduced to the UK in 2011 for patients with the most severe type 1 diabetes mellitus (T1DM) – those with unstable blood glucose, frequently occurring episodes of severe hypoglycemia and impaired awareness of hypoglycemia (IAH). Objectives: To evaluate the effectiveness of islet transplantation in improving glycemic control, reducing the burden of hypoglycemia and improving awareness of hypoglycemia through a single-centre cohort study at the Royal Infirmary of Edinburgh. Glycemic control and degree of hypoglycemic awareness will be determined and monitored pre- and post-transplantation to determine effectiveness of the procedure. Methods: A retrospective analysis of data collected over three years from the 16 patients who have undergone islet transplantation in Scotland. Glycated haemoglobin (HbA1c) was measured and continuous glucose monitoring systems (CGMS) were utilised to assess glycemic control, while Gold and Clarke score questionnaires tested IAH. Results: All patients had improved glycemic control following transplant, with optimal control seen visually at 3 months post-transplant. Glycemic control significantly improved, as illustrated by percentage time in hypoglycemia in the months following transplant (p=0.0211) and HbA1c (p=0.0426). Improved Clarke (p=0.0034) and Gold (p=0.0001) scores indicate improved glycemic awareness following transplant. 
Conclusion: While the small sample of islet transplant recipients at the Royal Infirmary of Edinburgh prevents definitive conclusions from being drawn, our retrospective, single-centre cohort study of 16 patients indicates that islet transplantation can improve glycemic control and reduce the burden of hypoglycemia and IAH post-transplant. These data can be combined with similar trials at other centres to increase statistical power, but from the research in Edinburgh it can be suggested that the minimally invasive procedure of islet transplantation offers selected patients with extremely unstable T1DM the opportunity to regain control of their condition and improve their quality of life.

Keywords: diabetes, islet, transplant, CGMS

Procedia PDF Downloads 254
810 Improving Ghana's Oil Industry Through Integrated Operations

Authors: Esther Simpson, Evans Addo Tetteh

Abstract:

One of the most important sectors in Ghana's economy is the oil and gas sector. Effective supply chain management is required to ensure the timely delivery of petroleum products to end users, given the rise in nationwide demand. At the same time, freight forwarding plays a crucial role in facilitating intra- and inter-country trade, particularly the movement of oil products. Nevertheless, there has not been enough scientific study of how marketing, supply chain management, and freight forwarding are integrated in the oil business. By highlighting possible areas for development in the supply chain management of petroleum products, this article seeks to close this gap. The study was predominantly qualitative and featured semi-structured interviews with influential figures in the oil and gas sector, such as marketers, distributors, freight forwarders, and regulatory organizations. The purpose of the interviews was to identify the difficulties in, and possibilities for, enhancing the management of the petroleum products supply chain. Thematic analysis was used to examine the data and identify the patterns and themes that arose. The findings revealed that the oil sector faces a number of supply chain management issues: inadequate infrastructure, insufficient storage facilities, a lack of cooperation among parties, and an inadequate regulatory framework. Furthermore, the study indicated significant prospects for enhancing petroleum product supply chain management, such as the integration of more advanced digital technologies, the formation of strategic alliances, and the adoption of sustainable practices. The study's conclusions have far-reaching ramifications for the oil and gas sector, freight forwarding, and Ghana's economy as a whole.
Marketing, supply chain management, and freight forwarding have strong prospects of being integrated to improve the efficiency of the petroleum product supply chain, resulting in considerable cost savings for the industry. Furthermore, the use of sustainable practices will improve the industry's sustainability and lessen the environmental effect of the petroleum product supply chain. Based on the findings, we propose that stakeholders in Ghana’s oil and gas sector work together and collaborate to enhance petroleum supply chain management. This collaboration should include the use of digital technologies, the formation of strategic alliances, and the implementation of sustainable practices. Moreover, we urge governments to establish suitable rules to guarantee the efficient and sustainable management of petroleum product supply chains. In conclusion, the integration of marketing, supply chain management, and freight forwarding in the oil business offers a tremendous opportunity for enhancing petroleum product supply chain management. The study's conclusions have far-reaching ramifications for the sector, freight forwarding, and the economy as a whole. Using sustainable practices, integrating digital technology, and forming strategic alliances will improve the efficiency and sustainability of the petroleum product supply chain. We expect that this conference paper will encourage further study and collaboration among oil and gas sector stakeholders to improve petroleum supply chain management.

Keywords: collaboration, logistics, sustainability, supply chain management

Procedia PDF Downloads 63
809 Functional Analysis of Variants Implicated in Hearing Loss in a Cohort from Argentina: From Molecular Diagnosis to Pre-Clinical Research

Authors: Paula I. Buonfiglio, Carlos David Bruque, Lucia Salatino, Vanesa Lotersztein, Sebastián Menazzi, Paola Plazas, Ana Belén Elgoyhen, Viviana Dalamón

Abstract:

Hearing loss (HL) is the most prevalent sensorineural disorder, affecting about 10% of the global population, with more than half of cases due to genetic causes. About 1 in 500-1000 newborns presents congenital HL. Most patients are non-syndromic, with an autosomal recessive mode of inheritance. To date, more than 100 genes have been related to HL. Whole-exome sequencing (WES) has therefore become a cost-effective approach for molecular diagnosis. Nevertheless, new challenges arise from the detection of novel variants, in particular missense changes, whose genotype-to-phenotype correlations are not always straightforward. In this work, we aimed to identify the genetic causes of HL in isolated and familial cases by designing a multistep approach to analyze target genes related to hearing impairment. Moreover, we performed in silico and in vivo analyses in the zebrafish model in order to further study the effect of some of the novel variants identified on hair cell function. A total of 650 patients were studied by Sanger sequencing and gap-PCR in the GJB2 and GJB6 genes, respectively, diagnosing 15.5% of sporadic cases and 36% of familial ones. Overall, 50 different sequence variants were detected. Fifty of the undiagnosed patients with moderate HL were tested for deletions in the STRC gene by the multiplex ligation-dependent probe amplification technique (MLPA), leading to 6% of diagnoses. After this initial screening, 50 families were selected to be analyzed by WES, achieving diagnosis in 44% of them. Half of the identified variants were novel. A missense variant in the MYO6 gene detected in a family with postlingual HL was selected for further analysis. Protein modeling with the AlphaFold2 software supported its predicted pathogenic effect. In order to functionally validate this novel variant, a knockdown phenotype rescue assay in zebrafish was carried out. 
Injection of wild-type MYO6 mRNA into embryos rescued the phenotype, whereas the mutant MYO6 mRNA (carrying the c.2782C>A variant) had no effect. These results strongly suggest a deleterious effect of this variant on the mobility of stereocilia in zebrafish neuromasts, and hence on the auditory system. In the present work, we demonstrated that our algorithm is suitable for a sequential multigenic approach to HL in our cohort. These results highlight the importance of a combined strategy for identifying candidate variants, as well as of in silico and in vivo studies to analyze and prove their pathogenicity and accomplish a better understanding of the mechanisms underlying the physiopathology of hearing impairment.

Keywords: diagnosis, genetics, hearing loss, in silico analysis, in vivo analysis, WES, zebrafish

Procedia PDF Downloads 71
808 Application of 2D Electrical Resistivity Tomographic Imaging Technique to Study Climate Induced Landslide and Slope Stability through the Analysis of Factor of Safety: A Case Study in Ooty Area, Tamil Nadu, India

Authors: S. Maniruzzaman, N. Ramanujam, Qazi Akhter Rasool, Swapan Kumar Biswas, P. Prasad, Chandrakanta Ojha

Abstract:

Landslides are among the major natural disasters in South Asian countries. By applying 2D electrical resistivity tomographic (ERT) imaging, the geometry, thickness, and depth of a landslide failure zone can be estimated. Landslides are a pertinent problem in the Nilgiris plateau, next only to the Himalaya. The Nilgiris range consists of hard Archean metamorphic rocks. Intense weathering during Precambrian time deformed the rocks up to a depth of 45 m. Landslides are dominant in the southern and eastern parts of the plateau, whose drainage basins are comparatively smaller than those in the north: their low drainage density and coarse texture permit more infiltration of rainwater, whereas the northern part of the plateau, with its high drainage density and fine texture, has less infiltration than runoff and is less susceptible to landslides. To obtain comprehensive information about the landslide zone, a 2D electrical resistivity tomographic imaging study with a CRM 500 resistivity meter was carried out in the Coonoor–Mettupalayam sector of the Nilgiris plateau. To calculate the factor of safety, the infinite slope model of Brunsden and Prior is used. The factor of safety (FS) can be expressed as the ratio of resisting forces to disturbing forces. If FS < 1, disturbing forces are larger than resisting forces and failure may occur. The geotechnical parameters of soil samples are calculated on the basis of the apparent resistivity values measured for litho-units from the 2D ERT image of the landslide zone. Relationships between friction angle and various soil properties are established by simple regression analysis from the apparent resistivity data. An increase of water content in the slide zone reduces the effectiveness of the shearing resistance and increases the sliding movement. Time-lapse resistivity changes related to slope failure are tracked through a geophysical factor of safety, which depends on resistivity and site topography. 
The ERT technique infers soil properties at variable depths over wide areas, overcoming the point-information limitation of rain gauges and porous probes. Monitoring slope stability through ERT is non-invasive and low cost, as it does not alter the soil structure. In landslide-prone areas, an automated electrical resistivity tomographic imaging system with permanent electrode networks should be installed to monitor the hydraulic precursors of landslide movement.
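The abstract cites the infinite slope model of Brunsden and Prior but does not print the equation. As an illustration only, a minimal sketch of the standard infinite-slope factor of safety; the symbols are the usual geotechnical ones and the numerical values in the usage note are invented for the example, not taken from the study:

```python
import math

def factor_of_safety(c, gamma, z, beta_deg, u, phi_deg):
    """Standard infinite-slope factor of safety (illustrative form).

    c        effective cohesion of the soil (kPa)
    gamma    soil unit weight (kN/m^3)
    z        depth of the failure plane below the surface (m)
    beta_deg slope angle (degrees)
    u        pore-water pressure on the failure plane (kPa)
    phi_deg  effective friction angle (degrees)

    FS < 1 means disturbing forces exceed resisting forces and
    failure may occur.
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    # Resisting forces: cohesion plus frictional strength reduced by pore pressure
    resisting = c + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    # Disturbing forces: downslope component of the soil weight
    disturbing = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / disturbing
```

For example, `factor_of_safety(10, 18, 3, 30, 0, 30)` for a dry slope exceeds `factor_of_safety(10, 18, 3, 30, 20, 30)` for the same slope with pore pressure: rising pore-water pressure drives FS toward the failure threshold of 1, matching the abstract's observation that increased water content in the slide zone promotes sliding.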

Keywords: 2D ERT, landslide, safety factor, slope stability

Procedia PDF Downloads 291
807 Preparation and Characterization of Poly(L-Lactic Acid)/Oligo(D-Lactic Acid) Grafted Cellulose Composites

Authors: Md. Hafezur Rahaman, Mohd. Maniruzzaman, Md. Shadiqul Islam, Md. Masud Rana

Abstract:

With the growth of environmental awareness, extensive research is under way to develop next-generation materials based on sustainability, eco-competence, and green chemistry to preserve and protect the environment. Due to its biodegradability and biocompatibility, poly(L-lactic acid) (PLLA) is of great interest for ecological and medical applications. Cellulose is also one of the most abundant biodegradable, renewable polymers found in nature. It has several advantages, such as low cost, high mechanical strength, and biodegradability. Recently, a great deal of attention has been paid to the scientific and technological development of α-cellulose-based composite materials. PLLA could be used for grafting onto cellulose to improve compatibility prior to composite preparation. However, it is difficult to form a bond between a weakly hydrophilic polymer such as PLLA and α-cellulose. Dimers and oligomers, owing to their low molecular weight, can easily be grafted onto the surface of cellulose by ring-opening or polycondensation methods. In this research, α-cellulose extracted from jute fiber is grafted with oligo(D-lactic acid) (ODLA) via a graft polycondensation reaction in the presence of para-toluene sulphonic acid and potassium persulphate in toluene at 130°C for 9 hours under 380 mmHg. The ODLA is synthesized by ring-opening polymerization of D-lactides in the presence of stannous octoate (0.03 wt% of lactide) and D-lactic acid at 140°C for 10 hours. Composites of PLLA with ODLA-grafted α-cellulose are prepared by the solution mixing and film casting method. Grafting was confirmed by FTIR spectroscopy and SEM analysis. A strong carbonyl peak at 1728 cm⁻¹ in the FTIR spectrum of ODLA-grafted α-cellulose, absent in pristine α-cellulose, confirms the grafting of ODLA onto α-cellulose. 
It is also observed from SEM photographs that there are some white areas (spots) on ODLA-grafted α-cellulose compared with pristine α-cellulose, which may indicate the grafting of ODLA and is consistent with the FTIR results. The composites are analyzed by FTIR, SEM, WAXD, and thermogravimetric analysis. Most of the characteristic FTIR absorption peaks of the composites shifted to higher wavenumbers with increasing peak area, which may confirm that PLLA and the grafted cellulose have better compatibility in the composites via intermolecular hydrogen bonding; this supports previously published results. The distribution of grafted α-cellulose in the composites is uniform, as observed by SEM analysis. WAXD studies show that only homo-crystalline structures of PLLA are present in the composites. The thermal stability of the composites is enhanced with increasing percentage of ODLA-grafted α-cellulose; as a consequence, the resultant composites are more resistant to thermal degradation. The effects of grafted chain length and the biodegradability of the composites will be studied in further research.

Keywords: α-cellulose, composite, graft polycondensation, oligo(D-lactic acid), poly(L-lactic acid)

Procedia PDF Downloads 98
806 A Review of Atomization Mechanisms Used for Spray Flash Evaporation: Their Effectiveness and Proposal of Rotary Bell Atomizer for Flashing Application

Authors: Murad A. Channa, Mehdi Khiadani, Yasir Al-Abdeli

Abstract:

Considering the severity of water scarcity around the world, and its widening at an alarming rate, practical improvements in desalination techniques need to be engineered at the earliest opportunity. Atomization is a major aspect of the flashing phenomenon, yet it has received little attention until now. There is a need to test efficient ways of atomization for the flashing process. Flash evaporation, together with reverse osmosis, is a commercially mature desalination technique, commonly known as multi-stage flash (MSF). Even though reverse osmosis is widely practical, it is not as economical or sustainable as flash evaporation. However, flash evaporation has its drawbacks as well, such as lower efficiency of water production per unit of power and time consumed. Flash evaporation is simply the instant boiling of a subcooled liquid introduced as droplets into a chamber held at well-maintained negative (sub-atmospheric) pressure. This negative pressure lowers the boiling point, so the temperature of the liquid droplets rises far above it; the resulting release of latent heat turns the liquid droplets into vapor, which is collected and condensed back into an impurity-free liquid in a condenser. Atomization is the main difference between pool and spray flash evaporation. Atomization is the heart of the flash evaporation process, as it increases the evaporating surface area per drop atomized. Atomization can be categorized into many levels depending on drop size, which in turn becomes crucial for increasing the droplet density (drop count) per given flow rate. This review comprehensively summarizes selected results relating to the methods of atomization and their effectiveness on the evaporation rate, from earlier works to date. 
In addition, the reviewers propose using centrifugal atomization for the flashing application, which brings several advantages, namely ultra-fine droplets, uniform droplet density, and a swirling spray geometry with kinetically more energetic sprays in flight. Finally, several challenges of using a rotary bell atomizer (RBA) and RBA sprays inside the chamber have been identified, which will be explored in detail. A schematic of rotary bell atomizer integration with the chamber has been designed. This powerful centrifugal atomization has the potential to increase potable water production in commercial multi-stage flash evaporators, where it would be particularly advantageous.

Keywords: atomization, desalination, flash evaporation, rotary bell atomizer

Procedia PDF Downloads 59
805 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior

Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli

Abstract:

The refurbishment of public buildings is one of the key factors in the energy efficiency policy of European states. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice in high-performance, low- and zero-carbon design and for becoming exemplar cases within the community. In this context, this paper discusses the critical issues of the energy refurbishment of a university building in the heating-dominated climate of South Italy. More in detail, the importance of using validated models is examined exhaustively through an analysis of uncertainties due to modelling assumptions, mainly referring to the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Indeed, today most commercial tools provide designers with a library of possible schedules with which thermal zones can be described. Very often, users do not take care to differentiate thermal zones or to modify and adapt the predefined profiles, and design results are affected, positively or negatively, without any warning. Data such as occupancy schedules, internal loads, and the interaction between people and windows or plant systems represent some of the largest sources of variability during energy modelling and in understanding calibration results. This is mainly due to the adoption of discrete, standardized, conventional schedules, with important consequences for the prediction of energy consumption. The problem is certainly difficult to examine and to solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This is a typical uncertainty for a case study such as the one presented, where there is no regulation system for the HVAC plant, so the occupants cannot interact with it. 
More in detail, starting from the adopted schedules, created from questionnaire responses, which allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: the reference building is compared with these scenarios in terms of percentage difference in the projected total electric energy need and natural gas request. Then the different consumption entries are analyzed and, for the most interesting cases, the calibration indexes are compared. Moreover, the same simulations are run for the optimal refurbishment solution, and the variation in the predicted energy saving and global cost reduction is highlighted. This parametric study aims to underline the effect of the modelling assumptions made when describing thermal zones on the evaluation of performance indexes.
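The comparison metrics involved can be sketched briefly. The percentage difference against the reference building is straightforward; the calibration indexes are not named in the abstract, so the NMBE and CV(RMSE) commonly used in building energy model calibration (e.g., in ASHRAE Guideline 14) are assumed here as an illustration, not as the authors' stated formulas:

```python
import math

def pct_difference(reference, scenario):
    """Percentage difference of a scenario's annual demand vs. the reference."""
    return 100.0 * (scenario - reference) / reference

def nmbe(measured, simulated):
    """Normalized mean bias error, %: signed average deviation of the
    simulated series from the measured one (assumed calibration index)."""
    n = len(measured)
    m_bar = sum(measured) / n
    return 100.0 * sum(m - s for m, s in zip(measured, simulated)) / (n * m_bar)

def cv_rmse(measured, simulated):
    """Coefficient of variation of the RMSE, %: unsigned scatter of the
    simulated series around the measured one (assumed calibration index)."""
    n = len(measured)
    m_bar = sum(measured) / n
    rmse = math.sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated)) / n)
    return 100.0 * rmse / m_bar
```

A scenario with compensating monthly errors can show a near-zero NMBE but a large CV(RMSE), which is why calibration studies typically report both.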

Keywords: energy simulation, modelling calibration, occupant behavior, university building

Procedia PDF Downloads 122
804 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array

Authors: Yanping Liao, Zenan Wu, Ruigang Zhao

Abstract:

Frequency diverse array (FDA) beamforming is a technology developed in recent years, and its antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is required to have strong concentration, high resolution, and a low sidelobe level in order to form point-to-point interference at the intended location. In order to eliminate the angle-distance coupling of the traditional FDA and to concentrate the beam energy further, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset, improving on the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of the array can form a dot-shaped beam with more concentrated energy, with improved resolution and sidelobe level. However, in the traditional adaptive beamforming algorithm, the covariance matrix of the signal is estimated from finite-time snapshot data. When the number of snapshots is limited, the algorithm suffers an underestimation problem: the estimation error of the covariance matrix causes beam distortion, so the output pattern cannot form a dot-shaped beam, and main-lobe deviation and high sidelobe levels also appear. Aiming at these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then eigenvalue decomposition of the covariance matrix is performed to obtain the diagonal matrix composed of the interference-subspace and noise-subspace eigenvalues. 
Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reducing the divergence of the small eigenvalues in the noise subspace and improving the performance of beamforming. The theoretical analysis and simulation results show that the proposed algorithm can make the multi-carrier FDA form a dot-shaped beam with limited snapshots, reduce the sidelobe level, improve the robustness of beamforming, and achieve better overall performance.
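The steps above can be sketched as follows. This is a minimal illustration only: it assumes a plain MVDR-style weight with a single unit-gain constraint (the simplest LCMV case), a generic array covariance rather than the paper's multi-carrier FDA model, and a hypothetical correction index `alpha`, since the paper's exact correction rule is not given in the abstract:

```python
import numpy as np

def corrected_mvdr_weights(snapshots, steering, n_interf, alpha=0.5):
    """Eigenvalue-corrected adaptive beamforming sketch.

    snapshots : (N, K) complex matrix of K array snapshots (K limited).
    steering  : (N,) steering vector of the desired direction.
    n_interf  : assumed dimension of the signal-plus-interference subspace.
    alpha     : hypothetical correction index in (0, 1]; smaller alpha
                compresses the spread of the noise eigenvalues more.
    """
    # Sample covariance from the limited snapshots
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    # Eigendecomposition; eigh returns ascending eigenvalues, so reverse
    eigvals, V = np.linalg.eigh(R)
    lam = eigvals[::-1].copy()
    V = V[:, ::-1]
    # Exponentially correct the small (noise-subspace) eigenvalues to
    # reduce their divergence around their mean
    noise = lam[n_interf:]
    mean_noise = noise.mean()
    lam[n_interf:] = mean_noise * (noise / mean_noise) ** alpha
    # Rebuild the corrected covariance and form MVDR weights with
    # the unit-gain constraint w^H a = 1
    R_corr = (V * lam) @ V.conj().T
    Rinv_a = np.linalg.solve(R_corr, steering)
    return Rinv_a / (steering.conj() @ Rinv_a)
```

Because the corrected covariance stays Hermitian positive definite, the weights always satisfy the distortionless constraint toward the steering direction, while the flattened noise eigenvalues limit the beam distortion caused by snapshot-starved covariance estimates.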

Keywords: adaptive beamforming, correction index, limited snapshot, multi-carrier frequency diverse array, robust

Procedia PDF Downloads 108
803 Temporal Profile of Exercise-Induced Changes in Plasma Brain-Derived Neurotrophic Factor Levels of Schizophrenic Individuals

Authors: Caroline Lavratti, Pedro Dal Lago, Gustavo Reinaldo, Gilson Dorneles, Andreia Bard, Laira Fuhr, Daniela Pochmann, Alessandra Peres, Luciane Wagner, Viviane Elsner

Abstract:

Approximately 1% of the world's population is affected by schizophrenia (SZ), a chronic and debilitating neurodevelopmental disorder. Among possible factors, reduced levels of brain-derived neurotrophic factor (BDNF) have been recognized in the physiopathogenesis and course of SZ. In this context, peripheral BDNF levels have been used as a biomarker in several clinical studies, since this neurotrophin is able to cross the blood-brain barrier in a bi-directional manner and seems to correlate strongly with central nervous system fluid levels. Patients with SZ usually adopt a sedentary lifestyle, which has been partly associated with increased incidence rates of obesity, metabolic syndrome, type 2 diabetes, and coronary heart disease. On the other hand, exercise, a non-invasive and low-cost intervention, has been considered an important additional therapeutic option for this population, promoting benefits to physical and mental health. To our knowledge, few studies have pointed out that the positive effects of exercise in SZ patients are mediated, at least in part, by enhanced levels of BDNF after training. Moreover, these studies have focused on evaluating the effect of single bouts of exercise or of chronic interventions; data concerning short- and long-term exercise outcomes on BDNF are scarce. Therefore, this study aimed to evaluate the effect of a concurrent exercise protocol (CEP) on plasma BDNF levels of SZ patients at different time-points. Material and Methods: This study was approved by the Research Ethics Committee of the Centro Universitário Metodista do IPA (no 1.243.680/2015). The participants (n=15) were submitted to the CEP for 90 days, 3 times a week, for 60 minutes per session. In order to evaluate the short- and long-term effects of exercise, blood samples were collected before the intervention began and 30, 60, and 90 days afterwards. 
Plasma BDNF levels were determined by ELISA using a Sigma-Aldrich commercial kit (catalog number RAB0026) according to the manufacturer's instructions. Results: A remarkable increase in plasma BDNF levels was observed at 90 days after training compared to baseline (p=0.006) and 30-day (p=0.007) values. Conclusion: Our data agree with several studies showing significant enhancement of BDNF levels in response to different exercise protocols in SZ individuals. We suggest that BDNF upregulation after training in SZ patients acts in a dose-dependent manner, being more pronounced in response to chronic exposure. Acknowledgments: This work was supported by Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul (FAPERGS)/Brazil.

Keywords: exercise, BDNF, schizophrenia, time-points

Procedia PDF Downloads 234
802 Theoretical Framework and Empirical Simulation of Policy Design on Trans-Dimensional Resource Recycling

Authors: Yufeng Wu, Yifan Gu, Bin Li, Wei Wang

Abstract:

The resource recycling process contains a subsystem with interactions across three dimensions: the coupled allocation of primary and secondary resources, the coordination of stakeholder responsibilities in forward and reverse supply chains, and the trans-boundary transfer of hidden resource and environmental responsibilities between regions. Overlap or lack of responsibilities easily appears at the intersection of the three management dimensions, so an overall design of the policy system for recycling resources is urgently needed. From a theoretical perspective, this paper analyzes the distinct resource and environmental externalities in the various dimensions and explores why the effects of trans-dimensional policies are strongly correlated. Taking the example of the copper resources contained in waste electrical and electronic equipment, the paper constructs a reduction-effect accounting model of resource recycling and sets four trans-dimensional policy scenarios: resource tax and environmental tax reform for raw and secondary resources, application of an extended producer responsibility system, promotion of the clean development mechanism, and strict entry barriers for imported wastes. In these ways, the paper simulates the impact of the resource recycling process on resource reduction and on emission reduction of waste water and gas, and constructs a trans-dimensional policy mix scenario by integrating the dominant strategies. The results show that the combined application of policies across dimensions can achieve incentive compatibility, and the trans-dimensional policy mix scenario reaches a better effect. Compared with the baseline scenario, this scenario increases the copper resource reduction effect by 91.06% and improves the emission reduction of waste water and gas eightfold from 2010 to 2030. The paper further analyzes the development orientation of policies in the various dimensions. 
In the resource dimension, the combined application of compulsory, market, and certification methods should be promoted to improve the use ratio of secondary resources. In the supply chain dimension, the resource value, residual functional value, and potential information value contained in waste products should be fully exploited to construct a circular business system. In the regional dimension, China should give full play to its comparative advantages as a manufacturing power to strengthen its voice in resource recycling worldwide.

Keywords: resource recycling, trans-dimension, policy design, incentive compatibility, life cycle

Procedia PDF Downloads 104
801 Critical Mathematics Education and School Education in India: A Study of the National Curriculum Framework 2022 for Foundational Stage

Authors: Eish Sharma

Abstract:

Literature on mathematics education suggests that democratic attitudes can be strengthened through teaching and learning mathematics. Furthermore, connections between critical education and mathematics education are observed in the light of critical pedagogy, locating Critical Mathematics Education (CME) as the theoretical framework. Critical pedagogy applied to mathematics education is identified as one of the key themes subsumed under Critical Mathematics Education. Through the application of critical pedagogy in mathematics, unequal power relations and social injustice can be identified, analyzed, and challenged. The research question is: have educational policies in India viewed the role of critical pedagogy applied to mathematics education (i.e., critical mathematics education) to ensure social justice as an educational aim? The National Curriculum Framework (NCF) 2005 upholds education for democracy and the role of mathematics education in facilitating it. More than this, NCF 2005 rests on a critical pedagogy framework and recommends that critical pedagogy be practiced in all dimensions of school education. NCF 2005 envisages critical pedagogy for the social sciences as well as the sciences, stating that the science curriculum, including mathematics, must be used as an "instrument for achieving social change to reduce the divide based on economic class, gender, caste, religion, and the region". Furthermore, the implementation of NCF 2005 led to a reform of the syllabus and textbooks in school mathematics at the national level, and critical pedagogy was applied to mathematics textbooks at the primary level. This intervention brought ethnomathematics and critical mathematics education into the school curriculum in India for the first time at the national level. 
In October 2022, the Ministry of Education launched the National Curriculum Framework for Foundational Stage (NCF-FS), developed in light of the National Education Policy 2020, for children in the three-to-eight-years age group. I want to find out whether critical-pedagogy-based education and critical-pedagogy-based mathematics education are carried forward in NCF 2022. To find this, an argument analysis of specific sections of the National Curriculum Framework 2022 document needs to be executed. Des Gasper suggests two tables. The first table contains four columns, namely, text component, comments on meanings, possible reformulation of the same text, and identified conclusions and assumptions (both stated and unstated). This table is for understanding the components and meanings of the text and is based on Scriven's model for understanding the components and meanings of words in the text. The second table contains four columns, namely, claim identified, given data, warrant, and stated qualifier/rebuttal. This table describes the structure of the argument (how, and how well, the components fit together) and is called the 'George Table diagram based on the Toulmin-Bunn Model'.

Keywords: critical mathematics education, critical pedagogy, social justice, ethnomathematics

Procedia PDF Downloads 60