Search results for: multiple input multiple output (MIMO)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7868

368 Internationalization of Higher Education in Malaysia: Rationale for Global Citizens

Authors: Irma Wani Othman

Abstract:

The internationalization of higher education in Malaysia focuses mainly on implementing a strategic, comprehensive, and integrated engagement of stakeholders in order to raise the visibility of Malaysia as a hub of academic excellence. The concept of 'global citizenship' is used as a two-pronged strategy of aggressive marketing by universities, comprising (i) the involvement of academic expatriates in stimulating international higher education activities and (ii) an increase in international student enrollment capacity for the enculturation of science and the development of a first-class mentality. In this respect, the aspiration for a transnational social movement through global citizenship status establishes the identity of a borderless university community, regardless of skin colour, and thus rationalizes and liberalizes the universal principles of life and the cultural traditions of a nation. An education system earlier guided by the spirit of nationalism is now progressing under globalization, forming a system of higher education that is relevant and driven by the needs of the time. However, debate arises because the involvement of global citizens is said to threaten university autonomy in determining the direction of academic affairs and the governance of human resources. Stemming from this debate, this study explores the 'global citizenship' experience of academic expatriates and international students in shaping the university's strategic needs and interests, in line with the transition of contemporary higher education. The objectives of this study are to examine the acculturation experience of global citizens within a transnational higher education system and to suggest policy and governance of IHE that refers directly to the experience of the global citizen.
This study offers a detailed understanding of how university communities assess their expatriation experience, providing useful information for learning and transforming education. The findings also open an advanced perspective on the international mobility of human resources and the implications for implementing the policy of internationalization of higher education. The contribution of this study is expected to provide new input and to shift the focus of the literature on the internationalization of education from the income-generating purpose of a university to a greater understanding of the subjective experience of utilizing international human resources, thereby contributing to the transnational character of higher education.

Keywords: internationalization, global citizens, Malaysian higher education, academic expatriate, international students

Procedia PDF Downloads 270
367 Electronic Six-Minute Walk Test (E-6MWT): Less Manpower, Higher Efficiency, and Better Data Management

Authors: C. M. Choi, H. C. Tsang, W. K. Fong, Y. K. Cheng, T. K. Chui, L. Y. Chan, K. W. Lee, C. K. Yuen, P. W. Lau, Y. L. To, K. C. Chow

Abstract:

The six-minute walk test (6MWT) is a sub-maximal exercise test used to assess the aerobic capacity and exercise tolerance of patients with chronic respiratory disease and heart failure. It has been proven to be a reliable and valid tool and is commonly used in clinical settings. The traditional 6MWT is labour-intensive and time-consuming, especially for patients who require assistance with ambulation and oxygen use. When performing the test with these patients, one staff member assists the patient in walking (with or without aids) while another manually records the patient’s oxygen saturation, heart rate, and walking distance at every minute and/or carries the oxygen cylinder at the same time. The physiotherapist then has to document the test results in the bed notes in detail. With the electronic 6MWT (E-6MWT), patients wear a wireless oximeter that transfers data to a tablet PC via Bluetooth. Oxygen saturation, heart rate, and distance are recorded and displayed in real time, with no manual recording needed. The tablet generates a comprehensive report which can be attached directly to the patient’s bed notes for documentation, and the data can also be saved for later patient follow-up. This study was carried out in North District Hospital. Patients who followed commands and required 6MWT assessment were included and assigned to a study or control group. The study group adopted the E-6MWT, while the control group adopted the traditional 6MWT. Manpower and time consumed were recorded, and physiotherapists completed a questionnaire about the use of the E-6MWT. A total of 12 subjects (study=6; control=6) were recruited during November-December 2017. The average number of staff required and the time consumed for the traditional 6MWT were 1.67 and 949.33 seconds, respectively; for the E-6MWT, the figures were 1.00 and 630.00 seconds. Compared to the traditional 6MWT, the E-6MWT required 67.00% less manpower and 50.10% less time.
Physiotherapists (n=7) found the E-6MWT convenient to use (mean=5.14; satisfied to very satisfied), requiring less manpower and time to complete the test (mean=4.71; rather satisfied to satisfied), and offering better data management (mean=5.86; satisfied to very satisfied), and they recommended it for clinical use (mean=5.29; satisfied to very satisfied). The results indicate that the E-6MWT requires less manpower, with higher efficiency and better data management, and is welcomed by frontline clinical staff.

Keywords: electronic, physiotherapy, six-minute walk test, 6MWT

Procedia PDF Downloads 125
366 Experimental Evaluation of Foundation Settlement Mitigations in Liquefiable Soils using Press-in Sheet Piling Technique: 1-g Shake Table Tests

Authors: Md. Kausar Alam, Ramin Motamed

Abstract:

The damaging effects of liquefaction-induced ground movements have been frequently observed in past earthquakes, such as the 2010-2011 Canterbury Earthquake Sequence (CES) in New Zealand and the 2011 Tohoku earthquake in Japan. To reduce the consequences of soil liquefaction at shallow depths, various ground improvement techniques have been utilized in engineering practice, among which this research focuses on experimentally evaluating the press-in sheet piling technique. The press-in technique eliminates the vibration, hammering, and noise pollution associated with dynamic sheet pile installation methods. Unfortunately, there are limited experimental studies of the press-in sheet piling technique for liquefaction mitigation using 1-g shake table tests in which all the controlling mechanisms of liquefaction-induced foundation settlement, including sand ejecta, can be realistically reproduced. In this study, a series of moderate-scale 1-g shake table experiments was conducted at the University of Nevada, Reno, to evaluate the performance of this technique in liquefiable soil layers. First, a 1/5-scale model was developed based on a recent UC San Diego shake table experiment; the scaled model has a relative density of 50% for the top crust, 40% for the intermediate liquefiable layer, and 85% for the bottom dense layer. Second, a shallow foundation was seated atop the unsaturated sandy soil crust. Third, in a series of tests, a sheet pile with variable embedment depth was inserted into the liquefiable soil surrounding the shallow foundation using the press-in technique. The scaled models were subjected to harmonic input motions with amplitude and dominant frequency properly scaled based on the large-scale shake table test. This study assesses the performance of the press-in sheet piling technique in terms of reductions in foundation movements (settlement and tilt) and in generated excess pore water pressures.
In addition, this paper discusses the cost-effectiveness and carbon footprint features of the studied mitigation measures.

Keywords: excess pore water pressure, foundation settlement, press-in sheet pile, soil liquefaction

Procedia PDF Downloads 77
365 Thermodynamic Analyses of Information Dissipation along the Passive Dendritic Trees and Active Action Potential

Authors: Bahar Hazal Yalçınkaya, Bayram Yılmaz, Mustafa Özilgen

Abstract:

Brain information transmission in the neuronal network occurs in the form of electrical signals. Neural networks transmit information between neurons, or between neurons and target cells, by moving charged particles in a voltage field; a fraction of the energy utilized in this process is dissipated via entropy generation. Exergy loss and entropy generation models demonstrate the inefficiencies of communication along the dendritic trees. In this study, neurons of four different animals were analyzed with a one-dimensional cable model with N=6 identical dendritic trees and M=3 orders of symmetrical branching. Each branch bifurcates symmetrically in accordance with the 3/2 power law in an infinitely long cylinder under the usual core conductor assumptions, where membrane potential is conserved at all branching points. In the model, exergy loss and entropy generation rates are calculated for each branch of equivalent cylinders of electrotonic length (L) ranging from 0.1 to 1.5 for four different dendritic branches: the input branch (BI), the sister branch (BS), and two cousin branches (BC-1 and BC-2). Thermodynamic analysis of data from two different cat motoneuron studies shows that in both experiments nearly the same amount of exergy is lost while nearly the same amount of entropy is generated. The guinea pig vagal motoneuron loses twofold more exergy compared to the cat models, and the squid exergy loss and entropy generation are nearly tenfold those of the guinea pig vagal motoneuron model. The analysis shows that the energy dissipated in the dendritic trees is directly proportional to the electrotonic length, exergy loss, and entropy generation. Entropy generation and exergy loss show variability not only between vertebrates and invertebrates but also within the same class.
Concurrently, the single-action-potential Na+ ion load, metabolic energy utilization, and their thermodynamic aspects are evaluated for the squid giant axon and a mammalian motoneuron model. Energy is supplied to the neurons in the form of adenosine triphosphate (ATP). Exergy destruction and entropy generation upon ATP hydrolysis are calculated. ATP utilization, exergy destruction, and entropy generation differ in each model depending on the variations in ion transport along the channels.
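The branching and length conventions used in the cable model can be sketched numerically. This is a generic cable-theory illustration with assumed values, not the authors' code; `rall_matched` and `electrotonic_length` are hypothetical helper names.

```python
import math

def rall_matched(d_parent, d_children, tol=1e-6):
    """Check Rall's 3/2 power law: d_p^(3/2) == sum over daughters of d_c^(3/2)."""
    return abs(d_parent**1.5 - sum(d**1.5 for d in d_children)) < tol

def electrotonic_length(l, d, R_m, R_i):
    """Electrotonic length L = l / lambda for a cylinder of physical length l
    and diameter d, with lambda = sqrt((R_m / R_i) * d / 4) (consistent units
    assumed: R_m in ohm*cm^2, R_i in ohm*cm, l and d in cm)."""
    lam = math.sqrt((R_m / R_i) * d / 4.0)
    return l / lam

# A symmetric bifurcation obeying the 3/2 power law: each daughter
# diameter equals parent * (1/2)^(2/3).
d_parent = 4.0
d_child = d_parent * 0.5 ** (2.0 / 3.0)
print(rall_matched(d_parent, [d_child, d_child]))  # True
```

Because the bifurcation satisfies the 3/2 power law at every branch point, the whole tree collapses onto a single equivalent cylinder, which is what makes the electrotonic length L a meaningful parameter to sweep from 0.1 to 1.5.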

Keywords: ATP utilization, entropy generation, exergy loss, neuronal information transmittance

Procedia PDF Downloads 363
364 Common Caper (Capparis spinosa L.) From Oblivion and Neglect to the Interface of Medicinal Plants

Authors: Ahmad Alsheikh Kaddour

Abstract:

Herbal medicine has been a long-standing practice in Arab countries since ancient times because of its breadth and moderate temperament; the region possesses a vast natural and economic wealth of medicinal and aromatic herbs, which prompted the ancient Egyptians and Arabs to discover and exploit them. The economic importance of the plant does not stem from medicinal uses alone; it is a plant of high economic value for its various uses, especially in the food, cosmetic, and aromatic industries, and it also serves as an ornamental plant and for soil stabilization. The main objective of this research is to study the chemical changes that occur in the plant during the growth period, as well as the production of plant buds, as the plant was previously considered an unwanted weed. The research was carried out in 2021-2022 in the valley of Al-Shaflah (common caper), located in Qumhana village, 7 km north of Hama Governorate, Syria. The results showed a change in the percentage of chemical components in the plant parts: the protein and fat content of the fruits and the oil content of the seeds increased up to the harvesting period of these plant parts, while the percentage of essential oils decreased as plant growth progressed and the glycoside content increased with plant age. The buds produced are small, with dimensions of about 0.5×0.5 cm, which is preferred by commercial markets; they are harvested every 2-3 days in quantities ranging from 0.4 to 0.5 kg per cut per shrub for 3-year-old shrubs, on average, for the years 2021-2022. The monthly production of a shrub is between 4 and 5 kg, and the productive period is approximately 4 months.
This means that the seasonal production of one plant is 16-20 kg, or 16-20 tons per hectare per year at a plant density of 1,000 shrubs per hectare, which is the optimum planting rate per unit area. Given that a kilogram of these buds sells for about 1 US $, the annual output value of a locally produced hectare ranges from 16,000 US $ to 20,000 US $ for farmers. The results showed that it is possible to transform the cultivation of this plant from traditional random stands to typical plantations, with a density of 1,000-1,100 plants per hectare according to the soil type, to obtain a production of medicinal and nutritious buds. There is also a need to pay attention to this national wealth and invest it optimally, which can bring in hard currency through export and support the national income.
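The yield and revenue arithmetic reported above can be reproduced in a few lines (figures taken from the abstract; the function names are illustrative):

```python
def seasonal_yield_per_shrub(kg_per_month, months):
    """Seasonal bud yield of one shrub, in kg."""
    return kg_per_month * months

def hectare_revenue(kg_per_shrub, shrubs_per_ha, price_usd_per_kg):
    """Annual revenue per hectare, in US $."""
    return kg_per_shrub * shrubs_per_ha * price_usd_per_kg

low = seasonal_yield_per_shrub(4, 4)    # 16 kg per shrub per 4-month season
high = seasonal_yield_per_shrub(5, 4)   # 20 kg per shrub per season
# 1,000 shrubs/ha at ~1 US $/kg:
print(hectare_revenue(low, 1000, 1.0), hectare_revenue(high, 1000, 1.0))
# 16000.0 20000.0
```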

Keywords: common caper, medicinal plants, propagation, medical, economic importance

Procedia PDF Downloads 47
363 KPI and Tool for the Evaluation of Competency in Warehouse Management for Furniture Business

Authors: Kritchakhris Na-Wattanaprasert

Abstract:

The objective of this research is to design and develop a prototype key performance indicator (KPI) system suitable for warehouse management, based on a case study and user requirements. The methodology proceeded in steps: identify the scope of the research and study related papers, gather the necessary data and user requirements, develop key performance indicators based on the Balanced Scorecard, design the program and database for the key performance indicators, code the program and set up the database relationships, and finally test and debug each module. The study uses the Balanced Scorecard (BSC) for selecting and grouping key performance indicators. The system database was created using Microsoft SQL Server 2010; as the visual programming language, Microsoft Visual C# 2010 was chosen for developing the graphical user interface. The system consists of six main menus: login, main data, financial perspective, customer perspective, internal process perspective, and learning and growth perspective. Each menu consists of key performance indicator forms, and each form contains a data import section, a data input section, a data search-and-edit section, and a report section. The system generates five main reports: the KPI detail report, KPI summary report, KPI graph report, benchmarking summary report, and benchmarking graph report, with the user selecting the report conditions and time period. As developed and tested, the system proved to be one way of judging the extent to which warehouse objectives have been achieved; moreover, it encourages the warehouse function to proceed with greater efficiency. To be useful for other industries, the system can be adjusted appropriately.
To increase the usefulness of the key performance indicator system, the recommendations for further development are as follows: the warehouse should periodically review the target values and set more suitable targets as conditions fluctuate in the future, and it should likewise periodically review the key performance indicators themselves and set more suitable indicators in order to increase competitiveness and take advantage of new opportunities.
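A minimal sketch of how KPIs might be grouped by Balanced Scorecard perspective and summarized in a report. This is a hypothetical Python data model for illustration only; the actual system is a C#/SQL Server application, so the names and target values here are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class KPI:
    name: str
    target: float
    actual: float

    @property
    def achievement(self) -> float:
        """Percent of target achieved (higher-is-better convention assumed)."""
        return 100.0 * self.actual / self.target

@dataclass
class Perspective:
    name: str
    kpis: list = field(default_factory=list)

    def summary(self):
        """One row per KPI, as in a KPI summary report."""
        return {k.name: round(k.achievement, 1) for k in self.kpis}

# One of the four Balanced Scorecard perspectives used by the system,
# populated with invented warehouse KPIs:
internal = Perspective("Internal process", [
    KPI("Order picking accuracy (%)", target=99.0, actual=97.5),
    KPI("Inventory turnover (times/yr)", target=12.0, actual=10.8),
])
print(internal.summary())
# {'Order picking accuracy (%)': 98.5, 'Inventory turnover (times/yr)': 90.0}
```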

Keywords: key performance indicator, warehouse management, warehouse operation, logistics management

Procedia PDF Downloads 414
362 Modelling Distress Sale in Agriculture: Evidence from Maharashtra, India

Authors: Disha Bhanot, Vinish Kathuria

Abstract:

This study focuses on the issue of distress sale in the horticulture sector in India, which faces unique challenges given the perishable nature of horticultural crops, seasonal production, and the paucity of post-harvest produce management links. Distress sale, from a farmer’s perspective, may be defined as the urgent sale of normal or distressed goods at deeply discounted prices (well below the cost of production), usually under conditions unfavorable to the seller (the farmer). Small and marginal farmers, often engaged in subsistence farming, stand to lose substantially if they receive prices lower than expected (typically framed in relation to the cost of production). Distress sale maximizes the price uncertainty of the produce, leading to substantial income loss; with rising input costs of farming, the high variability in harvest price severely affects farmers’ profit margins and thereby their survival. The objective of this study is to model the occurrence of distress sale by tomato cultivators in the Indian state of Maharashtra against the background of differential access to a set of factors such as capital, irrigation facilities, warehousing, storage and processing facilities, and institutional arrangements for procurement. Data are being collected through a primary survey of over 200 farmers in key tomato-growing areas of Maharashtra, seeking information on the above factors in addition to the cost of cultivation, selling price, time gap between harvesting and selling, and the role of middlemen in selling, besides other socio-economic variables. Farmers selling their produce far below the cost of production would indicate an occurrence of distress sale. The occurrence of distress sale would then be modelled as a function of farm, household, and institutional characteristics.
A Heckman two-stage model would be applied to estimate the probability of a farmer falling into distress sale and to ascertain how the extent of distress sale varies with the presence or absence of the various factors. The findings of the study would recommend suitable interventions and promote strategies that help farmers better manage price uncertainties, avoid distress sale, and increase profit margins, with direct implications for poverty.
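The two-stage correction can be sketched on synthetic data. This illustrates the generic Heckman procedure, not the study's model: the first-stage selection index is assumed known here purely for brevity, whereas in practice its coefficients come from a probit fit, and all variable names and values are invented.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500

# Stage 1 (selection): a farmer is observed in distress sale when a latent
# index exceeds a noise term.  x might proxy access to storage or credit.
x = rng.normal(size=n)
xb = 0.5 + 0.8 * x                    # assumed-known probit index
selected = rng.normal(size=n) < xb    # observed distress-sale cases

# Inverse Mills ratio at the index, for the selected observations:
imr = norm.pdf(xb[selected]) / norm.cdf(xb[selected])
m = int(selected.sum())

# Stage 2 (outcome): OLS of the extent of distress sale on covariates plus
# the IMR term, which corrects for selection.  z might be the time gap
# between harvest and sale; the true coefficients (2.0, 1.5, 0.7) are
# built into the synthetic outcome.
z = rng.normal(size=m)
y = 2.0 + 1.5 * z + 0.7 * imr + 0.1 * rng.normal(size=m)
X = np.column_stack([np.ones(m), z, imr])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 1))  # coefficients recovered, roughly (2.0, 1.5, 0.7)
```

Omitting the IMR column from the second stage would bias the other coefficients whenever selection is non-random, which is exactly the situation the study anticipates.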

Keywords: distress sale, horticulture, income loss, India, price uncertainty

Procedia PDF Downloads 216
361 Global Modeling of Drill String Dragging and Buckling in 3D Curvilinear Bore-Holes

Authors: Valery Gulyayev, Sergey Glazunov, Elena Andrusenko, Nataliya Shlyun

Abstract:

Enhancement of the technology and techniques for drilling deep directed oil and gas bore-wells is of essential industrial significance, because these wells make it possible to increase productivity and output. Generally, they are used for drilling in hard and shale formations, which is why their drivage processes are accompanied by emergency and failure effects. As corroborated by practice, the principal drilling drawback in the drivage of long curvilinear bore-wells is the need to overcome essential force hindrances caused by the simultaneous action of gravity, contact, and friction forces. Primarily, these forces depend on the type of technological regime, the drill string stiffness, and the bore-hole tortuosity and length. They can lead to Eulerian buckling of the drill string and its sticking. To predict and exclude these states, special mathematical models and methods of computer simulation should play a dominant role. At the same time, these mechanical phenomena are very complex, and only simplified approaches ('soft-string drag and torque models') are commonly used for their analysis. Taking into consideration that the cost of directed wells now increases essentially with the complication of their geometry and the enlargement of their lengths, the price of mistakes in simulating drill string behavior with simplified approaches can be very high, so the problem of elaborating correct software is very urgent. This paper deals with the problem of simulating the regimes of drilling deep curvilinear bore-wells with prescribed imperfect geometrical trajectories of their axial lines.
On the basis of the theory of curvilinear flexible elastic rods, methods of differential geometry, and numerical analysis, the 3D 'stiff-string drag and torque model' of drill string bending and the appropriate software are elaborated for the simulation of tripping-in, tripping-out, and drilling operations. Computer calculations show that the contact and friction forces can be calculated and regulated, providing predesigned trouble-free modes of operation. The elaborated mathematical models and software can be used to forecast and exclude emergency situations at the design and realization stages of the drilling process.
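For contrast, the simplified 'soft-string' drag model mentioned above can be sketched as a Johancsik-style station-by-station recursion. This is an illustration of the simplified approach the paper argues against, not the authors' stiff-string code, and all numerical values are assumed.

```python
import math

def soft_string_drag(stations, T_bottom, mu, w):
    """Soft-string torque-and-drag sketch: integrate axial tension from the
    bottom of the string to the surface for tripping out (friction adds to
    tension).  Each station is (ds, inc, dinc, dazi): segment length, mean
    inclination [rad], and changes in inclination and azimuth over it."""
    T = T_bottom
    for ds, inc, dinc, dazi in stations:
        W = w * ds  # buoyed segment weight, w = weight per unit length
        # Normal force from wellbore curvature (capstan effect) and from the
        # lateral component of segment weight:
        N = math.hypot(T * dazi * math.sin(inc), T * dinc + W * math.sin(inc))
        T += W * math.cos(inc) + mu * N  # '+' sign = pulling out of hole
    return T

# Sanity check: a straight vertical string (inc = 0) develops no normal
# force, so hook load is just the string weight: 100 x 10 m at 300 N/m.
stations = [(10.0, 0.0, 0.0, 0.0)] * 100
print(soft_string_drag(stations, 0.0, 0.3, 300.0))  # 300000.0 N
```

In a curved, tortuous well the N term dominates, which is where the soft-string assumption (no bending stiffness, string lying on the low side of the hole) starts to break down and a stiff-string model of the kind developed in the paper becomes necessary.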

Keywords: curvilinear drilling, drill string tripping in and out, contact forces, resistance forces

Procedia PDF Downloads 120
360 The Influence of Morphology and Interface Treatment on Organic 6,13-bis (triisopropylsilylethynyl)-Pentacene Field-Effect Transistors

Authors: Daniel Bülz, Franziska Lüttich, Sreetama Banerjee, Georgeta Salvan, Dietrich R. T. Zahn

Abstract:

Organic semiconductors are of great interest for the development of electronics due to their adjustable optical and electrical properties. They are especially interesting for spintronic applications because of their weak spin scattering, which leads to longer spin lifetimes compared to inorganic semiconductors. It has been shown that some organic materials change their resistance if an external magnetic field is applied. Pentacene is one of the materials exhibiting the so-called photoinduced magnetoresistance, which results in a modulation of the photocurrent when the external magnetic field is varied. The soluble derivative of pentacene, 6,13-bis(triisopropylsilylethynyl)-pentacene (TIPS-pentacene), exhibits the same negative magnetoresistance. Aiming for simpler fabrication processes, in this work we compare TIPS-pentacene organic field-effect transistors (OFETs) made from solution with those fabricated by thermal evaporation. Because of the different processing, the TIPS-pentacene thin films exhibit different morphologies in terms of crystal size and homogeneity of the substrate coverage. On the other hand, the interface treatment is known to have a strong influence on the threshold voltage, eliminating trap states of the silicon oxide at the gate electrode and thereby changing the electrical switching response of the transistors. We therefore investigate the influence of interface treatment using octadecyltrichlorosilane (OTS) or a simple cleaning procedure with acetone, ethanol, and deionized water. The transistors consist of prestructured OFET substrates, including gate, source, and drain electrodes, on top of which TIPS-pentacene dissolved in a mixture of tetralin and toluene is deposited by drop-, spray-, or spin-coating. Thereafter, the sample is kept for one hour at a temperature of 60 °C.
For transistor fabrication by thermal evaporation, the prestructured OFET substrates are likewise kept at 60 °C during deposition, at a rate of 0.3 nm/min and a pressure below 10⁻⁶ mbar. The OFETs are characterized by optical microscopy in order to determine the overall quality of the sample, i.e., the crystal size and the coverage of the channel region. The output and transfer characteristics are measured in the dark and under illumination from a white-light LED in the spectral range from 450 nm to 650 nm with a power density of (8±2) mW/cm².
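A standard way to reduce such transfer characteristics is a saturation-regime mobility extraction: in saturation, I_D = (W/2L)·μ·C_ox·(V_G − V_T)², so a linear fit of √I_D versus V_G yields μ and V_T. The sketch below uses hypothetical device numbers, not the authors' measured values.

```python
import numpy as np

# Assumed device parameters (illustrative only):
W_over_L = 10.0                  # channel width/length ratio
C_ox = 1.0e-8                    # gate dielectric capacitance per area, F/cm^2
mu_true, V_T_true = 0.4, -5.0    # mobility in cm^2/(V s); threshold in V

# Synthetic p-channel transfer curve in saturation (gate swept negative):
V_G = np.linspace(-40.0, -10.0, 31)
I_D = 0.5 * W_over_L * mu_true * C_ox * (V_G - V_T_true) ** 2

# Linear fit of sqrt(I_D) vs V_G; slope^2 gives the mobility, and the
# x-intercept of the fit line gives the threshold voltage.
slope, intercept = np.polyfit(V_G, np.sqrt(I_D), 1)
mu_fit = 2.0 * slope ** 2 / (W_over_L * C_ox)
V_T_fit = -intercept / slope
print(round(mu_fit, 3), round(V_T_fit, 1))  # 0.4 -5.0
```

On real data the fit would be restricted to the gate-voltage range where √I_D is actually linear, since contact resistance and trap states bend the curve near threshold.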

Keywords: organic field effect transistors, solution processed, surface treatment, TIPS-pentacene

Procedia PDF Downloads 421
359 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning

Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga

Abstract:

Architects and urban planners, when designing and renewing cities, have to face a complex set of problems, including the issues of noise and air pollution, which are considered hot topics (cf. the Clean Air Act of London and the soundscape definition). It is usually taken for granted that these problems go together, because the noise pollution present in cities is often linked to traffic and industries, and these produce air pollutants as well. Traffic congestion can create both noise and air pollution: NO₂ is mostly created from the oxidation of NO, and both are notoriously produced by combustion at high temperatures (e.g., car engines or thermal power stations), and the same holds for industrial plants. What has to be investigated, and is the topic of this paper, is whether there really is a correlation between noise pollution and air pollution (taking NO₂ into account) in urban areas. To evaluate whether there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise app will be installed on an Android phone. The smartphone will be positioned inside a waterproof box, to stay outdoors, with an external battery to allow it to collect data continuously. The box will have a small hole for an external microphone, connected to the smartphone and calibrated to collect the most accurate data. For air pollution measurements, the AirMonitor device will be used: an Arduino board to which the sensors and all the other components are plugged. After assembly, the sensors will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period of time to capture both weekdays and weekend days, making it possible to see how the situation changes during the week.
The novelty is that the data will be compared to check whether there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw values obtained from the sensors. To do so, the data will be converted to a scale that goes up to 100% and shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help in choosing the right mitigation solutions to apply in the analysis area, because it makes it possible to address both the noise and the air pollution problem with a single intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper describes in detail the methodology and the technical solutions adopted for the realization of the sensors, the data collection, and the noise and pollution mapping and analysis.
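The conversion to a common percentage scale and the correlation check can be sketched as follows (the readings are hypothetical; note that Pearson correlation is unchanged by a linear rescale, so the percentage conversion serves the shared plotting axis rather than the statistic itself):

```python
import numpy as np

def to_percent(x):
    """Min-max rescale a series to 0-100 % so different quantities
    (dB readings vs. NO2 concentrations) can share one axis."""
    x = np.asarray(x, dtype=float)
    return 100.0 * (x - x.min()) / (x.max() - x.min())

# Invented hourly readings from one coupled noise/air sensor pair:
noise_db = np.array([55, 62, 70, 74, 68, 60, 52])
no2_ugm3 = np.array([18, 25, 41, 48, 39, 24, 15])

r = np.corrcoef(to_percent(noise_db), to_percent(no2_ugm3))[0, 1]
print(round(r, 2))  # 0.99
```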

Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter

Procedia PDF Downloads 189
358 Industry Symbiosis and Waste Glass Upgrading: A Feasibility Study in Liverpool Towards Circular Economy

Authors: Han-Mei Chen, Rongxin Zhou, Taige Wang

Abstract:

Glass is widely used in everyday life, from glass bottles for beverages to architectural glass for various forms of glazing. Although most used glass is recycled in the UK, the single-use-then-recycle procedure results in a lot of waste, as it subjects intact glass to smashing, re-melting, and remanufacturing. These processes bring massive energy consumption with a huge loss of high embodied energy and economic value compared to re-use, which is closer to a 'zero carbon' target. As a tourism city, Liverpool has higher glass bottle consumption than most less leisure-focused cities. It is therefore vital for Liverpool to find an upgrading approach for single-use glass bottles with low carbon output. This project aims to assess the feasibility of an industrial symbiosis and upgrading framework for glass and to investigate ways of achieving it. It is significant to Liverpool's future industrial strategy, since it provides an opportunity to target post-COVID economic recovery through industrial symbiosis and upgraded waste management in Liverpool in response to the climate emergency. In addition, it will influence local government policy for glass bottle reuse and recycling in North West England and serve as good practice to be recommended to other areas of the UK. First, a critical literature review of glass waste strategies in the UK and of worldwide industrial symbiosis practices was conducted. Second, mapping, data collection, and analysis revealed the current life cycle chain and the strong links of glass reuse and upgrading potentials, via site visits to 16 local waste recycling centres. The results of this research demonstrate an understanding of the influence of key factors on the development of a circular industrial symbiosis business model for beverage glass bottles. The current waste management procedures of the glass bottle industry, its business model, supply chain, and material flow have been reviewed.
The various potential opportunities for up-valuing glass bottles have been investigated towards an industrial symbiosis in Liverpool. Finally, an up-valuing business model has been developed for an industrial symbiosis framework for glass in Liverpool. For glass bottles, there are two possibilities: (1) focus on upgrading processes towards re-use rather than single-use and recycling, and (2) focus on 'smart' re-use and recycling, leading to optimised value in other sectors and creating a wider industrial symbiosis for a multi-level and circular economy.

Keywords: glass bottles, industry symbiosis, smart re-use, waste upgrading

Procedia PDF Downloads 81
357 Estimation of Antiurolithiatic Activity of a Biochemical Medicine, Magnesia phosphorica, in Ethylene Glycol-Induced Nephrolithiasis in Wistar Rats by Urine Analysis, Biochemical, Histopathological, and Electron Microscopic Studies

Authors: Priti S. Tidke, Chandragouda R. Patil

Abstract:

The present study was designed to investigate the effect of Magnesia phosphorica, a biochemic medicine, on urine screening, biochemical, histopathological, and electron microscopic findings in ethylene glycol-induced nephrolithiasis in rats. Male Wistar albino rats were divided into six groups and were orally administered saline once daily (IR-sham and IR-control) or Magnesia phosphorica 100 mg/kg twice daily for 24 days. The effect of various dilutions of biochemic Mag phos (3x, 6x, 30x) on urine output was determined by comparing the urine volumes collected by keeping individual animals in metabolic cages. Calcium oxalate urolithiasis and hyperoxaluria in male Wistar rats were induced by oral administration of 0.75% ethylene glycol daily for 24 days. Simultaneous administration of biochemic Mag phos 3x, 6x, or 30x (100 mg/kg p.o. twice a day) along with ethylene glycol significantly decreased the calcium oxalate, urea, creatinine, calcium, magnesium, chloride, phosphorus, albumin, and alkaline phosphatase content in urine compared with the vehicle-treated control group. After completion of the treatment period, the animals were sacrificed, and the kidneys were removed and subjected to microscopic examination for possible stone formation. Histological examination of kidneys treated with biochemic Mag phos (3x, 6x, 30x; 100 mg/kg, p.o.) along with ethylene glycol showed inhibited calculus growth and a reduced number of kidney stones compared with the control group. Biochemic Mag phos at 3x dilution and its crude equivalent also showed potent diuretic and antiurolithiatic activity in ethylene glycol-induced urolithiasis. A significant decrease in the weight of stones was observed after treatment in animals that received biochemic Mag phos at 3x dilution and its crude equivalent, in comparison with the control groups.
From this study, it can be proposed that the 3x dilution of biochemic Mag phos exhibits a significant inhibitory effect on crystal growth, with an improvement of kidney function, and substantiates claims on the biological activity of the twelve tissue remedies, which can be tested scientifically through laboratory animal studies.

Keywords: Mag phos, Magnesia phosphorica, biochemic medicine, urolithiasis, kidney stone, ethylene glycol

Procedia PDF Downloads 396
356 The Economic Burden of Mental Disorders: A Systematic Review

Authors: Maria Klitgaard Christensen, Carmen Lim, Sukanta Saha, Danielle Cannon, Finley Prentis, Oleguer Plana-Ripoll, Natalie Momen, Kim Moesgaard Iburg, John J. McGrath

Abstract:

Introduction: About a third of the world’s population will develop a mental disorder over their lifetime. Having a mental disorder imposes a huge burden of health loss and cost on the individual, but also on society, because of treatment costs, production loss, and caregivers’ costs. The objective of this study is to synthesize the international published literature on the economic burden of mental disorders. Methods: Systematic literature searches were conducted in the databases PubMed, Embase, Web of Science, EconLit, NHS York Database and PsycINFO using key terms for cost and mental disorders. Searches were restricted to the period from 1980 to May 2019. The inclusion criteria were: (1) cost-of-illness studies or cost-analyses, (2) diagnosis of at least one mental disorder, (3) samples based on the general population, and (4) outcome in monetary units. 13,640 publications were screened by title/abstract, and 439 articles were full-text screened by at least two independent reviewers. 112 articles were included from the systematic searches and 31 articles from snowball searching, giving a total of 143 included articles. Results: Information about diagnosis, diagnostic criteria, sample size, age, sex, data sources, study perspective, study period, costing approach, cost categories, discount rate, production loss method, and cost unit was extracted. The vast majority of the included studies were from Western countries, and only a few were from Africa and South America. The disorder group most often investigated was mood disorders, followed by schizophrenia and neurotic disorders. The disorder group least examined was intellectual disabilities, followed by eating disorders. The preliminary results show substantial variety in the perspectives, methodologies, cost components, and outcomes of the included studies.
An online tool is under development enabling the reader to explore the published information on costs by type of mental disorder, subgroups, country, methodology, and study quality. Discussion: This is the first systematic review synthesizing the economic cost of mental disorders worldwide. The paper will provide an important and comprehensive overview over the economic burden of mental disorders, and the output from this review will inform policymaking.

Keywords: cost-of-illness, health economics, mental disorders, systematic review

Procedia PDF Downloads 102
355 The Mapping of Pastoral Area as a Basis of Ecological for Beef Cattle in Pinrang Regency, South Sulawesi, Indonesia

Authors: Jasmal A. Syamsu, Muhammad Yusuf, Hikmah M. Ali, Mawardi A. Asja, Zulkharnaim

Abstract:

This study was conducted to identify and map pasture as an ecological base for beef cattle. A survey was carried out from April to June 2016 in Suppa, Mattirobulu, in the district of Pinrang, South Sulawesi province. The mapping of the grazing area was conducted in several stages: inputting and tracking of data points in Google Earth Pro (version 7.1.4.1529); affirmation and confirmation of the tracking line visualized by satellite, with various records at certain points; input of the point and tracking data into ArcMap (ArcGIS version 10.1); processing of DEM/SRTM data (S04E119) with respect to the location of the grazing areas; creation of a contour map (5 m interval); and production of slope and land cover maps. Analysis of land cover, particularly the state of the vegetation, was done through the NDVI (Normalized Difference Vegetation Index) procedure, making use of Landsat-8 imagery. The results showed that the topography of the grazing area consists of hills and some sloping and flat surfaces, with elevation varying from 74 to 145 m above sea level (asl), while superior grasses and legumes require an altitude of 143-159 m asl. The slope varied between 0 and >40% and was dominated by slopes of 0-15%, consistent with the recommended maximum pasture slope of 15%. The range of NDVI values from the image analysis of the pasture was between 0.1 and 0.27. The vegetation cover of the pasture land fell into the low-density category; 70% of the land was available for cattle grazing, while the remaining roughly 30% was groves and forest, including water-retaining plants, where the cattle shelter during the heat and obtain drinking water. There are seven graminae species and five legume species that are dominant in the region.
Proportionally, the graminae class dominated with up to 75.6% cover, legume crops accounted for up to 22.1%, and the remaining 2.3% consisted of other trees growing in the region. The dominant weed species in the region were Chromolaena odorata and Lantana camara; in addition, there were six types of ground-layer plants that are not usable as forage fodder.
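The NDVI procedure used above is standard band arithmetic, NDVI = (NIR - Red) / (NIR + Red), with Landsat-8 band 5 (NIR) and band 4 (red). A minimal sketch follows; the reflectance values are invented example pixels, chosen only to fall in the 0.1-0.27 range reported for this pasture, not values from the study.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel.

    nir, red: surface reflectance in the near-infrared (Landsat-8 band 5)
    and red (band 4) channels. Returns a value in [-1, 1].
    """
    if nir + red == 0:
        return 0.0
    return (nir - red) / (nir + red)

# Hypothetical example pixels spanning the 0.1-0.27 range reported above:
sparse = ndvi(nir=0.30, red=0.245)   # low-density vegetation cover (~0.10)
denser = ndvi(nir=0.40, red=0.23)    # upper end of the observed range (~0.27)
```

In practice the same arithmetic is applied per-pixel over whole raster bands; the scalar version shows the formula itself.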

Keywords: pastoral, ecology, mapping, beef cattle

Procedia PDF Downloads 321
354 Investigation of a New Approach "AGM" to Solve Complicated Nonlinear Partial Differential Equations in All Engineering Fields and Basic Science

Authors: Mohammadreza Akbari, Pooya Soleimani Besheli, Reza Khalili, Davood Domiri Ganji

Abstract:

In this work, our aim is to demonstrate the accuracy, capability, and power of a new approach to solving complicated nonlinear partial differential equations, and to enhance the ability to solve such equations in basic science and engineering, and in similar problems, with a simple and innovative approach. As we know, most engineering systems behave nonlinearly in practice, and solving these problems analytically (rather than numerically) is difficult, complex, and sometimes impossible; fluid and gas wave problems without boundary conditions, for example, cannot be handled by numerical methods. Accordingly, we present an innovative approach, which we have named Akbari-Ganji's Method (AGM), that can solve sets of coupled nonlinear differential equations (ODEs and PDEs) with high accuracy and simple solutions; this will be demonstrated by comparing the obtained solutions with those of a numerical method (fourth-order Runge-Kutta). We believe AGM could represent a significant advance for researchers, professors, and students worldwide, because with AGM-based software one can analytically solve complicated linear and nonlinear partial differential equations. The advantages and abilities of this method (AGM) are as follows: (a) Nonlinear differential equations (ODEs, PDEs) are directly solvable by this method. (b) Most of the time, equations can be solved without any nondimensionalization procedure, for any number of boundary or initial conditions. (c) The AGM solution always converges for the given boundary or initial conditions. (d) Exponential, trigonometric, and logarithmic terms in the nonlinear differential equation require no Taylor expansion, which yields high solution precision.
(e) The AGM method is very flexible in its coding system and can easily solve a variety of nonlinear differential equations with acceptably high accuracy. (f) One important advantage of this method is the analytical solution, with high accuracy, of partial differential equations such as those governing vibration in solids and waves in water and gas, requiring only minimal initial and boundary conditions. (g) It is important to present a general and simple approach for solving most highly nonlinear differential equation problems in the engineering sciences, especially in civil engineering, and to compare the output with numerical methods (fourth-order Runge-Kutta) and exact solutions.
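The abstract does not spell out the AGM algorithm itself, but its stated numerical benchmark, the classical fourth-order Runge-Kutta scheme, is standard and can be sketched as follows. The test equation y' = -y^2 is a hypothetical example chosen because its exact solution, y(t) = 1/(1+t), makes the accuracy comparison easy; it is not an equation from the paper.

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Benchmark on a nonlinear ODE with a known solution: y' = -y**2, y(0) = 1,
# whose exact solution is y(t) = 1 / (1 + t).
f = lambda t, y: -y ** 2
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):          # integrate up to t = 1
    y = rk4_step(f, t, y, h)
    t += h
exact = 1 / (1 + 1.0)         # = 0.5
```

An analytical candidate solution (such as one produced by AGM) would be validated by comparing it against `y` at the same grid points.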

Keywords: new approach, AGM, sets of coupled nonlinear differential equations, exact solutions, numerical

Procedia PDF Downloads 430
353 The Processing of Implicit Stereotypes in Contexts of Reading, Using Eye-Tracking and Self-Paced Reading Tasks

Authors: Magali Mari, Misha Muller

Abstract:

The present study’s objectives were to determine how diverse implicit stereotypes affect the processing of written information and linguistic inferential processes, such as presupposition accommodation. When reading a text, one constructs a representation of the described situation, which is then updated according to new outputs and based on stereotypes inscribed within society. If the new output contradicts stereotypical expectations, the representation must be corrected, resulting in longer reading times. A similar process occurs in cases of linguistic inferential processes like presupposition accommodation. Presupposition accommodation is traditionally regarded as fast, automatic processing of background information (e.g., ‘Mary stopped eating meat’ is quickly processed as Mary used to eat meat). However, very few accounts have investigated whether this process is likely to be influenced by domains of social cognition, such as implicit stereotypes. To study the effects of implicit stereotypes on presupposition accommodation, adults were recorded while they read sentences in French, combining two methods: an eye-tracking task and a classic self-paced reading task (where participants read sentence segments at their own pace by pressing a computer key). In one condition, presuppositions were activated with the French definite articles ‘le/la/les,’ whereas in the other condition, the French indefinite articles ‘un/une/des’ were used, triggering no presupposition. Using a definite article presupposes that the object has already been uttered and is thus part of background information, whereas using an indefinite article is understood as the introduction of new information. Two types of stereotypes were examined in order to enlarge the scope of the stereotypes traditionally analyzed. Study 1 investigated gender stereotypes linked to professional occupations to replicate previous findings. Study 2 focused on nationality-related stereotypes (e.g.
‘the French are seducers’ versus ‘the Japanese are seducers’) to determine if the effects of implicit stereotypes on reading are generalizable to other types of implicit stereotypes. The results show that reading is influenced by the two types of implicit stereotypes; in the two studies, the reading pace slowed down when a counter-stereotype was presented. However, presupposition accommodation did not affect participants’ processing of information. Altogether these results show that (a) implicit stereotypes affect the processing of written information, regardless of the type of stereotypes presented, and (b) that implicit stereotypes prevail over the superficial linguistic treatment of presuppositions, which suggests faster processing for treating social information compared to linguistic information.

Keywords: eye-tracking, implicit stereotypes, reading, social cognition

Procedia PDF Downloads 175
352 Economics of Precision Mechanization in Wine and Table Grape Production

Authors: Dean A. McCorkle, Ed W. Hellman, Rebekka M. Dudensing, Dan D. Hanselka

Abstract:

The motivation for this study centers on the labor- and cost-intensive nature of wine and table grape production in the U.S., and the potential opportunities for precision mechanization using robotics to augment those production tasks that are labor-intensive. The objectives of this study are to evaluate the economic viability of grape production in five U.S. states under current operating conditions, identify common production challenges and tasks that could be augmented with new technology, and quantify a maximum price for new technology that growers would be able to pay. Wine and table grape production is primed for precision mechanization technology as it faces a variety of production and labor issues. Methodology: Using a grower panel process, this project includes the development of a representative wine grape vineyard in five states and a representative table grape vineyard in California. The panels provided production, budget, and financial-related information that are typical for vineyards in their area. Labor costs for various production tasks are of particular interest. Using the data from the representative budget, 10-year projected financial statements have been developed for the representative vineyard and evaluated using a stochastic simulation model approach. Labor costs for selected vineyard production tasks were evaluated for the potential of new precision mechanization technology being developed. These tasks were selected based on a variety of factors, including input from the panel members, and the extent to which the development of new technology was deemed to be feasible. The net present value (NPV) of the labor cost over seven years for each production task was derived. This allowed for the calculation of a maximum price for new technology whereby the NPV of labor costs would equal the NPV of purchasing, owning, and operating new technology. 
Expected Results: The results from the stochastic model will show the projected financial health of each representative vineyard over the 2015-2024 timeframe. Investigators have developed a preliminary list of production tasks that have the potential for precision mechanization. For each task, the labor requirements, labor costs, and the maximum price for new technology will be presented and discussed. Together, these results will allow technology developers to focus and prioritize their research and development efforts for wine and table grape vineyards, and suggest opportunities to strengthen vineyard profitability and long-term viability using precision mechanization.
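The break-even calculation described above, equating the NPV of seven years of labor cost with the NPV of purchasing, owning, and operating new technology, can be sketched as follows. The dollar figures and the 5% discount rate are illustrative assumptions, not values from the study.

```python
def npv(cash_flows, rate):
    """Net present value of a list of end-of-year cash flows."""
    return sum(cf / (1 + rate) ** (year + 1)
               for year, cf in enumerate(cash_flows))

# Hypothetical numbers: annual labor cost for one vineyard task over the
# 7-year horizon used in the study, discounted at 5%.
annual_labor_cost = 12000.0                     # assumed $/year
labor_npv = npv([annual_labor_cost] * 7, 0.05)

# Break-even technology price: purchase cost plus the NPV of owning and
# operating costs must equal the NPV of the labor the technology replaces.
annual_operating_cost = 2000.0                  # assumed $/year
max_purchase_price = labor_npv - npv([annual_operating_cost] * 7, 0.05)
```

With these assumed inputs, the maximum price a grower could pay is the labor NPV net of the discounted operating costs; the same arithmetic applies task by task.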

Keywords: net present value, robotic technology, stochastic simulation, wine and table grapes

Procedia PDF Downloads 237
351 Development of Vertically Integrated 2D Lake Victoria Flow Models in COMSOL Multiphysics

Authors: Seema Paul, Jesper Oppelstrup, Roger Thunvik, Vladimir Cvetkovic

Abstract:

Lake Victoria is the second largest freshwater body in the world, located in East Africa with a catchment area of 250,000 km², of which 68,800 km² is the actual lake surface. The hydrodynamic processes of this shallow (40-80 m deep) water system are unique due to its location at the equator, which makes Coriolis effects weak. The paper describes a St. Venant shallow water model of Lake Victoria developed in COMSOL Multiphysics, a general-purpose finite element tool for solving partial differential equations. Depth soundings taken in smaller parts of the lake were combined with more extensive recent data to resolve discrepancies in the lake shore coordinates. Because the topography model must have continuous gradients, Delaunay triangulation with Gaussian smoothing was used to produce the lake depth model. The model shows large-scale flow patterns, passive tracer concentration, and water level variations in response to river and tracer inflow, rain and evaporation, and wind stress. Actual data on precipitation, evaporation, and in- and outflows were applied in a fifty-year simulation. It should be noted that the water balance is dominated by rain and evaporation, and the model simulations are cross-validated between Matlab and COMSOL. The model conserves water volume; the celerity gradients are very small, and the volume flow is very slow and irrotational except at river mouths. Numerical experiments show that, except for a few instances, the single outflow can be modelled by a simple linear control law responding only to the mean water level. Experiments with tracer input in rivers show very slow dispersion of the tracer, a result of the slow mean velocities, in turn caused by the near-balance of rain with evaporation. The model can evaluate the effects of the wind stress exerted on the lake surface, which impacts the lake water level.
The model can also evaluate the effects of expected climate change, as manifested in future changes to rainfall over the catchment area of Lake Victoria.
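A toy version of the rain/evaporation-dominated water balance with the linear outflow control law mentioned above can be sketched as follows. All coefficients here are invented for illustration and are not calibrated to the Lake Victoria data used in the paper.

```python
def simulate_lake_level(years, rain, evap, inflow, k_out, h0, area):
    """Toy annual water balance for a shallow lake.

    Volumes (rain, evap, inflow) are in km^3/year, area in km^2, and the
    level h in m. The outflow follows the simple linear control law
    Q_out = k_out * h responding only to mean water level, as in the
    abstract. Values are illustrative, not calibrated to Lake Victoria.
    """
    h = h0
    levels = []
    for _ in range(years):
        dV = rain + inflow - evap - k_out * h   # net volume change, km^3
        h += dV / area * 1000                   # depth change: km -> m
        levels.append(h)
    return levels

# Illustrative 50-year run: rain and evaporation nearly balance, as reported,
# so the level relaxes slowly toward the equilibrium h = 1 m of this toy model.
levels = simulate_lake_level(years=50, rain=100.0, evap=95.0, inflow=20.0,
                             k_out=25.0, h0=0.5, area=68800.0)
```

The equilibrium is where rain + inflow = evap + k_out * h; perturbing the rain term then mimics, very crudely, a climate-change scenario.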

Keywords: bathymetry, lake flow and steady state analysis, water level validation and concentration, wind stress

Procedia PDF Downloads 196
350 Improving Student Retention: Enhancing the First Year Experience through Group Work, Research and Presentation Workshops

Authors: Eric Bates

Abstract:

Higher education is recognised as being of critical importance in Ireland and has been identified as a vital factor in national well-being. Statistics show that Ireland has one of the highest rates of higher education participation in Europe. However, student retention and progression, especially in Institutes of Technology, is becoming an issue as rates of non-completion rise. Both within Ireland and across Europe, student retention is seen as a key performance indicator for higher education, and with these increasing rates the Irish higher education system needs to be flexible and adapt to the situation it now faces. The author is Programme Chair on a Level 6 full-time undergraduate programme, and experience to date has shown that first-year undergraduate students take some time to identify themselves as a group within the setting of a higher education institute. Despite being part of a distinct class on a specific programme, individuals can feel isolated as they take their first steps into higher education. Such feelings can contribute to students eventually dropping out. This paper reports on an ongoing initiative that aims to accelerate the bonding experience of a distinct group of first-year undergraduates on a programme which has a high rate of non-completion. This research sought to engage the students in dynamic interactions with their peers to quickly evolve a group sense of coherence. Two separate modules, a Research module and a Communications module, delivered by the researcher were linked across two semesters. Students were allocated to random groups, and each group was given a topic to research. There were six topics, essentially the six sub-headings of the DIT Graduate Attribute Statement. The research took place in a computer lab, and students also used the library. The output was a document that formed part of the submission for the Research module.
In the second semester, the groups then had to present their findings, with each student speaking for a minimum amount of time. Presentation workshops formed part of that module, and students were given the opportunity to practise their presentation skills. The presentations were video recorded to enable feedback to be given. Although this was a small-scale study, preliminary results found a strong sense of coherence among this particular cohort, and feedback from the students was very positive. Other findings indicate that spreading the initiative across two semesters may have been an inhibitor. Future challenges include spreading such initiatives college-wide and indeed sector-wide.

Keywords: first year experience, student retention, group work, presentation workshops

Procedia PDF Downloads 214
349 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation

Authors: Jonathan Gong

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Algorithms designed to improve the experience of medical professionals within their respective fields can raise the efficiency and accuracy of diagnosis. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, however, X-rays have not been widely used to detect and diagnose COVID-19. This underuse of X-rays is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field suggests that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4,035 images and validated on 807 images separate from those used for training. The images used to train the classification model share an important feature: they were cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture and is used to extract the lung mask from the chest X-ray image. It is trained on 8,577 images and validated on a validation split of 20%. The models are evaluated on the external dataset, and their accuracy, precision, recall, F1-score, IoU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays.
The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
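The IoU metric reported for the segmentation model is the intersection of the predicted and ground-truth masks divided by their union. A minimal sketch on toy binary masks follows; the mask values are invented for illustration, not taken from the Radiography Database.

```python
def iou(pred, truth):
    """Intersection-over-Union between two binary masks (flat lists of 0/1)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0  # two empty masks agree perfectly

# Toy 4x4 lung-mask example (flattened row by row); real masks would come
# from thresholding the U-Net output.
pred  = [0, 1, 1, 0,  1, 1, 1, 0,  0, 1, 1, 0,  0, 0, 0, 0]
truth = [0, 1, 1, 0,  0, 1, 1, 1,  0, 1, 1, 0,  0, 0, 0, 0]
score = iou(pred, truth)   # intersection 6, union 8 -> 0.75
```

Pixel accuracy and IoU can diverge sharply when the foreground is small, which is why both are reported for segmentation.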

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 103
348 Using a Phenomenological Approach to Explore the Experiences of Nursing Students in Coping with Their Emotional Responses in Caring for End-Of-Life Patients

Authors: Yun Chan Lee

Abstract:

Background: End-of-life care is a large part of all nursing practice, and student nurses are likely to meet dying patients in many placement areas. It is therefore important to understand the emotional responses and coping strategies of student nurses so that nursing education systems can gain some appreciation of how nursing students might be supported in the future. Methodology: This research used a qualitative phenomenological approach. Six student nurses undertaking a degree-level adult nursing course were interviewed. Their responses were analyzed using interpretative phenomenological analysis. Findings: The findings identified three main themes. The first was the common experience of ‘unpreparedness’. While a very small number of participants felt that this was unavoidable and that ‘no preparation is possible’, the majority felt that they were unprepared because of ‘insufficient input’ from the university and as a result of wider ‘social taboos’ around death and dying. The second theme showed that emotions were affected by ‘the personal connection to the patient’, with the important sub-themes of ‘the evoking of memories’, ‘involvement in care’ and ‘sense of responsibility’. The third theme, the coping strategies used by students, fell into two broad areas: those ‘internal’ to the student and those ‘external’. The internal coping strategies comprised ‘detachment’, ‘faith’, ‘rationalization’ and ‘reflective skills’. Regarding the external coping strategies, ‘clinical staff’ and ‘the importance of family and friends’ were the main forms of external support accessed. Implications: It is clear that student nurses are affected emotionally by caring for dying patients, and many of them feel apprehension even before they begin their placements, though very often this is unspoken. These anxieties become more pronounced during the placements and continue after them.
This has implications for when support is offered, and possibly for its duration. Another significant point of the study is that participants often highlighted their wish to speak to qualified nurses after being involved in end-of-life care, especially when they had been present at the time of death. Many of the students said that qualified nurses were not available to them, which seemed to be due to a number of reasons. Because qualified nurses were not available, students had to turn to family members and friends to talk to. Consequently, the implication of this study is the need not only to educate student nurses but also to educate qualified mentors on the importance of providing emotional support to students.

Keywords: nursing students, coping strategies, end-of-life care, emotional responses

Procedia PDF Downloads 137
347 Copyright Clearance for Artificial Intelligence Training Data: Challenges and Solutions

Authors: Erva Akin

Abstract:

The use of copyrighted material for machine learning purposes is a challenging issue in the field of artificial intelligence (AI). While machine learning algorithms require large amounts of data to train and to improve their accuracy and creativity, the use of copyrighted material without permission from the authors may infringe their intellectual property rights. In order to overcome the legal hurdle that copyright poses to the sharing, access, and re-use of data, the use of copyrighted material for machine learning purposes may be considered permissible under certain circumstances. For example, if the copyright holder has given permission to use the data through a licensing agreement, then use for machine learning purposes may be lawful. It is also argued that copying for non-expressive purposes that do not involve conveying expressive elements to the public, such as automated data extraction, should not be seen as infringing. The focus of such ‘copy-reliant technologies’ is on understanding language rules, styles, and syntax; no creative ideas are being used. However, the non-expressive use defense falls within the framework of the fair use doctrine, which allows the use of copyrighted material for research or educational purposes. Questions arise because the fair use doctrine is not available in EU law; instead, the InfoSoc Directive provides for a rigid system of exclusive rights with a list of exceptions and limitations. One could only argue that non-expressive uses of copyrighted material for machine learning purposes do not constitute a ‘reproduction’ in the first place. Nevertheless, the use of machine learning with copyrighted material remains difficult because EU copyright law applies to the mere use of the works. Two solutions can be proposed to address the problem of copyright clearance for AI training data.
The first is to introduce a broad exception for text and data mining, either mandatorily or for commercial and scientific purposes, or to permit the reproduction of works for non-expressive purposes. The second is that copyright laws should permit the reproduction of works for non-expressive purposes, which opens the door to discussions regarding the transposition of the fair use principle from the US into EU law. Both solutions aim to provide more space for AI developers to operate and encourage greater freedom, which could lead to more rapid innovation in the field. The Data Governance Act presents a significant opportunity to advance these debates. Finally, issues concerning the balance of general public interests and legitimate private interests in machine learning training data must be addressed. In my opinion, it is crucial that robot-creation output should fall into the public domain. Machines depend on human creativity, innovation, and expression. To encourage technological advancement and innovation, freedom of expression and business operation must be prioritised.

Keywords: artificial intelligence, copyright, data governance, machine learning

Procedia PDF Downloads 60
346 Hybrid Knowledge and Data-Driven Neural Networks for Diffuse Optical Tomography Reconstruction in Medical Imaging

Authors: Paola Causin, Andrea Aspri, Alessandro Benfenati

Abstract:

Diffuse Optical Tomography (DOT) is an emergent medical imaging technique which employs NIR light to estimate the spatial distribution of optical coefficients in biological tissues for diagnostic purposes, in a noninvasive and non-ionizing manner. DOT reconstruction is a severely ill-conditioned problem due to the prevalent scattering of light in the tissue. In this contribution, we present our research on hybrid knowledge-driven/data-driven approaches which exploit well-assessed physical models and build neural networks upon them to integrate the available data. Namely, since in this context regularization procedures are mandatory to obtain a reasonable reconstruction [1], we explore the use of neural networks as tools to include prior information on the solution. Materials and Methods: The idea underlying our approach is to leverage neural networks to solve PDE-constrained inverse problems of the form q* = argmin_q D(y, ỹ), (1) where D is a loss function which typically contains a discrepancy measure (or data fidelity) term plus other ad-hoc designed terms enforcing specific constraints. In the context of inverse problems like (1), one seeks the optimal set of physical parameters q, given the set of observations y. Moreover, ỹ is a computable approximation of y, which may be obtained from a neural network, or in a classic way via the resolution of a PDE with given input coefficients (the forward problem, Fig. 1). Due to the severe ill-conditioning of the reconstruction problem, we adopt a two-fold approach: i) we restrict the solutions (optical coefficients) to lie in a lower-dimensional subspace generated by auto-decoder type networks.
This procedure forms priors on the solution (Fig. 1); ii) we use regularization procedures of the type q̂* = argmin_q D(y, ỹ) + R(q), where R(q) is a regularization functional depending on regularization parameters which can be fixed a priori or learned via a neural network in a data-driven modality. To further improve the generalizability of the proposed framework, we also infuse physics knowledge via soft penalty constraints in the overall optimization procedure (Fig. 1). Discussion and Conclusion: DOT reconstruction is severely hindered by ill-conditioning. The combined use of data-driven and knowledge-driven elements is beneficial and allows improved results to be obtained, especially with a restricted dataset and in the presence of variable sources of noise.
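The regularized formulation, q_hat = argmin_q D(y, y_approx) + R(q), can be illustrated with the simplest classical choice R(q) = lam * ||q||^2 (Tikhonov regularization), which for a linear forward operator reduces to solving (A^T A + lam I) q = A^T y. The sketch below uses a tiny hand-made 2x2 system, not the DOT forward model, purely to show how a small lam stabilizes an ill-conditioned inversion.

```python
def tikhonov_2x2(A, y, lam):
    """Solve (A^T A + lam*I) q = A^T y for a 2x2 matrix A (normal equations)."""
    m00 = A[0][0] ** 2 + A[1][0] ** 2 + lam
    m01 = A[0][0] * A[0][1] + A[1][0] * A[1][1]
    m11 = A[0][1] ** 2 + A[1][1] ** 2 + lam
    b0 = A[0][0] * y[0] + A[1][0] * y[1]
    b1 = A[0][1] * y[0] + A[1][1] * y[1]
    det = m00 * m11 - m01 * m01
    return [(m11 * b0 - m01 * b1) / det, (m00 * b1 - m01 * b0) / det]

# A is nearly singular (det = 1e-4), so the plain inverse would amplify any
# noise in y; the lam term damps the near-null direction while leaving the
# well-determined component almost untouched.
A = [[1.0, 1.0], [1.0, 1.0001]]
y = [2.0, 2.0001]                      # consistent with the solution q = [1, 1]
q = tikhonov_2x2(A, y, lam=1e-6)
```

In the paper's framework, R(q) and its parameters may instead be learned by a network; the quadratic penalty above is only the textbook special case.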

Keywords: inverse problem in tomography, deep learning, diffuse optical tomography, regularization

Procedia PDF Downloads 52
345 Vibrational Spectra and Nonlinear Optical Investigations of a Chalcone Derivative (2e)-3-[4-(Methylsulfanyl) Phenyl]-1-(3-Bromophenyl) Prop-2-En-1-One

Authors: Amit Kumar, Archana Gupta, Poonam Tandon, E. D. D’Silva

Abstract:

Nonlinear optical (NLO) materials are the key materials for the fast processing of information and optical data storage applications. In the last decade, materials showing nonlinear optical properties have been the object of increasing attention by both experimental and computational points of view. Chalcones are one of the most important classes of cross conjugated NLO chromophores that are reported to exhibit good SHG efficiency, ultra fast optical nonlinearities and are easily crystallizable. The basic structure of chalcones is based on the π-conjugated system in which two aromatic rings are connected by a three-carbon α, β-unsaturated carbonyl system. Due to the overlap of π orbitals, delocalization of electronic charge distribution leads to a high mobility of the electron density. On a molecular scale, the extent of charge transfer across the NLO chromophore determines the level of SHG output. Hence, the functionalization of both ends of the π-bond system with appropriate electron donor and acceptor groups can enhance the asymmetric electronic distribution in either or both ground and excited states, leading to an increased optical nonlinearity. In this research, the experimental and theoretical study on the structure and vibrations of (2E)-3-[4-(methylsulfanyl) phenyl]-1-(3-bromophenyl) prop-2-en-1-one (3Br4MSP) is presented. The FT-IR and FT-Raman spectra of the NLO material in the solid phase have been recorded. Density functional theory (DFT) calculations at B3LYP with 6-311++G(d,p) basis set were carried out to study the equilibrium geometry, vibrational wavenumbers, infrared absorbance and Raman scattering activities. The interpretation of vibrational features (normal mode assignments, for instance) has an invaluable aid from DFT calculations that provide a quantum-mechanical description of the electronic energies and forces involved. 
Perturbation theory allows one to obtain the vibrational normal modes by evaluating the derivatives of the Kohn–Sham energy with respect to atomic displacements. The molecular hyperpolarizability β plays a central role in the NLO properties, and a systematic study of β has been carried out. Furthermore, the first-order hyperpolarizability (β) and the related properties, such as the dipole moment (μ) and polarizability (α) of the title molecule, are evaluated by the Finite Field (FF) approach. The electronic α and β of the studied molecule are 41.907×10⁻²⁴ and 79.035×10⁻²⁴ e.s.u., respectively, indicating that 3Br4MSP can serve as a good nonlinear optical material.
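The Finite Field approach mentioned above obtains α and β from numerical derivatives of the field-dependent dipole moment. A minimal sketch of that idea, using a toy polynomial dipole function in place of the dipoles a DFT code would return (the coefficients MU0, ALPHA and BETA are hypothetical illustration values, not the 3Br4MSP results):

```python
# Finite Field (FF) sketch: alpha and beta from central finite differences
# of the dipole moment mu(F) in a static field F (atomic units assumed).
# The model coefficients below are hypothetical, not the 3Br4MSP values.
MU0, ALPHA, BETA = 0.8, 12.0, 45.0

def dipole_z(F):
    # Truncated expansion: mu(F) = mu0 + alpha*F + 0.5*beta*F**2
    return MU0 + ALPHA * F + 0.5 * BETA * F**2

def finite_field(F=1e-3):
    """Central differences of mu(F) give alpha (1st) and beta (2nd derivative)."""
    mu_p, mu_0, mu_m = dipole_z(+F), dipole_z(0.0), dipole_z(-F)
    alpha = (mu_p - mu_m) / (2 * F)          # d mu / dF at F = 0
    beta  = (mu_p - 2 * mu_0 + mu_m) / F**2  # d^2 mu / dF^2 at F = 0
    return alpha, beta

alpha, beta = finite_field()
```

In a real FF calculation the dipoles (or energies) come from separate DFT runs at each applied field, and the full α and β tensors require fields along all Cartesian directions; the stencil logic is the same.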

Keywords: DFT, MEP, NLO, vibrational spectra

Procedia PDF Downloads 198
344 Co-Gasification of Petroleum Waste and Waste Tires: A Numerical and CFD Study

Authors: Thomas Arink, Isam Janajreh

Abstract:

The petroleum industry generates significant amounts of waste in the form of drill cuttings, contaminated soil and oily sludge. Drill cuttings are a product of offshore drilling rigs and contain wet soil and total petroleum hydrocarbons (TPH). Contaminated soil comes from different onshore sites and also contains TPH. The oily sludge is mainly residue or tank-bottom sludge from storage tanks. The two main treatment methods currently used are incineration and thermal desorption (TD). Thermal desorption is a method in which the waste material is heated to 450 °C in an anaerobic environment to release volatiles; the condensed volatiles can be used as a liquid fuel. For the thermal desorption unit, dry contaminated soil is mixed with moist drill cuttings to generate a suitable mixture. Thermogravimetric analysis (TGA) of the TD feedstock showed that less than 50% of the TPH is released; the discharged material is sent to landfill. This study proposes co-gasification of petroleum waste with waste tires as an alternative to thermal desorption. Co-gasification with a high-calorific material is necessary since the petroleum waste consists of more than 60 wt% ash (soil/sand), making its calorific value too low for gasification on its own. Since the gasification process occurs at 900 °C and higher, close to 100% of the TPH can be released, according to the TGA. This work consists of three parts: (1) a mathematical gasification model, (2) a reactive-flow CFD model and (3) experimental work on a drop tube reactor. Extensive material characterization was carried out by means of proximate analysis (TGA), ultimate analysis (CHNOS flash analysis) and calorific value measurements (bomb calorimeter) to provide the input parameters of the mathematical and CFD models. The mathematical model is a zero-dimensional model based on Gibbs energy minimization with Lagrange multipliers; it is used to find the product species composition (molar fractions of CO, H2, CH4, etc.)
for different tire/petroleum feedstock mixtures and equivalence ratios. The results of the mathematical model act as a reference for the CFD model of the drop tube reactor. With the CFD model, the efficiency and product species composition can be predicted for different mixtures and particle sizes. Finally, both models are verified by experiments on a drop tube reactor (1540 mm long, 66 mm inner diameter, 1400 K maximum temperature).
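The zero-dimensional equilibrium idea, minimizing the mixture's Gibbs energy subject to element-balance constraints (the role the Lagrange multipliers play in the authors' formulation), can be sketched as follows. The species set is from the abstract; the dimensionless standard chemical potentials and the feed element totals are illustrative placeholders, not the study's data:

```python
import numpy as np
from scipy.optimize import minimize

# Product species and illustrative dimensionless standard chemical potentials
# (mu_i^0 / RT at the gasifier temperature; placeholder values, not fitted data).
species = ["CO", "CO2", "H2", "H2O", "CH4"]
mu0 = np.array([-10.0, -20.0, 0.0, -12.0, -5.0])

# Element-balance matrix A[e, i]: atoms of element e (rows: C, H, O) in species i.
A = np.array([
    [1, 1, 0, 0, 1],   # C
    [0, 0, 2, 2, 4],   # H
    [1, 2, 0, 1, 1],   # O
], dtype=float)
b = np.array([1.0, 2.0, 1.5])  # total moles of C, H, O in the feed (assumed)

def gibbs(n):
    """Ideal-gas mixture Gibbs energy, G/RT = sum n_i (mu0_i + ln(n_i/n_tot))."""
    n = np.clip(n, 1e-12, None)
    return float(np.sum(n * (mu0 + np.log(n / n.sum()))))

res = minimize(
    gibbs,
    x0=np.full(5, 0.3),
    method="SLSQP",
    bounds=[(1e-10, None)] * 5,
    constraints=[{"type": "eq", "fun": lambda n: A @ n - b}],
)
n_eq = res.x  # equilibrium mole numbers; molar fractions are n_eq / n_eq.sum()
```

The same structure extends directly to more species and to sweeps over tire/petroleum mixture ratios or equivalence ratios by updating `b` from the feed's ultimate analysis.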

Keywords: computational fluid dynamics (CFD), drop tube reactor, gasification, Gibbs energy minimization, petroleum waste, waste tires

Procedia PDF Downloads 496
343 The Effect of Acute Consumption of a Nutritional Supplement Derived from Vegetable Extracts Rich in Nitrate on Athletic Performance

Authors: Giannis Arnaoutis, Dimitra Efthymiopoulou, Maria-Foivi Nikolopoulou, Yannis Manios

Abstract:

AIM: Nitrate-containing supplements have been used extensively as ergogenic aids in many sports. However, the effect on athletic performance of extract fractions from plant-based nutritional sources high in nitrate has not been systematically investigated. The purpose of the present study was to examine the possible effect of acute consumption of a “smart mixture” of beetroot and rocket on exercise capacity. MATERIAL & METHODS: 12 healthy, nonsmoking, recreationally active males (age: 25±4 years, body fat: 15.5±5.7%, fat-free mass: 65.8±5.6 kg, VO2 max: 45.4±6.1 mL·kg⁻¹·min⁻¹) participated in a double-blind, placebo-controlled trial in a randomized and counterbalanced order. Eligibility criteria included a normal physical examination and the absence of any metabolic, cardiovascular or renal disease. All participants completed a time-to-exhaustion cycling test at 75% of their maximum power output twice. The subjects consumed either capsules containing 360 mg of nitrate in total or placebo capsules, in the morning, in a fasted state. After 3 h of passive recovery, the performance test followed. Blood samples were collected upon the participants' arrival and 3 hours after consumption of the corresponding capsules. Time to exhaustion, pre- and post-test lactate concentrations, and ratings of perceived exertion at the same time points were assessed. RESULTS: Paired-sample t-test analysis found a significant difference in time to exhaustion between the nitrate and placebo trials [16.1±3.0 vs 13.5±2.6 min, p=0.04]. No significant differences were observed in lactic acid concentrations or in Borg scale values between the two trials (p>0.05). CONCLUSIONS: Based on the results of the present study, it appears that a nutritional supplement derived from vegetable extracts rich in nitrate improves athletic performance in recreationally active young males.
However, the precise mechanism is not clear, and future studies are needed. Acknowledgment: This research has been co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH – CREATE – INNOVATE (project code: T2EDK-00843).
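The paired-sample t-test used above compares the two conditions within each subject. A minimal sketch with SciPy, using hypothetical paired time-to-exhaustion values for 12 subjects (illustrative numbers, not the study's raw measurements):

```python
from scipy import stats

# Hypothetical time-to-exhaustion data (minutes) for 12 subjects under each
# condition; each subject appears at the same index in both lists.
placebo = [13.5, 12.0, 14.8, 11.9, 13.1, 16.2, 12.7, 14.0, 15.3, 10.8, 13.9, 12.4]
nitrate = [16.0, 14.6, 17.1, 14.5, 15.4, 18.9, 15.3, 16.2, 18.0, 13.5, 16.6, 14.8]

# Paired-sample t-test: tests whether the mean within-subject difference is zero.
t_stat, p_value = stats.ttest_rel(nitrate, placebo)
```

A paired design is appropriate here because each subject serves as his own control in the crossover, removing between-subject variability from the comparison.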

Keywords: sports performance, ergogenic supplements, nitrate, extract fractions

Procedia PDF Downloads 45
342 Carbapenem Usage in Medical Wards: An Antibiotic Stewardship Feedback Project

Authors: Choon Seong Ng, P. Petrick, C. L. Lau

Abstract:

Background: Carbapenem-resistant isolates have been reported increasingly in recent years. Carbapenem stewardship is designed to optimize carbapenem usage, particularly in medical wards with a high prevalence of carbapenem prescriptions, in order to combat such emerging resistance. Carbapenem stewardship programmes (CSP) can reduce antibiotic use, but the clinical outcomes of such measures need further evaluation. We examined this prospectively using a feedback mechanism. Methods: Our single-center prospective cohort study involved all carbapenem prescriptions across the medical wards (including medical patients admitted to the intensive care unit) in a tertiary university hospital. The impact of the stewardship programme was analysed according to whether its recommendations were accepted or rejected. The primary endpoint was safety, measured as death at 1 month. Secondary endpoints included length of hospitalisation and readmission. Results: Over the 19-month period, 144 carbapenem prescriptions were analysed on the basis of acceptance of our CSP recommendations on the use of carbapenems. Recommendations made were as follows: de-escalation of carbapenem; stopping the carbapenem; use for a short duration of 5-7 days; prolonged duration in the case of carbapenem-sensitive extended-spectrum beta-lactamase (ESBL) bacteremia; dose adjustment; and surgical intervention for removal of septic foci. De-escalation, shortened duration and cessation of carbapenem comprised 79% of the recommendations. The acceptance rate was 57%. Those who accepted CSP recommendations had no increase in mortality (p = 0.92), a shorter length of hospital stay (LOS) and lower costs. Infection-related deaths were higher in the rejected group. Moreover, three rejected cases (6%) among all non-indicated cases (n = 50) were found to have developed carbapenem-resistant isolates.
Lastly, the Pitt bacteremia score appeared to be a key element affecting carbapenem prescribing behaviour in this study. Conclusions: A carbapenem stewardship programme in the medical wards not only saves money but, most importantly, is safe and does not harm patients, with the added benefit of reducing the length of hospital stay. However, more time is needed to engage the primary clinical teams, through formal clinical presentations and immediate personal feedback by senior Infectious Disease (ID) personnel, in order to increase acceptance.

Keywords: audit and feedback, carbapenem stewardship, medical wards, university hospital

Procedia PDF Downloads 182
341 Reading against the Grain: Transcodifying Stimulus Meaning

Authors: Aba-Carina Pârlog

Abstract:

In translation, reading against the grain produces the wrong effect in the target language (TL). Quine’s ocular irradiation plays an important part in the process of understanding and translating a text. The various types of textual radiation must be rendered by the translator by paying close attention to the types of field that produce them. The literary work must be seen as an indirect cause of an expressive effect in the TL that is supposed to be similar to the effect it has in the source language (SL). If the adaptive transformative codes are so flexible that they encourage the translator to repeatedly leave out parts of the original work, then a subversive pattern emerges which changes the entire book. In this case, the translator becomes a writer per se who decides what goes in and out of the book, how the style is to be ciphered and what elements of ideology are to be highlighted. Figurative language must not be flattened for the sake of clarity or naturalness. Missing figurative elements make the translated text less interesting, less challenging and less vivid, which reflects poorly on the writer. There is a close connection between style and the writer’s person. If the writer’s style is much changed in a translation, the translation is useless, as the original writer and his/her imaginative world can no longer be discovered. Then a different writer appears, and his/her creation surfaces. Changing meaning, considered a “negative shift” in translation, defines one of the faulty transformative codes used by some translators. It is a dangerous tool which leads to adaptations that sometimes reflect the original less than the reader would wish. It contradicts the very essence of the process of translation, which is that of making a work available in a foreign language. Employing speculative aesthetics at the level of a text indicates the wish to create manipulative or subversive effects in the translated work.
This is generally achieved by adding new words or connotations, creating new figures of speech or using explicitation. The irradiation patterns of the original work are neglected, and the translator creates new meanings, implications, emphases and contexts. Again, s/he turns into a new author who enjoys the freedom of expressing his/her ideas without the constraints of the original text. The stimulus meaning of a text is very important for a translator, which is why reading against the grain is inadvisable during the process of translation. By paying attention to the waves of the SL input, a faithful literary work is produced which does not contradict general knowledge about foreign cultures and civilizations. Following personal common sense is essential in the field of translation, as everywhere else.

Keywords: stimulus meaning, substance of expression, transformative code, translation

Procedia PDF Downloads 427
340 Numerical Reproduction of Hemodynamic Change Induced by Acupuncture to ST-36

Authors: Takuya Suzuki, Atsushi Shirai, Takashi Seki

Abstract:

Acupuncture therapy is one of the treatments in traditional Chinese medicine. Recently, some reports have shown the effectiveness of acupuncture. However, its full acceptance has been hindered by a lack of understanding of the mechanism of the therapy. Acupuncture applied to Zusanli (ST-36) enhances blood flow volume in the superior mesenteric artery (SMA) by lowering its peripheral vascular resistance, a response dominated by activation of the parasympathetic system and inhibition of the sympathetic system. In this study, a lumped-parameter approximation model of blood flow in the systemic arteries was developed. The model is extremely simple, consisting of the aorta, the carotid arteries, the arteries of the four limbs and the SMA, together with their peripheral vascular resistances. Each artery was simplified to a tapered tube, and each peripheral bed was modelled as a linear resistance. We numerically investigated the contribution of the peripheral vascular resistance of the SMA to the systemic blood distribution using this model. Two types of boundary condition were applied at the upstream end of the model, which corresponds to the left ventricle: mean left ventricular pressure, which correlates with blood pressure (BP), and mean cardiac output, which corresponds to the cardiac index (CI). We attempted to reproduce the experimentally obtained hemodynamic change, expressed as the ratio of the aforementioned hemodynamic parameters to their initial values before the acupuncture, by regulating the peripheral vascular resistances and the upstream boundary condition. First, only the peripheral vascular resistance of the SMA was changed, to show the contribution of this resistance to the change in blood flow volume in the SMA, expecting reproduction of the experimentally obtained change. It was found, however, that this was not enough to reproduce the experimental result. We therefore also changed the resistances of the other arteries, together with the value given at the upstream boundary.
Here, the resistances of the other arteries were changed simultaneously by the same amount. Consequently, we successfully reproduced the hemodynamic change and found that regulating the upstream boundary condition toward the value obtained experimentally after the stimulation is necessary for the reproduction, even though statistically significant changes in BP and CI were not observed in the experiment. It is generally known that sympathetic and parasympathetic tone take part in the regulation of the whole systemic circulation, including cardiac function. The present result indicates that stimulation of ST-36 could induce vasodilation of the peripheral circulation of the SMA and vasoconstriction of that of the other arteries. In addition, it implies that the experimentally observed small changes in BP and CI induced by the acupuncture may be involved in the therapeutic response.
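The redistribution mechanism described above can be illustrated with a much-reduced version of the lumped-parameter idea: parallel peripheral beds fed from a common arterial pressure, each obeying an Ohm's-law analogue Q = P / R. All pressure and resistance values below are arbitrary illustrative units, not the authors' fitted parameters:

```python
# Minimal lumped-parameter sketch (not the authors' full tapered-tube model):
# parallel peripheral beds fed from a common mean arterial pressure P.
P = 100.0  # mean arterial pressure (assumed fixed upstream boundary condition)
R = {"carotid": 4.0, "arm": 6.0, "leg": 5.0, "SMA": 8.0}  # illustrative units

def flows(resistances, pressure):
    """Ohm's-law analogue for each parallel bed: Q_i = P / R_i."""
    return {name: pressure / r for name, r in resistances.items()}

q_before = flows(R, P)

# Emulate the response suggested by the simulations: dilate the SMA bed
# (lower its resistance) while mildly constricting the other beds.
R_after = {k: (r * 0.6 if k == "SMA" else r * 1.1) for k, r in R.items()}
q_after = flows(R_after, P)
```

Even this toy network shows the qualitative result: at fixed upstream pressure, lowering the SMA resistance raises SMA flow while the constricted beds lose flow, so total flow can stay roughly constant, consistent with the small observed changes in BP and CI.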

Keywords: acupuncture, hemodynamics, lumped-parameter approximation, modeling, systemic vascular resistance

Procedia PDF Downloads 206
339 Preliminary WRF SFIRE Simulations over Croatia during the Split Wildfire in July 2017

Authors: Ivana Čavlina Tomašević, Višnjica Vučetić, Maja Telišman Prtenjak, Barbara Malečić

Abstract:

The Split wildfire on the mid-Adriatic coast in July 2017 is one of the most severe wildfires in Croatian history, given its size and unexpected fire behavior, and it is used in this research as a case study for the Weather Research and Forecasting Spread Fire (WRF SFIRE) model. This coupled fire-atmosphere model was successfully run for the first time for a Croatian wildfire case. Verification of the coupled simulations was possible thanks to a detailed reconstruction of the Split wildfire. Specifically, precise information on ignition time and location, together with mapped fire progressions and spotting within the first 30 hours of the wildfire, was used both to initialize the simulations and to evaluate the model’s ability to simulate the fire’s propagation and final fire scar. The preliminary simulations were obtained using high-resolution vegetation and topography data for the fire area, additionally interpolated to a fire-grid spacing of 33.3 m. The results demonstrated that the WRF SFIRE model is able to work with real data from Croatia and produce adequate results for forecasting fire spread. As the model setup allows the energy fluxes between the fire and the atmosphere to be included or excluded, this capability was used to investigate possible fire-atmosphere interactions during the Split wildfire. Finally, the successfully coupled simulations provided the first numerical evidence that a wildfire in the Adriatic coast region can modify the dynamical structure of the surrounding atmosphere, which agrees with observations from the fire ground. This study has demonstrated that the WRF SFIRE model has potential for operational application in Croatia, with more accurate fire predictions achievable in the future by supplying higher-resolution input data to the model without interpolation.
Possible uses for fire management in Croatia include predicting fire spread and intensity, which may vary under changing weather conditions, available fuels and topography; planning effective and safe deployment of ground and aerial firefighting forces; preventing wildland-urban interface fires; and planning evacuation routes effectively. In addition, the results from this research demonstrate that the WRF SFIRE model is valuable for fire-weather research and education, helping to better understand this hazardous phenomenon as it occurs in Croatia.
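The preprocessing step mentioned above, refining coarse vegetation and topography fields to the 33.3 m fire grid, amounts to a regular-grid interpolation. A minimal sketch with SciPy, using a synthetic elevation field on an assumed 100 m coarse grid (illustrative data, not the Split-area dataset):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Synthetic coarse terrain on a 100 m grid over a 1 km x 1 km domain
# (placeholder hillside, not the real Split-area topography).
coarse_dx = 100.0
x = np.arange(0.0, 1000.0 + coarse_dx, coarse_dx)   # 11 nodes
y = np.arange(0.0, 1000.0 + coarse_dx, coarse_dx)
X, Y = np.meshgrid(x, y, indexing="ij")
elev = 50.0 + 0.02 * X + 10.0 * np.sin(Y / 200.0)   # elevation in metres

# Bilinear interpolation onto the finer fire grid (~33.3 m spacing).
interp = RegularGridInterpolator((x, y), elev, method="linear")
xf = np.linspace(0.0, 1000.0, 31)                   # 1000/30 ≈ 33.3 m steps
yf = np.linspace(0.0, 1000.0, 31)
Xf, Yf = np.meshgrid(xf, yf, indexing="ij")
pts = np.stack([Xf.ravel(), Yf.ravel()], axis=-1)
elev_fire = interp(pts).reshape(Xf.shape)           # field on the fire grid
```

Interpolation of this kind smooths out sub-grid terrain features, which is why the authors note that supplying natively high-resolution input data, rather than interpolated fields, should further improve the forecasts.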

Keywords: meteorology, agrometeorology, fire weather, wildfires, coupled fire-atmosphere model

Procedia PDF Downloads 62