Search results for: mobile grid computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3410

260 Usage of Cyanobacteria in Battery: Saving Money, Enhancing the Storage Capacity, Making Portable, and Supporting the Ecology

Authors: Saddam Husain Dhobi, Bikrant Karki

Abstract:

The main objective of this paper is to save money, balance the ecosystem of terrestrial organisms, control global warming, and enhance the storage capacity of the battery while keeping it light and thin by using Cyanobacteria in the battery. To fulfil this purpose, the paper combines several approaches: analytical, biological, chemical, theoretical, and physical methods, together with some engineering design. Using these methods, a special type of battery can be produced that offers long life, high storage capacity, environmental benefits, and cost savings by exploiting the byproduct of Cyanobacteria, i.e., glucose. Cyanobacteria are a special type of bacteria that produce different extracellular glucoses and oxygen with the help of a little sunlight, water, and carbon dioxide, and they can survive in freshwater, marine, and terrestrial environments. In this process, more O₂ is released than by plants, owing to the rapid growth rate of Cyanobacteria. The materials required to produce glucose with the help of Cyanobacteria are easily available. Since CO₂ is a greenhouse gas that causes global warming, this gas can be utilized to preserve ecological balance, while the byproduct glucose (C₆H₁₂O₆) serves as a raw material for the battery and the released O₂ is used by living organisms. The glucose produced by Cyanobacteria enters the Krebs cycle (citric acid cycle), in which it is completely oxidized and all the available energy of the glucose molecule is released in the form of electrons and protons. With suitable anodes and cathodes, these electrons and protons can be captured to produce the required electric current with the help of the byproduct of Cyanobacteria. According to the Virginia Tech bio-battery and Sony, 13 enzymes and air are used to extract nearly 24 electrons from a single glucose unit, giving an output power of 0.8 mW/cm², a current density of 6 mA/cm², and an energy storage density of 596 Ah/kg. This last figure is impressive, at roughly 10 times the energy density of the lithium-ion batteries used in mobile devices. Using Cyanobacteria in a battery therefore makes it possible to reduce carbon dioxide, mitigate global warming, enhance the storage capacity to more than 10 times that of a lithium battery, save money, and help balance the ecosystem. In this way, energy can be produced from Cyanobacteria and used in a battery for different benefits. In addition, owing to their low mass, small size, and easy cultivation, Cyanobacteria help keep the battery compact. Hence, Cyanobacteria can be used to build a battery of suitable size with enhanced storage capacity, environmental benefits, and portability.
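
As a rough plausibility check of the quoted "about 10 times" claim, the following sketch compares the quoted specific charge with that of a typical lithium-ion cell; the Li-ion figure (about 65 Ah/kg) is our own illustrative assumption, not a value from the abstract.

```python
# Rough comparison of the quoted bio-battery specific charge with a typical Li-ion cell.
biobattery_ah_per_kg = 596.0   # energy storage density quoted in the abstract (Ah/kg)
liion_ah_per_kg = 65.0         # assumed specific charge of a commercial Li-ion cell (~3 Ah / 46 g)

ratio = biobattery_ah_per_kg / liion_ah_per_kg
print(f"Bio-battery stores roughly {ratio:.1f}x the charge per kg of a Li-ion cell")
# -> roughly 9x, consistent with the "about 10 times" claim above
```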

Keywords: anode, byproduct, cathode, cyanobacteria, glucose, storage capacity

Procedia PDF Downloads 318
259 Identification of Electric Energy Storage Acceptance Types: Empirical Findings from the German Manufacturing Industry

Authors: Dominik Halstrup, Marlene Schriever

Abstract:

Industry, as one of the main energy consumers, is of critical importance in transforming the energy system towards renewable energies. The distributed character of the energy transition demands that further flexibility be introduced to the grid. In order to shed further light on the acceptance of Electric Energy Storage (ESS) from an industrial point of view, this study examines the German manufacturing industry. The analysis in this paper uses data from a survey of 101 manufacturing companies in Germany. As part of a two-stage research design, both qualitative and quantitative data were collected. Based on a literature review, an acceptance concept was developed and four user types were identified and incorporated in the questionnaire: (Dedicated) User, Impeded User, Forced User, and (Dedicated) Non-User. Both descriptive and bivariate analyses were deployed to identify the level of acceptance in the different organizations. After a factor analysis was conducted, variables were grouped to form independent acceptance factors. Out of the 22 organizations that show a positive attitude towards ESS, 5 have already implemented ESS and can therefore be considered ‘Dedicated Users’. The remaining 17 organizations have a positive attitude but have not implemented ESS yet. The results suggest that profitability plays an important role, as do load-management systems that are already in place. Surprisingly, 2 organizations have implemented ESS even though they have a negative attitude towards it. This is an example of a ‘Forced User’, where reasons of overriding importance or supporters with overriding authority might have forced the company to implement ESS. By far the biggest subset of the sample shows (critical) distance and can therefore be considered ‘(Dedicated) Non-Users’. The results indicate that the majority of the respondents have not yet thought through ESS in their own organization. For the majority of the sample, one can therefore not speak of critical distance but rather of a distance due to insufficient information and perceived unprofitability. This paper identifies the relative state of acceptance of ESS in the manufacturing industry as well as current reasons for hindrance and perspectives for future growth of ESS in an industrial setting from a policy level. The interest currently generated by the media could be channeled into a more substantial and individual discussion about ESS in an industrial setting. If the current perception of profitability could be addressed and communicated accordingly, ESS and their use in, for instance, cooperative business models could become a topic for more organizations in Germany and other parts of the world. As price mechanisms tend to favor existing technologies, policy makers need to further assess the use of ESS and acknowledge its positive effects when integrated in an energy system. The subfields of generation, transmission and distribution are becoming increasingly intertwined. New technologies and business models, such as ESS or cooperative arrangements entering the market, increase the number of stakeholders. Organizations need to find their place within this array of stakeholders.
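
A minimal sketch of the kind of factor analysis described above, grouping survey items into independent acceptance factors; the item matrix, sample size and number of factors are hypothetical placeholders, not the study's questionnaire or data.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Hypothetical Likert-scale responses (101 firms x 6 attitude items) standing in for the survey data
responses = rng.integers(1, 6, size=(101, 6)).astype(float)

fa = FactorAnalysis(n_components=2, rotation="varimax")
scores = fa.fit_transform(responses)   # per-firm scores on the extracted acceptance factors
loadings = fa.components_              # how each questionnaire item loads on each factor
print(loadings.round(2))
```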

Keywords: acceptance, energy storage solutions, German energy transition, manufacturing industry

Procedia PDF Downloads 200
258 Drug Susceptibility and Genotypic Assessment of Mycobacterial Isolates from Pulmonary Tuberculosis Patients in North East Ethiopia

Authors: Minwuyelet Maru, Solomon Habtemariam, Endalamaw Gadissa, Abraham Aseffa

Abstract:

Background: Tuberculosis is a major public health problem in Ethiopia. The burden of TB is aggravated by the emergence and expansion of drug-resistant tuberculosis, and different lineages of Mycobacterium tuberculosis (M. tuberculosis) have been reported in many parts of the country. Describing the strains of mycobacterial isolates and their drug susceptibility pattern is therefore necessary. Method: Sputum samples were collected from smear-positive pulmonary TB patients aged ≥ 7 years between October 1, 2012 and September 30, 2013, and mycobacterial strains were isolated on Löwenstein-Jensen (LJ) media. Each strain was characterized by deletion typing and spoligotyping. Drug sensitivity was determined with the indirect proportion method using Middlebrook 7H10 media, and associations with possible risk factors for drug resistance were examined. Result: A total of 144 smear-positive pulmonary tuberculosis patients were enrolled. The age of participants ranged from 7 to 78 years, with a mean age of 29.22 (±10.77) years. In this study, 82.2% (n=97) of the isolates were sensitive to the four first-line anti-tuberculosis drugs, and resistance to any of the four drugs tested was 17.8% (n=21). The highest frequency of any resistance was observed for isoniazid, 13.6% (n=16), followed by streptomycin, 11.8% (n=14). No significant association of isoniazid resistance with HIV, sex or history of previous TB treatment was observed, but there was a significant association with age, highest between 31 and 35 years of age (p=0.01). The majority, 88.9% (n=128), of participants were new cases, and only 11.1% (n=16) had a history of previous TB treatment. No MDR-TB was isolated from new cases, while 2 MDR-TB isolates (13.3%) were obtained from re-treatment cases, which was significantly associated with previous TB treatment (p<0.01). Thirty-two different spoligotype patterns were identified, and 74.1% of isolates were grouped into 13 clusters. The dominant strains were SIT 25, 18.1% (n=21), SIT 53, 17.2% (n=20), and SIT 149, 8.6% (n=10). Lineage 4 was the predominant lineage, followed by lineage 3 and lineage 7, comprising 65.5% (n=76), 28.4% (n=33) and 6% (n=7), respectively. The majority of strains from lineages 3 and 4 were SIT 25 (63.6%) and SIT 53 (26.3%), whereas SIT 343 was the dominant strain in lineage 7 (71.4%). Conclusion: The wide spread of lineages 3 and 4 of the modern lineages and the high number of strain clusters indicate high ongoing transmission. The high proportion of resistance to any of the first-line anti-tuberculosis drugs may be a potential source for the emergence of MDR-TB. The wide spread of SIT 25 and SIT 53, which tend to transmit easily, and the higher isoniazid resistance in the working and mobile age group of 31-35 years may increase the risk of transmission of drug-resistant strains.
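
A minimal sketch of the kind of association test reported above for MDR-TB and previous treatment; the 2x2 counts follow the abstract (0 MDR-TB among 128 new cases, 2 among 16 re-treatment cases), and Fisher's exact test is assumed here as one standard choice for such a sparse table, not necessarily the test used in the study.

```python
from scipy.stats import fisher_exact

#                MDR-TB  non-MDR
table = [[0, 128],        # new cases (n=128)
         [2, 14]]         # previously treated cases (n=16)

odds_ratio, p_value = fisher_exact(table)
print(f"p = {p_value:.4f}")   # a small p-value indicates MDR-TB is associated with previous treatment
```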

Keywords: tuberculosis, drug susceptibility, strain diversity, lineage, Ethiopia, spoligotyping

Procedia PDF Downloads 350
257 Active Ageing: A Way Forward to Healthy Ageing among Rural Elderly Women

Authors: Hannah Evangeline Sangeetha

Abstract:

Ageing is an inevitable change in the life span of an individual. India’s old-age population has increased from 19 million in 1947 to 100 million in the 21st century. The United Nations World Population Ageing report shows that the older population increased from 9.2% in 1990 to 11.7% in 2013 and is expected to nearly triple by the year 2050, growing from 737 million to over 2 billion persons 60 years of age and older. Ageing is a period of physical, mental and social decline that brings a host of challenges to the individual and the family. Hence, it requires attention at the micro, mezzo and macro levels of society. The concepts of healthy and successful ageing are being used to help people change their negative attitude towards ageing. This perspective is important for making people realize their potential and for bringing about a change in the minds of senior citizens as well as society. The objective of this study was to understand the level of active ageing among rural elderly women and its impact on their quality of life. 330 elderly women from 12 villages of Sriperumbudur associated with the Mobile Medical Care unit of HelpAge India were interviewed using the census method. The study revealed the following findings: most respondents were young-old, between 60 and 75 years of age. All three major religious groups were represented; 85.5 percent were Hindus. The majority of the respondents (73.3 percent) had no education. Interestingly, the majority of the respondents were self-reliant (83.94 percent), and 82.73 percent of them were very independent and took care of themselves (activities of daily living) without any support from their families. 76.9 percent of the senior women worked based on their competencies, and 75.5 percent were involved in plenty of activities every day, including their occupation and household chores, which enabled them to be physically active. The chi-square values show that there is a significant association between the overall active ageing score, religion and the number of members in the family. The other demographic variables, such as age, occupation, income, marital status, age at marriage, number of children in the family and socio-economic status, were not significantly associated with the overall active ageing score. The p-value of 0.032 showed that social network and being self-reliant are significantly associated. The study surprisingly shows that most women enjoyed freedom and independence in their families, which is a positive indicator of active ageing.

Keywords: active ageing, quality of life, independence, self reliance

Procedia PDF Downloads 120
256 Feasibility Study and Experiment of On-Site Nuclear Material Identification in Fukushima Daiichi Fuel Debris by Compact Neutron Source

Authors: Yudhitya Kusumawati, Yuki Mitsuya, Tomooki Shiba, Mitsuru Uesaka

Abstract:

After the Fukushima Daiichi nuclear power reactor incident, a large amount of unaccounted-for nuclear fuel debris remains in the reactor core area, which is subject to safeguards and criticality safety. Before the actual precise analysis is performed, preliminary on-site screening and mapping of nuclear debris activity need to be carried out to provide reliable data for nuclear debris mass-extraction planning. Through a collaboration project with the Japan Atomic Energy Agency, an on-site nuclear debris screening system using dual-energy X-ray inspection and neutron energy resonance analysis has been established. By using a compact and mobile pulsed neutron source constructed from a 3.95 MeV X-band electron linac, coupled with tungsten as an electron-to-photon converter and beryllium as a photon-to-neutron converter, short-distance neutron time-of-flight measurement can be performed. Experimental results show that this system can measure the neutron energy spectrum up to the 100 eV range with only a 2.5-meter time-of-flight path, owing to the X-band accelerator’s short pulse. With this, on-site neutron time-of-flight measurement can be used to identify the isotopic content of nuclear debris through Neutron Resonance Transmission Analysis (NRTA). Some preliminary NRTA experiments have been carried out with a tungsten sample as dummy nuclear debris material, since the isotope tungsten-186 has an energy absorption value close to that of uranium-238 (15 eV). The results obtained show that this system can detect energy absorption in the resonance neutron region within 1-100 eV. It can also detect multiple elements in a material at once; an experiment using a combined sample of indium, tantalum, and silver showed that it is feasible to identify debris containing mixed materials. This compact neutron time-of-flight measurement system is a valuable complement to the dual-energy X-ray Computed Tomography (CT) method, which can identify atomic number quantitatively but only with 1-mm spatial resolution and a large error bar. The combination of these two measurement methods will make it possible to perform on-site nuclear debris screening in the Fukushima Daiichi reactor core area, providing the data for nuclear debris activity mapping.
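
The energy-time relation behind the 2.5 m time-of-flight measurement can be illustrated with a short non-relativistic calculation; this is a sketch only, using the flight-path length quoted in the abstract and standard physical constants.

```python
import math

NEUTRON_MASS = 1.674927e-27   # kg
EV = 1.602177e-19             # J per eV
FLIGHT_PATH = 2.5             # m, as quoted in the abstract

def flight_time_us(energy_ev: float) -> float:
    """Non-relativistic time of flight (microseconds) for a neutron of the given energy."""
    v = math.sqrt(2.0 * energy_ev * EV / NEUTRON_MASS)   # speed in m/s
    return FLIGHT_PATH / v * 1e6

for e in (1.0, 15.0, 100.0):   # 15 eV is the W-186 / U-238 resonance region mentioned above
    print(f"{e:6.1f} eV  ->  {flight_time_us(e):6.1f} us")
# A 1 eV neutron needs ~180 us and a 100 eV neutron ~18 us over 2.5 m, so a short
# accelerator pulse is essential to resolve these energies on such a short path.
```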

Keywords: neutron source, neutron resonance, nuclear debris, time of flight

Procedia PDF Downloads 214
255 Analyzing the Perception of Social Networking Sites as a Learning Tool among University Students: Case Study of a Business School in India

Authors: Bhaskar Basu

Abstract:

Universities and higher education institutes are finding it increasingly difficult to engage students fruitfully through traditional pedagogic tools. Web 2.0 technologies comprising social networking sites (SNSs) offer a platform for students to collaborate and share information, thereby enhancing their learning experience. Despite the potential and reach of SNSs, their use has been limited in academic settings promoting higher education. The purpose of this paper is to assess the perception of social networking sites among business school students in India and analyze their role in enhancing the quality of student experiences in a business school, leading to the proposal of an agenda for future research. In this study, more than 300 students of a reputed business school were involved in a survey of their preferences for different social networking sites and their perceptions of and attitudes towards these sites. A questionnaire with three major sections was designed, validated and distributed among a sample of students, the research method being descriptive in nature. Crucial questions were addressed to the students concerning time commitment, reasons for usage, the nature of interaction on these sites, and the propensity to share information leading to direct and indirect modes of learning. This was further supplemented with a focus group discussion to analyze the findings. The paper notes the resistance to the adoption of new technology by a section of business school faculty, who are staunch supporters of classical “face-to-face” instruction. In conclusion, social networking sites like Facebook and LinkedIn provide new avenues for students to express themselves and to interact with one another. Universities could take advantage of the new ways in which students are communicating with one another. Although interactive educational options such as Moodle exist, social networking sites are rarely used for academic purposes. Using this medium opens new ways of academically oriented interaction in which faculty could discover more about students' interests, and students, in turn, might express and develop hitherto unknown intellectual facets of their lives. This study also highlights the enormous potential of mobile phones as a tool for “blended learning” in business schools going forward.

Keywords: business school, India, learning, social media, social networking, university

Procedia PDF Downloads 239
254 Quantum Information Scrambling and Quantum Chaos in Silicon-Based Fermi-Hubbard Quantum Dot Arrays

Authors: Nikolaos Petropoulos, Elena Blokhina, Andrii Sokolov, Andrii Semenov, Panagiotis Giounanlis, Xutong Wu, Dmytro Mishagli, Eugene Koskin, Robert Bogdan Staszewski, Dirk Leipold

Abstract:

We investigate entanglement and quantum information scrambling (QIS) using the example of a many-body extended and spinless effective Fermi-Hubbard Model (EFHM and e-FHM, respectively) that describes a special type of quantum dot array provided by Equal1 Labs' silicon-based quantum computer. The concept of QIS is used in the framework of quantum information processing by quantum circuits and quantum channels. In general, QIS manifests as the delocalization of quantum information over the entire quantum system; more compactly, information about the input cannot be obtained by local measurements of the output of the quantum system. In our work, we first introduce the concept of quantum information scrambling and its connection with the 4-point out-of-time-order (OTO) correlators. In order to have a quantitative measure of QIS, we use the tripartite mutual information, along similar lines to previous works, which measures the mutual information between four different spacetime partitions of the system, and we study the Transverse Field Ising (TFI) model; this is used to quantify the dynamical spreading of quantum entanglement and information in the system. We then investigate scrambling in the quantum many-body extended Hubbard model with external magnetic field Bz and spin-spin coupling J for both uniform and thermal quantum channel inputs and show that it scrambles for specific external tuning parameters (e.g., tunneling amplitudes, on-site potentials, magnetic field). In addition, we compare different Hilbert space sizes (different numbers of qubits) and show the qualitative and quantitative differences in quantum scrambling as we increase the number of quantum degrees of freedom in the system. Moreover, we find a "scrambling phase transition" at a threshold temperature in the thermal case, that is, the temperature of the model at which the channel starts to scramble quantum information. Finally, we make comparisons to the TFI model, highlight the key physical differences between the two systems, and mention some future directions of research.
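
For reference, the tripartite mutual information used above as a scrambling diagnostic is commonly defined as follows; this is the standard textbook form, not a formula quoted from the paper.

```latex
I_3(A\!:\!C\!:\!D) = I(A\!:\!C) + I(A\!:\!D) - I(A\!:\!CD),
\qquad
I(A\!:\!B) = S_A + S_B - S_{AB},
```

where S_X denotes the von Neumann entropy of subsystem X; an increasingly negative I_3 indicates that information about the input has been delocalized and cannot be recovered from local measurements of the output.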

Keywords: condensed matter physics, quantum computing, quantum information theory, quantum physics

Procedia PDF Downloads 68
253 Multi-Agent System Based Distributed Voltage Control in Distribution Systems

Authors: A. Arshad, M. Lehtonen, M. Humayun

Abstract:

With increasing Distributed Generation (DG) penetration, distribution systems are advancing towards smart grid technology for least latency in tackling the voltage control problem in a distributed manner. This paper proposes a multi-agent-based distributed voltage level control. In this method, a flat architecture of agents is used, and the agents involved in the whole control procedure are the On-Load Tap Changer Agent (OLTCA), the Static VAR Compensator Agent (SVCA), and the agents associated with DGs and loads at their locations. The objectives of the proposed voltage control model are to minimize network losses and DG curtailments while maintaining the voltage within statutory limits, as close as possible to the nominal value. The total loss cost is the sum of the network losses cost, DG curtailment costs, and voltage damage cost (which is based on a penalty function implementation). The total cost is iteratively calculated for various stricter limits by plotting the voltage damage cost and losses cost against a varying voltage limit band. The method provides the optimal limits, closest to the nominal value, with minimum total loss cost. In order to achieve the objective of voltage control, the whole network is divided into multiple control regions, each downstream of a controlling device. The OLTCA behaves as a supervisory agent and performs all the optimizations. First, a token is generated by the OLTCA at each time step, and it is passed from node to node until a node with a voltage violation is detected. Upon detection of such a node, the token grants permission to the Load Agent (LA) to initiate possible remedial actions. The LA contacts the respective controlling devices depending on the vicinity of the violated node. If the violated node does not lie in the vicinity of the controller, or the controlling capabilities of all the downstream control devices are at their limits, then the OLTC is considered as a last resort. For a realistic study, simulations are performed for a typical Finnish residential medium-voltage distribution system using MATLAB®. These simulations are executed for two cases: simple Distributed Voltage Control (DVC), and DVC with optimized loss cost (DVC + penalty function). A sensitivity analysis is performed based on DG penetration. The results indicate that the costs of losses and DG curtailments are directly proportional to the DG penetration, while in case 2 there is a significant reduction in total loss. For lower DG penetration, losses are reduced by roughly 50%, while for higher DG penetration, the loss reduction is not very significant. Another observation is that the new, stricter limits calculated by the cost optimization move towards the statutory limits of ±10% of nominal with increasing DG penetration: for 25, 45 and 65% penetration, the calculated limits are ±5, ±6.25 and ±8.75%, respectively. The observed results lead to the conclusion that the voltage control algorithm proposed in case 1 is able to deal with the voltage control problem instantly but with higher losses. In contrast, case 2 gradually reduces the network losses over time through the proposed iterative loss cost optimization performed by the OLTCA.
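
A minimal sketch of the total loss cost described above (network losses plus DG curtailment plus a penalty-function voltage damage term); the cost coefficients, voltage band and quadratic penalty shape are illustrative assumptions, not the paper's exact formulation.

```python
def voltage_penalty(v_pu: float, lower: float = 0.95, upper: float = 1.05, k: float = 1e4) -> float:
    """Illustrative quadratic penalty (cost units) for voltages outside the chosen band."""
    if v_pu < lower:
        return k * (lower - v_pu) ** 2
    if v_pu > upper:
        return k * (v_pu - upper) ** 2
    return 0.0

def total_loss_cost(losses_mwh, curtailed_mwh, node_voltages_pu,
                    loss_price=50.0, curtailment_price=80.0):
    """Total cost = network-loss cost + DG-curtailment cost + voltage damage cost."""
    return (loss_price * losses_mwh
            + curtailment_price * curtailed_mwh
            + sum(voltage_penalty(v) for v in node_voltages_pu))

# The supervisory agent would evaluate this cost for successively stricter voltage bands.
print(total_loss_cost(losses_mwh=1.2, curtailed_mwh=0.3, node_voltages_pu=[0.97, 1.06, 1.02]))
```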

Keywords: distributed voltage control, distribution system, multi-agent systems, smart grids

Procedia PDF Downloads 287
252 Design of Woven Fabric with Increased Sound Transmission Loss

Authors: U. Gunal, H. I. Turgut, H. Gurler, S. Kaya

Abstract:

Many new problems are emerging with rapid population growth in the world. As people's quality of life rises, acoustic comfort has become an important feature in the textile industry. To meet these expectations of comfort and to survive in challenging competitive market conditions without compromising customers' product quality expectations, textile manufacturers increasingly need to add functionality to their products. It is therefore necessary to research and develop the materials and processes that bring such functionality to textile products. The noise we encounter almost everywhere in daily life, in the street, at home and at work, is one of the problems the textile industry is working on. It brings with it many health problems, both mental and physical; therefore, noise control studies are becoming increasingly important. In addition, the materials currently used in noise control are often not sufficient to reduce noise levels, and the fabrics used in acoustic studies in the textile industry do not show sufficient performance relative to their weight and high cost; thus, acoustic textile products are rarely used in daily life. In this study, the approaches used in noise control and building acoustics in the literature were analyzed, and a textile product with the highest achievable damping value was designed, manufactured, and tested. Optimum values were obtained by using different material samples that may affect the performance of the acoustic material. Acoustic measurement methods were applied to verify the acoustic performance of the chosen parameters and the designed three-dimensional structure at different values. In the measurements, both an impedance tube conforming to the relevant standards and a device designed in this study for different noise types were used to determine the acoustic performance of the material. In addition, sound recordings of noise types encountered in daily life were taken and applied to the acoustic absorbent fabric with the aid of the device, and the feasibility of the results and the commercial viability of the product were examined. MATLAB and its numerical computing libraries were used for the frequency and sound power analyses in the study.
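
As a point of reference for the transmission-loss target, the field-incidence mass law, a standard first approximation for a limp panel and not a model taken from this study, can be sketched as follows; the 0.5 kg/m² surface density is an assumed example value.

```python
import math

def mass_law_tl(frequency_hz: float, surface_density_kg_m2: float) -> float:
    """Field-incidence mass-law estimate of sound transmission loss in dB (limp-panel approximation)."""
    return 20.0 * math.log10(frequency_hz * surface_density_kg_m2) - 47.0

# A lightweight fabric (assumed 0.5 kg/m^2) offers very little mass-law TL at low frequencies,
# which is why a 3D woven structure and absorption are needed to improve acoustic performance.
for f in (250, 1000, 4000):
    print(f, "Hz ->", round(mass_law_tl(f, 0.5), 1), "dB")
```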

Keywords: acoustic, egg crate, fabric, textile

Procedia PDF Downloads 85
251 A Method and System for Secure Authentication Using One Time QR Code

Authors: Divyans Mahansaria

Abstract:

User authentication is an important security measure for protecting confidential data and systems. However, the vulnerabilities involved in authenticating into a system have significantly increased. Thus, suitable mechanisms must be deployed during the authentication process to safeguard the user from attacks. The proposed solution implements a novel authentication mechanism to counter various forms of security breach attacks, including phishing, Trojan horse, replay, key logging, Asterisk logging, shoulder surfing, brute force search and others. A QR code (Quick Response code) is a type of matrix barcode, or two-dimensional barcode, that can be used for storing URLs, text, images and other information. In the proposed solution, for each new authentication request a QR code is dynamically generated and presented to the user. A piece of generic information is mapped to a plurality of elements and stored within the QR code. The mapping of the generic information to the plurality of elements is randomized at each new login, so the QR code generated for each authentication request is for one-time use only. In order to authenticate into the system, the user needs to decode the QR code using any QR code decoding software. The QR code decoding software needs to be installed on a handheld mobile device such as a smartphone or personal digital assistant (PDA). On decoding the QR code, the user is presented with a mapping between the generic piece of information and the plurality of elements, from which the user derives the cipher secret information corresponding to his/her actual password. In place of the actual password, the user then uses this cipher secret information to authenticate into the system. The authentication terminal receives the cipher secret information and uses a validation engine to decipher it. If the entered secret information is correct, the user is granted access to the system. A usability study has been carried out on the proposed solution, and the new authentication mechanism was found to be easy to learn and adopt. A mathematical analysis of the time taken to carry out a brute force attack on the proposed solution has also been performed; the result showed that the solution is almost completely resistant to brute force attack. Today’s standard methods for authentication are subject to a wide variety of software, hardware, and human attacks. The proposed scheme can be very useful in controlling the various types of authentication-related attacks, especially in a networked computer environment where the use of a username and password for authentication is common.
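
A minimal sketch of the one-time mapping idea described above; the character set, mapping format and validation flow are simplified illustrations of the concept, not the proposed system's actual implementation.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def new_challenge() -> dict:
    """Server side: build a fresh random substitution of the generic character set.
    The mapping would be encoded into a QR code shown to the user and is valid only once."""
    shuffled = list(ALPHABET)
    secrets.SystemRandom().shuffle(shuffled)
    return dict(zip(ALPHABET, shuffled))

def derive_cipher(password: str, mapping: dict) -> str:
    """User side (after decoding the QR code): substitute each password character."""
    return "".join(mapping[c] for c in password)

def validate(cipher: str, password: str, mapping: dict) -> bool:
    """Server side: recompute the expected cipher from the stored password and the issued mapping."""
    return secrets.compare_digest(cipher, derive_cipher(password, mapping))

mapping = new_challenge()
entered = derive_cipher("s3cretPwd", mapping)     # what the user types instead of the password
print(validate(entered, "s3cretPwd", mapping))    # True; the mapping is then discarded (one-time use)
```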

Keywords: authentication, QR code, cipher / decipher text, one time password, secret information

Procedia PDF Downloads 247
250 Suitability of Satellite-Based Data for Groundwater Modelling in Southwest Nigeria

Authors: O. O. Aiyelokun, O. A. Agbede

Abstract:

Numerical modelling of groundwater flow can be susceptible to calibration errors due to the lack of adequate ground-based hydro-meteorological stations in river basins. Groundwater resources management in Southwest Nigeria is currently challenged by overexploitation, lack of planning and monitoring, urbanization and climate change; hence, for models to be adopted as decision support tools for sustainable management of groundwater, they must be adequately calibrated. Since river basins in Southwest Nigeria are characterized by missing data and a lack of adequate ground-based hydro-meteorological stations, adopting satellite-based data for constructing distributed models is crucial. This study seeks to evaluate the suitability of satellite-based data as a substitute for ground-based data for computing boundary conditions, by determining whether ground- and satellite-based meteorological data fit well in the Ogun and Oshun River basins. The Climate Forecast System Reanalysis (CFSR) global meteorological dataset was first obtained in daily form and converted to monthly form for a period of 432 months (January 1979 to June 2014). Afterwards, ground-based meteorological data for Ikeja (1981-2010), Abeokuta (1983-2010), and Oshogbo (1981-2010) were compared with the CFSR data using goodness-of-fit (GOF) statistics. The study revealed that, based on the mean absolute error (MAE), coefficient of correlation (r) and coefficient of determination (R²), all meteorological variables except wind speed fit well. It was further revealed that maximum and minimum temperature, relative humidity and rainfall had high values of the index of agreement (d) and the ratio of standard deviations (rSD), implying that the CFSR dataset could be used to compute boundary conditions such as groundwater recharge and potential evapotranspiration. The study concluded that satellite-based data such as the CFSR should be used as input when constructing groundwater flow models in river basins in Southwest Nigeria, where the majority of the river basins are partially gauged and characterized by long gaps in hydro-meteorological records.
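
The goodness-of-fit statistics used above can be sketched as plain NumPy functions; this is a sketch of the standard definitions, assuming Willmott's index of agreement for d and the ratio of standard deviations for rSD, and the example series are synthetic placeholders rather than the station data.

```python
import numpy as np

def gof(obs, sim) -> dict:
    """Goodness-of-fit statistics comparing ground-based (obs) and reanalysis (sim) series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    mae = np.mean(np.abs(sim - obs))
    r = np.corrcoef(obs, sim)[0, 1]
    d = 1.0 - np.sum((sim - obs) ** 2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)   # Willmott's index of agreement
    rsd = np.std(sim) / np.std(obs)                                   # ratio of standard deviations
    return {"MAE": mae, "r": r, "R2": r ** 2, "d": d, "rSD": rsd}

# Synthetic monthly series standing in for a ground station vs CFSR comparison
rng = np.random.default_rng(1)
ground = rng.gamma(2.0, 60.0, size=360)
cfsr = ground * 0.9 + rng.normal(0.0, 20.0, size=360)
print({k: round(v, 3) for k, v in gof(ground, cfsr).items()})
```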

Keywords: boundary condition, goodness of fit, groundwater, satellite-based data

Procedia PDF Downloads 99
249 Seafloor and Sea Surface Modelling in the East Coast Region of North America

Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk

Abstract:

Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships are used to emit a hydroacoustic signal from transducers and reproduce the topography of the seabed. This solution provides relevant accuracy and spatial resolution. Bathymetric data from ship surveys are provided by the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed is still unmapped, as there are many gaps to be explored between ship survey tracks. Moreover, such measurements are very expensive and time-consuming. A solution is the raster bathymetric models shared by the General Bathymetric Chart of the Oceans. The offered products are a compilation of different sets of data, raw or processed. Measurements of gravity anomalies also serve as indirect data for the development of bathymetric models. Some forms of seafloor relief (e.g., seamounts) increase the force of the Earth's pull, leading to changes in the sea surface. Based on satellite altimetry data, the sea surface height and marine gravity anomalies can be estimated, and from the anomalies it is possible to infer the structure of the seabed. The main goal of the work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America, a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms used, model densification, and the creation of grid models. The data used are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis. Visualization of the obtained results was carried out with Geographic Information System tools. The result is an extension of the state of knowledge on the quality and usefulness of the data used for seabed and sea surface modeling and on the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.); its changes, along with knowledge of the topography of the ocean floor, inform us indirectly about the volume of the entire ocean. The true shape of the ocean surface is further varied by such phenomena as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, or phases of ocean circulation. Depending on the location of the point, the greater the depth, the smaller the trend of sea level change. Studies show that combining data sets from different sources, with different accuracies, can affect the quality of sea surface and seafloor topography models.
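
A minimal sketch of the gridding step behind such regional models; the synthetic soundings and the simple scipy interpolator stand in for the actual MB-System / GEBCO workflow and are not the study's data or algorithm.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(2)
# Synthetic survey soundings: longitude, latitude and depth (m), standing in for multibeam tracks
lon = rng.uniform(-70.0, -60.0, 2000)
lat = rng.uniform(35.0, 42.0, 2000)
depth = -4000.0 + 1500.0 * np.exp(-((lon + 65.0) ** 2 + (lat - 38.5) ** 2))   # a synthetic "seamount"

# Regular grid at ~0.05 degree spacing and interpolation of the scattered soundings onto it
grid_lon, grid_lat = np.meshgrid(np.arange(-70, -60, 0.05), np.arange(35, 42, 0.05))
grid_depth = griddata((lon, lat), depth, (grid_lon, grid_lat), method="linear")
print(grid_depth.shape)   # the raster bathymetric model, ready for export (e.g., to NetCDF)
```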

Keywords: seafloor, sea surface height, bathymetry, satellite altimetry

Procedia PDF Downloads 56
248 Colloids and Heavy Metals in Groundwaters: Tangential Flow Filtration Method for Study of Metal Distribution on Different Sizes of Colloids

Authors: Jiancheng Zheng

Abstract:

When metals are released into water from mining activities, they undergo chemical, physical and biological changes and may then become more mobile and transportable along the waterway from their original sites. Natural colloids, including both organic and inorganic entities, occur naturally in any aquatic environment, with sizes in the nanometer range. Natural colloids in a water system play an important role, quite often a key role, in binding and transporting compounds. When assessing and evaluating metals in natural waters, their sources, mobility, fate, and distribution patterns in the system are the major concerns from the point of view of assessing environmental contamination and pollution during resource development. There are a few ways to quantify colloids and, accordingly, to study how metals are distributed over different sizes of colloids. Current research results show that the presence of colloids can enhance the transport of some heavy metals in water, while heavy metals may also influence the transport of colloids when cations in the water system alter the colloids and/or the ionic strength of the water system changes. Therefore, studies of the relationship between different sizes of colloids and different metals in a water system are necessary, as natural colloids in water systems are complex mixtures of organic, inorganic and biological materials. Their stability can be sensitive to changes in their shapes, phases, hardness and functionalities due to coagulation, deposition, and chemical, physical, and biological reactions. Because the adsorption of metal contaminants on the surfaces of colloids is closely related to colloid properties, it is desirable to fractionate water samples as soon as possible after a sample is taken in the natural environment, in order to avoid changes to the samples during transportation and storage. For this reason, this study carried out groundwater sample processing in the field, using Prep/Scale tangential flow filtration systems with three-level cartridges (1 kDa, 10 kDa and 100 kDa). Groundwater samples from seven sites at Fort McMurray, Alberta, Canada, were fractionated during the 2015 field sampling season. All samples were processed within 3 hours after they were taken. Preliminary results show that, although the distribution pattern of metals on colloids may vary between samples taken from different sites, some elements often tend towards larger colloids (such as Fe and Re), some towards finer colloids (such as Sb and Zn), while others remain mainly in the dissolved form (such as Mo and Be). This information is useful for evaluating and projecting the fate and mobility of different metals in groundwater and possibly in other environmental water systems.

Keywords: metal, colloid, groundwater, mobility, fractionation, sorption

Procedia PDF Downloads 321
247 The Future Control Rooms for Sustainable Power Systems: Current Landscape and Operational Challenges

Authors: Signe Svensson, Remy Rey, Anna-Lisa Osvalder, Henrik Artman, Lars Nordström

Abstract:

The electric power system is undergoing significant changes. As a result, operation and control are becoming partly modified, more multifaceted and more automated, and supplementary operator skills might be required. This paper discusses developing operational challenges in future power system control rooms, posed by the evolving landscape of sustainable power systems, driven in turn by the shift towards electrification and renewable energy sources. Based on a literature review followed by interviews and a comparison with other related domains with similar characteristics, a descriptive analysis was performed from a human factors perspective. The analysis is meant to identify trends, relationships, and challenges. A power control domain taxonomy includes a temporal domain (planning and real-time operation) and three operational domains within the power system (generation, switching and balancing). Within each operational domain, there are different control actions, either in the planning stage or in real-time operation, that affect the overall operation of the power system. In addition to the temporal dimension, the control domains are divided in space between a multitude of different actors distributed across many different locations. A control room is a central location where different types of information are monitored and controlled, alarms are responded to, and deviations are handled by the control room operators. The operators’ competencies, teamwork skills and team shift patterns, as well as control system designs, are all important factors in ensuring efficient and safe electricity grid management. As the power system evolves with sustainable energy technologies, challenges arise. Questions are raised as to whether the operators’ tacit knowledge, experience and operating skills of today are sufficient to make constructive decisions to solve modified and new control tasks, especially during disturbed operations or abnormalities. Which new skills need to be developed in planning and real-time operation to provide efficient generation and delivery of energy through the system? How should the user interfaces be developed to assist operators in processing the increasing amount of information? Are some skills at risk of being lost when the systems change? How should the physical environment and collaborations between different stakeholders within and outside the control room develop to support operator control? To conclude, the system change will provide many benefits related to electrification and renewable energy sources, but it is important to address the operators’ challenges with increasing complexity. The control tasks will be modified, and additional operator skills will be needed to perform efficient and safe operations. The whole human-technology-organization system also needs to be considered, including the physical environment, the technical aids and the information systems, the operators’ physical and mental well-being, and the social and organizational systems.

Keywords: operator, process control, energy system, sustainability, future control room, skill

Procedia PDF Downloads 57
244 Topographic Survey of the Property Located near the Locality of Gircov, Romania

Authors: Carmen Georgeta Dumitrache

Abstract:

Terrestrial measurement science studies the totality of operations and computations carried out to represent the land surface on a plan or map in a specific cartographic projection and at a topographic scale. With the development of society, land measurements have evolved, serving both a utilitarian goal tied to economic activity and a scientific purpose related to determining the form and dimensions of the Earth. Field measurements, data processing, and the proper representation of planimetry and landforms on drawings and maps rely on topographic and geodetic instruments, calculation, and graphical reporting, which require knowledge of theoretical and practical concepts from different areas of science and technology. The proper practical use of topographic and geodetic instruments designed to measure angles and distances precisely requires knowledge of geometric optics, precision mechanics, strength of materials, and more. Processing the results of field measurements requires calculation methods based on notions of geometry, trigonometry, algebra, mathematical analysis and computer science. To illustrate these topographic measurements, a survey was carried out for a property located near the locality of Gircov, Romania. The total surface of the plan sheet (T30) and of the parcels/plots was determined, and the coordinates of a parcel were also traced in the field. The purpose of the planimetric survey consisted of: the exact determination of the bounding surface; the analytical calculation of the surface; comparison of the determined surface with the one registered in the documents produced; drawing up a plan of location and delineation with adjacencies and contour distances, as well as highlighting the parcels comprising this property; drawing up a plan of location and delineation with adjacencies and contour distances for a parcel from Dave; and tracing in the field the outline points of the plot from the previous step. The ultimate goal of this work was to determine and represent the surface, but also to detach a plot from the total surface while respecting the surface condition imposed by the deed of the beneficiary's property.
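
The analytical surface calculation mentioned above typically reduces to the shoelace (Gauss) formula applied to the boundary coordinates; the following sketch uses hypothetical corner coordinates in a local projected grid, not the surveyed parcel.

```python
def shoelace_area(points):
    """Analytical (Gauss / shoelace) area of a closed polygon given its corner coordinates."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Hypothetical parcel corners in a local projected grid (metres)
parcel = [(350.0, 120.0), (410.0, 118.0), (412.0, 178.0), (352.0, 181.0)]
print(f"Parcel area ~ {shoelace_area(parcel):.1f} m^2")
```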

Keywords: topography, surface, coordinate, modeling

Procedia PDF Downloads 231
245 Development of a Sprayable Piezoelectric Material for E-Textile Applications

Authors: K. Yang, Y. Wei, M. Zhang, S. Yong, R. Torah, J. Tudor, S. Beeby

Abstract:

E-textiles are traditional textiles with integrated electronic functionality. They are an emerging innovation with numerous applications in fashion, wearable computing, health and safety monitoring, and the military and medical sectors. The piezoelectric effect is a widespread and versatile transduction mechanism used in sensor and actuator applications. Piezoelectric materials produce electric charge when stressed; conversely, mechanical deformation occurs when an electric field is applied across the material. Lead zirconate titanate (PZT) is a widely used piezoceramic material which has been used to fabricate e-textiles through screen printing, electrospinning and hydrothermal synthesis. This paper explores an alternative fabrication process: spray coating. Spray coating is a straightforward and cost-effective fabrication method applicable to both flat and curved surfaces. It can also be applied selectively by spraying through a stencil, which enables the required design to be realised on the substrate. This work developed a sprayable PZT-based piezoelectric ink consisting of a binder (Fabink-Binder-01), PZT powder (80 % 2 µm and 20 % 0.8 µm) and acetone as a thinner. The optimised weight ratio of PZT/binder is 10:1. The components were mixed using a SpeedMixer DAC 150. The fabrication process is as follows: 1) Screen print a UV-curable polyurethane interface layer on the textile to create a smooth textile surface. 2) Spray one layer of a conductive silver polymer ink through a pre-designed stencil and dry at 90 °C for 10 minutes to form the bottom electrode. 3) Spray three layers of the PZT ink through a pre-designed stencil and dry at 90 °C for 10 minutes for each layer, forming a PZT layer ~250 µm thick in total. 4) Spray one layer of the silver ink through a pre-designed stencil on top of the PZT layer and dry at 90 °C for 10 minutes to form the top electrode. The domains of the PZT elements were aligned by polarising the material at an elevated temperature under a strong electric field. A d33 of 37 pC/N has been achieved after polarising at 90 °C for 6 minutes with an electric field of 3 MV/m. The application of the piezoelectric textile was demonstrated by fabricating a pressure sensor to switch an LED on/off. Other potential applications in e-textiles include motion sensing, energy harvesting, force sensing and a buzzer.
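
As a quick illustration of what the measured d33 implies for the pressure-sensor application, the charge generated by a press follows from Q = d33 x F; the 10 N test force below is an assumed value for illustration, not one reported above.

```python
d33 = 37e-12           # C/N, piezoelectric coefficient measured in the abstract
force = 10.0           # N, assumed test force for illustration
charge = d33 * force   # generated charge in coulombs
print(f"{charge * 1e12:.0f} pC generated for a {force:.0f} N press")   # -> 370 pC
```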

Keywords: piezoelectric, PZT, spray coating, pressure sensor, e-textile

Procedia PDF Downloads 444
244 Human Factors as the Main Reason of the Accident in Scaffold Use Assessment

Authors: Krzysztof J. Czarnocki, E. Czarnocka, K. Szaniawska

Abstract:

The main goal of the research project is the formulation of the Scaffold Use Risk Assessment Model (SURAM), developed for the assessment of risk levels at various construction process stages and for various work trades. In 2016 the project received financing from the National Centre for Research and Development under research grant PBS3/A2/19/2015. The data, calculations and analyses discussed in this paper were created as a result of the completion of the first and second phases of the PBS3/A2/19/2015 project. Method: One arm of the research project is the assessment of worker visual concentration on the sight zones as well as inadequate observation of risky visual points. In this part of the research, a mobile eye-tracker was used to monitor the workers' observation zones. SMI Eye Tracking Glasses are a tool that allows us to analyze in real time where a worker's eyesight is concentrated and, consequently, to build a map of the worker's eyesight concentration during a shift. While the project is still running, 64 construction sites have been examined so far, and more than 600 workers have taken part in the experiment, including monitoring of typical parameters of the work regimen, workload, microclimate, sound, vibration, etc. The full equipment can also be useful in more advanced analyses. Thanks to this technology, we have verified not only the main focus of workers' eyes during work on or next to scaffolding, but also which changes in the surrounding environment during their shift influenced their concentration. The results of this study show that workers' eye concentration was on one of the three work-related areas for only up to 45.75% of the shift time. Workers seem to be distracted by noisy vehicles or people nearby. Contrary to our initial assumptions and other authors' findings, we observed that the reflective parts of the scaffolding were not more readily recognized by workers in their direct workplaces. We noticed that the red curbs were the only well-recognized parts on a very few scaffolds; surprisingly, across a large number of samples we did not register any significant number of fixations on those curbs. Conclusion: We found the eye-tracking method useful for the construction of the SURAM model in the risk perception and worker behaviour sub-modules. We also found that the worker's initial stress and the visual conditions of the work seem to be more predictive for assessing a developing risky situation or an accident than other parameters relating to the work environment.
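
A minimal sketch of how the share of shift time spent looking at work-related areas can be computed from gaze samples; the rectangular areas of interest and the sample points are hypothetical placeholders, not the study's scene definitions or data.

```python
# Each gaze sample is an (x, y) point in the scene camera frame; each AOI is a rectangle.
WORK_AOIS = {                    # hypothetical areas of interest (x_min, y_min, x_max, y_max)
    "scaffold_deck": (100, 200, 400, 600),
    "tools":         (450, 300, 600, 500),
    "material":      (650, 100, 900, 400),
}

def in_aoi(point, aoi):
    x, y = point
    x0, y0, x1, y1 = aoi
    return x0 <= x <= x1 and y0 <= y <= y1

def work_related_share(gaze_samples):
    """Fraction of gaze samples falling inside any work-related area of interest."""
    hits = sum(any(in_aoi(p, a) for a in WORK_AOIS.values()) for p in gaze_samples)
    return hits / len(gaze_samples)

samples = [(120, 250), (500, 350), (700, 800), (50, 50)]   # toy gaze data
print(f"{work_related_share(samples):.1%} of samples on work-related areas")
```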

Keywords: accident assessment model, eye tracking, occupational safety, scaffolding

Procedia PDF Downloads 177
243 Assessment of the Change in Strength Properties of Biocomposites Based on PLA and PHA after 4 Years of Storage in a Highly Cooled Condition

Authors: Karolina Mazur, Stanislaw Kuciel

Abstract:

Polylactides (PLA) and polyhydroxyalkanoates (PHA) are the two groups of biodegradable and biocompatible thermoplastic polymers most commonly utilised in medicine and rehabilitation. The aim of this work is to determine the changes in strength properties and microstructure taking place in biodegradable polymer composites during long-term storage in a highly cooled environment (i.e. a freezer at -24 ºC) and to make an initial assessment of the durability of such biocomposites when used as single-use elements of rehabilitation or medical equipment. It is difficult to find any information relating to the feasibility of long-term storage of technical products made of PLA or PHA; nonetheless, when these materials are used to make products such as the casings of hair dryers, laptops or mobile phones, it is safe to assume that without storage in optimal conditions their degradation may take even several years. SEM images were taken, the strength properties were assessed (tensile, bending and impact testing), and the density and water sorption of two polymers, PLA and PHA (NaturePlast PLE 001 and PHE 001), filled with cellulose fibres (corncob grain – Rehofix MK100, Rettenmaier & Söhne) up to 10 and 20% by mass, were determined. The biocomposites had been stored at a temperature of -24 ºC for 4 years. In order to identify the changes in strength properties and microstructure taking place after such a long time of storage, the results of the assessment were compared with the results of the same tests carried out 4 years earlier. The results show a significant change in the manner of fracture of the PHA composite with corncob grain, from ductile with a developed surface when the tensile testing was performed directly after injection moulding to a more brittle state after 4 years of storage; this is confirmed by the strength tests, where a decrease in deformation at fracture is observed. The research showed that there is a way of storing medical devices made of PLA or PHA for a reasonably long time, as long as the required storage temperature is maintained. The decrease in the mechanical properties of PLA found in tensile and bending testing was less than 10% of the tensile strength, while the modulus of elasticity and the deformation at fracture slightly rose, which may indicate the beginning of degradation processes. The strength properties of PHA are even higher after 4 years of storage, although in that case the decrease in deformation at fracture is significant, reaching even 40%, which suggests that its degradation rate is higher than that of PLA. In both cases, the addition of natural particles only slightly increases the biodegradation.

Keywords: biocomposites, PLA, PHA, storage

Procedia PDF Downloads 242
242 Study of the Hydrodynamic of Electrochemical Ion Pumping for Lithium Recovery

Authors: Maria Sofia Palagonia, Doriano Brogioli, Fabio La Mantia

Abstract:

In the last decade, lithium has become an important raw material in various sectors, in particular for rechargeable batteries. Its production is expected to grow more and more in the future, especially for mobile energy storage and electromobility. Until now it has mostly been produced by the evaporation of water from salt lakes, which leads to huge water consumption, a large amount of waste, and a strong environmental impact. A new, clean and faster electrochemical technique to recover lithium has recently been proposed: electrochemical ion pumping. It consists of capturing lithium ions from a feed solution by intercalation in a lithium-selective material, followed by releasing them into a recovery solution; both steps are driven by the passage of a current. In this work, a new configuration of the electrochemical cell is presented and used to study and optimize the intercalation of lithium ions through the hydrodynamic conditions. Lithium manganese oxide (LiMn₂O₄) was used as a cathode to intercalate lithium ions selectively during reduction, while nickel hexacyanoferrate (NiHCF), used as an anode, releases positive ions. The effect of hydrodynamics on the process was studied by conducting the experiments at various fluxes of the electrolyte through the electrodes, in terms of the charge circulated through the cell, the captured lithium per unit mass of material, and the overvoltage. The results show that flowing the electrolyte inside the cell improves the lithium capture, in particular at low lithium concentration. Indeed, in an Atacama feed solution at 40 mM of lithium, the amount of lithium captured does not increase considerably with the flux of the electrolyte. Instead, when the concentration of lithium ions is 5 mM, the amount of captured lithium in a single capture cycle increases with increasing flux, leading to the conclusion that the slowest step in the process is the transport of the lithium ion in the liquid phase. Furthermore, an influence of the concentration of other cations in solution on the process performance was observed. In particular, lithium capture was performed at different concentrations of NaCl together with 5 mM of LiCl, and the results show that the presence of NaCl limits the amount of captured lithium. Further studies can be performed in order to understand why the full capacity of the material is not reached at the highest flow rate. This is probably due to the porous structure of the material, since the liquid phase is likely not affected by the convection flow inside the pores. This work proves that electrochemical ion pumping, with a suitable hydrodynamic design, enables the recovery of lithium from feed solutions at lower concentrations than the sources that are currently exploited, down to 1 mM.
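
The amount of lithium that a given circulated charge can intercalate follows directly from Faraday's law; a short sketch, where the 50 C charge value is an assumed example rather than a result from the study.

```python
FARADAY = 96485.0   # C/mol
M_LI = 6.94         # g/mol
Z = 1               # electrons per intercalated Li+ ion

def lithium_captured_mg(charge_coulomb: float, efficiency: float = 1.0) -> float:
    """Mass of lithium (mg) intercalated for a given circulated charge, at the given coulombic efficiency."""
    return charge_coulomb * efficiency * M_LI / (Z * FARADAY) * 1000.0

print(f"{lithium_captured_mg(50.0):.2f} mg Li for 50 C of charge")   # ~3.6 mg at 100% efficiency
```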

Keywords: desalination battery, electrochemical ion pumping, hydrodynamic, lithium

Procedia PDF Downloads 191
241 Refractory Cardiac Arrest: Do We Go beyond, Do We Increase the Organ Donation Pool or Both?

Authors: Ortega Ivan, De La Plaza Edurne

Abstract:

Background: Spain and other European countries have implemented uncontrolled Donation after Cardiac Death (uDCD) programs. After 15 years of experience in Spain, many things have changed. Recent evidence and technical breakthroughs achieved in resuscitation are relevant for uDCD programs and raise some ethical concerns related to these protocols. Aim: To rethink current uDCD programs in the light of recent evidence on the therapeutic procedures available for victims of out-of-hospital cardiac arrest (OHCA), and to address the following question: what is the current standard of treatment owed to victims of OHCA before including them in a uDCD protocol? Materials and Methods: Review of the scientific and ethical literature related to both uDCD programs and innovative resuscitation techniques. Results: 1) The standard of treatment received and the chances of survival of victims of OHCA depend on whether they are classified as Non-Heart-Beating Patients (NHBPs) or Non-Heart-Beating Donors (NHBDs). 2) Recent studies suggest that NHBPs are likely to survive, with good quality of life, if one or more of the following interventions are performed while ongoing CPR, guided by the suspected or known cause of OHCA, is maintained: a) direct access to a 24-hour cath lab and/or to extracorporeal life support (ECLS); b) transfer in induced hypothermia from the Emergency Medical Service (EMS) to the ICU; c) thrombolysis treatment; d) mobile extracorporeal membrane oxygenation (mini ECMO) instituted as a bridge to ICU ECLS devices. 3) Victims of OHCA who cannot benefit from any of these therapies should be considered as NHBDs. Conclusion: Current uDCD protocols do not take into account recent improvements in resuscitation and need to be adapted. Operational criteria to distinguish NHBDs from NHBPs should seek a balance between the technical imperative (to do whatever is possible), considerations about expected survival with quality of life, and distributive justice (costs/benefits). Uncontrolled DCD protocols can be performed in a way that does not hamper the legitimate interests of patients, potential organ donors, their families, the organ recipients, and the health professionals involved in these processes. Families of NHBDs should receive information which conforms to the ethical principles of respect for autonomy and transparency.

Keywords: uncontrolled donation after cardiac death, resuscitation, refractory cardiac arrest, out-of-hospital cardiac arrest, ethics

Procedia PDF Downloads 211
240 Assessing the Environmental Efficiency of China’s Power System: A Spatial Network Data Envelopment Analysis Approach

Authors: Jianli Jiang, Bai-Chen Xie

Abstract:

The climate issue has aroused global concern. Achieving sustainable development is a promising path for countries to mitigate environmental and climatic pressures, although there are many difficulties. The first step towards sustainable development is to evaluate the environmental efficiency of the energy industry with proper methods. The power sector is a major source of CO2, SO2, and NOx emissions, and evaluating the environmental efficiency (EE) of power systems is a prerequisite for alleviating the pressing energy and environmental situation. Data Envelopment Analysis (DEA) has been widely used in efficiency studies. However, measuring the efficiency of a system (be it a nation, region, sector, or business) is a challenging task. Classic DEA treats decision-making units (DMUs) as independent, which neglects the interaction between DMUs. Ignoring these inter-regional links may result in a systematic bias in the efficiency analysis; for instance, the renewable power generated in a certain region may benefit adjacent regions, while its SO2 and CO2 emissions act in the opposite way. This study proposes a spatial network DEA (SNDEA) with a slack measure that can capture the spatial spillover effects of inputs/outputs among DMUs. This approach is used to study the EE of China's power system, which consists of generation, transmission, and distribution departments, using a panel dataset from 2014 to 2020. In the empirical example, the energy and patent inputs, the undesirable CO2 output, and the renewable energy (RE) power variables show a significant spatial spillover effect. Compared with classic network DEA, the SNDEA result shows an obvious difference, as tested by the global Moran's I index. From a dynamic perspective, the EE of the power system experiences a visible surge from 2015 and then a sharp downtrend from 2019, mirroring the trend of the power transmission department. The surge benefits from the market-oriented reform of the Chinese power grid enacted in 2015, while the rapid decline in the environmental efficiency of the transmission department in 2020 was mainly due to the Covid-19 epidemic, which seriously hindered economic development. The EE of the power generation department shows an overall declining trend, which is reasonable once RE power is taken into consideration: the installed capacity of RE power in 2020 is 4.40 times that in 2014, while the power generation is only 3.97 times, meaning that the power generation per unit of installed capacity shrank. In addition, the consumption cost of renewable power increases rapidly with the growth of RE power generation. These two aspects explain the declining EE of the power generation department. By incorporating the interactions among inputs/outputs into the DEA model, this paper proposes an efficiency evaluation method within the DEA framework that sheds some light on efficiency evaluation in regional studies. Furthermore, the SNDEA model and the spatial DEA concept can be extended to other fields, such as industry- and country-level studies.
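
As an illustration of the spatial-dependence test mentioned above, a global Moran's I statistic can be computed for any regional efficiency (or emission) variable given a spatial weight matrix. The sketch below is a minimal, self-contained implementation with made-up data; it is not the authors' SNDEA model, and the weight matrix and efficiency scores are hypothetical.

import numpy as np

def global_morans_i(x: np.ndarray, w: np.ndarray) -> float:
    """Global Moran's I for values x over regions with spatial weight matrix w."""
    n = x.size
    z = x - x.mean()                         # deviations from the mean
    num = n * np.sum(w * np.outer(z, z))     # n * sum_ij w_ij z_i z_j
    den = w.sum() * np.sum(z ** 2)           # W * sum_i z_i^2
    return num / den

# Hypothetical example: 4 regions on a line, rook-contiguity weights
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
efficiency = np.array([0.62, 0.58, 0.91, 0.87])  # made-up efficiency scores
print(f"Moran's I = {global_morans_i(efficiency, w):.3f}")  # > 0 suggests positive spatial autocorrelation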

Keywords: spatial network DEA, environmental efficiency, sustainable development, power system

Procedia PDF Downloads 76
239 User-Centered Design in the Development of Patient Decision Aids

Authors: Ariane Plaisance, Holly O. Witteman, Patrick Michel Archambault

Abstract:

Upon admission to an intensive care unit (ICU), all patients should discuss their wishes concerning life-sustaining interventions (e.g., cardiopulmonary resuscitation (CPR)). Without such discussions, interventions that prolong life at the cost of decreasing its quality may be used without appropriate guidance from patients. We employed user-centered design to adapt an existing decision aid (DA) about CPR and create a novel wiki-based DA adapted to the context of a single ICU and tailored to individual patients' risk factors. During Phase 1, we conducted three weeks of ethnography of the decision-making context in our ICU to identify clinician and patient needs for a decision aid. During this time, we observed five dyads of intensivists and patients discussing the patients' wishes concerning life-sustaining interventions. We also conducted semi-structured interviews with the attending intensivists in this ICU. During Phase 2, we conducted three rounds of rapid prototyping involving 15 patients and 11 other allied health professionals. We recorded discussions between intensivists and patients and used a standardized observation grid to collect patients' comments and sociodemographic data. We applied content analysis to field notes, verbatim transcripts and the completed observation grids. Each round of observations and rapid prototyping iteratively informed the design of the next prototype. We also used the programming architecture of a wiki platform to embed the GO-FAR prediction rule code, which we linked to risk graphics software to better illustrate the calculated outcome risks. During Phase 1, we identified the need to add a section to our DA concerning invasive mechanical ventilation in addition to CPR, because both life-sustaining interventions were often discussed together by physicians. During Phase 2, we produced a context-adapted decision aid about CPR and mechanical ventilation that includes a values clarification section, questions about the patient's functional autonomy prior to ICU admission and the functional decline they would judge acceptable upon hospital discharge, the risks and benefits of CPR and invasive mechanical ventilation, population-level statistics about CPR, a synthesis section to help patients come to a final decision, and an online calculator based on the GO-FAR prediction rule. Even though the three rounds of rapid prototyping led to simplifying the information in our DA, 60% (n = 3/5) of the patients involved in the last cycle still did not understand the purpose of the DA. We also identified gaps in the discussion and documentation of patients' preferences concerning life-sustaining interventions (e.g., CPR, invasive mechanical ventilation). The final version of our DA and our online wiki-based GO-FAR risk calculator using the IconArray.com risk graphics software are available online at www.wikidecision.org and are ready to be adapted to other contexts. Our results inform producers of decision aids on the use of wikis and user-centered design to develop DAs that are better adapted to users' needs. Further work is needed on a video version of our DA. Physicians will also need training to use our DA and to develop shared decision-making skills about goals of care.
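
One of the components mentioned above, the graphic display of calculated risks, can be illustrated with a simple text-based icon array. The snippet below is only a hedged sketch of the general idea; it does not reproduce the IconArray.com software or the published GO-FAR coefficients, and the risk value shown is a placeholder.

def icon_array(risk_percent: float, rows: int = 10, cols: int = 10,
               affected: str = "X", unaffected: str = "o") -> str:
    """Render a rows x cols icon array where risk_percent of the icons are
    marked as affected (e.g., survival with good neurological outcome)."""
    total = rows * cols
    n_affected = round(risk_percent / 100.0 * total)
    icons = [affected] * n_affected + [unaffected] * (total - n_affected)
    lines = [" ".join(icons[r * cols:(r + 1) * cols]) for r in range(rows)]
    return "\n".join(lines)

# Hypothetical risk estimate of 9.4% (placeholder, not a GO-FAR output)
print(icon_array(9.4))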

Keywords: ethnography, intensive care units, life-sustaining therapies, user-centered design

Procedia PDF Downloads 325
238 Issues of Accounting of Lease and Revenue according to International Financial Reporting Standards

Authors: Nadezhda Kvatashidze, Elena Kharabadze

Abstract:

It is widely known that leasing is a flexible means of financing enterprises. Leasing reduces the risks related to accessing and possessing assets, as well as to obtaining funding. It is therefore important to refine lease accounting. The lease accounting regulations under the applicable standard (International Accounting Standard 17) make the concealment of liabilities possible. As a result, information users receive inaccurate and incomplete information and have to resort to an additional assessment of the off-balance-sheet lease liabilities. In order to address the problem, the International Accounting Standards Board decided to change the approach to lease accounting. With the deficiencies of the applicable standard taken into account, the new standard (IFRS 16 'Leases') aims at supplying appropriate and fair lease-related information to users. Save for certain exemptions, the lessee is obliged to recognize all lease agreements in its financial statements. This approach reflects the fact that, under a lease agreement, rights and obligations arise in the form of assets and liabilities. Immediately upon conclusion of the lease agreement, the lessee takes an asset into its possession and assumes the obligation to make the lease payments, thereby meeting the recognition criteria defined by the Conceptual Framework for Financial Reporting, and these items are to be recognized in the financial statements. The new lease accounting standard thus secures the supply of high-quality, comparable information to financial information users. The International Accounting Standards Board and the US Financial Accounting Standards Board jointly developed IFRS 15 'Revenue from Contracts with Customers'. The standard establishes detailed practical criteria for revenue recognition, such as the identification of the performance obligations in the contract, the determination of the transaction price and its components, especially variable consideration and other important components, and the passage of control over the asset to the customer. IFRS 15 is very similar to the relevant US standards and includes requirements that are more specific and consistent than those of the standards previously in place. The new standard is going to change recognition terms and techniques in industries such as construction, telecommunications (mobile and cable networks), licensing (media, science, franchising), real property, software, etc.
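
Under IFRS 16, the lessee initially measures the lease liability at the present value of the remaining lease payments, discounted at the rate implicit in the lease (or the lessee's incremental borrowing rate). The minimal sketch below illustrates that calculation with hypothetical figures; it is not part of the authors' paper, and the payment amount, term and rate are placeholders.

def lease_liability(annual_payment: float, years: int, discount_rate: float) -> float:
    """Initial lease liability: present value of equal annual lease payments
    made in arrears, discounted at the lessee's incremental borrowing rate."""
    return sum(annual_payment / (1 + discount_rate) ** t for t in range(1, years + 1))

# Hypothetical example: 5 annual payments of 10,000 discounted at 6%
liability = lease_liability(10_000, 5, 0.06)
print(f"Initial lease liability (basis for the right-of-use asset): {liability:,.2f}")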

Keywords: assessment of the lease assets and liabilities, contractual liability, division of contract, identification of contracts, contract price, lease identification, lease liabilities, off-balance sheet, transaction value

Procedia PDF Downloads 293
237 An Observational Study Assessing the Baseline Communication Behaviors among Healthcare Professionals in an Inpatient Setting in Singapore

Authors: Pin Yu Chen, Puay Chuan Lee, Yu Jen Loo, Ju Xia Zhang, Deborah Teo, Jack Wei Chieh Tan, Biauw Chi Ong

Abstract:

Background: Synchronous communication, such as telephone calls, remains the standard communication method between nurses and other healthcare professionals in Singapore public hospitals despite advances in asynchronous technological platforms, such as instant messaging. Although miscommunication is one of the most common causes of lapses in patient care, there is a scarcity of research characterizing baseline inter-professional healthcare communications in a hospital setting due to logistic difficulties. Objective: This study aims to characterize the frequency and patterns of communication behaviours among healthcare professionals. Methods: The one-week observational study was conducted from Monday through Sunday at the nursing station of a cardiovascular medicine and cardiothoracic surgery inpatient ward at the National Heart Centre Singapore. Subjects were shadowed by two physicians for sixteen hours (consecutive morning and afternoon nursing shifts). Communications were logged and characterized by type, duration, caller, and recipient. Results: A total of 1,023 communication events involving the attempted use of the common telephones at the nursing station were logged over one week, corresponding to a frequency of one event every 5.45 minutes (SD 6.98, range 0-56 minutes). Nurses initiated the highest proportion of outbound calls (38.7%) via the nursing station common phone. A total of 179 face-to-face communications (17.5%), 362 inbound calls (35.39%), 481 outbound calls (47.02%), and 1 emergency alert (0.10%) were captured. The average response time for task-oriented communications was 159 minutes (SD 387.6, range 86-231). Approximately 1 in 3 communications captured aimed to clarify patient-related information. The total time spent on synchronous communication events over one week, calculated from total inbound and outbound calls, was estimated at 7 hours. Conclusion: The results of our study show that a significant amount of time is spent on inter-professional healthcare communications via synchronous channels. Integration of patient-related information and use of asynchronous communication channels may help to reduce the redundancy of communications and clarifications. Future studies should explore the use of asynchronous mobile platforms to address the inefficiencies observed in healthcare communications.
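
The descriptive statistics reported above (event counts, inter-event intervals, channel proportions) can be reproduced from a timestamped communication log with a few lines of code. The sketch below uses a hypothetical log format and made-up entries; it is not the study's actual data pipeline.

from collections import Counter
from datetime import datetime
from statistics import mean, stdev

# Hypothetical log: (timestamp, channel) pairs captured at the nursing station
log = [
    ("2021-03-01 08:02", "outbound call"),
    ("2021-03-01 08:05", "inbound call"),
    ("2021-03-01 08:19", "face-to-face"),
    ("2021-03-01 08:20", "outbound call"),
]

times = [datetime.strptime(t, "%Y-%m-%d %H:%M") for t, _ in log]
gaps = [(b - a).total_seconds() / 60 for a, b in zip(times, times[1:])]

print(f"events logged: {len(log)}")
print(f"mean interval: {mean(gaps):.2f} min (SD {stdev(gaps):.2f})")

# Breakdown by channel, mirroring the proportions reported in the abstract
for channel, n in Counter(c for _, c in log).items():
    print(f"{channel}: {n} ({n / len(log):.1%})")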

Keywords: healthcare communication, healthcare management, nursing, qualitative observational study

Procedia PDF Downloads 188
236 Socioeconomic Burden of a Diagnosis of Cervical Cancer in Women in Rural Uganda: Findings from a Phenomenological Study

Authors: Germans Natuhwera, Peter Ellis, Acuda Wilson, Anne Merriman, Martha Rabwoni

Abstract:

Objective: The aim of the study was to examine the socio-economic burden and impact of a diagnosis of cervical cancer (CC) on rural women in the context of Uganda, a low-resource country, using a phenomenological enquiry. Methods: This was a multi-site phenomenological inquiry conducted at three hospice settings: Mobile Hospice Mbarara in southwestern Uganda, Little Hospice Hoima in western Uganda, and Hospice Africa Uganda Kampala in central Uganda. A purposive sample of women with a histologically confirmed diagnosis of CC was recruited. Data were collected using open-ended, audio-recorded interviews conducted in the participants' native languages. Interviews were transcribed verbatim in English, and Braun and Clarke's (2019) framework of thematic analysis was used. Results: 13 women with a mean age of 49.2 years (range 29-71) participated in the study. All participants were of low socioeconomic status, and the majority (84.6%) had advanced disease at diagnosis. A fuller reading of the transcripts produced four major themes: (1) socioeconomic characteristics of the women, (2) impact of CC on the women's relationships, (3) disrupted and impaired activities of daily living (ADLs), and (4) economic disruptions. Conclusions: A diagnosis of CC introduces significant socio-economic disruptions into a woman's life and that of her family. CC causes disability and impairs the woman's and her family's productivity, hence exacerbating poverty in the home. High out-of-pocket expenditure on treatment, investigations, and transport further compounds the socio-economic burden. Recommended measures include decentralizing cancer care services to regional centers, scaling up screening services, subsidizing the costs of cancer care or making cervical cancer treatment free of charge, strengthening monitoring mechanisms in public facilities to curb the practice of healthcare workers soliciting bribes from patients, increasing mass awareness campaigns about cancer, training more healthcare professionals in cancer investigation, management and palliative care, and introducing an introductory course on gynecologic cancers into all health training institutions.

Keywords: activities of daily living, cervical cancer, out-of-pocket expenditure, phenomenology, socioeconomic

Procedia PDF Downloads 180
235 A Geosynchronous Orbit Synthetic Aperture Radar Simulator for Moving Ship Targets

Authors: Linjie Zhang, Baifen Ren, Xi Zhang, Genwang Liu

Abstract:

Ship detection is of great significance for both military and civilian applications. Synthetic aperture radar (SAR), with its all-day, all-weather, ultra-long-range characteristics, has been widely used. In view of the low time resolution of low-orbit SAR and the need for high-time-resolution SAR data, geosynchronous orbit (GEO) SAR is attracting increasing attention. Since GEO SAR has a short revisit period and a large coverage area, it is expected to be well suited to monitoring marine ship targets. However, the height of the orbit increases the integration time by almost two orders of magnitude, so for moving marine vessels the utility and efficacy of GEO SAR are still uncertain. This paper examines the feasibility of GEO SAR by presenting a GEO SAR simulator for moving ships. The presented simulator is a geometry-based radar imaging simulator that focuses on geometric quality rather than high radiometric accuracy. Its inputs are a 3D ship model (.obj format, produced by most 3D design software, such as 3D Max), the ship's velocity, and the parameters of the satellite orbit and SAR platform. Its outputs are simulated GEO SAR raw signal data and a SAR image. The simulation process is accomplished in the following four steps. (1) Reading the 3D model, including the ship rotation (pitch, yaw, and roll) and velocity (speed and direction) parameters, and extracting the small primitives (triangles) that are visible from the SAR platform. (2) Computing the radar scattering from the ship with the physical optics (PO) method. In this step, the vessel is sliced into many small rectangular primitives along the azimuth, and the radiometric calculation of each primitive is carried out separately. Since this simulator focuses only on the complex structure of ships, only single-bounce and double-bounce reflections are considered. (3) Generating the raw data with GEO SAR signal modeling. Since the usual 'stop-and-go' model is not valid for GEO SAR, the range model must be reconsidered. (4) Finally, generating the GEO SAR image with an improved Range-Doppler method. Numerical simulations of a fishing boat and a cargo ship are given, and GEO SAR images for different ship attitudes, velocities, satellite orbits, and SAR platform parameters are simulated. By analyzing these simulated results, the effectiveness of GEO SAR for the detection of moving marine vessels is evaluated.
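
Step (1) of the pipeline, extracting the triangle primitives visible from the SAR platform, can be sketched as a simple back-face culling pass over a Wavefront .obj mesh. The code below is a simplified illustration under the assumption that a facet is "visible" when its outward normal has a positive component toward the radar line of sight; it is not the authors' simulator, and the file name and look vector are placeholders.

import numpy as np

def load_obj_triangles(path: str):
    """Parse vertices and triangular faces from a minimal Wavefront .obj file
    (assumes the mesh has already been triangulated)."""
    verts, faces = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":
                verts.append([float(x) for x in parts[1:4]])
            elif parts[0] == "f":
                faces.append([int(p.split("/")[0]) - 1 for p in parts[1:4]])
    return np.array(verts), np.array(faces)

def visible_triangles(verts, faces, radar_dir):
    """Keep triangles whose outward normal faces the radar (back-face culling)."""
    radar_dir = radar_dir / np.linalg.norm(radar_dir)
    keep = []
    for tri in faces:
        p0, p1, p2 = verts[tri]
        normal = np.cross(p1 - p0, p2 - p0)
        if np.dot(normal, radar_dir) > 0:   # facet illuminated by the radar
            keep.append(tri)
    return np.array(keep)

# Hypothetical usage: unit look vector from the ship towards the GEO SAR platform
# verts, faces = load_obj_triangles("cargo_ship.obj")
# lit = visible_triangles(verts, faces, np.array([0.3, -0.5, 0.81]))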

Keywords: GEO SAR, radar, simulation, ship

Procedia PDF Downloads 145
234 Purification of Bacillus Lipopeptides for Diverse Applications

Authors: Vivek Rangarajan, Kim G. Clarke

Abstract:

Bacillus lipopeptides are biosurfactants with wide-ranging applications in the medical, food, agricultural, environmental and cosmetic industries. They are produced as a mix of three families, surfactin, iturin and fengycin, each comprising a large number of homologues of varying functionalities. Consequently, the method and degree of purification of the lipopeptide cocktail become particularly important if the functionality of the lipopeptide end-product is to be maximized for a specific application. However, downstream processing of Bacillus lipopeptides is particularly challenging due to the subtle variations among the different lipopeptide homologues and isoforms. To date, the most frequently used lipopeptide purification operations have been acid precipitation, solvent extraction, membrane ultrafiltration, adsorption and size exclusion. RP-HPLC (reversed-phase high-performance liquid chromatography) also has potential for fractionation of the lipopeptide homologues. In the studies presented here, membrane ultrafiltration and RP-HPLC were evaluated for purifying lipopeptides to different degrees of purity for maximum functionality. Batch membrane ultrafiltration using 50 kDa polyether sulphone (PES) membranes resulted in lipopeptide recoveries of about 68% for surfactin and 82% for fengycin. The recovery was further improved to 95% by using size-conditioned lipopeptide micelles. Conditioning the lipopeptides with Ca²⁺ ions resulted in uniformly sized micelles with an average size of 96.4 nm and a polydispersity index of 0.18. The size conditioning also facilitated the removal of impurities (molecular weights between 2335 and 3500 Da) by operating the system in diafiltration mode, in a way similar to salt removal from protein by dialysis. The resultant purified lipopeptide was devoid of macromolecular impurities and could ideally suit applications in the cosmetic and food industries. Enhanced purification using RP-HPLC was carried out on an analytical C18 column with the aim of fractionating the lipopeptides into their constituent homologues. The column was eluted with a mobile phase comprising acetonitrile and water, using an acetonitrile gradient from 35% to 80% over 70 minutes. The gradient elution program resulted in as many as 41 fractions of individual lipopeptide homologues. Efficacy tests of these fractions against fungal phytopathogens showed that the first 21 fractions, identified as homologues of iturins and fengycins, displayed the highest antifungal activities, making them suitable for biocontrol in the agricultural industry. Thus, the current study demonstrates the downstream processing of lipopeptides into tailor-made products for selective applications using two major downstream unit operations.
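
The gradient elution program described above (35% to 80% acetonitrile over 70 minutes) is a simple linear ramp that can be tabulated programmatically, for example to label collected fractions by their elution composition. The sketch below is illustrative only; the fraction collection interval is a hypothetical assumption, not the study's method.

def acetonitrile_fraction(t_min: float, start=35.0, end=80.0, duration=70.0) -> float:
    """Percent acetonitrile in the mobile phase at time t for a linear gradient."""
    t = min(max(t_min, 0.0), duration)
    return start + (end - start) * t / duration

# Label fractions collected every 2 minutes (collection interval is a placeholder)
for i in range(5):
    t = i * 2.0
    print(f"fraction {i + 1}: t = {t:>4.1f} min, {acetonitrile_fraction(t):.1f}% acetonitrile")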

Keywords: bacillus lipopeptides, membrane ultrafiltration, purification, RP-HPLC

Procedia PDF Downloads 187
233 The United States Film Industry and Its Impact on Latin American Identity Rationalizations

Authors: Alfonso J. García Osuna

Abstract:

Background and Significance: The objective of this paper is to analyze the inception and development of identity archetypes in early twentieth-century Latin America, to explore their roots in United States culture, to discuss the influences that came to bear upon Latin Americans as the United States began to export images of standard identity paradigms through its film industry, and to survey how these images evolved and impacted Latin Americans' ideas of national distinctiveness from the early 1900s to the present. Therefore, the general hypothesis of this work is that United States film in many ways influenced national identity patterning in its neighbors, especially in those nations closest to its borders, Cuba and Mexico. Very little research has been done on the social impact of the United States film industry on the country's southern neighbors. From a historical perspective, the US's influence has been examined as the projection of political and economic power; that is to say, American influence is seen as a catalyst to align the forces that the US wants to see wield the power of the State. But the subtle yet powerful cultural influence exercised by film, the preeminent medium for exporting ideas and ideals in the twentieth century, has not been significantly explored. Basic Methodologies and Description: Gramscian Marxist theory underpins the study, where it is argued that film, as an exceptional vehicle for culture, is an important site of political and social struggle; in this context, the study aims to show how United States capitalist structures of power not only use brute force to generate and maintain control of overseas markets, but also promote their ideas through artistic products such as film in order to infiltrate the popular culture of subordinated peoples. In the same vein, the work of neo-Marxist theoreticians of popular culture is employed in order to contextualize the agency of subordinated peoples in the process of cultural assimilation. Indication of the Major Findings of the Study: The study has yielded much data of interest. The salient finding is that each nation receives United States film according to its own particular social and political context, regardless of the amount of pressure exerted upon it. An example of this is the unmistakable dissimilarity between Cuban and Mexican reception of US films. The positive reception given in Cuba to American film has to do with the seamless acceptance of identity paradigms that, for historical reasons discussed herein, were incorporated into the national identity grid quite unproblematically. Such is not the case with Mexico, whose express rejection of identity paradigms offered by the United States reflects not only past conflicts with the northern neighbor, but an enduring recognition of the country's indigenous roots, one that precluded such paradigms. Concluding Statement: This paper is an endeavor to elucidate the ways in which US film contributed to the outlining of Latin American identity blueprints, offering archetypes that would be accepted or rejected according to each nation's particular social requirements, constraints and ethnic makeup.

Keywords: film studies, United States, Latin America, identity studies

Procedia PDF Downloads 276
232 Extracting Opinions from Big Data of Indonesian Customer Reviews Using Hadoop MapReduce

Authors: Veronica S. Moertini, Vinsensius Kevin, Gede Karya

Abstract:

Customer reviews are collected by many kinds of e-commerce websites selling products, services, hotel rooms, tickets and so on. Each website collects its own customer reviews. These reviews can be crawled, collected from those websites and stored as big data. Text analysis techniques can then be applied to produce summarized information, such as customer opinions, which can be published by independent service-provider websites and used to help customers choose the most suitable products or services. As the opinions are analyzed from big data of reviews originating from many websites, the results are expected to be more trusted and accurate. Indonesian customers write reviews in the Indonesian language, which comes with its own structures and peculiarities. We found that most of the reviews are expressed in 'daily language', which is informal, does not follow correct grammar, and contains many abbreviations, slang and non-formal words. Hadoop is an emerging platform aimed at storing and analyzing big data in distributed systems. A Hadoop cluster consists of master and slave nodes/computers operated in a network, and Hadoop comes with a distributed file system (HDFS) and the MapReduce framework for supporting parallel computation. However, MapReduce is inefficient for iterative computations; specifically, the cost of reading/writing data (I/O cost) is high. Given this fact, we conclude that MapReduce is best suited to 'one-pass' computation. In this research, we develop an efficient technique for extracting or mining opinions from big data of Indonesian reviews, based on MapReduce with one-pass computation. In designing the algorithm, we avoid iterative computation and instead adopt a 'look-up table' technique. The stages of the proposed technique are: (1) crawling the review data from websites; (2) cleaning the raw reviews and finding root words; (3) computing the frequency of the meaningful opinion words; (4) analyzing customers' sentiments towards defined objects. The experiments for evaluating the performance of the technique were conducted on a Hadoop cluster with 14 slave nodes. The results show that the proposed technique (stages 2 to 4) discovers useful opinions, processes big data efficiently, and is scalable.
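
Stage (3), computing the frequency of meaningful opinion words in a single pass, can be expressed as a classic MapReduce word count restricted to a look-up table of opinion words. The sketch below is a local, single-process simulation of the map and reduce phases in Python (in a real Hadoop Streaming job the mapper and reducer would be separate scripts exchanging tab-separated key/value lines); the opinion-word list is a hypothetical placeholder, not the authors' dictionary.

import sys
from collections import defaultdict

# Hypothetical look-up table of (already stemmed) Indonesian opinion words
OPINION_WORDS = {"bagus", "murah", "cepat", "kecewa", "rusak", "lambat"}

def mapper(lines):
    """Emit (word, 1) for every opinion word found in the cleaned reviews."""
    for line in lines:
        for word in line.strip().lower().split():
            if word in OPINION_WORDS:
                yield word, 1

def reducer(pairs):
    """Sum the counts per opinion word (one-pass aggregation)."""
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

if __name__ == "__main__":
    # Local simulation of the map and reduce phases over review text on stdin
    print(reducer(mapper(sys.stdin)))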

Keywords: big data analysis, Hadoop MapReduce, analyzing text data, mining Indonesian reviews

Procedia PDF Downloads 183
231 The Significance of Picture Mining in the Fashion and Design as a New Research Method

Authors: Katsue Edo, Yu Hiroi

Abstract:

Increasing attention has been paid to using pictures and photographs in social science research since the beginning of the 21st century. We have been studying the usefulness of Picture Mining, one of these new picture-based research methods. Picture Mining is an exploratory research analysis method that extracts useful information from pictures, photographs and static or moving images. It is often compared with text mining methods. The Picture Mining concept includes observational research in the broad sense, because it also aims to analyze moving images (Ochihara and Edo 2013). In the recent literature, studies and reports using pictures are increasing due to environmental changes, identified as technological and social changes (Edo et al. 2013). Low-priced digital cameras and iPhones, high information transmission speeds, low costs of transferring information, and the high performance and resolution of mobile phone cameras have changed people's photographing behavior. Consequently, there is less resistance to taking and processing photographs for most people in developing countries. In these studies, this method of collecting data from respondents is often called 'participant-generated photography' or 'respondent-generated visual imagery', with a focus on data collection and its analysis (Pauwels 2011, Snyder 2012). However, there are few systematic and conceptual studies that support the significance of these methods. In recent years we have worked to conceptualize these picture-based research methods and formalize theoretical findings (Edo et al. 2014). We have identified the most promising fields for Picture Mining inductively and through case studies: 1) research in consumer and customer lifestyles, 2) new product development, and 3) research in fashion and design. Though we have found that Picture Mining should be useful in these fields, we must verify these assumptions. In this study we focus on the field of fashion and design to determine whether Picture Mining methods are really reliable in this area. To do so, we conducted empirical research on respondents' attitudes and behavior concerning pictures and photographs. We compared picture-taking attitudes and behavior for fashion with those for meals and found that taking pictures of fashion is not as easy as taking pictures of meals and food. Compared with meals and food, respondents do not often take pictures of fashion or upload them online, for example to Facebook and Instagram, because of the difficulty of taking them. We concluded that we should be more careful when analyzing pictures in the fashion area, for some bias may still exist even though the environment for pictures has drastically changed in recent years.

Keywords: empirical research, fashion and design, Picture Mining, qualitative research

Procedia PDF Downloads 339