588 Nonconventional Method for Separation of Rosmarinic Acid: Synergic Extraction
Authors: Lenuta Kloetzer, Alexandra C. Blaga, Dan Cascaval, Alexandra Tucaliuc, Anca I. Galaction
Abstract:
Rosmarinic acid, an ester of caffeic acid and 3-(3,4-dihydroxyphenyl)lactic acid, is considered a valuable compound for the pharmaceutical and cosmetic industries due to its antimicrobial, antioxidant, antiviral, anti-allergic, and anti-inflammatory effects. It can be obtained by extraction from vegetable or animal materials, by chemical synthesis, or by biosynthesis. Regardless of the method used for rosmarinic acid production, the separation and purification process requires large amounts of raw materials and laborious stages, leading to high costs and limitations of the separation technology. This study focused on the separation of rosmarinic acid by synergic reactive extraction with a mixture of two extractants, one acidic (di-(2-ethylhexyl) phosphoric acid, D2EHPA) and one basic (Amberlite LA-2). The studies were performed in experimental equipment consisting of an extraction column in which the phases were mixed by means of a perforated disk with a 45 mm diameter and 20% free section, maintained at the initial contact interface between the aqueous and organic phases. The vibrations had a frequency of 50 s⁻¹ and an amplitude of 5 mm. The extraction was carried out in two solvents with different dielectric constants (n-heptane and dichloromethane), in which the extractant mixture was dissolved at varying concentrations. The pH of the initial aqueous solution was varied between 1 and 7. The efficiency of the studied extraction systems was quantified by distribution and synergic coefficients. To calculate these parameters, the rosmarinic acid concentrations in the initial aqueous solution and in the raffinate were measured by HPLC. The influences of extractant concentrations and solvent polarity on the efficiency of rosmarinic acid separation by synergic extraction with a mixture of Amberlite LA-2 and D2EHPA were analyzed.
In the reactive extraction system with a constant concentration of Amberlite LA-2 in the organic phase, increasing the D2EHPA concentration decreases the synergic coefficient. This is because the increase in D2EHPA concentration prevents the formation of amine adducts and, consequently, affects the hydrophobicity of the interfacial complex with rosmarinic acid. For these reasons, the diminution of the synergic coefficient is more pronounced for dichloromethane. By maintaining a constant D2EHPA concentration and increasing the concentration of Amberlite LA-2, the synergic coefficient can exceed 1, its highest values being reached for n-heptane. Depending on the solvent polarity and the amount of D2EHPA in the solvent phase, the synergic effect is observed for Amberlite LA-2 concentrations above 20 g/l dissolved in n-heptane. Thus, by increasing the concentration of D2EHPA from 5 to 40 g/l, the minimum Amberlite LA-2 concentration corresponding to synergism increases from 20 to 40 g/l for the solvent with lower polarity, namely n-heptane, while no synergic effect was recorded for dichloromethane. By analysing the influences of the main factors (organic phase polarity, extractant concentration in the mixture) on the efficiency of synergic extraction of rosmarinic acid, the most important synergic effect was found to correspond to the extractant mixture containing 5 g/l D2EHPA and 40 g/l Amberlite LA-2 dissolved in n-heptane.
Keywords: Amberlite LA-2, di(2-ethylhexyl) phosphoric acid, rosmarinic acid, synergic effect
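As an illustration of how the distribution and synergic coefficients above are typically computed, the following sketch uses one common definition (S equal to the distribution coefficient of the mixture divided by the sum of those of the individual extractants). The concentration values are hypothetical, not measurements from this study.

```python
# Hedged sketch: distribution coefficient D and synergic coefficient S as
# commonly defined in reactive-extraction studies. All concentrations are
# hypothetical illustration values, not measurements from this study.

def distribution_coefficient(c_initial_aq, c_raffinate):
    """D = (amount extracted into organic phase) / (amount left in raffinate)."""
    c_org = c_initial_aq - c_raffinate  # mass balance gives the extracted amount
    return c_org / c_raffinate

def synergic_coefficient(d_mixture, d_acid, d_amine):
    """S = D(mixture) / (D(D2EHPA alone) + D(Amberlite LA-2 alone)).
    S > 1 indicates a synergic effect of combining the extractants."""
    return d_mixture / (d_acid + d_amine)

# Hypothetical HPLC raffinate results (g/l) for an initial solution of 1.00 g/l
d_mix = distribution_coefficient(1.00, 0.20)    # D2EHPA + Amberlite LA-2 mixture
d_acid = distribution_coefficient(1.00, 0.55)   # D2EHPA alone
d_amine = distribution_coefficient(1.00, 0.45)  # Amberlite LA-2 alone

s = synergic_coefficient(d_mix, d_acid, d_amine)
print(f"D(mixture) = {d_mix:.2f}, S = {s:.2f}")  # S > 1 -> synergism
```

With these placeholder numbers the mixture extracts far more acid than either extractant alone, so S comes out above 1, mirroring the synergism the abstract reports for n-heptane.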
Procedia PDF Downloads 290
587 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models
Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg
Abstract:
Storm surge is an abnormal water level caused by a storm. Accurate prediction of a storm surge is a challenging problem. Researchers have developed various ensemble modeling techniques to combine several individual forecasts into an overall, presumably better, forecast. Some simple ensemble modeling techniques exist in the literature. For instance, Model Output Statistics (MOS) and running mean-bias removal are widely used techniques in the storm surge prediction domain. However, these methods have drawbacks. For instance, MOS is based on multiple linear regression and needs a long period of training data. To overcome the shortcomings of these simple methods, researchers have proposed more advanced ones. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting. This application creates a better forecast of sea level using a combination of several instances of Bayesian Model Averaging (BMA). An ensemble dressing method is based on identifying the best member forecast and using it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether the ensemble models perform better than any single forecast. To do so, we need to identify the single best forecast; we present a methodology based on a simple Bayesian selection method to select it. Second, we present several new and simple ways to construct ensemble models, using correlation and standard deviation as weights when combining different forecast models. Third, we use these ensembles to forecast storm surge levels and compare them with several existing models in the literature. We then investigate whether developing a complex ensemble model is indeed needed. To achieve this goal, we use a simple average (one of the simplest and most widely used ensemble models) as the benchmark.
Predicting the peak surge level during a storm, as well as the precise time at which this peak occurs, is crucial, so we develop a statistical platform to compare the performance of various ensemble methods. This statistical analysis is based on the root mean square error of the ensemble forecast during the testing period and on the magnitude and timing of the forecasted peak surge compared to the actual peak and its timing. In this work, we analyze four hurricanes: hurricanes Irene and Lee in 2011, hurricane Sandy in 2012, and hurricane Joaquin in 2015. Since hurricane Irene developed at the end of August 2011 and hurricane Lee started just after Irene at the beginning of September 2011, in this study we consider them a single contiguous hurricane event. The data set used for this study is generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally perform better than the simple average ensemble technique.
Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction
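A minimal sketch of the ensemble ideas described above: individual forecasts are combined with correlation-based weights (one of the simple weighting schemes mentioned) and compared against the simple-average benchmark by RMSE. The observation and model series below are hypothetical, not NYHOPS data.

```python
# Hedged sketch of correlation-weighted ensemble forecasting vs. a simple
# average, scored by RMSE. All surge series here are hypothetical.
import math

def rmse(forecast, observed):
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed))
                     / len(observed))

def correlation(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def weighted_ensemble(forecasts, observed):
    # weight each model by its (non-negative) correlation with observations
    weights = [max(correlation(f, observed), 0.0) for f in forecasts]
    total = sum(weights)
    weights = [w / total for w in weights]
    return [sum(w * f[t] for w, f in zip(weights, forecasts))
            for t in range(len(observed))]

def simple_average(forecasts):
    return [sum(f[t] for f in forecasts) / len(forecasts)
            for t in range(len(forecasts[0]))]

# Hypothetical observed surge (m) and three model forecasts
obs = [0.2, 0.5, 1.1, 1.8, 1.2, 0.6]
models = [
    [0.1, 0.4, 1.0, 1.7, 1.1, 0.5],   # close to observations
    [0.3, 0.7, 1.4, 2.2, 1.6, 0.9],   # biased high
    [0.0, 0.2, 0.6, 1.0, 0.7, 0.3],   # biased low
]
print("simple average RMSE:", rmse(simple_average(models), obs))
print("weighted ensemble RMSE:", rmse(weighted_ensemble(models, obs), obs))
```

In practice the weights would be fitted over a training period and then applied to the testing period; here they are computed on the same short series purely to keep the sketch self-contained.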
Procedia PDF Downloads 309
586 Urban Design as a Tool in Disaster Resilience and Urban Hazard Mitigation: Case of Cochin, Kerala, India
Authors: Vinu Elias Jacob, Manoj Kumar Kini
Abstract:
Disasters of all types are occurring more frequently and are becoming more costly than ever due to various man-made factors, including climate change. Better use of governance and management within disaster risk reduction is inevitable and of utmost importance. There is a need to explore the role of pre- and post-disaster public policies, as well as the role of urban planning and design in shaping the opportunities of households, individuals, and, collectively, settlements to achieve recovery. Governance strategies that can better support the integration of disaster risk reduction and management must be examined. The main aim is thereby to build the resilience of individuals and communities and, thus, of the states too. Resilience is a term usually linked to the fields of disaster management and mitigation, but today it has become an integral part of the planning and design of cities. Disaster resilience broadly describes the ability of an individual or community to 'bounce back' from disaster impacts through improved mitigation, preparedness, response, and recovery. The growing population of the world has resulted in the inflow and use of resources, creating pressure on natural systems and inequity in the distribution of resources. This makes cities vulnerable to multiple attacks by both natural and man-made disasters. Each urban area needs elaborate studies and study-based strategies to proceed in this direction. Cochin, in Kerala, is the fastest-growing and largest city, with a population of more than 26 lakh (2.6 million). The main concern addressed in this paper is making cities resilient by designing a framework of strategies based on urban design principles for an immediate response system, focusing especially on the city of Cochin, Kerala, India. The paper discusses understanding the spatial transformations due to disasters and the role of spatial planning in the context of significant disasters.
The paper also aims to develop a model, taking into consideration factors such as land use, open spaces, transportation networks, physical and social infrastructure, building design, density, and ecology, that can be implemented in any city in any context. Guidelines are made, using the tool of urban design, for the smooth evacuation of people through hassle-free transport networks, protecting vulnerable areas in the city, providing adequate open spaces for shelters and gatherings, making basic amenities available to the affected population within reachable distance, etc. Strategies at the city level and neighbourhood level have been developed from inferences drawn from vulnerability analysis and case studies.
Keywords: disaster management, resilience, spatial planning, spatial transformations
Procedia PDF Downloads 296
585 Field Performance of Cement Treated Bases as a Reflective Crack Mitigation Technique for Flexible Pavements
Authors: Mohammad R. Bhuyan, Mohammad J. Khattak
Abstract:
Deterioration of flexible pavements due to crack reflection from the soil-cement base layer is a major concern around the globe. The service life of a flexible pavement diminishes significantly because of reflective cracks. Highway agencies have struggled for decades to prevent or mitigate these cracks in order to increase pavement service lives. The root cause of reflective cracking is the shrinkage cracking that occurs in soil-cement bases during the cement hydration process. The primary factor that causes the shrinkage is the cement content of the soil-cement mixture. As cement content increases, the soil-cement base gains the strength and durability necessary to withstand traffic loads; at the same time, however, higher cement content creates more shrinkage, resulting in more reflective cracks in pavements. Historically, various US states have used soil-cement bases for constructing flexible pavements. The state of Louisiana has used 8 to 10 percent cement content to manufacture soil-cement bases. Such traditional soil-cement bases yield a 2.0 MPa (300 psi) 7-day compressive strength and are termed cement stabilized design (CSD). As these CSD bases generate significant reflective cracks, another soil-cement base design, termed cement treated design (CTD), uses 4 to 6 percent cement content and yields a 1.0 MPa (150 psi) 7-day compressive strength. The reduced cement content of the CTD base is expected to minimize shrinkage cracks, thus increasing pavement service lives. Hence, this research study evaluates the long-term field performance of CTD bases relative to CSD bases used in flexible pavements. The Pavement Management System of the state of Louisiana was used to select flexible pavement projects with CSD and CTD bases that had a good historical record and time-series distress performance data.
It should be noted that the state collects roughness and distress data for each 1/10th-mile section every two years. In total, 120 CSD and CTD projects were analyzed in this research, in which more than 145 miles (CTD) and 175 miles (CSD) of roadway data were accepted for performance evaluation and benefit-cost analyses. Here, the service life extension and the area based on distress performance were considered as benefits. It was found that CTD bases added 1 to 5 years of pavement service life based on transverse cracking as compared to CSD bases. On the other hand, the service lives based on longitudinal and alligator cracking, rutting, and roughness index remained the same. Hence, CTD bases provide some service life extension (2.6 years, on average) for the controlling distress, transverse cracking, while being less expensive due to their lower cement content. Consequently, CTD bases are 20% more cost-effective than traditional CSD bases when both are compared by the net benefit-cost ratio obtained from all distress types.
Keywords: cement treated base, cement stabilized base, reflective cracking, service life, flexible pavement
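The benefit-cost logic above can be illustrated with a small, hedged calculation. The per-mile cost figures below are hypothetical placeholders (the study's actual figures come from the Louisiana PMS data); only the 2.6-year average service-life extension is taken from the text, and service life is used as a simplified stand-in for the study's benefit measure.

```python
# Hedged illustration of comparing CSD and CTD bases by a benefit-cost
# ratio, with service life taken as the benefit. Costs are hypothetical.

def benefit_cost_ratio(service_life_years, cost_per_lane_mile):
    """Simplified benefit-cost ratio: years of service per dollar spent."""
    return service_life_years / cost_per_lane_mile

# Hypothetical construction costs: CTD uses 4-6% cement vs 8-10% for CSD,
# so its base layer is assumed cheaper to build.
csd = benefit_cost_ratio(service_life_years=12.0, cost_per_lane_mile=100_000)
ctd = benefit_cost_ratio(service_life_years=12.0 + 2.6, cost_per_lane_mile=94_000)

print(f"CTD is {ctd / csd - 1:.0%} more cost-effective than CSD")
```

The CTD ratio wins on both sides of the fraction: a longer service life in the numerator and a lower cement cost in the denominator, which is the mechanism behind the 20% advantage the study reports.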
Procedia PDF Downloads 166
584 Using Balanced Scorecard Performance Metrics in Gauging the Delivery of Stakeholder Value in Higher Education: The Assimilation of Industry Certifications within a Business Program Curriculum
Authors: Thomas J. Bell III
Abstract:
This paper explores the value of assimilating certification training within a traditional course curriculum. This innovative approach is believed to increase stakeholder value within the Computer Information Systems program at Texas Wesleyan University. Stakeholder value is obtained from increased job marketability and the critical thinking skills that create employment-ready graduates. This paper views value as first developing the capability to earn an industry-recognized certification, which gives the student greater job placement compatibility while exercising critical thinking skills in a liberal arts business program. Graduates with industry-based credentials are often given preference in the hiring process, particularly in the information technology sector, and without a pioneering curriculum that better prepares students for an ever-changing employment market, a program's educational value is open to question. Since certifications are trending in the hiring process, academic programs should explore the viability of incorporating certification training into teaching pedagogy and course curricula. This study will examine the use of the balanced scorecard across four performance dimensions (financial, customer, internal process, and innovation) to measure the stakeholder value of certification training within a traditional course curriculum. The balanced scorecard, as a strategic management tool, may provide insight for prioritizing the resources and decisions needed to achieve various curriculum objectives and long-term value while meeting the needs of multiple stakeholders, such as students, universities, faculty, and administrators.
The research methodology will consist of quantitative analysis that includes (1) surveying over one hundred students in the CIS program to learn what factor(s) contributed to their certification exam success or failure, (2) interviewing representatives from the Texas Workforce Commission to identify employment needs and trends in the North Texas (Dallas/Fort Worth) area, (3) reviewing notable Workforce Innovation and Opportunity Act publications on training trends across several local business sectors, and (4) analyzing control variables to determine whether specific correlations exist between industry alignment and job placement. These findings may provide helpful insight into impactful pedagogical techniques and curricula that positively contribute to certification credentialing success. And should these industry-certified students land industry-related jobs that correlate with their certification credentials, arguably, stakeholder value has been realized.
Keywords: certification exam teaching pedagogy, exam preparation, testing techniques, exam study tips, passing certification exams, embedding industry certification and curriculum alignment, balanced scorecard performance evaluation
Procedia PDF Downloads 108
583 Experiment on Artificial Recharge of Groundwater Implemented Project: Effect on the Infiltration Velocity by Vegetation Mulch
Authors: Cheh-Shyh Ting, Jiin-Liang Lin
Abstract:
This study was conducted at the Wanglung Farm in Pingtung County to test the influence of vegetation on groundwater seepage in an implemented artificial groundwater recharge project. The study was divided into three phases. The first phase, conducted on natural groundwater recharged under the local climate and growing conditions, observed the naturally occurring vegetation species. The original plants were flooded, and after 60 days it was observed that of the original plants only goosegrass (Eleusine indica) and black heart (Polygonum lapathifolium Linn.) remained. Direct infiltration tests were carried out, and the effect of vegetation on the infiltration velocity of the recharge pool was calculated. The second phase was an indoor test. Bahia grass and wild amaranth were selected for their root systems. After growth, the distribution of the different grassroots was observed in order to facilitate comparison of permeability coefficients calculated from the amount of penetration and to explore the relationship between root density and groundwater recharge efficiency. The third phase was root tomography analysis, a further observation of the development of plant roots using computed tomography. Computed tomography (CT) is a diagnostic imaging examination normally used in the medical field. In the first phase of the feasibility study, most non-aquatic plants wilted and died within seven days; the remaining plants were then used for experimental infiltration analysis. Results showed that in an eight-hour infiltration test, Eleusine indica stems averaged 0.466 m/day and wild amaranth averaged 0.014 m/day. The second phase of the experiment was conducted on the remains of the plants a week after they had died and rotted, and the infiltration experiment was performed under these conditions. The results at the end of the eight-hour infiltration test showed that Eleusine indica stems averaged 0.033 m/day and wild amaranth averaged 0.098 m/day.
Non-aquatic plants died within two weeks, and their rotted remains clogged the pores of the bottom soil particles, obstructing infiltration in the recharge pool. Experimental results showed that, eight hours into the test, the average infiltration velocity for Eleusine indica stems was 0.0229 m/day and for wild amaranth 0.0117 m/day. Since the rotted roots of the plants blocked the pores of the soil in the recharge pool, the artificial infiltration pond was obstructed, with an immediate impact on recharge efficiency. In order to observe the development of plant roots, the third phase used computed tomography imaging. Iodine developer was injected into the black heart plant, allowing its cross-sectional images to be shown on CT and used to observe root development.
Keywords: artificial recharge of groundwater, computed tomography, infiltration velocity, vegetation root system
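As a hedged sketch of how an average infiltration velocity in m/day is obtained from an eight-hour pond test: the water-level reading below is hypothetical, chosen to give a value comparable to the Eleusine indica result above.

```python
# Hedged sketch: average infiltration velocity from a pond test, converted
# to m/day as reported in the abstract. The level-drop reading is hypothetical.

def infiltration_velocity_m_per_day(level_drop_m, duration_hours):
    """Average infiltration velocity = water-level drop per unit time."""
    return level_drop_m / duration_hours * 24.0

# Hypothetical 8-hour test: the recharge-pool level drops by 15.5 cm
v = infiltration_velocity_m_per_day(level_drop_m=0.155, duration_hours=8.0)
print(f"{v:.3f} m/day")  # on the order of the 0.466 m/day reported above
```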
Procedia PDF Downloads 310
582 Three Dimensional Computational Fluid Dynamics Simulation of Wall Condensation inside Inclined Tubes
Authors: Amirhosein Moonesi Shabestary, Eckhard Krepper, Dirk Lucas
Abstract:
The current PhD project comprises CFD modeling and simulation of condensation and heat transfer inside horizontal pipes. Condensation plays an important role in the emergency cooling systems of reactors. The emergency cooling system consists of inclined horizontal pipes immersed in a tank of subcooled water. In the case of an accident, the water level in the core decreases, steam enters the emergency pipes, and, due to the subcooled water around the pipes, this steam starts to condense. These horizontal pipes act as a strong heat sink responsible for a quick depressurization of the reactor core when an accident happens. This project was defined in order to model all the processes occurring in such emergency cooling systems. The main focus of the project is the detection of different morphologies such as annular flow, stratified flow, slug flow, and plug flow. This ongoing project started one year ago in the Fluid Dynamics department of Helmholtz-Zentrum Dresden-Rossendorf (HZDR). At HZDR, different models for multiphase flows are developed, mostly in cooperation with ANSYS. The inhomogeneous MUSIG model considers the bubble size distribution and is used for modeling a small-scaled dispersed gas phase. AIAD (Algebraic Interfacial Area Density) is developed for detection of the local morphology and the corresponding switches between morphologies. The most recent model, GENTOP, combines both concepts: it is able to simulate co-existing large-scaled (continuous) and small-scaled (polydispersed) structures. All these models have been validated for adiabatic cases without any phase change. Therefore, the starting point of the current PhD project is to use the available models and integrate phase transition and wall condensation models into them. In order to simplify the modeling of condensation inside horizontal tubes, three steps have been defined.
The first step is the investigation of condensation inside a horizontal tube considering only direct contact condensation (DCC) and neglecting wall condensation; the inlet of the pipe is therefore considered to be annular flow. In this step, the AIAD model is used to detect the interface. The second step is the extension of the model to consider wall condensation as well, which is closer to reality. In this step, the inlet is pure steam, and due to wall condensation a liquid film forms near the wall, leading to annular flow. The last step will be the modeling of the different morphologies occurring inside the tube during condensation using the GENTOP model. With GENTOP, the dispersed phase can be considered and simulated. Finally, the simulation results will be validated against experimental data, which will also be available at HZDR.
Keywords: wall condensation, direct contact condensation, AIAD model, morphology detection
Procedia PDF Downloads 305
581 Exploration Tools for Tantalum-Bearing Pegmatites along Kibara Belt, Central and Southwestern Uganda
Authors: Sadat Sembatya
Abstract:
Tantalum metal is used to address the capacitance challenge of 21st-century technology growth. Tantalum is rarely found in its elemental form; hence it is often found with niobium and the radioactive elements thorium and uranium, and industrial processes are required to extract pure tantalum. Its deposits are mainly oxide-associated, occurring in Ta-Nb oxides such as tapiolite, wodginite, and ixiolite; rutile and pyrochlore-supergroup minerals are of minor importance. The stability and chemical inertness of tantalum make it a valuable substance for laboratory equipment and a substitute for platinum. Each period of tantalum ore formation is characterized by specific mineralogical and geochemical features. Compositions of columbite-group minerals (CGM) are variable: Fe-rich types predominate in the Man Shield (Sierra Leone), the Congo Craton (DR Congo), the Kamativi Belt (Zimbabwe), and the Jos Plateau (Nigeria). Mn-rich columbite-tantalite is typical of the Alto Ligonha Province (Mozambique), the Arabian-Nubian Shield (Egypt, Ethiopia), and the Tantalite Valley pegmatites (southern Namibia). There are large compositional variations through Fe-Mn fractionation, followed by Nb-Ta fractionation. These are typical of pegmatites usually associated with very coarse quartz-feldspar-mica granites: the young granitic systems of the Kibara Belt of Central Africa and the Older Granites of Nigeria. Unlike 'simple' Be-pegmatites, most Ta-Nb-rich pegmatites have highly complex zoning; hence systematic exploration tools are needed to find and rapidly assess the potential of different pegmatites. The pegmatites exist as known deposits (e.g., abandoned mines) and as exposed or buried pegmatites.
We investigate rocks and minerals to trace possible effects of hydrothermal alteration, mainly for exposed pegmatites; carry out mineralogical studies to find evidence of gradual replacement; and use geochemistry to report the availability of trace elements, which are good indicators of mineralisation. Pegmatites are not good geophysical responders, which rules out the geophysics option. For more advanced prospecting, we first take bulk samples from different zones to establish their grades and characteristics, then build a pilot test plant, since the large samples aid in the quantitative characterization of zones, and then drill to reveal the distribution and extent of the different zones, though not necessarily grade, due to the nugget effect. Rapid assessment tools are needed to assess grade and degree of fractionation in order to 'rule in' or 'rule out' a given pegmatite for future work. Pegmatite exploration is also unique, high-risk, and expensive; hence a proper traceability system and certification for the 3Ts are highly needed.
Keywords: exploration, mineralogy, pegmatites, tantalum
Procedia PDF Downloads 150
580 Regional Dynamics of Innovation and Entrepreneurship in the Optics and Photonics Industry
Authors: Mustafa İlhan Akbaş, Özlem Garibay, Ivan Garibay
Abstract:
The economic entities in innovation ecosystems form various industry clusters, in which they compete and cooperate to survive and grow. Within a successful and stable industry cluster, the entities acquire different roles that complement each other in the system. Universities and research centers are accepted to have a critical role in these systems for the creation and development of innovations. However, the real effect of research institutions on regional economic growth is difficult to assess. In this paper, we present our approach for identifying the impact of research activities on regional entrepreneurship for a specific high-tech industry: optics and photonics. Optics and photonics has been defined as an enabling industry, which combines high-tech photonics technology with the developing optics industry. The recent literature suggests that the growth of optics and photonics firms depends on three important factors: the embedded regional specializations in the labor market, the research and development infrastructure, and a dynamic small-firm network capable of absorbing new technologies, products, and processes. Therefore, the role of each factor and the dynamics among them must be understood to identify the requirements of entrepreneurship activities in the optics and photonics industry. There are three main contributions of our approach. Recent studies show that innovation in the optics and photonics industry is mostly located around metropolitan areas. There are also studies mentioning the importance of research center locations and universities in the regional development of the optics and photonics industry. These studies are mostly limited to the number of patents received within a short period of time or to some limited survey results. Therefore, the first contribution of our approach is a comprehensive analysis of the state and recent history of photonics and optics research in the US.
For this purpose, both the research centers specialized in optics and photonics and the related research groups in various departments of institutions (e.g., Electrical Engineering, Materials Science) are identified, and a geographical study of their locations is presented. The second contribution of the paper is the analysis of regional entrepreneurship activities in optics and photonics in recent years. We use the membership data of the International Society for Optics and Photonics (SPIE) and the regional photonics clusters to identify the optics and photonics companies in the US. Then the profiles and activities of these companies are gathered by extracting and integrating the related data from the National Establishment Time Series (NETS) database, the ES-202 database, and the data sets from the regional photonics clusters. The number of start-ups, their employee numbers, and their sales are some examples of the extracted data for the industry. Our third contribution is the use of the collected data to investigate the impact of research institutions on regional optics and photonics industry growth and entrepreneurship. In this analysis, the regional and periodical conditions of the overall market are taken into consideration while discovering and quantifying the statistical correlations.
Keywords: entrepreneurship, industrial clusters, optics, photonics, emerging industries, research centers
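The correlation step in the third contribution can be sketched as follows; the regional counts below are hypothetical stand-ins for the SPIE/NETS-derived data described above.

```python
# Hedged sketch: Pearson correlation between the number of optics/photonics
# research institutions per region and the number of start-ups there.
# All counts are hypothetical illustration values.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# region -> (research groups, photonics start-ups), hypothetical
regions = {
    "Region A": (12, 30),
    "Region B": (4, 9),
    "Region C": (8, 22),
    "Region D": (2, 3),
    "Region E": (15, 41),
}
groups, startups = zip(*regions.values())
print("r =", round(pearson(groups, startups), 3))
```

A value of r near 1 would be consistent with the literature's claim that research infrastructure and start-up activity co-locate, though in the study itself regional and period market conditions are also controlled for.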
Procedia PDF Downloads 407
579 Guests’ Satisfaction and Intention to Revisit Smart Hotels: Qualitative Interviews Approach
Authors: Raymond Chi Fai Si Tou, Jacey Ja Young Choe, Amy Siu Ian So
Abstract:
Smart hotels can be defined as hotels with intelligent systems that, through digitalization and networking, integrate hotel management and service information. In addition, smart hotels feature high-end designs that combine information and communication technology with hotel management, fulfilling guests’ needs and improving the quality, efficiency, and satisfaction of hotel management. The purpose of this study is to identify the factors that may influence guests’ satisfaction and intention to revisit smart hotels, based on the service quality measurement of the Lodging Quality Index and the extended UTAUT theory. The Unified Theory of Acceptance and Use of Technology (UTAUT) is adopted as a framework to explain technology acceptance and use. Since smart hotels are technology-based infrastructure hotels, UTAUT theory can serve as the theoretical background for examining guests’ acceptance and use after staying in smart hotels. The UTAUT identifies four key drivers of the adoption of information systems: performance expectancy, effort expectancy, social influence, and facilitating conditions. The extended UTAUT modifies the definitions of seven constructs for consideration: the four previously cited constructs of the UTAUT model together with three new additional constructs, namely hedonic motivation, price value, and habit. Thus, the seven constructs from the extended UTAUT theory can be adopted to understand guests’ intention to revisit smart hotels. The service quality model will also be adopted and integrated into the framework to understand guests’ intention to revisit. Few studies have examined the effect of service quality on guests’ satisfaction and intention to revisit smart hotels. In this study, the Lodging Quality Index (LQI) will be adopted to measure service quality in smart hotels.
The UTAUT theory and the service quality model are integrated because technological applications and services require more than one model to understand the complicated situation of customers’ acceptance of new technology. Moreover, an integrated model can provide more insightful perspectives to explain the relationships among constructs than could be obtained from only one model. For this research, ten in-depth interviews are planned. In order to confirm the applicability of the proposed framework and gain an overview of the guest experience of smart hotels from the hospitality industry, in-depth interviews with hotel guests and industry practitioners will be conducted. In terms of theoretical contribution, it is expected that the models integrated from the UTAUT theory and service quality will provide new insights into the factors that influence guests’ satisfaction and intention to revisit smart hotels. Once this study identifies the influential factors, smart hotel practitioners can understand which factors may significantly influence smart hotel guests’ satisfaction and intention to revisit. In addition, smart hotel practitioners can provide an outstanding guest experience by improving their service quality along the dimensions identified by the service quality measurement. Thus, the study will be beneficial to the sustainability of the smart hotel business.
Keywords: intention to revisit, guest satisfaction, qualitative interviews, smart hotels
Procedia PDF Downloads 208
578 A Protocol of Procedures and Interventions to Accelerate Post-Earthquake Reconstruction
Authors: Maria Angela Bedini, Fabio Bronzini
Abstract:
The Italian post-earthquake experiences, both positive and negative, are conditioned by long timescales and rigid bureaucratic constraints, motivated in part by the attempt to contain mafia infiltration and corruption. The transition from the operational phase of the emergency to the planning phase of the reconstruction project is thus hampered by a series of inefficiencies and delays that are incompatible with the need for rapid recovery of territories in crisis. In fact, intervening in areas affected by seismic events means coupling the reconstruction plan with an urban and territorial rehabilitation project based on strategies and tools in which prevention and safety play a leading role in the regeneration of territories in crisis and the return of the population. On the contrary, the earthquakes that took place in Italy have further deprived the affected territories of the minimum requirements for habitability, in terms of accessibility and services, accentuating the depopulation process that was already underway before the earthquake. The objective of this work is to address, with implementing and programmatic tools, the procedures and strategies to be put in place, today and in the future, in Italy and abroad, to face the challenge of reconstructing activities, sociality, services and risk mitigation: a protocol of operational intentions and firm points, open to continuous updating and implementation. The methodology followed is a comparison, in synthetic form, between the different Italian post-earthquake experiences, based on facts rather than intentions, to highlight elements of excellence or, on the contrary, of damage. The main results obtained can be summarized in technical comparison cards on good and bad practices.
With this comparison, we intend to make a concrete contribution to the reconstruction process, certainly not limited to the reconstruction of buildings but privileging primary social and economic needs. In this context, the strategic urban and territorial instrument recently applied in Italy, the SUM (Minimal Urban Structure), and the strategic monitoring process become dynamic tools for supporting reconstruction. The conclusions establish, point by point, a protocol of interventions and the priorities for integrated, multisectoral and multicultural socio-economic strategies, and highlight the innovative aspect of 'inverting' priorities in the reconstruction process, favoring the take-off of social and economic 'accelerator' interventions and a more up-to-date system of coexistence with risks. In this perspective, reconstruction as a necessary response to a calamitous event can and must become a unique opportunity to raise the level of protection from risks and of rehabilitation and development of the most fragile places in Italy and abroad.
Keywords: an operational protocol for reconstruction, operational priorities for coexistence with seismic risk, social and economic interventions accelerators of building reconstruction, the difficult post-earthquake reconstruction in Italy
Procedia PDF Downloads 127
577 Analysis of the Blastocysts Chromosomal Set Obtained after the Use of Donor Oocyte Cytoplasmic Transfer Technology
Authors: Julia Gontar, Natalia Buderatskaya, Igor Ilyin, Olga Parnitskaya, Sergey Lavrynenko, Eduard Kapustin, Ekaterina Ilyina, Yana Lakhno
Abstract:
Introduction: It is well known that oocytes obtained from women of older reproductive age have accumulated mitochondrial DNA mutations, which negatively affect the morphology of a developing embryo and may lead to the birth of a child with mitochondrial disease. Special techniques have been developed to allow donor oocyte cytoplasmic transfer while retaining the parents’ biological nuclear DNA. At the same time, it is important to understand whether the procedure affects future embryonic chromosome sets, as the nuclear DNA is the transfer subject in this new, complex procedure. Material and Methods: From July 2015 to July 2016, the investigation was carried out in the Medical Centre IGR. 34 donor oocytes (group A) were used for the manipulation with the aim of donating cytoplasm: 21 oocytes were used for zygote pronuclear transfer and 13 oocytes for spindle transfer. The mean age of the oocyte donors was 28.4±2.9 years. The procedure was performed using a Nikon Ti Eclipse inverted microscope equipped with a Narishige micromanipulator system (Japan), a Saturn 3 laser console (UK), and Oosight imaging systems (USA). For preimplantation genetic screening (PGS), blastocyst biopsy was performed, and trophectoderm samples were analyzed using fluorescent in situ hybridization on chromosomes 9, 13, 15, 16, 17, 18, 21, 22, X, Y. For comparison of morphological characteristics and euploidy, a group of embryos (group B) was chosen, comprising 121 blastocysts obtained from 213 oocytes from donor programs of assisted reproductive technologies (ART). Group B was not subjected to the donor oocyte cytoplasmic transfer procedure and was studied on the above-mentioned chromosomes. Statistical analysis was carried out using the t and χ² criteria at significance levels p<0.05, p<0.01, p<0.001. Results: After the donor cytoplasm transfer process, the number of embryos developing on the third day was 27 (79.4%).
At this stage, group B consisted of 189 (88.7%) developing embryos, and there was no statistically significant difference (SSD) between the two groups (p>0.05). After a comparative analysis of the morphological characteristics of the embryos on the fifth day, we also found no SSD between the studied groups (p>0.05): from the 34 oocytes exposed to manipulation, 14 (41.2%) blastocysts were obtained, while the group B blastocyst yield was 56.8% (n=121) from 213 oocytes. The following results were obtained after performing PGS: in group A, 28.6% (n=4) of blastocysts were euploid for the studied chromosomes, whereas in group B this rate was 40.5% (n=49); mosaic embryos accounted for 28.6% (n=4) and 21.5% (n=26), and aneuploid blastocysts for 42.8% (n=6) and 38.0% (n=46), respectively. None of these parameters showed an SSD (p>0.05). However, attention was drawn to the mosaic blastocysts in group A, whose mosaicism was chaotic, with no cell having a euploid chromosomal set, in contrast to the mosaic embryos in group B, where chaotic mosaicism was identified in only 2.5% (n=3). Conclusions: According to the obtained results, there is no direct procedural effect on the chromosomes of embryos obtained following donor oocyte cytoplasmic transfer. Thus, the introduction of this technology will enhance the effectiveness of infertility treatment as well as help avoid the birth of a child with mitochondrial disease.
Keywords: donor oocyte cytoplasmic transfer, embryos’ chromosome set, oocyte spindle transfer, pronuclear transfer
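The group comparisons reported above (t and χ² criteria at p<0.05) can be illustrated with a minimal sketch. The counts below are taken from the abstract (4 of 14 euploid blastocysts in group A vs. 49 of 121 in group B); the helper function is hypothetical and uses a plain 2×2 chi-square statistic without continuity correction, checked against the 3.841 critical value for one degree of freedom at p=0.05.

```python
def chi_square_2x2(a, b, c, d):
    # Pearson chi-square for a 2x2 table [[a, b], [c, d]] without
    # continuity correction: n*(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Euploid vs. non-euploid blastocysts in groups A and B (from the abstract)
chi2 = chi_square_2x2(4, 10, 49, 72)
significant = chi2 > 3.841  # critical value, df = 1, p = 0.05
```

Consistent with the abstract's p>0.05 finding, the statistic (about 0.75) falls below the critical value, so the difference in euploidy rates is not significant.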
Procedia PDF Downloads 328
576 A Comprehensive Key Performance Indicators Dashboard for Emergency Medical Services
Authors: Giada Feletti, Daniela Tedesco, Paolo Trucco
Abstract:
The present study aims to develop a dashboard of Key Performance Indicators (KPI) to enhance information and predictive capabilities in Emergency Medical Services (EMS) systems, supporting both operational and strategic decisions of different actors. The research methodology begins with a review of the technical-scientific literature concerning the indicators currently used for the performance measurement of EMS systems. From this literature analysis, it emerged that current studies focus on two distinct perspectives: the ambulance service, a fundamental component of pre-hospital health treatment, and patient care in the Emergency Department (ED). The perspective proposed by this study is to consider an integrated view of the ambulance service process and the ED process, both essential to ensure high quality of care and patient safety. Thus, the proposal focuses on the entire healthcare service process and, as such, allows considering the interconnection between the two EMS processes, the pre-hospital and hospital ones, connected by the assignment of the patient to a specific ED. In this way, it is possible to optimize the entire patient management. Therefore, attention is paid to dependencies among decisions that current EMS management models tend to neglect or underestimate. In particular, the integration of the two processes enables evaluation of the advantage of an ED selection decision made with visibility of EDs’ saturation status, therefore considering the distance, the available resources and the expected waiting times. Starting from a critical review of the KPIs proposed in the extant literature, the dashboard was designed: the high number of analyzed KPIs was reduced by first eliminating those not in line with the aim of the study and then those supporting a similar functionality.
The KPIs finally selected were tested on a realistic dataset, which led us to exclude additional indicators due to the unavailability of the data required for their computation. The final dashboard, which was discussed and validated by experts in the field, includes a variety of KPIs able to support operational and planning decisions, early warning, and citizens’ real-time awareness of ED accessibility. By associating each KPI with the EMS phase it refers to, it was also possible to design a well-balanced dashboard covering both the efficiency and the effectiveness of the entire EMS process. Indeed, only the initial phases, related to the interconnection between the ambulance service and patient care, are covered by traditional KPIs, in contrast to the subsequent phases taking place in the hospital ED. This could be taken into consideration for potential future development of the dashboard. Moreover, the research could proceed by building a multi-layer dashboard composed of a first level with a minimal set of KPIs to measure the basic performance of the EMS system at an aggregate level, and further levels with KPIs that can provide additional and more detailed information.
Keywords: dashboard, decision support, emergency medical services, key performance indicators
Procedia PDF Downloads 113
575 Waste Analysis and Classification Study (WACS) in Ecotourism Sites of Samal Island, Philippines Towards a Circular Economy Perspective
Authors: Reeden Bicomong
Abstract:
Ecotourism activities, though geared towards conservation efforts, still put pressure on the natural state of the environment. An influx of visitors that goes beyond the carrying capacity of an ecotourism site, the wastes generated, and greenhouse gas emissions are just a few of the potential negative impacts of poorly managed ecotourism activities. According to Girard and Nocca (2017), tourism produces many negative impacts because it is configured according to the model of linear economy, operating on a linear model of take, make and dispose (Ellen MacArthur Foundation 2015). With the influx of tourists in an ecotourism area, more wastes are generated, and if unregulated, the natural state of the environment will be at risk. It is in this light that a waste analysis and classification study was conducted in five ecotourism sites of Samal Island, Philippines. The major objective of the study was to analyze the amount and content of wastes generated from ecotourism sites in Samal Island, Philippines and make recommendations from a circular economy perspective. Five ecotourism sites in Samal Island, Philippines were identified: Hagimit Falls, Sanipaan Vanishing Shoal, Taklobo Giant Clams, Monfort Bat Cave, and Tagbaobo Community Based Ecotourism. Ocular inspection of each ecotourism site was conducted. Likewise, key informant interviews of ecotourism operators and staff were done. Wastes generated from these ecotourism sites were analyzed and characterized to arrive at recommendations based on the concept of circular economy. Wastes generated were classified into biodegradables, recyclables, residuals and special wastes. Regression analysis was conducted to determine whether an increase in the number of visitors equates to an increase in the amount of wastes generated. Ocular inspection indicated that all five ecotourism sites have their own system of waste collection.
All of the sites inspected were found to conduct waste separation at source, since there are different garbage bins for each of the four classifications of wastes: biodegradables, recyclables, residuals and special wastes. Furthermore, all five ecotourism sites practice composting of biodegradable wastes and recycling of recyclables; therefore, only residuals are collected by the municipal waste collectors. Key informant interviews revealed that all five ecotourism sites offer mostly nature-based activities such as swimming, diving, sightseeing, bat watching, rice farming experiences and community living. Among the five ecotourism sites, Sanipaan Vanishing Shoal has the highest average number of visitors on a weekly basis. At the same time, in the waste assessment study conducted, Sanipaan has the highest amount of wastes generated. Further results of the waste analysis revealed that biodegradables constitute the majority of the wastes generated in all five selected ecotourism sites. Meanwhile, special wastes proved to be the least generated, as none of this type was observed during the three consecutive weeks in which WACS was conducted.
Keywords: circular economy, ecotourism, sustainable development, WACS
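The regression between visitor numbers and waste generated described above amounts to an ordinary least-squares fit of a straight line. A minimal sketch follows; the weekly figures below are invented placeholders, not the study's data.

```python
def linear_fit(x, y):
    # Ordinary least squares for y = slope * x + intercept
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical weekly visitor counts vs. kilograms of waste collected
visitors = [120, 260, 340, 500, 610]
waste_kg = [15, 31, 40, 62, 70]
slope, intercept = linear_fit(visitors, waste_kg)
```

A positive, significant slope would support the hypothesis that visitor influx drives waste generation.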
Procedia PDF Downloads 221
574 Timely Screening for Palliative Needs in Ambulatory Oncology
Authors: Jaci Mastrandrea
Abstract:
Background: The National Comprehensive Cancer Network (NCCN) recommends that healthcare institutions have established processes for integrating palliative care (PC) into cancer treatment and that all cancer patients be screened for PC needs upon initial diagnosis as well as throughout the entire continuum of care (National Comprehensive Cancer Network, 2021). Early PC screening is directly correlated with improved patient outcomes. The Sky Lakes Cancer Treatment Center (SLCTC) is an institution that has access to PC services yet does not have protocols in place for identifying patients with palliative needs or a standardized referral process. The aim of this quality improvement project is to improve early access to PC services by establishing a standardized screening and referral process for outpatient oncology patients. Method: The sample population included all adult patients with an oncology diagnosis who presented to the SLCTC for treatment during the project timeline from March 15th, 2022, to April 29th, 2022. The “Palliative and Supportive Needs Assessment” (PSNA) screening tool was developed from validated, evidence-based PC referral criteria. The tool was initially implemented using paper forms and later integrated into the Epic-Beacon EHR system. Patients were screened by registered nurses on the SLCTC treatment team. Nurses responsible for screening patients received an educational in-service prior to implementation. Patients with a PSNA score of three or higher were considered a positive screen. Scores of five or higher triggered a PC referral order in the patient’s EHR for the oncologist to review and approve. All patients with a positive screen received an educational handout on the topic of PC, and the EHR was flagged for follow-up. Results: Prior to implementation of the PSNA screening tool, the SLCTC had zero referrals to PC in the past year, excluding referrals to hospice.
Data were collected from the first 100 patient screenings completed within the eight-week data collection period. Seventy-three percent of patients met the criteria for PC referral with a score greater than or equal to three. Of those patients who met referral criteria, 53.4% (39 patients) were referred for a palliative and supportive care consultation. Patients who met the criteria but were not referred to PC were flagged in the EHR for re-screening within one to three months. Patients with lung cancer, chronic hematologic malignancies, breast cancer, and gastrointestinal malignancy most frequently met the criteria for PC referral and scored highest overall on the scale of 0-12. Conclusion: The implementation of a standardized PC screening tool at the SLCTC significantly increased awareness of PC needs among cancer patients in the outpatient setting. Additionally, data derived from this quality improvement project support the national recommendation for PC to be an integral component of cancer treatment across the entire continuum of care.
Keywords: oncology, palliative care, symptom management, symptom screening, ambulatory oncology, cancer, supportive care
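The scoring rules above (a score of three or higher is a positive screen; five or higher triggers a referral order) reduce to a small decision function. The sketch below uses hypothetical action names purely for illustration.

```python
def psna_actions(score):
    # Map a PSNA score (0-12) to the follow-up actions described above
    actions = []
    if score >= 3:  # positive screen
        actions.append("give PC educational handout")
        actions.append("flag EHR for follow-up")
    if score >= 5:  # threshold for a referral order in the EHR
        actions.append("place PC referral order for oncologist review")
    return actions
```

For example, a score of 4 yields the handout and EHR flag but no referral order, while a score of 7 yields all three actions.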
Procedia PDF Downloads 76
573 Rotary Machine Sealing Oscillation Frequencies and Phase Shift Analysis
Authors: Liliia N. Butymova, Vladimir Ya Modorskii
Abstract:
To ensure efficient operation of the gas-transmittal GCU, leakages through the labyrinth packings (LP) should be minimized. Leakages can be minimized by decreasing the LP gap, which in turn depends on thermal processes and possible rotor vibrations and is designed to ensure the absence of mechanical contact. Vibration mitigation allows minimizing the LP gap, so it is advantageous to research the influence of processes in the dynamic gas-structure system on LP vibrations. This paper considers the influence of rotor vibrations on LP gas dynamics and the influence of the latter on the rotor structure within a unidirectionally coupled dynamic FSI problem. Dependences of nonstationary parameters of the gas-dynamic process in the LP on rotor vibrations were studied under various gas speeds and pressures, shaft rotation speeds, vibration amplitudes, and working medium features. The multi-processor ANSYS CFX package was chosen as the numerical computation tool, and the problem was solved using the PNRPU high-capacity computer complex. The deformed shaft's vibrations are replaced with a rigid profile that moves "up and down" in the fixed annulus according to a set harmonic rule. This allows solving the nonstationary gas-dynamic problem and determining the time dependence of the total gas-dynamic force acting on the shaft. A pressure increase from 0.1 to 10 MPa causes growth of the gas-dynamic force oscillation amplitude and frequency. The phase shift angle between gas-dynamic force oscillations and those of shaft displacement decreases from 3π/4 to π/2. The damping constant has its maximum value at a pressure of 1 MPa in the gap. An increase of shaft oscillation frequency from 50 to 150 Hz at P=10 MPa causes growth of the gas-dynamic force oscillation amplitude. The damping constant has its maximum value, 1.012, at 50 Hz. An increase of shaft vibration amplitude from 20 to 80 µm at P=10 MPa increases the gas-dynamic force amplitude by up to 20 times. The damping constant increases from 0.092 to 0.251.
Calculations for various working substances (methane, perfect gas, air at 25 °C) show that at P=0.1 MPa the minimum persistent gas-dynamic force oscillation amplitude is observed in methane, and the maximum in air. The frequency remains almost unchanged, and the phase shift in air changes from 3π/4 to π/2. Calculations for the same working substances show that at P=10 MPa the maximum gas-dynamic force oscillation amplitude is observed in methane, and the minimum in air. Air demonstrates surging. An increase of leakage speed from 0 to 20 m/s through the LP at P=0.1 MPa causes the gas-dynamic force oscillation amplitude to decrease by three orders of magnitude, while the oscillation frequency and the phase shift increase two times and stabilize. An increase of leakage speed from 0 to 20 m/s in the LP at P=1 MPa causes the gas-dynamic force oscillation amplitude to decrease by almost four orders of magnitude. The phase shift angle increases from π/72 to π/2, and oscillations become persistent. Flow rate proved to greatly influence the pressure oscillation amplitude and the phase shift angle. The influence of the working medium depends on operating conditions: as pressure grows, vibrations are most affected in methane (of the working substances considered), and as pressure decreases, in air at 25 °C.
Keywords: aeroelasticity, labyrinth packings, oscillation phase shift, vibration
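The phase shift between shaft displacement and gas-dynamic force reported above can be estimated from sampled signals by evaluating a single-bin discrete Fourier coefficient at the drive frequency and differencing the phases. This is an illustrative sketch on synthetic sinusoids, not the authors' ANSYS CFX workflow.

```python
import cmath
import math

def phase_at(signal, times, freq):
    # Complex Fourier coefficient of the signal at one frequency (single-bin DFT)
    c = sum(s * cmath.exp(-2j * math.pi * freq * t) for s, t in zip(signal, times))
    return cmath.phase(c)

def phase_shift(ref, resp, times, freq):
    # Phase of the response relative to the reference, wrapped to (-pi, pi]
    d = phase_at(resp, times, freq) - phase_at(ref, times, freq)
    return (d + math.pi) % (2 * math.pi) - math.pi

# Shaft displacement at 50 Hz and a force lagging it by pi/2 (synthetic data)
f, fs = 50.0, 5000.0
t = [n / fs for n in range(1000)]  # exactly 10 drive periods
disp = [math.sin(2 * math.pi * f * ti) for ti in t]
force = [math.sin(2 * math.pi * f * ti - math.pi / 2) for ti in t]
shift = phase_shift(disp, force, t, f)
```

Sampling an integer number of periods keeps the single-bin estimate free of leakage, so the recovered shift equals the imposed lag of π/2.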
Procedia PDF Downloads 296
572 Conceptualization and Assessment of Key Competencies for Children in Preschools: A Case Study in Southwest China
Authors: Yumei Han, Naiqing Song, Xiaoping Yang, Yuping Han
Abstract:
This study explores the conceptualization of the key competencies that children are expected to develop in three-year preschools (ages 3-6) and the assessment practices for such key competencies in China. Assessment of children's development has been placed at the center of the early childhood education quality evaluation system in China. In the context of China's education reform centered on students' key competencies development, defining and selecting the key competencies of children in preschools is of great significance: they would lay a solid foundation for children's lifelong learning path, and they would drive reform of curriculum and instruction, teacher development, and quality evaluation in the early childhood education area. Based on sense-making theory, this study adopted multiple stakeholders' perspectives (early childhood educators, parents, evaluation administrators, and scholars in the early childhood education field) and grass-roots voices to conceptualize and operationalize key competencies for children in preschools in Southwest China. Grounded in child development theories, Chinese and international literature related to child development and key competencies, and the key competencies frameworks of UNESCO, the OECD and other nations, the authors designed a two-phase sequential mixed-methods study to address three main questions: (a) How is early childhood key competency defined or labeled in the literature and in different stakeholders' views? (b) Based on the definitions explicated in the literature and the stakeholder surveys, what domains and components are regarded as constituting the key competency framework for children in three-year preschools in China? (c) How have early childhood key competencies been assessed and measured, and how do such assessment and measurement contribute to enhancing early childhood development quality?
In the first phase, a series of focus group surveys was conducted among the different types of stakeholders around the research questions. In the second phase, based on the coding of the participants' answers together with the literature synthesis findings, a questionnaire survey was designed and conducted to select the most commonly expected components of preschool children's key competencies. Semi-structured open questions were also included in the questionnaire so that participants could add competencies beyond the checklist. Preliminary findings show broad agreement on the significance and necessity of conceptualizing and assessing key competencies for children in preschools, and a key competencies framework composed of 7 domains and 25 indicators was constructed. Meanwhile, the findings also reveal issues in current assessment practices, such as the lack of effective assessment tools and of teacher capacity to apply the tools in evaluating children and advancing children's development accordingly. Finally, the authors put forth suggestions and implications for China and international communities in terms of restructuring early childhood key competencies frameworks and promoting child-development-centered reform in early childhood education quality evaluation and development.
Keywords: assessment, conceptualization, early childhood education quality in China, key competencies
Procedia PDF Downloads 249
571 Religiosity and Involvement in Purchasing Convenience Foods: Using Two-Step Cluster Analysis to Identify Heterogenous Muslim Consumers in the UK
Authors: Aisha Ijaz
Abstract:
The paper focuses on the impact of Muslim religiosity on convenience food purchases and the involvement experienced in a non-Muslim culture. There is a scarcity of research on the purchasing patterns of Muslim diaspora communities residing in risk societies, particularly in contexts where there is an increasing inclination toward industrialized food items alongside a renewed interest in the concept of natural foods. The United Kingdom serves as an appropriate setting for this study due to the country's growing Muslim population, paralleled by the expanding halal food market. A multi-dimensional framework is proposed, testing for five forms of involvement: purchase decision involvement, product involvement, behavioural involvement, intrinsic risk and extrinsic risk. Quantitative cross-sectional consumer data were collected through a face-to-face survey contact method with 141 Muslims during the summer of 2020 in Liverpool, in the Northwest of England. A proportion formula was utilised, and the population of interest was stratified by gender and age before recruitment took place through local mosques and community centers. Six input variables were used (intrinsic religiosity and the involvement dimensions), dividing the sample into four clusters using the Two-Step Cluster Analysis procedure in SPSS. Nuanced variances were observed in the type of involvement experienced by religiosity group, which influences behaviour when purchasing convenience food. Four distinct market segments were identified: highly religious ego-involving (39.7%), less religious active (26.2%), highly religious unaware (16.3%), and less religious concerned (17.7%). These segments differ significantly with respect to their involvement, behavioural variables (place of purchase and information sources used), socio-cultural characteristics (acculturation and social class), and individual characteristics.
Choosing the appropriate convenience food is centrally related to the value system of highly religious ego-involving first-generation Muslims, which explains their preference for shopping at ethnic food stores. Less religious active consumers are older and highly alert in information processing to make the optimal food choice, relying heavily on product label sources. Highly religious unaware Muslims are less dietarily acculturated to the UK diet and tend to rely on digital and expert advice sources. The less religious concerned segment, typified by younger age and third generation, is engaged with the purchase process out of worry about making unsuitable food choices. Research implications are outlined, and potential avenues for further exploration are identified.
Keywords: consumer behaviour, consumption, convenience food, religion, Muslims, UK
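The segmentation above was produced with SPSS's Two-Step Cluster procedure, which combines a hierarchical pre-clustering step with information-criterion-based selection of the cluster count. As a rough stand-in for the partitioning idea only, here is a deterministic k-means sketch in pure Python; in the study's setting the rows would be standardized religiosity and involvement scores, and both the toy data and the fixed k are placeholders.

```python
def kmeans(points, k, iters=50):
    # Minimal k-means: seed centroids with the first k points (deterministic)
    centroids = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean)
        for i, p in enumerate(points):
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            labels[i] = dists.index(min(dists))
        # Recompute each centroid as the mean of its members
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centroids[j] = [sum(col) / len(members) for col in zip(*members)]
    return labels
```

Unlike the TwoStep procedure, this sketch needs k supplied up front and handles only continuous variables.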
Procedia PDF Downloads 56
570 Tailoring Piezoelectricity of PVDF Fibers with Voltage Polarity and Humidity in Electrospinning
Authors: Piotr K. Szewczyk, Arkadiusz Gradys, Sungkyun Kim, Luana Persano, Mateusz M. Marzec, Oleksander Kryshtal, Andrzej Bernasik, Sohini Kar-Narayan, Pawel Sajkiewicz, Urszula Stachewicz
Abstract:
Piezoelectric polymers have received great attention in smart textiles, wearables, and flexible electronics. Their potential applications range from devices that could operate without traditional power sources, through self-powering sensors, up to implantable biosensors. Semi-crystalline PVDF is often proposed as the main candidate for industrial-scale applications, as it exhibits exceptional energy harvesting efficiency compared to other polymers, combined with high mechanical strength and thermal stability. Many approaches have been proposed for obtaining PVDF rich in the desired β-phase, with electric poling, thermal annealing, and mechanical stretching being the most prevalent. Electrospinning is a highly tunable technique that provides a one-step process for obtaining highly piezoelectric PVDF fibers without the need for post-treatment. In this study, the influence of voltage polarity and relative humidity on electrospun PVDF fibers was investigated, with the main focus on piezoelectric β-phase content and piezoelectric performance. The morphology and internal structure of the fibers were investigated using scanning (SEM) and transmission (TEM) electron microscopy. Fourier Transform Infrared Spectroscopy (FTIR), wide-angle X-ray scattering (WAXS) and differential scanning calorimetry (DSC) were used to characterize the phase composition of electrospun PVDF. Additionally, surface chemistry was verified with X-ray photoelectron spectroscopy (XPS). The piezoelectric performance of individual electrospun PVDF fibers was measured using piezoresponse force microscopy (PFM), and the power output from meshes was analyzed via custom-built equipment. To prepare the solution for electrospinning, PVDF pellets were dissolved in a 1:1 mixture of dimethylacetamide and acetone to achieve a 24% solution. Fibers were electrospun with a constant voltage of +/-15 kV applied to a stainless steel nozzle with an inner diameter of 0.8 mm.
The flow rate was kept constant at 6 ml h⁻¹. The electrospinning of PVDF was performed at T = 25°C and relative humidities of 30 and 60% for the PVDF30+/- and PVDF60+/- samples, respectively, in an environmental chamber. SEM and TEM analysis of fibers produced at the lower relative humidity of 30% (PVDF30+/-) showed a smooth surface, in contrast to fibers obtained at 60% relative humidity (PVDF60+/-), which had a wrinkled surface and, additionally, internal voids. XPS results confirmed lower fluorine content at the surface of PVDF- fibers obtained by electrospinning with negative voltage polarity compared to the PVDF+ fibers obtained with positive voltage polarity. The changes in surface composition measured with XPS were found to influence the piezoelectric performance of the obtained fibers, which was further confirmed by PFM as well as by a custom-built fiber-based piezoelectric generator. For the PVDF60+/- samples, humidity led to an increase of β-phase content in the PVDF fibers, as confirmed by FTIR, WAXS, and DSC measurements, which showed almost two times higher concentrations of β-phase. A combination of negative voltage polarity with high relative humidity led to the fibers with the highest β-phase content and the best piezoelectric performance of all investigated samples. This study outlines the possibility of producing electrospun PVDF fibers with tunable piezoelectric performance in a one-step electrospinning process by controlling relative humidity and voltage polarity. Acknowledgment: This research was conducted within the funding from the Sonata Bis 5 project granted by the National Science Centre, No. 2015/18/E/ST5/00230, and supported by the infrastructure at the International Centre of Electron Microscopy for Materials Science (IC-EM) at AGH University of Science and Technology. The PFM measurements were supported by an STSM Grant from COST Action CA17107.
Keywords: crystallinity, electrospinning, PVDF, voltage polarity
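A quantity often reported alongside FTIR data like the above is the relative β-phase fraction. A common way to estimate it, treated here as an assumption since the abstract does not state the method, is the Gregorio-Cestari relation F(β) = Aβ / (1.26·Aα + Aβ), where Aα and Aβ are the absorbances of the α- and β-phase bands near 763 and 840 cm⁻¹ and 1.26 is the ratio of the bands' absorption coefficients from the PVDF literature.

```python
def beta_phase_fraction(abs_alpha, abs_beta):
    # Gregorio-Cestari estimate of relative beta-phase content from FTIR
    # absorbances at ~763 cm^-1 (alpha band) and ~840 cm^-1 (beta band).
    # 1.26 is the assumed ratio of the bands' absorption coefficients.
    return abs_beta / (1.26 * abs_alpha + abs_beta)

# Example: equal effective band strengths give a fraction of 0.5
f_beta = beta_phase_fraction(1.0, 1.26)
```

"Almost two times higher concentrations of β-phase", as reported for the high-humidity samples, would correspond to roughly a doubling of this fraction between sample groups.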
Procedia PDF Downloads 134
569 Preventing Discharge to No Fixed Address-Youth (NFA-Y)
Authors: Cheryl Forchuk, Sandra Fisman, Steve Cordes, Dan Catunto, Katherine Krakowski, Melissa Jeffrey, John D’Oria
Abstract:
The discharge of youth aged 16-25 from hospital into homelessness is a prevalent issue, despite research indicating social, safety, health and economic detriments to both the individual and the community. Lack of stable housing for youth discharged into homelessness results in long-term consequences, including exacerbation of health problems, costly health care service use, and hospital readmission. People experiencing homelessness are four times more likely to be readmitted within one month of discharge, and hospitals must spend $2,559 more per client. Finding safe housing for these individuals is imperative to their recovery and transition back to the community. People discharged from hospital into homelessness experience challenges including poor health outcomes and increased hospital readmissions. Youth are the fastest-growing subgroup of people experiencing homelessness in Canada. The needs of youth are unique and include supports related to education, employment opportunities, and age-related service barriers. This study aims to identify the needs of youth at risk of homelessness by evaluating the efficacy of the “Preventing Discharge to No Fixed Address – Youth” (NFA-Y) program, which aims to prevent youth from being discharged from hospital into homelessness. The program connects youth aged 16-25 who are inpatients at London Health Sciences Centre and St. Joseph’s Health Care London to housing and financial support. Supports are offered through collaboration with community partners: Youth Opportunities Unlimited, Canadian Mental Health Association Elgin Middlesex, City of London Coordinated Access, Ontario Works, and Salvation Army’s Housing Stability Bank. This study was reviewed and approved by Western University’s Research Ethics Board. A series of interviews is being conducted with approximately ninety-three youth participants at three time points: baseline (pre-discharge), six, and twelve months post-discharge.
Focus groups with participants, health care providers, and community partners are being conducted at three time points. In addition, administrative data from service providers will be collected and analyzed. Since homelessness has a detrimental effect on recovery, client and community safety, and healthcare expenditure, locating safe housing for psychiatric patients has a positive impact on treatment, rehabilitation, and the system as a whole. If successful, the findings of this project will offer safe policy alternatives for the prevention of homelessness for at-risk youth, help set them up for success in their future years, and mitigate the rise of the homeless youth population in Canada.
Keywords: youth homelessness, no-fixed address, mental health, homelessness prevention, hospital discharge
Procedia PDF Downloads 103
568 Designing Sustainable and Energy-Efficient Urban Network: A Passive Architectural Approach with Solar Integration and Urban Building Energy Modeling (UBEM) Tools
Authors: A. Maghoul, A. Rostampouryasouri, MR. Maghami
Abstract:
The joint development of urban design and power network planning has been gaining momentum in recent years. The integration of renewable energy with urban design is widely regarded as an increasingly important response to climate change and energy security concerns. Through the use of passive strategies and solar integration with Urban Building Energy Modeling (UBEM) tools, architects and designers can create high-quality designs that meet the needs of clients and stakeholders. To determine the most effective ways of combining renewable energy with urban development, we analyze the relationship between urban form and renewable energy production. The procedures involved in this practice include passive solar gain (in building design and urban design), solar integration, location strategy, and 3D modelling, with a case study conducted in Tehran, Iran. The study emphasizes the importance of spatial and temporal considerations in the development of sector coupling strategies for solar power deployment in arid and semi-arid regions. The substation considered in the research consists of two parallel transformers, 13 lines, and 38 connection points. Each urban load connection point is equipped with 500 kW of solar PV capacity and 1 kWh of battery energy storage (BES) to store excess power generated from solar and inject it into the urban network during peak periods. The simulations and analyses were performed in EnergyPlus software. Passive solar gain involves maximizing the amount of sunlight that enters a building to reduce the need for artificial lighting and heating. Solar integration involves integrating solar photovoltaic (PV) power into smart grids to reduce emissions and increase energy efficiency. Location strategy is crucial to maximize the utilization of solar PV in an urban distribution feeder. 
Additionally, 3D models are made in Revit; such models are key components of decision-making in areas including climate change mitigation, urban planning, and infrastructure. We applied these strategies in this research, and the results show that it is possible to create sustainable and energy-efficient urban environments. Furthermore, demand response programs can be used in conjunction with solar integration to optimize energy usage and reduce the strain on the power grid. This study highlights the influence of ancient Persian architecture on Iran's urban planning system, as well as the potential for reducing pollutants in building construction. Additionally, the paper explores advances in eco-city planning and development and the emerging practices and strategies for integrating sustainability goals.
Keywords: energy-efficient urban planning, sustainable architecture, solar energy, sustainable urban design
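The store-surplus-then-inject-at-peak scheme described in this abstract can be sketched as a minimal dispatch rule. This is an illustrative sketch under our own assumptions, not the authors' EnergyPlus model; only the 500 kW PV and 1 kWh BES figures come from the abstract, and the function and variable names are ours:

```python
# Illustrative peak-shaving sketch for one urban connection point:
# surplus PV charges the small battery; the stored energy is discharged
# into the network during peak-demand periods.

PV_CAP_KW = 500.0   # installed PV per connection point (from the abstract)
BES_CAP_KWH = 1.0   # battery energy storage per connection point (from the abstract)

def dispatch(pv_kw, load_kw, soc_kwh, dt_h=1.0, peak=False):
    """Return (grid_import_kw, new_soc_kwh) for one time step."""
    surplus = pv_kw - load_kw
    if surplus > 0:                              # charge battery with excess PV
        charge = min(surplus * dt_h, BES_CAP_KWH - soc_kwh)
        return 0.0, soc_kwh + charge
    deficit = -surplus
    if peak:                                     # discharge during peak periods
        discharge = min(deficit * dt_h, soc_kwh)
        return deficit - discharge / dt_h, soc_kwh - discharge
    return deficit, soc_kwh                      # off-peak: import from grid
```

A full study would of course drive this rule with hourly PV and load profiles; the sketch only shows the per-step logic.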
Procedia PDF Downloads 76
567 Augmenting Navigational Aids: The Development of an Assistive Maritime Navigation Application
Abstract:
On the bridge of a ship, the officers look for visual aids to guide navigation in order to reconcile the outside world with the position communicated by the digital navigation system. Aids to navigation include lighthouses, lightships, sector lights, beacons, buoys, and others. They are designed to help navigators calculate their position, establish their course or avoid dangers. In poor visibility and dense traffic areas, it can be very difficult to identify these critical aids to navigation. The paper presents the usage of Augmented Reality (AR) as a means to present digital information about these aids to support navigation. To date, nautical-navigation-related mobile AR applications have been limited to the leisure industry. If proved viable, this prototype can facilitate the creation of other similar applications that could help commercial officers with navigation. While adopting a user-centered design approach, the team developed the prototype based on insights from initial research carried out on board several ships. The prototype, built on a Nexus 9 tablet with Wikitude, features a head-up display of the navigational aids (lights) in the area, presented in AR, and a bird’s-eye-view mode presented on a simplified map. The application employs the aids-to-navigation data managed by Hydrographic Offices and the tablet’s sensors: GPS, gyroscope, accelerometer, compass and camera. Sea trials on board a Navy ship and a commercial ship revealed the end-users’ interest in using the application and the possibility of presenting further data in AR. The application calculates the GPS position of the ship and the bearing and distance to the navigational aids, all within a high level of accuracy. However, during testing several issues were highlighted which need to be resolved as the prototype is developed further. The prototype stretched the capabilities of Wikitude, loading over 500 objects during tests in a major port. 
This overloaded the display and required over 45 seconds to load the data. Therefore, extra filters for the navigational aids are being considered in order to declutter the screen. At night, the camera is not powerful enough to distinguish all the lights in the area. Also, magnetic interference from the bridge of the ship generated a continuous compass error in the AR display that varied between 5 and 12 degrees. The deviation of the compass was consistent over the whole testing duration, so the team is now looking at the possibility of allowing users to manually calibrate the compass. It is expected that for the usage of AR in professional maritime contexts, further development of existing AR tools and hardware is needed. Designers will also need to implement a user-centered design approach in order to create better interfaces and display technologies for enhanced solutions to aid navigation.
Keywords: compass error, GPS, maritime navigation, mobile augmented reality
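The bearing-and-distance calculation the abstract mentions can be illustrated with the standard great-circle formulas; this is a generic sketch, not the prototype's actual code, and the function name is ours:

```python
import math

def bearing_and_distance(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing (degrees) and distance (metres) from
    the ship's GPS fix (lat1, lon1) to a navigational aid (lat2, lon2)."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    # haversine distance
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    dist = 2 * R * math.asin(math.sqrt(a))
    # initial bearing, normalised to 0-360 degrees
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    brg = (math.degrees(math.atan2(y, x)) + 360) % 360
    return brg, dist
```

In an AR overlay the bearing would additionally be corrected for the observed compass deviation, which is what the manual-calibration idea above addresses.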
Procedia PDF Downloads 330
566 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods
Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard
Abstract:
Non-invasive sampling is an alternative to collecting genetic samples directly. Non-invasive samples are collected without handling the animal (e.g., scats, feathers and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping. These errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons between them. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for gathering information, especially for endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three different matching algorithms (Cervus, Colony and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes, and two different algorithms for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced a smaller number of unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed. That is not a surprise, given the similarity between those methods' pairwise likelihood and clustering algorithms. The matching of ETLM showed almost no similarity with the genotypes that were matched by the other methods. 
The different clustering algorithm and error model of ETLM seem to lead to a more stringent selection, although the processing time and interface friendliness of ETLM were the worst among the compared methods. The population estimators performed differently depending on the dataset. There was a consensus between the different estimators for only one dataset. BayesN showed both higher and lower estimates when compared with Capwire. BayesN does not consider the total number of recaptures, as Capwire does, but only the recapture events. This makes the estimator sensitive to data heterogeneity, where heterogeneity means different capture rates between individuals. In these examples, the tolerance for homogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An expanded analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be the most appropriate for general use, considering a time/interface/robustness balance. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.
Keywords: algorithms, genetics, matching, population
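The core idea behind error-tolerant genotype matching can be sketched very simply: treat two samples as the same individual if their multilocus genotypes differ at no more than a small number of loci. This is a toy illustration of the concept, not the ETLM, Cervus or Colony algorithms, and all names are ours:

```python
# Toy sketch of mismatch-tolerant genotype matching, as needed when
# genotyping errors from degraded non-invasive samples are expected.
# A genotype is a list of (allele, allele) pairs, one per locus.

def mismatches(g1, g2):
    """Count loci whose allele pairs differ (order-insensitive)."""
    return sum(frozenset(a) != frozenset(b) for a, b in zip(g1, g2))

def match_samples(samples, max_mismatch=1):
    """Greedily cluster samples differing at <= max_mismatch loci.
    Each cluster is taken to be one individual; recaptures are the
    extra samples within a cluster."""
    clusters = []
    for name, geno in samples:
        for cluster in clusters:
            if mismatches(geno, cluster[0][1]) <= max_mismatch:
                cluster.append((name, geno))
                break
        else:
            clusters.append([(name, geno)])
    return clusters
```

The real methods replace the greedy threshold with likelihood models of allelic dropout and false alleles, which is precisely what makes their clustering more or less stringent.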
Procedia PDF Downloads 143
565 Analysis of the Outcome of the Treatment of Osteoradionecrosis in Patients after Radiotherapy for Head and Neck Cancer
Authors: Petr Daniel Kovarik, Matt Kennedy, James Adams, Ajay Wilson, Andy Burns, Charles Kelly, Malcolm Jackson, Rahul Patil, Shahid Iqbal
Abstract:
Introduction: Osteoradionecrosis (ORN) is a recognised toxicity of radiotherapy (RT) for head and neck cancer (HNC). The existing literature lacks a generally accepted definition and staging system for this toxicity. Objective: The objective is to analyse the outcome of surgical and non-surgical treatments of ORN. Material and Method: Data on 2303 patients treated for HNC with radical or adjuvant RT or RT-chemotherapy from January 2010 to December 2021 were retrospectively analysed. Median follow-up for the whole group of patients was 37 months (range 0–148 months). Results: ORN developed in 185 patients (8.1%). The locations of ORN were as follows: mandible=170, maxilla=10, and extra-oral=5. Multiple ORNs developed in 7 patients. The 5 patients with extra-oral ORN were excluded from the treatment analysis, as the management is different. In the 180 patients with oral cavity ORN, median follow-up was 59 months (range 5–148 months). ORN healed in 106 patients; treatment failed in 74 patients (improving=10, stable=43, and deteriorating=21). Median healing time was 14 months (range 3–86 months). Notani staging is available for 158 patients with jaw ORN and no previous surgery to the mandible (Notani class I=56, Notani class II=27, and Notani class III=76). 28 ORN (mandible=27, maxilla=1; Notani class I=23, Notani II=3, Notani III=1) healed spontaneously, with a median healing time of 7 months (range 3–46 months). In 20 patients, ORN developed after dental extraction, and in 1 patient in the neomandible after radical surgery as part of the primary treatment. In 7 patients, ORN developed and spontaneously healed in irradiated bone with no previous surgical/dental intervention. Radical resection of the ORN (segmentectomy, hemi-mandibulectomy with fibula flap) was performed in 43 patients (all mandible; Notani II=1, Notani III=39; the Notani class was not established in 3 patients, as the ORN developed in the neomandible). 
27 patients healed (63%); 15 patients failed (improving=2, stable=5, deteriorating=8). The median time from resection to healing was 6 months (range 2–30 months). 109 patients (mandible=100, maxilla=9; Notani I=3, Notani II=23, Notani III=35; the Notani class was not established in 9 patients, as the ORN developed in the maxilla/neomandible) were treated conservatively using a combination of debridement, antibiotics and Pentoclo. 50 patients healed (46%), with a median healing time of 14 months (range 3–70 months); 59 patients are recorded with persistent ORN (improving=8, stable=38, deteriorating=13). Of the 109 patients treated conservatively, 13 patients were treated with Pentoclo only (all mandible; Notani I=6, Notani II=3, Notani III=3, 1 patient with neomandible). In total, 8 patients healed (61.5%); treatment failed in 5 patients (stable=4, deteriorating=1). Median healing time was 14 months (range 4–24 months). Extra-orally (n=5), 3 cases of ORN were in the auditory canal and 2 in the mastoid. ORN healed in one patient (auditory canal) after 32 months. Treatment failed in 4 patients (improving=3, stable=1). Conclusion: The outcome of the treatment of ORN remains, in general, poor. Every effort should therefore be made to minimise the risk of development of this devastating toxicity.
Keywords: head and neck cancer, radiotherapy, osteoradionecrosis, treatment outcome
Procedia PDF Downloads 92
564 The Therapeutic Potential, Functions, and Use of Ibogaine
Authors: João Pedro Zanella, Michel J. O. Fagundes
Abstract:
Introduction: Drug use has been practised by humans universally for millennia, excluding no population from these habits; however, rampant drug use is a global concern due to the harm it causes to the health of the world population. At the same time, lasting and effective public policies to address the problem have been reduced, increasing the demand for treatment services. In this context comes ibogaine, an alkaloid derived from the root of an African shrub (Tabernanthe iboga) found mostly in Gabon and used widely in rituals by the native Bwiti population, as well as by other social groups. Its efficacy against chemical dependence and psychic and emotional disorders, particularly opioid withdrawal, was first confirmed by a study in rats done by Michailo Dzoljic and associates in 1988 and again in 1994. Methods: A brief description of the plant, its neurohumoral potential and the effects caused by ingested doses, in a simplified and objective way, will be discussed in the course of this abstract. Results: Ibogaine is not registered or approved by Anvisa with regard to safety and efficacy, and cannot be sold in Brazil. Its illegal trade reaches R$ 5 thousand for a session with the root product, and its effect can last up to 72 hours; iboga's psychoactive effects are attributed to the alkaloid called ibogaine. The shrub from which ibogaine is obtained has pink and yellow flowers, and the fruit it produces contains no psychoactive substances, but its root bark contains 6 to 7% indole alkaloids. Besides extraction from the iboga plant, ibogaine hydrochloride can be semisynthesized from voacangine, another plant alkaloid that acts as a precursor. Ibogaine has the ability to perform multiple interactions with the neurotransmitter systems that are closely associated with addiction, including the nicotinic, opioid and serotonergic systems. 
Studies carried out by Edwards found that the administered doses of iboga should be determined by a health professional when the purpose is to treat individuals for dependence on other drugs. Its use in small doses may cause an increase in sensitivity, impaired vision and motor alterations; in moderate quantities, hallucinations, motor and neurological alterations and impaired vision; in high quantities, it may cause hallucinations involving personal events at a deeper level, lasting up to 24 hours or more, followed by motor and visual alterations. Conclusion: The product extracted from the iboga plant is of great importance in controlling addiction, reducing patients' need for narcotics, and has thus gained a place of extreme importance in the treatment of users of psychoactive substances. The progress of the latest research on the usefulness of ibogaine, and its benefits for certain treatments, is remarkable, even with the restriction on its sale in Brazil. Besides this, ibogaine has the additional benefit of helping the patient gain self-control over their destructive behaviours.
Keywords: alkaloids, dependence, Gabon, ibogaine
Procedia PDF Downloads 84
563 The Influence of Active Breaks on the Attention/Concentration Performance in Eighth-Graders
Authors: Christian Andrä, Luisa Zimmermann, Christina Müller
Abstract:
Introduction: The positive relation between physical activity and cognition is commonly known. Relevant studies show that in everyday school life, active breaks can lead to improvement in certain abilities (e.g. attention and concentration). A beneficial effect is in particular attributed to moderate activity. It is still unclear whether active breaks are beneficial after relatively short phases of cognitive load and whether the postulated effects of activity really have an immediate impact. The objective of this study was to verify whether an active break after 18 minutes of cognitive load leads to enhanced attention/concentration performance, compared to inactive breaks with voluntary mobile phone activity. Methodology: For this quasi-experimental study, 36 students [age: 14.0 (mean value) ± 0.3 (standard deviation); male/female: 21/15] of a secondary school were tested. In week 1, every student’s maximum heart rate (Hfmax) was determined through maximum effort tests conducted during physical education classes. The task was to run 3 laps of 300 m with increasing subjective effort (lap 1: 60%, lap 2: 80%, lap 3: 100% of the maximum performance capacity). Furthermore, the first attention/concentration tests (D2-R) took place (pretest). The groups were matched on the basis of the pretest results. During weeks 2 and 3, crossover testing was conducted, comprising 18 minutes of cognitive preload (test for concentration performance, KLT-R), a break, and an attention/concentration test after a 2-minute transition. Different 10-minute breaks (active break: moderate physical activity at 65% Hfmax, or inactive break: mobile phone activity) took place between preloading and transition. Major findings: In general, there was no impact of the different break interventions on the concentration test results (symbols processed after physical activity: 185.2 ± 31.3 / after inactive break: 184.4 ± 31.6; errors after physical activity: 5.7 ± 6.3 / after inactive break: 7.0 ± 7.2). 
There was, however, a noticeable development of the values over the testing periods. Although no difference in the number of processed symbols was detected (active/inactive break: period 1: 49.3 ± 8.8/46.9 ± 9.0; period 2: 47.0 ± 7.7/47.3 ± 8.4; period 3: 45.1 ± 8.3/45.6 ± 8.0; period 4: 43.8 ± 7.8/44.6 ± 8.0), error rates decreased successively after physical activity and increased gradually after an inactive break (active/inactive break: period 1: 1.9 ± 2.4/1.2 ± 1.4; period 2: 1.7 ± 1.8/1.5 ± 2.0; period 3: 1.2 ± 1.6/1.8 ± 2.1; period 4: 0.9 ± 1.5/2.5 ± 2.6; p = .012). Conclusion: Taking into consideration only the study’s overall results, the hypothesis must be rejected. However, a more differentiated evaluation shows that the error rates decreased after active breaks and increased after inactive breaks. Evidently, the effects of the active intervention occur with a delay. The 2-minute transition (regeneration time) used for this study seems to be insufficient, due to the longer adaptation time of the cardiovascular system in untrained individuals, which might initially affect concentration capacity. To use the positive effects of physical activity for teaching and learning processes, physiological characteristics must also be considered. Only this will ensure optimum ability to perform.
Keywords: active breaks, attention/concentration test, cognitive performance capacity, heart rate, physical activity
Procedia PDF Downloads 315
562 Characterization of Anisotropic Deformation in Sandstones Using Micro-Computed Tomography Technique
Authors: Seyed Mehdi Seyed Alizadeh, Christoph Arns, Shane Latham
Abstract:
Geomechanical characterization of rocks in detail, and its possible implications for flow properties, is an important aspect of the reservoir characterization workflow. In order to gain more understanding of the microstructure evolution of reservoir rocks under stress, a series of axisymmetric triaxial tests was performed on two different analogue rock samples. In-situ compression tests were coupled with high-resolution micro-computed tomography to elucidate the changes in the pore/grain network of the rocks under pressurized conditions. Two outcrop sandstones were chosen for the current study, representing different cementation states: a well-consolidated and a weakly-consolidated granular system, respectively. High-resolution images were acquired while the rocks deformed in a purpose-built compression cell. A detailed analysis of the 3D images in each series of step-wise compression tests (up to the failure point) was conducted, which includes the registration of the deformed specimen images with the reference pristine dry rock image. A Digital Image Correlation (DIC) technique based on the intensity of the registered 3D subsets, together with particle tracking, was utilized to map the displacement fields in each sample. The results reveal the complex architecture of the localized shear zone in the well-cemented Bentheimer sandstone, whereas for the weakly-consolidated Castlegate sandstone no discernible shear band could be observed even after macroscopic failure. Post-mortem imaging of a sister plug from the friable rock, after continuous compression, reveals signs of a shear band pattern. This suggests that for friable sandstones at small scales, the loading mode may affect the pattern of deformation. Prior to mechanical failure, the continuum digital image correlation approach can reasonably capture the kinematics of deformation. As failure occurs, however, discrete image correlation (i.e. 
particle tracking) proves superior both in tracking the grains and in quantifying their kinematics (in terms of translations/rotations) with respect to any stage of compaction. An attempt was made to quantify the displacement field in compression using continuum Digital Image Correlation, which is based on the correlation of reference and secondary image intensities. Such an approach has previously been applied only to unconsolidated granular systems under pressure. We apply this technique to sandstones with various degrees of consolidation. This element of novelty sets the results of this study apart from previous attempts to characterize the deformation pattern in consolidated sands.
Keywords: deformation mechanism, displacement field, shear behavior, triaxial compression, X-ray micro-CT
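The intensity-correlation idea behind subset-based DIC can be sketched in 2D: for a small subset of the reference image, search the deformed image for the integer displacement that maximises the zero-normalised cross-correlation of grey values. This is a minimal illustration of the principle, not the authors' 3D implementation, and all names are ours:

```python
# Minimal 2D sketch of subset-based digital image correlation (DIC).
# Images are lists of rows of grey values.

def zncc(a, b):
    """Zero-normalised cross-correlation of two equal-length intensity lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def subset(img, r, c, size):
    """Flatten the size x size subset with top-left corner at (r, c)."""
    return [img[i][j] for i in range(r, r + size) for j in range(c, c + size)]

def track_subset(ref, defo, r, c, size, search=2):
    """Best integer (dr, dc) displacement of the subset at (r, c)."""
    template = subset(ref, r, c, size)
    best, best_cc = (0, 0), -2.0
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr <= len(defo) - size and 0 <= cc <= len(defo[0]) - size:
                score = zncc(template, subset(defo, rr, cc, size))
                if score > best_cc:
                    best_cc, best = score, (dr, dc)
    return best
```

Practical DIC additionally interpolates for sub-pixel (here sub-voxel) accuracy and, in the discrete variant, correlates individual grains rather than fixed subsets.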
Procedia PDF Downloads 190
561 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator
Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić
Abstract:
Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator in order to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, in the end, the outcome of treatment for every single patient. Therefore, it is strongly advised by international recommendations to set up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, which are used on a daily, weekly, monthly or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated phantom, CIRS 062QA, and a QA phantom supplied with the CT simulator. The examined CT simulator was a Siemens Somatom Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software which enables fast and simple evaluation of CT QA parameters using the phantom provided with the CT simulator. On the other hand, the recommendations contain additional tests, which were done with the CIRS phantom. Also, legislation on ionizing radiation protection requires CT testing at defined intervals. Taking into account the requirements of the law, the built-in tests of the CT simulator, and international recommendations, the institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated through the study were the following: CT number accuracy, field uniformity, complete CT-to-ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy limits are +/- 5 HU of the value at commissioning. Field uniformity: +/- 10 HU in selected ROIs. The complete CT-to-ED curve for each tube voltage must comply with the curve obtained at commissioning, with deviations of not more than 5%. 
Spatial and contrast resolution tests must comply with the results obtained at commissioning; otherwise the machine requires service. The result of the image noise test must fall within 20% of the baseline value. Slice thickness must meet the manufacturer's specifications, and the loaded patient table must not deviate vertically by more than 2 mm during longitudinal travel. Conclusion: The implemented QA tests gave an overall basic understanding of the CT simulator's functionality and its clinical effectiveness in radiation treatment planning. The legal requirement for the clinic is to set up its own QA programme with minimum testing, but it remains the user’s decision whether additional testing, as recommended by international organizations, will be implemented, so as to improve the overall quality of the radiation treatment planning procedure, since the quality of the CT images used for radiation treatment planning influences the delineation of the tumor, the calculation accuracy of the treatment planning system, and finally the delivery of radiation treatment to the patient.
Keywords: CT simulator, radiotherapy, quality control, QA programme
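The tolerance checks listed above lend themselves to simple automation. The numeric limits below are taken from the abstract; the code itself is an illustrative sketch with names of our own choosing, not the clinic's QC software:

```python
# Hedged sketch of automated CT simulator QC tolerance checks.

def check_ct_number(measured_hu, commissioning_hu):
    """CT number must be within +/- 5 HU of the commissioning value."""
    return abs(measured_hu - commissioning_hu) <= 5.0

def check_uniformity(roi_means_hu, centre_hu):
    """Each selected ROI mean must lie within +/- 10 HU of the centre ROI."""
    return all(abs(m - centre_hu) <= 10.0 for m in roi_means_hu)

def check_noise(noise_sd, baseline_sd):
    """Image noise must fall within 20% of the baseline value."""
    return abs(noise_sd - baseline_sd) / baseline_sd <= 0.20
```

The CT-to-ED curve check would compare each measured point against the commissioning curve with the same 5% criterion.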
Procedia PDF Downloads 534
560 Relationship Demise After Having Children: An Analysis of Abandonment and Nuclear Family Structure vs. Supportive Community Cultures
Authors: John W. Travis
Abstract:
There is an epidemic of couples separating after a child is born into a family, generally with the father leaving emotionally or physically in the first few years after the birth. This separation creates high levels of stress for both parents, especially the primary parent, leaving her (or him) less available to the infant for healthy attachment and nurturing. The deterioration of the couple’s bond leaves parents increasingly under-resourced, and the dependent child in a compromised environment, with an increased likelihood of developing an attachment disorder. Objectives: To understand the dynamics of a couple once the additional and extensive demands of a newborn are added to a nuclear family structure, and to identify effective ways to support all members of the family to thrive. Qualitative studies interviewed men, women, and couples after pregnancy and during the early years as a family, regarding key destructive factors as well as effective tools for the couple to retain a strong bond. In-depth analysis of a few cases, including the author’s own experience, reveals deeper insights about subtle factors, replicated in wider studies. Using a self-assessment survey, many fathers report feeling abandoned due to the close bond of the mother-baby unit, and in turn withdrawing themselves, leaving the mother without the support and closeness that would resource her for the baby. Fathers report various types of abandonment, from that by a partner to that by a mother with whom they did not experience adequate connection as a child. The study identified a key destructive factor to be unrecognized wounding from childhood that was carried into the relationship. The study culminated in the naming of Male Postpartum Abandonment Syndrome (MPAS), describing the epidemic in industrialized cultures where the nuclear family is the primary configuration. 
A growing family system often collapses without a minimum number of adult caregivers, approximately four per infant (3.87), which allows for proper healing and caretaking. In cases with no additional family or community beyond one or two parents, the layers of abandonment and trauma result in the deterioration of the couple’s relationship and ultimately the family structure. The solution includes engaging the community in support of new families. The study identified, and recommends, specific resources to assist couples in recognizing and healing trauma and disconnection at multiple levels. Recommendations include wider awareness and availability of resources for healing childhood wounds and greater community-building efforts to support couples so that the whole family can thrive.
Keywords: abandonment, attachment, community building, family and marital functioning, healing childhood wounds, infant wellness, intimacy, marital satisfaction, relationship quality, relationship satisfaction
Procedia PDF Downloads 225
559 A Cloud-Based Federated Identity Management in Europe
Authors: Jesus Carretero, Mario Vasile, Guillermo Izquierdo, Javier Garcia-Blas
Abstract:
Currently, there is a so-called ‘identity crisis’ in cybersecurity, caused by the substantial security, privacy and usability shortcomings encountered in existing systems for identity management. Federated Identity Management (FIM) could be a solution to this crisis, as it is a method that facilitates the management of identity processes and policies among collaborating entities without enforcing global consistency, which is difficult to achieve when legacy ID systems exist. To cope with this problem, the Connecting Europe Facility (CEF) initiative proposed a federated solution in 2014, in anticipation of the adoption of Regulation (EU) N°910/2014, the so-called eIDAS Regulation. At present, a network of eIDAS Nodes is being deployed at the European level so that every citizen recognized by a member state is also recognized within the trust network at the European level, enabling the consumption of services in other member states that, until now, were not available or whose concession was tedious. This is a very ambitious approach, since it aims to enable cross-border authentication of member states' citizens without the need to unify the authentication method (eID Scheme) of the member state in question. However, this federation is currently managed by member states, and it initially applies only to citizens and public organizations. The goal of this paper is to present the results of a European project, named eID@Cloud, that focuses on the integration of eID into 5 cloud platforms belonging to authentication service providers from different EU member states, acting as Service Providers (SP) for private entities. We propose an initiative based on a private eID Scheme for both natural and legal persons. The methodology followed in the eID@Cloud project is that each Identity Provider (IdP) is subscribed to an eIDAS Node Connector, which requests authentication, and the connector in turn is subscribed to an eIDAS Node Proxy Service, which issues authentication assertions. 
To cope with high loads, load balancing is supported in the eIDAS Node. The eID@Cloud project is still ongoing, but we already have some important outcomes. First, we have deployed the federated identity nodes and tested them from the security and performance points of view. The pilot prototype has shown the feasibility of deploying this kind of system, ensuring good performance thanks to the replication of the eIDAS nodes and the load balancing mechanism. Second, our solution avoids the propagation of identity data outside the native domain of the user or entity being identified, which avoids problems well known in cybersecurity due to network interception, man-in-the-middle attacks, etc. Last, but not least, this system allows any country or collectivity to be connected easily, providing incremental development of the network and avoiding difficult political negotiations to agree on a single authentication format (which would be a major stopper).
Keywords: cybersecurity, identity federation, trust, user authentication
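The proxy-service-issues / connector-verifies pattern described in this abstract can be reduced to its essentials. Real eIDAS nodes exchange digitally signed SAML assertions; the HMAC below merely illustrates the integrity and trust check, and every name in this sketch is hypothetical:

```python
# Highly simplified sketch of assertion issuance and verification in a
# federated identity flow. NOT the eIDAS protocol: a shared-key MAC stands
# in for the XML digital signatures used in practice.

import hashlib
import hmac
import json

def issue_assertion(key: bytes, subject: str, attributes: dict) -> dict:
    """Proxy-service side: issue an authentication assertion with a MAC."""
    payload = json.dumps({"subject": subject, "attributes": attributes},
                         sort_keys=True)
    tag = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_assertion(key: bytes, assertion: dict) -> bool:
    """Connector side: accept the assertion only if the MAC checks out."""
    expected = hmac.new(key, assertion["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion["tag"])
```

The point the sketch makes is the one the abstract stresses: only the assertion crosses domains, so the identity data itself never has to leave the user's native domain.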
Procedia PDF Downloads 166