Search results for: index structural equation model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22941


8391 Development and Validation of a Turbidimetric Bioassay to Determine the Potency of Ertapenem Sodium

Authors: Tahisa M. Pedroso, Hérida R. N. Salgado

Abstract:

The microbiological turbidimetric assay allows determination of the potency of a drug by measuring the turbidity (absorbance) caused by inhibition of microorganisms by ertapenem sodium. Ertapenem sodium (ERTM), a synthetic antimicrobial agent of the carbapenem class, is active against Gram-negative, Gram-positive, aerobic and anaerobic microorganisms. Turbidimetric assays are described in the literature for some antibiotics, but no such method has been described for ertapenem. The objective of the present study was to develop and validate a simple, sensitive, precise and accurate microbiological turbidimetric assay to quantify injectable ertapenem sodium as an alternative to the physicochemical methods described in the literature. Several preliminary tests were performed to select the following parameters: Staphylococcus aureus ATCC 25923, IAL 1851, 8% inoculum, BHI culture medium, and an aqueous solution of ertapenem sodium. 10.0 mL of sterile BHI culture medium was distributed into 20 tubes. 0.2 mL of the standard and test solutions was added to tubes S1, S2, S3 and T1, T2, T3, respectively, and 0.8 mL of inoculated culture medium was transferred to each tube, according to the 3 x 3 parallel-lines design. The tubes were incubated in a Marconi MA 420 shaker at a temperature of 35.0 °C ± 2.0 °C for 4 hours. After this period, the growth of microorganisms was halted by adding 0.5 mL of 12% formaldehyde solution to each tube. Absorbance was measured in a Quimis Q-798DRM spectrophotometer at a wavelength of 530 nm. An analytical curve was constructed to obtain the equation of the line by the least-squares method, and linearity and parallelism were assessed by ANOVA. The specificity of the method was proven by comparing the responses obtained for the standard and the finished product. Precision was checked by determining ertapenem sodium on three separate days. Accuracy was determined by a recovery test.
Robustness was assessed by comparing the results obtained when varying the wavelength, the brand of culture medium, and the volume of culture medium in the tubes. Statistical analysis showed no deviation from linearity in the analytical curves of the standard and test samples. The correlation coefficients were 0.9996 and 0.9998 for the standard and test samples, respectively. Specificity was confirmed by comparing the absorbance of the reference substance and the test samples. The values obtained for intraday, interday, and between-analyst precision were 1.25%, 0.26%, and 0.15%, respectively. The amount of ertapenem sodium present in the samples analyzed was 99.87%, a consistent result. Accuracy was proven by the recovery test, with a value of 98.20%. The parameters varied did not affect the analysis of ertapenem sodium, confirming the robustness of the method. The turbidimetric assay is more versatile, faster, and easier to apply than the agar diffusion assay. The method is simple, rapid, and accurate and can be used in routine quality-control analysis of formulations containing ertapenem sodium.
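
The analytical-curve fit described above can be sketched numerically. The concentrations and absorbances below are hypothetical illustration values, not the study's data; the fit itself follows the least-squares method named in the abstract.

```python
# Sketch of the analytical-curve fit by the least-squares method.
# Concentrations and absorbances are hypothetical illustration values.

def least_squares_line(x, y):
    """Return slope, intercept and correlation coefficient r of y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5
    return slope, intercept, r

# Hypothetical standard concentrations (ug/mL) and absorbances at 530 nm;
# absorbance falls as the antibiotic inhibits growth, so the slope is negative.
conc = [2.0, 4.0, 6.0]
absorb = [0.610, 0.405, 0.210]
slope, intercept, r = least_squares_line(conc, absorb)
```

A correlation coefficient with |r| close to 1, as reported in the abstract (0.9996 and 0.9998), indicates the curve is effectively linear over the tested range.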

Keywords: ertapenem sodium, turbidimetric assay, quality control, validation

Procedia PDF Downloads 382
8390 Non-Destructive Test of Bar for Determination of Critical Compression Force Directed towards the Pole

Authors: Boris Blostotsky, Elia Efraim

Abstract:

The phenomenon of buckling of structural elements under compression arises in many loading cases and must be considered in many structures and mechanisms. The present work describes the method and results of a dynamic buckling test of a bar loaded by a compression force directed towards a pole. Experimental determination of the critical force for such a system has not been reported previously. The tested object is a bar with a semi-rigid connection to the base at one end and a hinge moving along a circle at the other. The test consists of measuring the natural frequency of the bar at different values of the compression load. The lateral stiffness is calculated from the natural frequency and the reduced mass at the bar's movable end. The critical load is determined by extrapolating the lateral stiffness values to zero. For the experimental investigation, a special test bed was created that allows stability testing at positive and negative curvature of the movable end's trajectory, as well as varying the rotational stiffness of the connection at the other end. Reducing friction at the movable end extends the range of applied compression forces. The testing method includes: - a methodology for planning the experiment, which determines the required number of tests at various load values in the defined range and the type of extrapolating function; - a methodology for experimental determination of the reduced mass at the bar's movable end, including its own mass; - a methodology for experimental determination of the lateral stiffness of the uncompressed bar's rotational semi-rigid connection at the base. For planning the experiment and for comparing the experimental results with theoretical values of the critical load, analytical dependencies of the bar's lateral stiffness on the compression load were derived for the defined end conditions.
In the particular case of a perfectly rigid connection of the bar to the base, the critical load corresponds to the solution by S. P. Timoshenko. Good agreement between the calculated and experimental values was obtained.
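
The stiffness-extrapolation procedure can be illustrated with a short sketch. The reduced mass, loads, and frequencies below are hypothetical; the single-degree-of-freedom relation k = m(2πf)² and the linear extrapolation of k to zero follow the method described in the abstract.

```python
import math

def lateral_stiffness(freq_hz, reduced_mass_kg):
    """Single-DOF estimate: k = m * (2*pi*f)**2."""
    return reduced_mass_kg * (2.0 * math.pi * freq_hz) ** 2

def critical_load(loads, stiffnesses):
    """Fit k = a*P + b by least squares; the critical load is the root k = 0."""
    n = len(loads)
    mp, mk = sum(loads) / n, sum(stiffnesses) / n
    spp = sum((p - mp) ** 2 for p in loads)
    spk = sum((p - mp) * (k - mk) for p, k in zip(loads, stiffnesses))
    a = spk / spp
    b = mk - a * mp
    return -b / a

m_red = 0.5                       # kg, reduced mass at the movable end (hypothetical)
loads = [100.0, 200.0, 300.0]     # N, applied compression forces (hypothetical)
freqs = [8.7173, 7.1176, 5.0329]  # Hz, measured natural frequencies (hypothetical)
ks = [lateral_stiffness(f, m_red) for f in freqs]
P_cr = critical_load(loads, ks)   # extrapolated critical compression force, N
```

The key property exploited by the dynamic method is that lateral stiffness, and hence the squared natural frequency, decreases roughly linearly with the compression load, vanishing at the critical force.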

Keywords: non-destructive test, buckling, dynamic method, semi-rigid connections

Procedia PDF Downloads 343
8389 Beam Deflection with Unidirectionality Due to Zeroth Order and Evanescent Wave Coupling in a Photonic Crystal with a Defect Layer without Corrugations under Oblique Incidence

Authors: Evrim Colak, Andriy E. Serebryannikov, Thore Magath, Ekmel Ozbay

Abstract:

Single-beam deflection and unidirectional transmission are examined for oblique incidence in a photonic crystal (PC) structure that employs a defect layer instead of surface corrugations at the interfaces. In all of the studied cases, the defect layer is placed such that the symmetry is broken. Two types of deflection are observed, depending on whether the zeroth order is coupled or not; the two scenarios can be distinguished by examining the simulated field distribution in the PC. In the first deflection type, a Floquet-Bloch mode enables zeroth-order coupling: the energy of the zeroth order is redistributed among the diffraction orders at the defect layer, producing deflection. In the second type, when the zeroth order is not coupled, strong diffraction causes blazing, and evanescent waves deliver energy to higher-order diffraction modes. Simulated isofrequency contours can be used to estimate the coupling behavior. The defect layer is placed at varying rows, preserving the asymmetry of the PC while evanescent waves can still couple to higher-order modes. Even for a deeply buried defect layer, asymmetric transmission and beam deflection are still observed when the zeroth order is not coupled. We assume ε = 11.4 (refractive index close to that of GaAs and Si) for the PC rods. Possible operation wavelengths lie in the microwave and infrared ranges; since the suggested material is low-loss, the structure can be scaled down to operate at higher frequencies. A sample operation wavelength of 1.5 μm is therefore selected. Although the structure employs no surface corrugations, a transmission value of T ≈ 0.97 can be achieved by means of the diffraction order m = -1. Moreover, by utilizing an extra line defect, the T value can be increased up to 0.99 under oblique incidence, even if the line-defect layer is deeply embedded in the photonic crystal.
The latter configuration can be used to obtain deflection in one frequency range and can also be utilized for the realization of another functionality like defect-mode wave guiding in another frequency range but still using the same structure.
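
Which diffraction orders can carry energy under oblique incidence can be estimated from the classical grating equation. The sketch below is illustrative only: the lattice period and incidence angle are hypothetical (the 1.5 μm wavelength is taken from the abstract), and it does not reproduce the paper's full-wave simulations.

```python
import math

def diffraction_angle(theta_i_deg, wavelength, period, m):
    """Grating equation sin(theta_m) = sin(theta_i) + m * wavelength / period.
    Returns the diffracted angle in degrees, or None if the order is evanescent."""
    s = math.sin(math.radians(theta_i_deg)) + m * wavelength / period
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

lam = 1.5e-6    # m, sample operation wavelength from the abstract
a = 2.0e-6      # m, hypothetical lattice period
theta_i = 30.0  # degrees, hypothetical oblique incidence
orders = {m: diffraction_angle(theta_i, lam, a, m) for m in (-1, 0, 1)}
# With these numbers only m = 0 and m = -1 propagate; m = +1 is evanescent.
```

This mirrors the abstract's mechanism: when only the zeroth order and one negative order propagate, energy redirected into m = -1 yields a single deflected beam.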

Keywords: asymmetric transmission, beam deflection, blazing, bi-directional splitting, defect layer, dual beam splitting, Floquet-Bloch modes, isofrequency contours, line defect, oblique incidence, photonic crystal, unidirectionality

Procedia PDF Downloads 249
8388 Using Serious Games to Integrate the Potential of Mass Customization into the Fuzzy Front-End of New Product Development

Authors: Michael N. O'Sullivan, Con Sheahan

Abstract:

Mass customization is the idea of offering custom products or services to satisfy the needs of each individual customer while maintaining the efficiency of mass production. Technologies like 3D printing and artificial intelligence have many start-ups hoping to capitalize on this dream of creating personalized products at an affordable price, and well-established companies scrambling to innovate and maintain their market share. However, the majority of them are failing as they struggle to answer one key question: where does customization make sense? Customization and personalization only make sense where the value of the perceived benefit outweighs the cost to implement it; in other words, will people pay for it? The Kano model makes it clear that the answer depends on the product. In products where customization is an inherent need, like prosthetics, mass customization technologies can be highly beneficial. However, for products that already sell as a standard, like headphones, customization is likely only an added bonus, and so the product development team must figure out whether the customers' perception of the added value of this feature will outweigh its premium price tag. This can be done through the use of a 'serious game', whereby potential customers are given a limited budget to collaboratively buy and bid on potential features of the product before it is developed. If the group chooses to buy customization over other features, the product development team should implement it into the design; if not, the team should prioritize the features on which the customers have spent their budget. The level of customization purchased can also be translated to an appropriate production method; for example, the most expensive type of customization would likely be free-form design, achievable through digital fabrication, while a lower level could be achieved through short-batch production.
Twenty-five teams of final year students from design, engineering, construction and technology tested this methodology when bringing a product from concept through to production specification, and found that it allowed them to confidently decide what level of customization, if any, would be worth offering for their product, and what would be the best method of producing it. They also found that the discussion and negotiations between players during the game led to invaluable insights, and often decided to play a second game where they offered customers the option to buy the various customization ideas that had been discussed during the first game.
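
The tallying step of such a serious game can be sketched as follows. Player bids, feature names, and the budget value are hypothetical, and the real game involves live negotiation rather than a fixed table of bids.

```python
from collections import Counter

BUDGET = 100  # tokens per player (hypothetical)

def tally(bids):
    """bids: one {feature: tokens} dict per player; returns features ranked by spend."""
    totals = Counter()
    for player in bids:
        assert sum(player.values()) <= BUDGET, "player exceeded budget"
        totals.update(player)  # Counter.update adds the mapped counts
    return totals.most_common()

# Hypothetical bids from three players for a headphone concept
bids = [
    {"noise cancelling": 60, "customization": 40},
    {"battery life": 70, "customization": 30},
    {"noise cancelling": 50, "battery life": 50},
]
ranking = tally(bids)
```

The ranked spend tells the team whether customization earned a place in the specification or whether other features should take priority.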

Keywords: Kano model, mass customization, new product development, serious game

Procedia PDF Downloads 120
8387 ScRNA-Seq RNA Sequencing-Based Program-Polygenic Risk Scores Associated with Pancreatic Cancer Risks in the UK Biobank Cohort

Authors: Yelin Zhao, Xinxiu Li, Martin Smelik, Oleg Sysoev, Firoj Mahmud, Dina Mansour Aly, Mikael Benson

Abstract:

Background: Early diagnosis of pancreatic cancer is clinically challenging due to vague or absent symptoms and a lack of biomarkers. Polygenic risk scores (PRSs) may provide a valuable tool to assess increased or decreased risk of PC. This study aimed to develop such PRSs by filtering genetic variants identified by GWAS using transcriptional programs identified by single-cell RNA sequencing (scRNA-seq). Methods: ScRNA-seq data from 24 pancreatic ductal adenocarcinoma (PDAC) tumor samples and 11 normal pancreases were analyzed to identify differentially expressed genes (DEGs) in tumor and microenvironment cell types compared to healthy tissues. Pathway analysis showed that the DEGs were enriched for hundreds of significant pathways. These were clustered into 40 "programs" based on gene similarity, using the Jaccard index. Published genetic variants associated with PDAC were mapped to each program to generate program PRSs (pPRSs). These pPRSs, along with five previously published PRSs (PGS000083, PGS000725, PGS000663, PGS000159, and PGS002264), were evaluated in a European-origin population from the UK Biobank, consisting of 1,310 PDAC participants and 407,473 non-pancreatic cancer participants. Stepwise Cox regression analysis was performed to determine associations between pPRSs and the development of PC, with adjustment for sex and the principal components of genetic ancestry. Results: The PDAC genetic variants were mapped to 23 programs and were used to generate pPRSs for these programs. Four distinct pPRSs (P1, P6, P11, and P16) and two published PRSs (PGS000663 and PGS002264) were significantly associated with an increased risk of developing PC. Among these, P6 exhibited the greatest hazard ratio (adjusted HR [95% CI] = 1.67 [1.14-2.45], p = 0.008). In contrast, P10 and P4 were associated with a lower risk of developing PC (adjusted HR [95% CI] = 0.58 [0.42-0.81], p = 0.001, and adjusted HR [95% CI] = 0.75 [0.59-0.96], p = 0.019).
By comparison, two of the five published PRSs exhibited an association with PDAC onset (PGS000663: adjusted HR [95% CI] = 1.24 [1.14-1.35], p < 0.001; PGS002264: adjusted HR [95% CI] = 1.14 [1.07-1.22], p < 0.001). Conclusion: Compared with published PRSs, scRNA-seq-based pPRSs may be used to assess not only increased but also decreased risk of PDAC.
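
Two building blocks of the method can be sketched: the Jaccard similarity used to cluster enriched pathways into programs, and a program-restricted polygenic score computed as a weighted sum of risk-allele dosages. The variant IDs, effect weights, genotypes, and pathway names below are hypothetical, not the study's data.

```python
def jaccard(a, b):
    """Jaccard similarity between two gene/pathway sets, used to group pathways."""
    return len(a & b) / len(a | b)

def polygenic_score(dosages, weights):
    """Weighted sum of risk-allele dosages (0/1/2) over a program's variants."""
    return sum(weights[v] * dosages.get(v, 0) for v in weights)

# Hypothetical program: GWAS effect weights for three variants, one genotype
program_weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}
genotype = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
score = polygenic_score(genotype, program_weights)

similarity = jaccard({"pathA", "pathB", "pathC"}, {"pathB", "pathC", "pathD"})
```

In the study, such per-program scores were then entered into Cox models against PDAC onset; restricting the variant set to one transcriptional program is what distinguishes a pPRS from a genome-wide PRS.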

Keywords: cox regression, pancreatic cancer, polygenic risk score, scRNA-seq, UK biobank

Procedia PDF Downloads 86
8386 The Role of Group Interaction and Managers’ Risk-willingness for Business Model Innovation Decisions: A Thematic Analysis

Authors: Sarah Müller-Sägebrecht

Abstract:

Today’s volatile environment challenges executives to make the right strategic decisions to achieve sustainable success. Entrepreneurship scholars postulate mainly positive effects of environmental changes on entrepreneurial behavior, such as developing new business opportunities, promoting ingenuity, and filling resource voids. A strategic approach to overcoming threatening environmental changes and seizing new business opportunities is business model innovation (BMI). Although this research stream has gained importance in the last decade, BMI research is still insufficient; in particular, BMI barriers, such as inefficient strategic decision-making processes, need to be identified. Strategic decisions strongly affect an organization’s future and are therefore usually made in groups. Although groups draw on a more extensive information base than single individuals, group-interaction effects can influence the decision-making process in favorable but also unfavorable ways. Decisions are characterized by uncertainty and risk, whose intensity is perceived differently by each individual, and individual risk-willingness influences which option a person chooses. The special nature of strategic decisions, such as those in BMI processes, is that, due to their high organizational scope, they are not made individually but in groups consisting of different personalities whose individual risk-willingness can vary considerably. It is known from group decision theory that these individuals influence each other, which is observable in various group-interaction effects. The following research questions arise: i) How does group interaction shape BMI decision-making from the managers’ perspective? ii) What are the potential interrelations among managers’ risk-willingness, group biases, and BMI decision-making?
After 26 in-depth interviews with executives from the manufacturing industry, analysis using the Gioia methodology revealed the following results: i) Risk-averse decision-makers have an increased need to be guided by facts. The more information available to them, the lower they perceive uncertainty to be and the more willing they are to pursue a specific decision option. However, the results also show that social interaction does not change individual risk-willingness during the decision-making process. ii) In general, during BMI decisions group interaction primarily serves to broaden the group’s information base for making good decisions rather than to provide social interaction. Furthermore, decision-makers focus mainly on information available to all decision-makers in the team and less on personal knowledge. This work contributes to the strategic decision-making literature in two ways. First, it gives insights into how group-interaction effects influence an organization’s strategic BMI decision-making. Second, it enriches risk-management research by highlighting how individual risk-willingness affects organizational strategic decision-making. To date, BMI research has held that risk aversion is an internal BMI barrier. This study makes clear that it is not risk aversion itself that inhibits BMI; rather, a lack of information prevents risk-averse decision-makers from choosing a riskier option. At the same time, the results show that risk-averse decision-makers are not easily carried away by the higher risk-willingness of their team members; instead, they use social interaction to gather missing information. Executives therefore need to provide sufficient information to all decision-makers in order to seize promising business opportunities.

Keywords: business model innovation, cognitive biases, group-interaction effects, strategic decision-making, risk-willingness

Procedia PDF Downloads 65
8385 Mainland China and Taiwan’s Strategies for Overcoming the Middle/High Income Trap: Domestic Consensus-Building and the Foundations of Cross-Strait Interactions

Authors: Mingke Ma

Abstract:

The recent identification of the high-income trap phenomenon and the established middle-income trap literature point to similar structural challenges that both Mainland China and Taiwan have faced since their simultaneous growth slowdowns in the 2000s. Mainland China's and Taiwan's ineffectiveness in productivity growth has weakened their overall competitiveness in global value chains. With the subsequent decline of industrial profitability, social compression from late development persists and jeopardises social cohesion. From Ma Ying-jeou's '633' promise and Tsai Ing-wen's '5+2' industrial framework to Mainland China's 11th to 14th Five-Year Plans, leaderships across the Strait have striven to construct new models for inclusive and sustainable development through policy responses. This study argues that the social consensuses constructed by domestic political processes define the feasibility of the reform strategies, which in turn shape the conditions for Cross-Strait interactions. Drawing on the existing literature on New Institutional Economics, the middle/high-income trap, and compressed development, this study adopts a historical institutionalist analytical framework to identify how historical path-dependency contributes to the contemporary growth constraints in both economies and to the political difficulty of navigating institutional and organisational change. It then traces the political process of economic reform to examine the sustainability and resilience of the manifested social consensus that empowered the proposed policy frameworks. Finally, it examines how the political outcomes of this simultaneous process, shared by both Mainland China and Taiwan, construct the social, economic, institutional, and political foundations of contemporary Cross-Strait engagement.

Keywords: historical institutionalism, political economy, cross-strait relations, high/middle income trap

Procedia PDF Downloads 180
8384 Influence of the Induction Program on Novice Teacher Retention in One Specialized School in Nur-Sultan

Authors: Almagul Nurgaliyeva

Abstract:

The phenomenon of novice teacher attrition is an urgent issue. Effective mechanisms for increasing the retention rate of novice teachers relate to the nature and level of support provided at the employing site. This study considered novice teacher retention as a motivation-based process grounded in a variety of support activities employed to satisfy novice teachers’ needs at an early career stage. The purpose of the study was to examine novice teachers’ perceptions of the effectiveness of the induction program and other support structures at a secondary school in Nur-Sultan. The study was guided by Abraham Maslow’s (1943) theory of motivation; Maslow’s hierarchy of needs was used as a theoretical framework to identify novice teachers’ primary needs and the extent to which the induction programs and other support mechanisms provided by school administrators fulfill those needs. One school supervisor and eight novice teachers (four current and four former) with a maximum of four years of teaching experience took part in the study. To investigate the perspectives and experiences of the participants, online semi-structured interviews were utilized, and the responses were collected and analyzed. The study revealed four major challenges, educational, personal-psychological, sociological, and structural, which are seen as the main constraints during the adaptation period. Four induction activities, as emerged from the data, are carried out by the school to address novice teachers’ challenges: socialization activities, mentoring programs, professional development, and administrative support. These activities meet novice teachers’ needs and counter the challenges they face. Sufficient and adequate support structures provided to novice teachers during their first years of work are essential, as they may influence teachers’ decisions to remain in the profession, thereby reducing the attrition rate.
The study provides recommendations for policymakers and school administrators about the structure and the content of induction program activities.

Keywords: beginning teacher induction, induction programme, orientation programmes, adaptation challenges, novice teacher retention

Procedia PDF Downloads 67
8383 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland

Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski

Abstract:

PM10 is suspended dust that primarily has a negative effect on the respiratory system: it is responsible for attacks of coughing and wheezing, asthma, and acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to address the problem of predicting suspended particulate matter concentrations. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, Convolutional Neural Networks (CNNs) were adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is produced for the subsequent day, hour by hour. The evaluation of the learning process for the investigated models was based mostly on the mean square error criterion; however, during model validation, a number of other quantitative evaluation methods were taken into account. The presented pollution prediction model has been verified against real weather and air pollution data taken from the Airly sensor network. The dense, distributed network of Airly measurement devices provides access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind information, as well as external forecasts of temperature and wind for the next 24 h, served as input data.
Due to the specific requirements of CNNs, these data are transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers; in the hidden layers, convolution and pooling operations are performed. The output of the system is a 24-element vector containing the predicted PM10 concentration for each hour of the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study; the best-performing were selected and compared with models based on linear regression. The numerical tests, carried out on real 'big' data, fully confirmed the positive properties of the presented method. Models based on the CNN technique predict PM10 dust concentration with a much smaller mean square error than the currently used methods based on linear regression. Moreover, the use of neural networks increased the coefficient of determination (R²) by about 5 percent compared to the linear model. In the simulations, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction, respectively.
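
The shape of the pipeline, recent sensor readings in and 24 hourly predictions out, can be illustrated with a minimal pure-Python sketch. The kernel, weights, and readings below are fixed toy values, not a trained model, and the real system operates on multi-channel tensors rather than a single series.

```python
def conv1d(x, kernel):
    """Valid 1-D convolution (no padding)."""
    k = len(kernel)
    return [sum(kernel[j] * x[i + j] for j in range(k)) for i in range(len(x) - k + 1)]

def max_pool(x, size=2):
    """Non-overlapping max pooling."""
    return [max(x[i:i + size]) for i in range(0, len(x) - size + 1, size)]

def dense(x, weights, bias=0.0):
    """Fully connected layer: one weight row per output neuron."""
    return [sum(w * xi for w, xi in zip(row, x)) + bias for row in weights]

readings = [40.0 + i for i in range(24)]        # toy PM10 readings, last 24 hours
smoothed = conv1d(readings, [0.25, 0.5, 0.25])  # convolution layer -> 22 values
pooled = max_pool(smoothed, 2)                  # pooling layer -> 11 values
w_out = [[1.0 / len(pooled)] * len(pooled) for _ in range(24)]
forecast = dense(pooled, w_out)                 # 24 hourly PM10 predictions
```

In the actual system these layer weights are learned by minimizing the mean square error on historical Airly data; the sketch only shows how a tensor of readings flows through convolution, pooling, and a dense layer into a 24-element output vector.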

Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks

Procedia PDF Downloads 130
8382 Examining the Role of Corporate Culture in Driving Firm Performance

Authors: Lovorka Galetić, Ivana Načinović Braje, Nevenka Čavlek

Abstract:

The purpose of this paper is to analyze the relationship between corporate culture and firm performance. Extensive theoretical and empirical evidence on this issue is provided. A quantitative methodology was used to explore the relationship between corporate culture and performance among large Croatian companies. Corporate culture was examined using the Denison framework. The research revealed a positive, statistically significant relationship between mission and performance. The other dimensions of corporate culture (involvement, consistency, and adaptability) show only a partial relationship with performance.

Keywords: corporate culture, Croatia, Denison culture model, performance

Procedia PDF Downloads 508
8381 Prevalence and Risk Factors of Cardiovascular Diseases among Bangladeshi Adults: Findings from a Cross Sectional Study

Authors: Fouzia Khanam, Belal Hossain, Kaosar Afsana, Mahfuzar Rahman

Abstract:

Aim: Although cardiovascular disease (CVD) has already been recognized as a major cause of death in developed countries, its prevalence is rising in developing countries as well, posing a challenge for the health sector. Bangladesh has experienced an epidemiological transition from communicable to non-communicable diseases over the last few decades, so the rising prevalence of CVD and its risk factors is a major problem for the country. We aimed to examine the prevalence of CVD and the socioeconomic and lifestyle factors related to it in a population-based survey. Methods: The data used for this study were collected as part of a large-scale cross-sectional study conducted to explore the overall health status of children, mothers, and senior citizens of Bangladesh. A multistage cluster random sampling procedure was applied, with unions as clusters and households as the primary sampling unit, to select a total of 11,428 households for the base survey. The present analysis encompassed 12,338 respondents aged ≥ 35 years, selected from both rural areas and urban slums of the country. Socio-economic, demographic, and lifestyle information was obtained from each individual through a face-to-face interview recorded on the ODK platform, and height, weight, blood pressure, and glycosuria were measured using standardized methods. Chi-square tests and univariate and multivariate modified Poisson regression models were run using STATA software (version 13.0). Results: Overall, the prevalence of CVD was 4.51%, of which 1.78% had stroke and 3.17% suffered from heart diseases. Males had a higher prevalence of stroke (2.20%) than females (1.37%). Notably, thirty percent of respondents had high blood pressure, 5% had diabetes, and more than half of the population was pre-hypertensive.
Additionally, 20% were overweight, 77% smoked or consumed smokeless tobacco, and 28% of respondents were physically inactive. Eighty-two percent of respondents took extra salt with meals, and 29% were sleep-deprived. Furthermore, the prevalence of CVD risk factors varied by gender: women had a higher prevalence of overweight, obesity, and diabetes, were less physically active than men, took more extra salt, and slept less than their counterparts, while smoking was lower in women than in men. After adjusting for confounders in the modified Poisson regression model, age, gender, occupation, wealth quintile, BMI, extra salt intake, daily sleep, tiredness, diabetes, and hypertension remained risk factors for CVD. Conclusion: The prevalence of CVD is substantial in Bangladesh, and there is evidence of a rising trend in its risk factors, such as hypertension and diabetes, especially in the older population, women, and high-income groups. Therefore, in this current epidemiological transition, immediate public health intervention is warranted to address the overwhelming CVD risk.
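
As a rough illustration of the kind of estimate behind such risk-factor analyses, the sketch below computes a crude prevalence ratio with a 95% confidence interval on the log scale. The 2x2 counts are hypothetical, not the survey's data, and a crude ratio omits the confounder adjustment performed by the modified Poisson model.

```python
import math

def prevalence_ratio(a, n1, b, n0):
    """Crude prevalence ratio (a/n1)/(b/n0) with a 95% CI on the log scale."""
    pr = (a / n1) / (b / n0)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n0)
    lo = math.exp(math.log(pr) - 1.96 * se)
    hi = math.exp(math.log(pr) + 1.96 * se)
    return pr, (lo, hi)

# Hypothetical 2x2 counts: CVD cases among hypertensive vs normotensive respondents
pr, ci = prevalence_ratio(120, 3700, 180, 8600)
```

A confidence interval that excludes 1.0, as in this toy example, is what identifies a factor such as hypertension as significantly associated with CVD prevalence.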

Keywords: cardiovascular diseases, diabetes, hypertension, stroke

Procedia PDF Downloads 366
8380 The Fragility of Sense: The Twofold Temporality of Embodiment and Its Role for Depression

Authors: Laura Bickel

Abstract:

This paper investigates to what extent Merleau-Ponty’s philosophy of body memory serves as a viable resource for the enactive approach to cognitive science and its first-person, experience-based research on ‘recurrent depressive disorder’, coded F33 in ICD-10. In pursuit of this goal, the analysis begins by revisiting the neuroreductive paradigm, which serves biological psychiatry to explain the condition of vital contact in terms of underlying neurophysiological mechanisms. It is demonstrated that the neuroreductive model cannot sufficiently account, in causal terms, for the depressed person’s episodic withdrawal: analyzing the irregular loss of vital resonance requires integrating the body as the subject of experience and its phenomenological time. It is then shown that the enactive approach to depression as disordered sense-making is a promising alternative. The enactive model of perception implies that living beings do not register pre-existing meaning ‘out there’ but unfold ‘sense’ in their action-oriented response to the world. For the enactive approach, Husserl’s passive synthesis of inner time consciousness is fundamental to what becomes perceptually present for action. It seems intuitive to bring together the enactive approach to depression with the long-standing view in phenomenological psychopathology that explains the loss of vital contact by appealing to a disruption of the temporal structure of consciousness. However, this paper argues that positing such a disruption of the temporal structure is not conceptually justified. Instead, one may integrate Merleau-Ponty’s concept of the past as the unconscious into the enactive approach to depression. From this perspective, the living being’s experiential and biological past inserts itself in the form of habits and bodily skills and ensures action-oriented responses to the environment.
Finally, it is concluded that the depressed person’s withdrawal indicates the impairment of this application process. The person suffering from F33 cannot actualize sedimented meaning to respond to the valences and tasks of a given situation.

Keywords: depression, enactivism, neuroreductionism, phenomenology, temporality

Procedia PDF Downloads 118
8379 In vitro Modeling of Aniridia-Related Keratopathy by the Use of CRISPR/Cas9 on Limbal Epithelial Cells and Rescue

Authors: Daniel Aberdam

Abstract:

Haploinsufficiency of PAX6 in humans is the main cause of congenital aniridia, a rare eye disease characterized by reduced visual acuity. Patients also develop progressive disorders including cataract, glaucoma and corneal abnormalities, making their condition very challenging to manage. Aniridia-related keratopathy (ARK), caused by a combination of factors including limbal stem-cell deficiency, impaired healing response, abnormal differentiation, and infiltration of conjunctival cells onto the corneal surface, affects up to 95% of patients. It usually begins in the first decade of life, resulting in recurrent corneal erosions and sub-epithelial fibrosis with corneal decompensation and opacification. Unfortunately, treatment options for aniridia patients are currently limited. Although animal models partially recapitulate this disease, there is no in vitro cellular model of ARK, which is needed for drug/therapeutic-tool screening and validation. We used genome editing (CRISPR/Cas9 technology) to introduce a nonsense mutation found in patients into one allele of the PAX6 gene in limbal stem cells. The resulting mutated clones, expressing half the amount of PAX6 protein and thus representative of haploinsufficiency, were further characterized. Sequencing analysis showed that no off-target mutations were induced. The mutated cells displayed reduced cell proliferation and cell migration but enhanced cell adhesion. Expression of known PAX6 targets was also reduced. Remarkably, addition of soluble recombinant PAX6 protein to the culture medium was sufficient to activate the endogenous PAX6 gene and, as a consequence, rescue the phenotype. This strongly suggests that our in vitro model recapitulates the epithelial defect well and is a powerful tool to identify drugs that could rescue the corneal defect in patients. Furthermore, we demonstrate that the homeotic transcription factor PAX6 can be taken up naturally by recipient cells and function in the nucleus.

Keywords: Pax6, crispr/cas9, limbal stem cells, aniridia, gene therapy

Procedia PDF Downloads 193
8378 Modeling and Analyzing the WAP Class 2 Wireless Transaction Protocol Using Event-B

Authors: Rajaa Filali, Mohamed Bouhdadi

Abstract:

This paper presents an incremental formal development of the Wireless Transaction Protocol (WTP) in Event-B. WTP is part of the Wireless Application Protocol (WAP) architecture and provides a reliable request-response service. To model and verify the protocol, we use the formal technique Event-B, which provides an accessible and rigorous development method. This interaction between modelling and proving reduces complexity and helps to eliminate misunderstandings, inconsistencies, and specification gaps. As a result, the verification of WTP allows us to find some deficiencies in the current specification.
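The request-response behaviour that the Event-B refinement chain formalizes can be illustrated, very loosely, as a small transition system. The sketch below is not Event-B; it is a hypothetical Python toy in which an event that is not enabled in the current state plays the role of a failed guard, the kind of specification gap a prover would flag. State and event names are illustrative, not taken from the WTP specification.

```python
# Toy abstraction of a reliable request-response exchange (WTP class 2 style).
# States and event names are illustrative placeholders, not from the WAP spec.
INIT, REQ_SENT, RESP_SENT, ACKED = "INIT", "REQ_SENT", "RESP_SENT", "ACKED"

TRANSITIONS = {
    (INIT, "invoke"): REQ_SENT,       # initiator sends the request
    (REQ_SENT, "result"): RESP_SENT,  # responder returns the result
    (RESP_SENT, "ack"): ACKED,        # initiator acknowledges the result
}

def run(events, state=INIT):
    """Replay a trace of events; an event with no enabled transition is
    rejected, loosely analogous to a violated guard in an Event-B machine."""
    for event in events:
        if (state, event) not in TRANSITIONS:
            raise ValueError(f"event {event!r} not enabled in state {state}")
        state = TRANSITIONS[(state, event)]
    return state
```

A well-formed trace reaches the final state, while an out-of-order event is rejected immediately, which is the behaviour a refinement proof would have to guarantee.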

Keywords: event-B, wireless transaction protocol, proof obligation, refinement, Rodin, ProB

Procedia PDF Downloads 303
8377 Effect of Humic Acids on Agricultural Soil Structure and Stability and Its Implication on Soil Quality

Authors: Omkar Gaonkar, Indumathi Nambi, Suresh G. Kumar

Abstract:

The functional and morphological aspects of soil structure determine the soil quality. The dispersion of colloidal soil particles, especially the clay fraction, and the rupture of soil aggregates, both of which play an important role in soil structure development, lead to degradation of soil quality. The main objective of this work was to determine the effect of the behaviour of soil colloids on agricultural soil structure and quality. The effect of commercial humic acid and soil natural organic matter on the electrical and structural properties of the soil colloids was also studied. Agricultural soil belonging to the sandy loam texture class, from the northern part of India, was considered in this study. In order to understand the changes in soil quality in the presence and absence of humic acids, the soil fabric and structure were analyzed by X-ray diffraction (XRD), Fourier Transform Infrared (FTIR) Spectroscopy and Scanning Electron Microscopy (SEM). Electrical properties of natural soil colloids in aqueous suspensions were assessed by zeta potential measurements at varying pH values with and without the presence of humic acids. The influence of natural organic matter was analyzed by oxidizing the natural soil organic matter with hydrogen peroxide. The zeta potential of the soil colloids was found to be negative in the pH range studied. The results indicated that hydrogen peroxide treatment leads to deflocculation of colloidal soil particles. In addition, humic acids undergo effective adsorption onto the soil surface, imparting a more negative zeta potential to the colloidal soil particles. The soil hydrophilicity decreased in the presence of humic acids, which was confirmed by surface free energy determination. Thus, it can be concluded that the presence of humic acids altered the soil fabric and structure, thereby affecting the soil quality. This study assumes significance in understanding soil aggregation and the interactions at the soil solid-liquid interface.

Keywords: humic acids, natural organic matter, zeta potential, soil quality

Procedia PDF Downloads 232
8376 Embedding Employability in the Curriculum: Experiences from New Zealand

Authors: Narissa Lewis, Susan Geertshuis

Abstract:

The global and national employability agenda is changing the higher education landscape as academic staff are faced with the responsibility of developing employability capabilities and attributes in addition to delivering discipline-specific content and skills. They realise that the shift towards teaching sustainable capabilities means a shift in the way they teach, but what that shift should be, or how they should bring it about, is unclear. As part of a nationally funded project, representatives from several New Zealand (NZ) higher education institutions and the NZ Association of Graduate Employers partnered to discover, trial and disseminate means of embedding employability in the curriculum. Findings from four focus groups (n=~75) and individual interviews (n=20) with staff from several NZ higher education institutions identified factors that enable or hinder embedded employability development within their respective institutions. Participants believed that higher education institutions have a key role in developing graduates for successful lives and careers; however, this requires a significant shift in culture within their respective institutions. Participants cited three main barriers: lack of strategic direction, support and guidance; lack of understanding and awareness of employability; and lack of resourcing and staff capability. Without adequate understanding and awareness of employability, participants believed it is difficult to understand what employability is, let alone how it can be embedded in the curriculum. This presentation will describe some of the impacts that the employability agenda has on staff as they try to move from traditional to contemporary forms of teaching to develop the employability attributes of students. Changes at the institutional level are required to support contemporary forms of teaching; however, this is often beyond the sphere of influence at the teaching staff level. 
The study identified that small changes to teaching practices were necessary and a simple model to facilitate change from traditional to contemporary forms of teaching was developed. The model provides a framework to identify small but impactful teaching practices and exemplar teaching practices were identified. These practices were evaluated for transferability into other contexts to encourage small but impactful changes to embed employability in the curriculum.

Keywords: curriculum design, change management, employability, teaching exemplars

Procedia PDF Downloads 317
8375 The Relevance of (Re)Designing Professional Paths with Unemployed Working-Age Adults

Authors: Ana Rodrigues, Maria Cadilhe, Filipa Ferreira, Claudia Pereira, Marta Santos

Abstract:

Professional paths must be understood in the multiplicity of their possible configurations. While some actors tend to represent their path as a harmonious succession of positions in the life cycle, most recognize the existence of unforeseen and uncontrollable bifurcations caused, for example, by a work accident or by going through a period of unemployment. Considering the intensified challenges posed by ongoing societal changes (e.g., technological and demographic), and looking at the Portuguese context, where the unemployment rate continues to be more evident in certain age groups, such as individuals aged 45 years or over, it is essential to support those adults by providing strategies capable of sustaining them through professional transitions, this being a joint responsibility of governments, employers, workers, educational institutions, among others. Concerned about these issues, Porto City Council launched the challenge of designing and implementing a Lifelong Career Guidance program, which was answered with the presentation of a customized conceptual and operational model: groWing|Lifelong Career Guidance. A pilot project targeting working-age adults (35 or older) who were unemployed was carried out, aiming to support them in reconstructing their professional paths through the recovery of their past experiences and through reflection on dimensions such as skills, interests, constraints, and the labor market. An action research approach was used to assess the proposed model, namely the perceived relevance of the theme and of the project, by the adults themselves (N=44), employment professionals (N=15) and local companies (N=15), in an integrated manner. A set of activities was carried out: a train-the-trainer course and a monitoring session with employment professionals; collective and individual sessions with adults, including a monitoring session as well; and a workshop with local companies. 
Support materials for individual/collective reflection about professional paths were created and adjusted for each involved agent. An evaluation model was co-built by different stakeholders. Assessment was carried out through a form created for the purpose, completed at the end of the different activities, which allowed us to collect quantitative and qualitative data. Statistical analysis was carried out using SPSS software. Results showed that the participants, as well as the employment professionals and the companies involved, considered both the topic and the project extremely relevant. Also, adults saw the project as an opportunity to reflect on their paths and become aware of the opportunities and the conditions necessary to achieve their goals; the professionals highlighted the support given by an integrated methodology and the existence of tools to assist the process; companies valued the opportunity to think about the topic and the possible initiatives they could implement within the company to diversify their recruitment pool. The results allow us to conclude that, in the local context under study, there is an alignment between different agents regarding the pertinence of supporting adults with work experience in professional transitions, with the project seen as a relevant strategy to address this issue, which justifies extending it in time and to other working-age adults in the future.

Keywords: professional paths, action research, turning points, lifelong career guidance, relevance

Procedia PDF Downloads 73
8374 Challenges of Blockchain Applications in the Supply Chain Industry: A Regulatory Perspective

Authors: Pardis Moslemzadeh Tehrani

Abstract:

Due to the emergence of blockchain technology and the benefits of cryptocurrencies, intelligent or smart contracts are gaining traction. Artificial intelligence (AI) is transforming our lives, and it is being embraced by a wide range of sectors. Smart contracts, which are at the heart of blockchains, incorporate AI characteristics. Such contracts are referred to as "smart" because the underlying technology allows contracting parties to agree on terms expressed in computer code that defines machine-readable instructions for computers to follow under specific situations. Execution happens automatically once the conditions are met. Initially utilised for financial transactions, blockchain applications have since expanded to the insurance and medical sectors, as well as supply networks. Raw material acquisition by suppliers, design and fabrication by manufacturers, delivery of final products to consumers, and even post-sales logistics assistance are all part of supply chains. Many of the issues linked with managing supply chains arise at the planning and coordination stages and, despite their complexity, can be implemented in a smart contract on a blockchain. Manufacturing delays and limited oversight of third-party product components have raised concerns about the integrity and accountability of supply chains for food and pharmaceutical items. Other concerns include regulatory compliance in multiple jurisdictions and transportation conditions (for instance, many products must be kept in temperature-controlled environments to ensure their effectiveness). Products are handled by several providers before reaching customers in modern economic systems. Information is sent between suppliers, shippers, distributors, and retailers at every stage of the production and distribution process, and it travels more effectively when intermediaries are eliminated from the equation. 
The usage of blockchain technology could be a viable solution to these coordination issues. In blockchains, smart contracts allow for the rapid transmission of production data, logistical data, inventory levels, and sales data. This research investigates the legal and technical advantages and disadvantages of AI-blockchain technology in the supply chain business. It aims to uncover the applicable legal problems and barriers to the use of AI-blockchain technology in supply chains, particularly in the food industry. It also discusses the essential legal and technological issues and impediments to supply chain implementation for stakeholders, as well as methods for overcoming them before releasing the technology to clients. Because there has been little research done on this topic, it is difficult for industrial stakeholders to grasp how blockchain technology could be used in their respective operations. As a result, the focus of this research will be on building advanced and complex contractual terms in supply chain smart contracts on blockchains to cover all unforeseen supply chain challenges.

Keywords: blockchain, supply chain, IoT, smart contract

Procedia PDF Downloads 106
8373 The Influence of Firm Characteristics on Profitability: Evidence from Italian Hospitality Industry

Authors: Elisa Menicucci, Guido Paolucci

Abstract:

Purpose: The aim of this paper is to investigate the factors influencing profitability in the Italian hospitality industry during the period 2008-2016. Design/methodology/approach: This study examines profitability and its determinants using a sample of 2366 Italian hotel firms. First, we use a multidimensional measure of profitability including attributes such as return on equity, return on assets and occupancy rate. Second, we examine variables that are potentially related to performance and sort these into five categories: market variables, business model, ownership structure, management education and control variables. Findings: The results show that the financial crisis, business model and ownership structure influence the profitability of hotel firms. Specific factors such as internationalization, location, a firm’s declaring accommodation as its primary activity, and chain affiliation are positively associated with profitability. We also find that larger hotel firms have higher performance rankings, while hotels with higher operating cash flow volatility, greater sales volatility and a higher occurrence of losses have lower profitability. Research limitations/implications: The findings suggest the importance of considering firm-specific factors when evaluating the profitability of a hotel firm. The results also provide evidence for academics to critically evaluate factors that would ensure the profitability of hotels in developed countries such as Italy. Practical implications: This investigation offers valuable information and strategic implications for government, tourism policymakers, tourist hotel owners, hoteliers and tourism managers in their decision-making. Originality/value: This paper provides interesting insights into the characteristics and practices of profitable hotels in Italy. Few econometric studies have empirically explored the determinants of performance in the European hospitality field so far. 
Therefore, this paper tries to close an important gap in the existing literature by improving the understanding of profitability in the Italian hospitality industry.

Keywords: hotel firms, profitability, determinants, Italian hospitality industry

Procedia PDF Downloads 362
8372 The Effect of Penalizing Wrong Answers in the Computerized Modified Multiple Choice Testing System

Authors: Min Hae Song, Jooyong Park

Abstract:

Even though assessment using information and communication technology will most likely lead the future of educational assessment, there is little research on this topic. Computerized assessment will not only cut costs but also measure students' performance in ways not possible before. In this context, this study introduces a tool which can overcome the problems of multiple-choice tests. Multiple-choice (MC) tests are efficient for automatic grading; however, their structural problems allow students to find the correct answer from the options even when they do not know the answer. A computerized modified multiple-choice testing system (CMMT) was developed using the interactivity of computers: it presents the question first, and the options later, for a short time, only when the student requests them. This study was conducted to find out whether penalizing wrong answers in CMMT could reduce random guessing. We checked whether students knew the answers by having them respond to short-answer tests before choosing among the given options in the CMMT or MC format. Ninety-four students were tested with the instruction that they would be penalized for wrong answers, but not for no response. There were 4 experimental conditions: two conditions of high or low penalty, each in the traditional multiple-choice or CMMT format. In the low-penalty condition, the penalty rate was the probability of getting the correct answer by random guessing. In the high-penalty condition, students were penalized at twice the rate of the low-penalty condition. The results showed that the number of no responses was significantly higher for the CMMT format and the number of random guesses was significantly lower for the CMMT format. There were no significant differences between the two penalty conditions. This result may be due to the fact that the actual score difference between the two conditions was too small. 
In the discussion, the possibility of applying CMMT format tests while penalizing wrong answers in actual testing settings was addressed.
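The two penalty conditions can be sketched as a simple scoring rule. The function below is a hypothetical illustration (the abstract does not give the number of options or the exact scoring formula): omissions score zero, wrong answers are penalized at the chance rate 1/k for k options in the low condition, and at twice that rate in the high condition.

```python
def score_test(responses, answer_key, n_options=4, high_penalty=False):
    """Score a multiple-choice test where wrong answers are penalized but
    omissions (None) are not. The low-condition penalty equals the chance
    of guessing correctly (1/n_options); the high condition doubles it.
    The option count and formula are illustrative assumptions."""
    penalty = 1.0 / n_options
    if high_penalty:
        penalty *= 2.0
    score = 0.0
    for given, correct in zip(responses, answer_key):
        if given is None:
            continue              # no response: no credit, no penalty
        elif given == correct:
            score += 1.0          # correct answer
        else:
            score -= penalty      # wrong answer: penalized
    return score
```

With four options, answering two items correctly, omitting one and missing one yields 1.75 points in the low condition and 1.5 in the high condition, so the gap between conditions is indeed small for short tests.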

Keywords: computerized modified multiple choice test format, multiple-choice test format, penalizing, test format

Procedia PDF Downloads 158
8371 An Assessment of Impact of Financial Statement Fraud on Profit Performance of Manufacturing Firms in Nigeria: A Study of Food and Beverage Firms in Nigeria

Authors: Wale Agbaje

Abstract:

The aim of this research study is to assess the impact of financial statement fraud on the profitability of selected Nigerian manufacturing firms covering 2002-2016. The specific objectives were to ascertain the effect of incorrect asset valuation on return on assets (ROA) and the relationship between improper expense recognition and ROA. To achieve these objectives, a descriptive research design was used, while secondary data were collected from the financial reports of the selected firms and the website of the Securities and Exchange Commission. The analysis of covariance (ANCOVA) was used, and the STATA II econometric method was employed in the analysis of the data. The Altman model and the operating expenses ratio were adopted in the analysis of the financial reports to create a dummy variable for the selected firms from 2002-2016, and validation of the parameters was ascertained using various statistical techniques such as the t-test, coefficient of determination (R2), F-statistics and Wald chi-square. Two hypotheses were formulated and tested using the t-statistic at the 5% level of significance. The findings of the analysis revealed that there is a significant relationship between financial statement fraud and profitability in the Nigerian manufacturing industry. Incorrect asset valuation was revealed to have a significant positive relationship with return on assets (ROA), which serves as a proxy for profitability, as does improper expense recognition. The implication of this is that distortion of asset valuation and expense recognition leads to decreasing profit in the long run in the manufacturing industry. 
The study therefore recommended that pragmatic policy options be adopted in the manufacturing industry to effectively manage incorrect asset valuation and improper expense recognition in order to enhance industry performance in the country, and that measures to stem financial statement fraud be adequately built into the internal control systems of manufacturing firms for the effective running of the manufacturing industry in Nigeria.
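As a rough illustration of how the Altman model can yield a dummy variable, the sketch below uses the weights of Altman's original Z-score for public manufacturing firms and flags firms in the distress zone (Z < 1.81). The abstract does not state which Z-score variant or cutoff the study adopted, so both are assumptions here.

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Altman's original Z-score (public manufacturing firms): inputs are
    working capital, retained earnings, EBIT and sales each scaled by total
    assets, plus market value of equity over total liabilities."""
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 1.0 * sales_ta)

def distress_dummy(z, cutoff=1.81):
    """Dummy variable: 1 if the firm falls in the distress zone (a possible
    red flag for statement manipulation), else 0. The cutoff is assumed."""
    return 1 if z < cutoff else 0
```

A firm with healthy ratios scores above the cutoff and is coded 0, while a weak firm is coded 1; a panel of such dummies can then enter the ANCOVA alongside the operating expenses ratio.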

Keywords: Altman's model, improper expense recognition, incorrect asset valuation, return on assets

Procedia PDF Downloads 145
8370 Broadband Optical Plasmonic Antennas Using Fano Resonance Effects

Authors: Siamak Dawazdah Emami, Amin Khodaei, Harith Bin Ahmad, Hairul A. Adbul-Rashid

Abstract:

The Fano resonance effect in plasmonic nanoparticle materials gives such materials a number of unique optical properties and potential applicability for sensing, nonlinear devices and slow-light devices. A Fano resonance is a consequence of coherent interference between superradiant and subradiant hybridized plasmon modes. Incident light excites the superradiant modes, whereas the subradiant modes possess zero or small net dipole moments and a correspondingly negligible coupling with light. This research work details the derivation of an electrodynamic coupling model for the interaction of dipolar transitions and radiation in plasmonic nanoclusters such as quadrimers, pentamers and heptamers. The directivity calculation is analyzed in order to quantify the redirection of emission. The geometry of a configured array of nanostructures strongly influenced the transmission and reflection properties, and consequently the directivity of each antenna was related to the nanosphere size and the gap distances between the nanospheres in each model’s structure. A well-separated configuration of nanospheres resulted in the structure behaving similarly to monomers, with the spectral peak of a broad superradiant mode centered in the vicinity of the 560 nm wavelength. Reducing the distance between ring nanospheres in pentamers and heptamers to 20-60 nm caused the coupling factor and charge distributions to increase and invoked a subradiant mode centered in the vicinity of 690 nm. Increasing the distance of the outer ring’s nanospheres from the central nanospheres caused the coupling factor to decrease, the coupling factor being inversely proportional to the cube of the distance between nanospheres. This phenomenon led to a dramatic decrease of the superradiant mode at a 200 nm distance between the central nanosphere and the outer ring. 
Effects from a superradiant mode vanished beyond a 240 nm distance between central and outer ring nanospheres.
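The stated inverse-cube dependence of the coupling factor on gap distance can be written as a one-line scaling law. The reference coupling strength and reference distance below are placeholders, since the abstract reports only the proportionality.

```python
def coupling_factor(d_nm, k0=1.0, d0_nm=20.0):
    """Near-field plasmonic coupling scaling with gap distance d:
    k(d) = k0 * (d0 / d)**3, i.e. inversely proportional to the cube of
    the distance. k0 and d0 are illustrative reference values."""
    return k0 * (d0_nm / d_nm) ** 3
```

Doubling the gap from 20 nm to 40 nm reduces the coupling by a factor of eight, consistent with the rapid fading of the subradiant and superradiant features at large separations.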

Keywords: Fano resonance, optical antenna, plasmonic, nanoclusters

Procedia PDF Downloads 421
8369 Investigation of Geothermal Gradient of the Niger Delta from Recent Studies

Authors: Adedapo Jepson Olumide, Kurowska Ewa, K. Schoeneich, Ikpokonte A. Enoch

Abstract:

In this paper, subsurface temperatures measured from continuous temperature logs were used to determine the geothermal gradient of the Niger Delta sedimentary basin. The measured temperatures were corrected to true subsurface temperatures by applying the American Association of Petroleum Geologists (AAPG) correction factor, the borehole temperature correction factor with La Max’s correction factor, and the Zeta Utilities borehole correction factor. The geothermal gradient in this basin ranges from 1.2 °C/100 m to 7.56 °C/100 m. Six geothermal anomaly centres were observed at depth in the southern parts of the Abakaliki anticlinorium around the Onitsha, Ihiala and Umuahia areas and named A1 to A6, while two more centres appeared at depths of 3500 m and 4000 m, named A7 and A8 respectively. Anomaly A1 describes the southern end of the Abakaliki anticlinorium and extends southwards; anomalies A2 to A5 were found to be associated with a NW-SE structural alignment of the Calabar hinge line, with structures describing the edge of the Niger Delta basin against the basement block of the Oban massif. Anomaly A6 is located in the south-eastern part of the basin offshore, while A7 and A8 are located in the south-western part of the basin offshore. At the average exploratory depth of 3500 m, the geothermal gradient values for anomalies A1, A2, A3, A4, A5, A6, A7 and A8 are 6.5, 1.75, 7.5, 1.25, 6.5, 5.5, 6.0 and 2.25 °C/100 m respectively. The A8 area may yield higher thermal values at depths greater than 3500 m. These results show that the areas of anomalies A1, A3, A5, A6 and A7 are potentially prospective and explorable for geothermal energy using abandoned oil wells in the study area. Anomalies A1, A3, A5 and A6 occur in areas where drilled boreholes were not exploitable for oil and gas, while the remaining areas, where wells are exploitable, show no geothermal anomaly. Geothermal energy is environmentally friendly, clean and renewable.
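The gradient values above follow from the usual definition: gradient = (corrected subsurface temperature − surface temperature) / depth, expressed per 100 m. The sketch below uses an illustrative surface temperature of 27 °C and does not reproduce the paper's correction factors (AAPG and others), which would be applied to the logged temperature first.

```python
def geothermal_gradient(t_corrected_c, t_surface_c, depth_m):
    """Geothermal gradient in degrees C per 100 m, from a corrected
    subsurface temperature, a surface temperature, and the depth."""
    return (t_corrected_c - t_surface_c) / depth_m * 100.0
```

For example, a corrected temperature of 120 °C at 3500 m with a 27 °C surface temperature gives about 2.66 °C/100 m, inside the 1.2-7.56 °C/100 m range reported for the basin.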

Keywords: temperature logs, geothermal gradient anomalies, alternative energy, Niger delta basin

Procedia PDF Downloads 264
8368 Conjunctive Management of Surface and Groundwater Resources under Uncertainty: A Retrospective Optimization Approach

Authors: Julius M. Ndambuki, Gislar E. Kifanyi, Samuel N. Odai, Charles Gyamfi

Abstract:

Conjunctive management of surface and groundwater resources is a challenging task due to the spatial and temporal variability of the hydrology as well as the hydrogeology of water storage systems. Surface water-groundwater hydrogeology is highly uncertain; it is thus imperative that this uncertainty is explicitly accounted for when managing water resources. Various methodologies have been developed and applied by researchers in an attempt to account for this uncertainty. For example, simulation-optimization models are often used for conjunctive water resources management. However, direct application of such an approach, in which all realizations are considered at each iteration of the optimization process, leads to a very expensive optimization in terms of computational time, particularly when the number of realizations is large. The aim of this paper, therefore, is to introduce and apply an efficient approach, referred to as Retrospective Optimization Approximation (ROA), that can be used for optimizing conjunctive use of surface water and groundwater over multiple hydrogeological model simulations. This work is based on a stochastic simulation-optimization framework using the recently emerged technique of sample average approximation (SAA), a sampling-based method implemented within the ROA approach. The ROA approach solves and evaluates a sequence of generated optimization sub-problems with an increasing number of realizations (sample size). The response matrix technique was used to link the simulation model with the optimization procedure. The k-means clustering sampling technique was used to map the realizations. The methodology is demonstrated through application to a hypothetical example, in which the generated optimization sub-problems were solved and analysed using the “Active-Set” core optimizer implemented under the MATLAB 2014a environment. 
Through the k-means clustering sampling technique, the ROA – Active-Set procedure was able to arrive at a (nearly) converged maximum expected total optimal conjunctive water-use withdrawal rate within relatively few iterations (6 to 7). Results indicate that the ROA approach is a promising technique for optimizing conjunctive surface water and groundwater withdrawal rates under hydrogeological uncertainty.
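The ROA iteration can be sketched as follows: draw a growing sample of realizations, solve the sample-average sub-problem over that sample, and warm-start the next sub-problem from the current solution. The toy below substitutes plain random sampling for the paper's k-means clustering and uses a trivial robust sub-problem (the withdrawal rate is capped by the least favourable sampled capacity); both simplifications are assumptions for illustration.

```python
import random

def roa(realizations, solve_subproblem, sample_sizes=(2, 4, 8), seed=0):
    """Retrospective Optimization Approximation sketch: solve a sequence of
    sub-problems over samples of increasing size, warm-starting each one
    from the previous solution."""
    rng = random.Random(seed)
    x = None
    for n in sample_sizes:
        sample = rng.sample(realizations, min(n, len(realizations)))
        x = solve_subproblem(sample, warm_start=x)
    return x

def max_safe_withdrawal(sample, warm_start=None):
    """Toy sub-problem: the largest withdrawal rate feasible under every
    sampled realization is the smallest sampled capacity."""
    return min(sample)
```

Because the final sub-problem sees the full set of realizations, the sequence converges to the robust optimum while the early, cheap sub-problems do most of the search.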

Keywords: conjunctive water management, retrospective optimization approximation approach, sample average approximation, uncertainty

Procedia PDF Downloads 220
8367 Developing a GIS-Based Tool for the Management of Fats, Oils, and Grease (FOG): A Case Study of Thames Water Wastewater Catchment

Authors: Thomas D. Collin, Rachel Cunningham, Bruce Jefferson, Raffaella Villa

Abstract:

Fats, oils and grease (FOG) are by-products of food preparation and cooking processes. FOG enters wastewater systems through a variety of sources such as households, food service establishments, and industrial food facilities. Over time, if no source control is in place, FOG builds up on pipe walls, leading to blockages and potentially to sewer overflows, which are a major risk to the environment and human health. UK water utilities spend millions of pounds annually trying to control FOG. Although UK legislation specifies that the discharge of such material is against the law, it is often complicated for water companies to identify and prosecute offenders, which leads to uncertainties regarding the approach to take in terms of FOG management. Research is needed to seize the full potential of implementing current practices. The aim of this research was to undertake a comprehensive study to document the extent of FOG problems in sewer lines and reinforce existing knowledge. Data were collected to develop a model estimating the quantities of FOG available for recovery within Thames Water wastewater catchments. Geographical Information System (GIS) software was used to integrate the data with a geographical component. FOG was responsible for at least a third of sewer blockages in the Thames Water wastewater area. A waste-based approach was developed through an extensive review to estimate the potential for FOG collection and recovery. Three main sources were identified: residential, commercial and industrial. Commercial properties were identified as one of the major FOG producers. The total potential FOG generated was estimated for the 354 wastewater catchments. Additionally, raw and settled sewage were sampled and analysed for FOG (as hexane extractable material) monthly at 20 sewage treatment works (STW) for three years. A good correlation was found between the sampled FOG and population equivalent (PE). 
On average, a difference of 43.03% was found between the estimated FOG (waste-based approach) and the sampled FOG (raw sewage sampling). It was suggested that the approach undertaken could overestimate the FOG available, that the sampling could capture only a fraction of the FOG arriving at STW, and/or that the difference could account for FOG accumulating in sewer lines. Furthermore, it was estimated that on average FOG could contribute up to 12.99% of the primary sludge removed. The model was further used to investigate the relationship between estimated FOG and number of blockages: the higher the FOG potential, the higher the number of FOG-related blockages. As reported in the literature, FOG was one of the main causes of sewer blockages. By identifying critical areas (i.e. high FOG potential and high number of FOG blockages), the GIS-based tool further explored the potential for source control in terms of 'sewer relief' and waste recovery, helping to target where the benefits of implementing management strategies would be highest. However, FOG is still likely to persist throughout the networks, and further research is needed to assess downstream impacts (i.e. at STW).
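The gap between the waste-based estimate and the sampled load can be framed as a simple percentage difference; a minimal illustrative sketch in Python (the function and the catchment figures are hypothetical, not the study's model or data):

```python
def percent_difference(estimated: float, sampled: float) -> float:
    """Relative gap between a waste-based FOG estimate and the FOG load
    measured in raw sewage, as a percentage of the estimate."""
    return (estimated - sampled) / estimated * 100.0

# Hypothetical catchment: 100 t/yr estimated, 57 t/yr sampled at the STW
print(round(percent_difference(100.0, 57.0), 2))  # -> 43.0
```

A positive value indicates the waste-based approach exceeds what is measured at the works, consistent with FOG being retained in the network.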

Keywords: fat, FOG, GIS, grease, oil, sewer blockages, sewer networks

Procedia PDF Downloads 193
8366 Microswitches with Sputtered Au, AuPd, Au-on-AuPt, and AuPtCu Alloy Electric Contacts

Authors: Nikolay Konukhov

Abstract:

This paper reports on a new analytic model for predicting microcontact resistance and on the design, fabrication, and testing of microelectromechanical systems (MEMS) metal contact switches with sputtered bimetallic (i.e., gold (Au)-on-Au-platinum (Pt), Au-on-Au-(6.3at%)Pt), binary alloy (i.e., Au-palladium (Pd), Au-(3.7at%)Pd), and ternary alloy (i.e., Au-Pt-copper (Cu), Au-(5.0at%)Pt-(0.5at%)Cu) electric contacts. The microswitches with bimetallic and binary alloy contacts resulted in contact resistance values between 1–2
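The abstract does not detail the authors' new analytic model; as a classical baseline, single-asperity microcontact resistance is often approximated by Holm's constriction formula R = ρ/(2a). A minimal sketch, with a hypothetical a-spot radius chosen purely for illustration:

```python
def holm_resistance(resistivity: float, radius: float) -> float:
    """Holm's constriction resistance R = rho / (2a) for a single
    circular contact spot of radius `a` (diffusive transport regime)."""
    return resistivity / (2.0 * radius)

# Sputtered Au film (rho ~ 2.44e-8 ohm*m), hypothetical 10 nm a-spot
print(round(holm_resistance(2.44e-8, 10e-9), 2))  # -> 1.22
```

Real microswitch contacts involve multiple a-spots, film resistance, and possibly ballistic transport, which refined models account for.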

Keywords: alloys, electric contacts, microelectromechanical systems (MEMS), microswitch

Procedia PDF Downloads 158
8365 Geographic Information System and Ecotourism Sites Identification of Jamui District, Bihar, India

Authors: Anshu Anshu

Abstract:

In the red corridor, famed for Left Wing Extremism, lies the small district of Jamui in Bihar, India. The district lies at 24º20´ N latitude and 86º13´ E longitude, covering an area of 3,122.8 km². The undulating topography, with widespread forests, provides a pristine environment for an invigorating tourist experience. The natural landscape of forests, wildlife and rivers, and a cultural landscape dotted with historical and religious places, make the district highly suitable for tourism. The study is primarily concerned with the identification of potential ecotourism sites using a Geographic Information System. The work comprised data preparation, analysis and, finally, identification of ecotourism sites. Secondary data included Survey of India topographical sheets (R.F. 1:50,000) covering the area of Jamui district and the District Census Handbook (Census of India, 2011). ERDAS Imagine and ArcView were used for digitization and for the creation of a DEM (Digital Elevation Model) of the district, depicting the relief and topography, and to generate thematic maps. The thematic maps were refined using geo-processing tools. The buffer technique was used for the accessibility analysis. Finally, all the maps, including the buffer maps, were overlaid to find the areas with potential for the development of ecotourism sites in Jamui district. Spatial data (relief, slopes, settlements, transport network and forests of Jamui district) were marked and identified, followed by a buffer analysis to determine the accessibility of features such as roads and railway stations from the sites available for ecotourism development. Buffer analysis was also carried out to obtain the spatial proximity of major river banks, lakes and dam sites to be selected for promoting sustainable ecotourism. Overlay analysis was conducted using the geo-processing tools. 
The Digital Elevation Model (DEM) was generated, and relevant themes such as roads, forest areas and settlements were draped on it to assess the topography and other land uses of the district and to delineate potential zones of ecotourism development. Development of ecotourism in Jamui faces several challenges. The district lies in the portion of Bihar that forms part of the 'red corridor' of India; the hills and dense forests are prominent hideouts and training grounds for extremists. It is well known that any kind of political instability, war or act of violence directly influences the propensity to travel and hinders all non-essential travel to such areas. The development of ecotourism in the district can bring change and overall growth to this area, with communities becoming more involved in economically sustainable activities. It is a known fact that poverty and social exclusion are the main forces that push people towards violence. All over the world, tourism has been used as a tool to eradicate poverty and generate goodwill among people. Tourism, in a sustainable form, should be promoted in the district to integrate local communities into the development process and to distribute the fruits of development equitably.
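The buffer-based accessibility check described above reduces to a distance test in projected coordinates; a minimal sketch assuming planar coordinates in kilometres (the function and values are illustrative, not the study's GIS workflow):

```python
import math

def within_buffer(site: tuple, feature: tuple, radius_km: float) -> bool:
    """True if a candidate ecotourism site lies within `radius_km` of a
    feature (road, railway station, river bank), using simple planar
    coordinates in km as a stand-in for projected GIS data."""
    dx = site[0] - feature[0]
    dy = site[1] - feature[1]
    return math.hypot(dx, dy) <= radius_km

# Hypothetical site 3 km east of a railway station, with a 5 km buffer
print(within_buffer((3.0, 0.0), (0.0, 0.0), 5.0))  # -> True
```

In practice a GIS package performs this with polygon buffers around line and point features, then intersects them with candidate sites in an overlay.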

Keywords: buffer analysis, digital elevation model, ecotourism, red corridor

Procedia PDF Downloads 249
8364 The Multidisciplinary Treatment in Residence Care Clinic for Treatment of Feeding and Eating Disorders

Authors: Yuri Melis, Mattia Resteghini, Emanuela Apicella, Eugenia Dozio, Leonardo Mendolicchio

Abstract:

Aim: This retrospective study analyzed psychometric, anthropometric and body-composition values in patients at admission and at discharge from a residential care clinic for eating and feeding disorders (EFDs). Method: The sample comprised 59 patients (mean age 33.50), divided into subgroups: Anorexia Nervosa (AN, N=28), Bulimia Nervosa (BN, N=13) and Binge Eating Disorder (BED, N=14), recruited from a residential care clinic for eating and feeding disorders. Psychometric levels were measured with self-report questionnaires: the Eating Disorders Inventory-3 (EDI-3), the Body Uneasiness Test (BUT) and the Minnesota Multiphasic Personality Inventory (MMPI-2). Anthropometric and nutritional values were collected via bioelectrical impedance assessment (BIA) and body mass index (BMI). Measurements were made at the beginning and at the end of hospitalization, with an average recovery time of about 8.6 months. Results: The analysis showed statistically significant changes (p < 0.05, power = 0.950) from T0 (start of recovery) to T1 (end of recovery) in the clinical scales of the MMPI-2. AN group: (Hypochondria T0 64.14 – T1 56.39) (Depression T0 72.93 – T1 59.50) (Hysteria T0 61.29 – T1 56.17) (Psychopathic deviation T0 64.00 – T1 60.82) (Paranoia T0 63.82 – T1 56.14) (Psychasthenia T0 63.82 – T1 57.86) (Schizophrenia T0 64.68 – T1 60.43) (Obsessiveness T0 60.36 – T1 55.68). BN group: (Hypochondria T0 64.08 – T1 47.54) (Depression T0 67.46 – T1 52.46) (Hysteria T0 60.62 – T1 47.84) (Psychopathic deviation T0 65.69 – T1 58.92) (Paranoia T0 67.46 – T1 55.23) (Psychasthenia T0 60.77 – T1 53.77) (Schizophrenia T0 64.68 – T1 60.43) (Obsessiveness T0 62.92 – T1 54.08). BED group: (Hypochondria T0 59.43 – T1 53.14) (Depression T0 66.71 – T1 54.57) (Hysteria T0 59.86 – T1 53.82) (Psychopathic deviation T0 67.39 – T1 59.03) (Paranoia T0 58.57 – T1 53.21) (Psychasthenia T0 61.43 – T1 53.00) (Schizophrenia T0 62.29 – T1 56.36) (Obsessiveness T0 58.57 – T1 48.64). The EDI-3 mean value was higher than the clinical cut-off at T0; at T1 there was a significant reduction in the general mean value. The same result was present in the BUT between T0 and T1. BMI mean values: AN group (T0 14.83 – T1 18.41), BN group (T0 20 – T1 21.33), BED group (T0 42.32 – T1 34.97). Phase angle results: AN group (T0 4.78 – T1 5.64), BN group (T0 6 – T1 6.53), BED group (T0 6 – T1 6.72). Discussion and conclusion: Across the whole sample, seriously altered psychiatric and clinical conditions were evident at the beginning of recovery. The conclusion to be drawn from this analysis is that a multidisciplinary approach encompassing the entire care of the subject (pharmacological treatment, analytical psychotherapy, psychomotricity, nutritional rehabilitation, and rehabilitative and educational activities) allowed the subjects in our sample to restore psychopathological and metabolic values to below the clinical cut-off.
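The T0-to-T1 scale comparisons amount to mean change scores; a minimal sketch using three of the reported AN-group MMPI-2 values (the helper function is hypothetical, not the study's analysis code):

```python
def mean_change(t0_scores: list, t1_scores: list) -> float:
    """Mean admission-to-discharge change across clinical scales:
    positive = scores rose, negative = scores fell."""
    deltas = [t1 - t0 for t0, t1 in zip(t0_scores, t1_scores)]
    return sum(deltas) / len(deltas)

# AN-group T-scores reported above for Hypochondria, Depression, Hysteria
t0 = [64.14, 72.93, 61.29]
t1 = [56.39, 59.50, 56.17]
print(round(mean_change(t0, t1), 2))  # -> -8.77
```

The negative mean change reflects the reported drop in clinical-scale elevations over the roughly 8.6-month stay.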

Keywords: feeding and eating disorders, anorexia nervosa, care clinic treatment, multidisciplinary treatment

Procedia PDF Downloads 113
8363 An Evaluation Study of Sleep and Sleep-Related Factors in Clinic Clients with Sleep Difficulties

Authors: Chi-Feng Lai, Wen-Chun Liao

Abstract:

Many people in Taiwan are bothered by sleep difficulties; however, the majority of patients receive medical treatment without a comprehensive sleep assessment. Formulating a comprehensive assessment of sleep difficulties in clinical settings remains a challenge, even though many assessment tools exist in the literature. This study implemented reliable and valid 'comprehensive sleep assessment' (CSA) scales in a medical center and explored differences in sleep-related factors between clinic clients with and without complaints of sleep difficulty. The CSA scales comprise five dimensions ('personal factors', 'physiological factors', 'psychological factors', 'social factors' and 'environmental factors') and were first evaluated by expert validity and by test-retest reliability with 20 participants. The Content Validity Index (CVI) of the CSA was 0.94, and the consistency reliability (alpha) ranged from 0.996 to 1.000. Clients who visited the sleep clinic because of sleep difficulties (n=32, 16 males and 16 females, age 43.66 ± 14.214) and gender- and age-matched healthy subjects without sleep difficulties (n=96, 47 males and 49 females, age 41.99 ± 13.69) were randomly recruited at a ratio of 1:3 (with vs. without sleep difficulties) to compare their sleep and the CSA factors. Results show that all clinic clients with sleep difficulties had poor sleep quality (PSQI > 5) and mild to moderate daytime sleepiness (ESS > 11). Personal factors of long working hours (χ² = 10.315, p = 0.001), shift work (χ² = 8.964, p = 0.003), night shifts (χ² = 9.395, p = 0.004) and perceived stress (χ² = 9.503, p = 0.002) were associated with sleep difficulties. Physiological factors from physical examination, including mouth breathing, low soft palate, high narrow palate, Edward angle, tongue hypertrophy, and occlusal wear surfaces, were observed in clinic clients. 
Psychological factors, including higher perceived stress (χ² = 32.542, p < 0.001) and anxiety and depression (χ² = 32.868, p < 0.001); social factors, including lack of leisure activities (χ² = 39.857, p < 0.001), drinking habits (χ² = 1.798, p = 0.018), irregular amount and frequency of meals (χ² = 5.086, p = 0.024), excessive dinner (χ² = 21.511, p < 0.001), and being unable to get up on time due to poor sleep the previous night (χ² = 4.444, p = 0.035); and environmental factors, including light (χ² = 7.683, p = 0.006), noise (χ² = 5.086, p = 0.024), and low or high bedroom temperature (χ² = 4.595, p = 0.032), were also present in clients. In conclusion, the CSA scales can serve as valid and reliable instruments for evaluating sleep-related factors. The findings of this study provide an important reference for assessing clinic clients with sleep difficulties.
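The group comparisons above rest on Pearson's chi-square test; for a 2×2 contingency table it reduces to a closed-form expression. A minimal sketch with hypothetical counts (not the study's data):

```python
def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square statistic for the 2x2 contingency table
    [[a, b], [c, d]], e.g. shift work (yes/no) vs. group (client/control)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: shift workers among 32 clients vs. 96 controls
stat = chi_square_2x2(12, 20, 14, 82)
print(round(stat, 2))
```

The statistic is then compared against the chi-square distribution with one degree of freedom to obtain the p-values reported in the abstract.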

Keywords: comprehensive sleep assessment, sleep-related factors, sleep difficulties

Procedia PDF Downloads 262
8362 Interfacial Instability and Mixing Behavior between Two Liquid Layers Bounded in Finite Volumes

Authors: Lei Li, Ming M. Chai, Xiao X. Lu, Jia W. Wang

Abstract:

The mixing process of two liquid layers in a cylindrical container involves the denser upper liquid rushing into the lighter lower liquid and the lower liquid rising into the upper, while the two layers interact with each other: forming vortices, spreading or dispersing into each other, and entraining or mixing with one another. It is a complex process comprising flow instability, turbulent mixing and other multiscale physical phenomena, and it evolves rapidly. To explore the mechanism of the process and enable further investigation, experiments on the interfacial instability and mixing behavior between two liquid layers bounded in different volumes were carried out using planar laser-induced fluorescence (PLIF) and high-speed camera (HSC) techniques. According to the results, the interfacial instability between immiscible liquids develops faster than the theoretical rate given by Rayleigh-Taylor instability (RTI) theory. It is reasonable to conjecture that mechanisms other than RTI play key roles in the mixing of the two liquid layers. The results show that the velocity at which the upper liquid invades the lower liquid does not depend on the upper liquid's volume (height). Compared with the cases in which the upper and lower containers are of identical diameter, when the lower liquid volume is increased to a larger geometric space the upper liquid spreads and expands into the lower liquid more quickly during the evolution of the interfacial instability, indicating that the container wall has an important influence on the mixing process. 
In the experiments on the mixing of miscible liquid layers, the diffusion time and pattern of interfacial mixing likewise do not depend on the upper liquid's volume; when the lower liquid volume is increased to a larger geometric space, the action of the bounding wall on the falling and rising flow decreases and the interfacial mixing effects are attenuated. It is therefore concluded that the weight of the upper, heavier liquid is not the cause of the fast evolution of interfacial instability between the two liquid layers, and that the bounding wall's action on the unstable and mixing flow is limited. Numerical simulations of the immiscible layers' interfacial instability flow using the VOF method show typical flow patterns that agree with the experiments; however, the calculated instability develops much more slowly than the experimental measurements. Numerical simulation of the miscible liquids' mixing, applying Fick's law of diffusion in the component transport equation, shows a much faster mixing rate at the liquid interface than the experiments during the initial stage. It can be presumed that interfacial tension plays an important role in the interfacial instability between two liquid layers bounded in a finite volume.
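The theoretical RTI rate against which the experiments are compared is, in the classical inviscid linear regime, σ = √(A g k), with Atwood number A = (ρ₂ − ρ₁)/(ρ₂ + ρ₁). A minimal sketch with hypothetical liquid densities (not the paper's working fluids):

```python
import math

def rti_growth_rate(rho_heavy: float, rho_light: float,
                    g: float, k: float) -> float:
    """Classical linear Rayleigh-Taylor growth rate sigma = sqrt(A*g*k),
    A = (rho_heavy - rho_light)/(rho_heavy + rho_light). Neglects surface
    tension and viscosity, which the experiments suggest matter at a real
    liquid-liquid interface."""
    atwood = (rho_heavy - rho_light) / (rho_heavy + rho_light)
    return math.sqrt(atwood * g * k)

# Hypothetical pair: brine (1100 kg/m^3) over water (1000 kg/m^3),
# perturbation wavenumber k = 100 1/m
print(round(rti_growth_rate(1100.0, 1000.0, 9.81, 100.0), 2))  # -> 6.83
```

Adding an interfacial-tension term damps short wavelengths, which is one way the measured evolution can depart from this idealized rate.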

Keywords: interfacial instability and mixing, two liquid layers, Planar Laser Induced Fluorescence (PLIF), High Speed Camera (HSC), interfacial energy and tension, Cahn-Hilliard Navier-Stokes (CHNS) equations

Procedia PDF Downloads 229