Search results for: test case generation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6773

113 Modelling Forest Fire Risk in the Goaso Forest Area of Ghana: Remote Sensing and Geographic Information Systems Approach

Authors: Bernard Kumi-Boateng, Issaka Yakubu

Abstract:

Forest fire, an uncontrolled fire occurring in nature, has become a major concern for the Forestry Commission of Ghana (FCG). Forest fires in Ghana usually result in massive destruction and take a long time for the firefighting crews to gain control over the situation. In order to assess the effect of forest fire at the local scale, it is important to consider the role fire plays in vegetation composition, biodiversity, soil erosion, and the hydrological cycle. The occurrence, frequency and behaviour of forest fires vary over time and space, primarily as a result of the complicated influences of changes in land use, vegetation composition, fire suppression efforts, and other indigenous factors. One of the forest zones in Ghana with a high level of vegetation stress is the Goaso forest area. The area has experienced changes in its traditional land use such as hunting, charcoal production, inefficient logging practices and rural abandonment patterns. These factors, which were identified as major causes of forest fire, have recently modified the incidence of fire in the Goaso area. In spite of the incidence of forest fires in the Goaso forest area, most of the forest services do not provide a cartographic representation of the burned areas. As a result, a significant amount of information is required by the firefighting unit of the FCG to understand fire risk factors and their spatial effects. This study uses Remote Sensing and Geographic Information System techniques to develop a fire risk hazard model using the Goaso Forest Area (GFA) as a case study. From the results of the study, natural forest, agricultural lands and plantation cover types were identified as the major fuel-contributing loads, whereas water bodies, roads and settlements were identified as minor fuel-contributing loads. Based on the major and minor fuel-contributing loads, a forest fire risk hazard model with reasonable accuracy has been developed for the GFA to assist decision making.

Keywords: Forest risk, GIS, remote sensing, Goaso.

PDF Downloads: 1937
112 Application of Artificial Intelligence to Schedule Operability of Waterfront Facilities in Macro Tide Dominated Wide Estuarine Harbour

Authors: A. Basu, A. A. Purohit, M. M. Vaidya, M. D. Kudale

Abstract:

Mumbai being traditionally the epicenter of India's trade and commerce, the existing major ports such as Mumbai and Jawaharlal Nehru (JN) Ports situated in the Thane estuary are also developing their waterfront facilities. Various developments over the passage of decades in this region have changed the tidal flux entering/leaving the estuary. The intake at Pir-Pau is facing the problem of a shortage of water owing to the advancement of the shoreline, while the jetty near Ulwe faces the problem of ship scheduling due to the existence of shallower depths between JN Port and Ulwe Bunder. In order to solve these problems, it is essential to have information about tide levels over a long duration from field measurements. However, field measurement is a tedious and costly affair; therefore, artificial intelligence was applied to predict water levels by training a network on the tide data measured over one lunar tidal cycle. A two-layered feed-forward Artificial Neural Network (ANN) with back-propagation training algorithms, namely Gradient Descent (GD) and Levenberg-Marquardt (LM), was used to predict the yearly tide levels at the waterfront structures at Ulwe Bunder and Pir-Pau. The tide data collected at Apollo Bunder, Ulwe, and Vashi for one lunar tidal cycle (2013) were used to train, validate and test the neural networks. These trained networks, having high correlation coefficients (R = 0.998), were used to predict the tide at Ulwe and Vashi for verification against the measured tide for the years 2000 and 2013. The results indicate that the tide levels predicted by the ANN give a reasonably accurate estimation of the tide. Hence, the trained network was used to predict the yearly tide data (2015) for Ulwe. Subsequently, the yearly tide data (2015) at Pir-Pau were predicted using the neural network trained with the measured tide data (2000) of Apollo and Pir-Pau. The analysis of the measured data and the study reveal that the measured tidal data at Pir-Pau, Vashi and Ulwe indicate a maximum amplification of the tide by about 10-20 cm, with a phase lag of 10-20 minutes, with reference to the tide at Apollo Bunder (Mumbai); the LM training algorithm is faster than GD, and the performance of the network increases with an increase in the number of neurons in the hidden layer; and the tide levels predicted by the ANN at Pir-Pau and Ulwe provide valuable information about the occurrence of high and low water levels for planning the operation of pumping at Pir-Pau and improving ship scheduling at Ulwe.
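
As a rough illustration of the training step described above, the Python sketch below fits a small two-layer feed-forward network by Levenberg-Marquardt least squares; the data, hidden-layer size and variable names are hypothetical placeholders, not the authors' implementation.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, size=200)          # normalised tide at the reference station (hypothetical)
    y = 0.9 * x + 0.1 * np.sin(3 * x)             # stand-in for the target-station tide

    n_hidden = 5                                  # neurons in the single hidden layer

    def unpack(p):
        w1 = p[:n_hidden]                          # input -> hidden weights
        b1 = p[n_hidden:2 * n_hidden]              # hidden biases
        w2 = p[2 * n_hidden:3 * n_hidden]          # hidden -> output weights
        b2 = p[-1]                                 # output bias
        return w1, b1, w2, b2

    def forward(p, x):
        w1, b1, w2, b2 = unpack(p)
        h = np.tanh(np.outer(x, w1) + b1)          # hidden-layer activations
        return h @ w2 + b2

    def residuals(p):
        return forward(p, x) - y

    p0 = rng.normal(scale=0.1, size=3 * n_hidden + 1)
    fit = least_squares(residuals, p0, method='lm')   # Levenberg-Marquardt training
    pred = forward(fit.x, x)
    r = np.corrcoef(pred, y)[0, 1]
    print(f"correlation coefficient R = {r:.3f}")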

Keywords: Artificial neural network, back-propagation, tide data, training algorithm.

PDF Downloads: 1672
111 Research Regarding Resistance Characteristics of Biscuits Assortment Using Cone Penetrometer

Authors: G.–A. Constantin, G. Voicu, E.–M. Stefan, P. Tudor, G. Paraschiv, M.–G. Munteanu

Abstract:

In the activity of handling and transport of food products, the products may be subjected to mechanical stresses that may lead to their deterioration by deformation, breaking, or crushing. This is the case for biscuits, regardless of their type (gluten-free or sugary), added ingredients, or the flour from which they are made. However, gluten-free biscuits have a higher mechanical resistance to breakage or crushing compared to easily shattered sugar biscuits (especially those for children). The paper presents the results of the experimental evaluation of the texture of four varieties of commercial biscuits, using a penetrometer equipped with a needle cone at five different additional weights on the cone rod. The assortments of biscuits tested in the laboratory were Petit Beurre, Picnic, and Maia (all three manufactured by RoStar, Romania) and Sultani diet biscuits, manufactured by Eti Burcak Sultani (Turkey, in packs of 138 g). For the four varieties of biscuits and the five additional weights (50, 77, 100, 150 and 177 g), the experimental data obtained were subjected to regression analysis in the MS Office Excel program, using Velon's relationship (h = a∙ln(t) + b). The regression curves were analysed comparatively in order to identify possible differences and to highlight the variation of the penetration depth h in relation to the time t. Based on the penetration depth between successive time intervals (every 5 seconds), curves of the variation of penetration speed in relation to time were then drawn. It was found that Velon's law fits the experimental data for all assortments of biscuits and for all five additional weights. The correlation coefficient R2 had values over 0.850 in most of the analysed cases. The values recorded for the penetration depth generally fell within 45-55 p.u. (penetrometric units) at an additional mass of 50 g and between 155-168 p.u. at an additional mass of 177 g for Petit Beurre biscuits. For Sultani diet biscuits, the values of the penetration depth were within the limits of 32-35 p.u. at an additional weight of 50 g and between 80-114 p.u. at an additional weight of 177 g. The data presented in the paper can be used both by operators on the manufacturing technology flow and by the traders of these food products, in order to establish the most efficient parameters of the working regimes (for packaging and handling).
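
Since Velon's relationship h = a∙ln(t) + b is linear in ln(t), the regression and the derived penetration-speed curve can be reproduced with a few lines of Python; the readings below are hypothetical placeholders, not the measured data.

    import numpy as np

    # Hypothetical penetration-depth readings (penetrometric units) every 5 s.
    t = np.array([5., 10., 15., 20., 25., 30.])
    h = np.array([40., 46., 50., 52., 54., 55.])

    # Velon's relationship h = a*ln(t) + b is linear in ln(t), so ordinary
    # least squares on (ln t, h) gives the coefficients directly.
    a, b = np.polyfit(np.log(t), h, 1)

    h_fit = a * np.log(t) + b
    ss_res = np.sum((h - h_fit) ** 2)
    ss_tot = np.sum((h - h.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    print(f"a = {a:.2f}, b = {b:.2f}, R^2 = {r2:.3f}")

    # Penetration speed between successive 5 s readings (p.u. per second).
    speed = np.diff(h) / np.diff(t)
    print(speed)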

Keywords: Biscuits resistance/texture, penetration depth, penetration velocity, sharp pin penetrometer.

PDF Downloads: 564
110 Reflective Thinking and Experiential Learning: A Quasi-Experimental Quanti-Quali Response to Greater Diversification of Activities and Greater Integration of Student Profiles

Authors: P. Bogas

Abstract:

As a scientific contribution to this discussion, a pedagogical intervention of a quasi-experimental nature was developed, with a mixed methodology, evaluating the intervention within a single curricular unit of Marketing, using cases based on real challenges of brands, business simulation and customer projects. Primary and secondary experiences were incorporated in the intervention: the primary experiences are the experiential activities themselves; the secondary experiences resulted from the primary experience, such as reflection and discussion in work teams. A diversified learning relationship was encouraged through the various connections between the different members of the learning community. The present study concludes that, in the same context, the students' response can be described as follows: students who reinforce the initial deep approach, students who maintain the initial deep approach level, and others who change from an emphasis on the deep approach to one closer to the superficial. This typology did not always confirm studies reported in the literature, namely on whether the initial level of deep processing would influence the superficial level and vice versa. The result of this investigation points to the inclusion of pedagogical and didactic activities that integrate different motivations and initial strategies, leading to a possible adoption of deep approaches to learning, since it revealed statistically significant differences in the scores of the deep/superficial approach and in the experiential level. In the case of real challenges, the categories of “attribution of meaning and meaning of what is studied” and the possibility of “contact with an aspirational context” for their professional future stand out. In this category, the dimensions of autonomy that will be required of them were also revealed when comparing the classroom context of real cases with the future professional context and the impact they may have on the world. Regarding the simulated practice, two categories of response stand out: on the one hand, the motivation associated with the possibility of measuring the results of the decisions taken and an awareness of oneself, and, on the other hand, the additional effort that this practice required of some of the students.

Keywords: Experiential learning, higher education, marketing, mixed methods, reflective thinking.

PDF Downloads: 243
109 Physicians’ Knowledge and Perception of Gene Profiling in Malaysia

Authors: Farahnaz Amini, Woo Yun Kin, Lazwani Kolandaiveloo

Abstract:

The availability of different genetic tests after the completion of the Human Genome Project increases physicians' responsibility to keep themselves up to date on the potential implementation of these genetic tests in their daily practice. However, due to a number of barriers, many physicians are still either unaware of these tests or unwilling to offer them or refer their patients for genetic testing. This study used an anonymous, cross-sectional, mail-based survey to develop primary data on Malaysian physicians' level of knowledge and perception of gene profiling. The questionnaire had 29 questions. Total scores on selected questions were used to assess the level of knowledge; the highest possible score was 11. Descriptive statistics, one-way ANOVA and the chi-squared test were used for statistical analysis. Sixty-three completed questionnaires were returned by 27 general practitioners (GPs) and 36 medical specialists. Responders' ages ranged from 24 to 55 years (mean 30.2 ± 6.4). About 40% of the participants rated themselves as having a poor level of knowledge of genetics in general, whilst 60% believed that they had a fair level of knowledge; however, almost half (46%) of the respondents felt that they were not knowledgeable about available genetic tests. A majority (94%) of the responders were not aware of any lab or company offering gene profiling services in Malaysia. Only 4% of participants were aware of the use of gene profiling for determining the dosage of some drugs. Respondents perceived greater utility of gene profiling for breast cancer (38%) compared to familial colorectal cancer (3%). The knowledge score ranged from 2 to 8 (mean 4.38 ± 1.67). No significant difference between the knowledge scores of GPs and specialists was observed, with scores of 4.19 and 4.58, respectively. There was no significant association between any demographic factor and the level of knowledge; however, those who graduated between 2001 and 2005 had a higher level of knowledge. Overall, 83% of participants showed a relatively high perception of the value of gene profiling for detecting a patient's risk of disease. However, low perception was observed both for the statement on using gene profiling in the general population in order to alter lifestyle (25%) and for the statement on having the full sequence of a patient's genome for the purpose of determining the best match for treatment (18%). The lack of clinical guidelines, limited provider knowledge and awareness, lack of time and resources to educate patients, lack of evidence-based clinical information and the cost of tests were the barriers to ordering gene profiling most frequently mentioned by physicians. In conclusion, the Malaysian physicians who participated in this study had a mediocre level of knowledge and awareness of gene profiling. Low exposure to genetic questions and problems might be a key predictor of the lack of awareness and knowledge of available genetic tests. Educational and training workshops might be useful in helping Malaysian physicians incorporate gene profiling into practice for eligible patients.
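
For readers who want to reproduce this kind of analysis, a minimal Python sketch of the reported tests (descriptive statistics, one-way ANOVA, chi-squared) is shown below; the scores and contingency counts are hypothetical, not the study's data.

    import numpy as np
    from scipy import stats

    # Hypothetical knowledge scores (0-11) for the two responder groups.
    gp_scores = np.array([4, 5, 3, 6, 4, 2, 5, 4, 3, 5])
    specialist_scores = np.array([5, 4, 6, 3, 5, 4, 6, 5, 4, 4])

    print("GP mean +/- SD:", gp_scores.mean(), gp_scores.std(ddof=1))

    # One-way ANOVA across groups (equivalent to a t-test for two groups).
    f_stat, p_anova = stats.f_oneway(gp_scores, specialist_scores)

    # Chi-squared test of association, e.g. graduation period vs. knowledge level.
    contingency = np.array([[8, 12],    # graduated 2001-2005: low / high knowledge (hypothetical)
                            [15, 8]])   # graduated in other years: low / high knowledge
    chi2, p_chi2, dof, expected = stats.chi2_contingency(contingency)

    print(f"ANOVA p = {p_anova:.3f}, chi-squared p = {p_chi2:.3f}")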

Keywords: Gene Profiling, Knowledge, Malaysia, Physician.

PDF Downloads: 1918
108 Italians' Social and Emotional Loneliness: The Results of Five Studies

Authors: Vanda Lucia Zammuner

Abstract:

Subjective loneliness describes people who feel a disagreeable or unacceptable lack of meaningful social relationships, both at the quantitative and the qualitative level. The studies presented here tested an Italian 18-item self-report loneliness measure that included items adapted from previously developed scales, namely a short version of the UCLA scale (Russell, Peplau and Cutrona, 1980) and the 11-item Loneliness Scale by De Jong-Gierveld & Kamphuis (JGLS; 1985). The studies aimed at testing the developed scale and at verifying whether loneliness is better conceptualized as a unidimensional (so-called 'general loneliness') or a bidimensional construct, namely comprising the distinct facets of social and emotional loneliness. The loneliness questionnaire included two single-item criterion measures of sad mood and social contact, and asked participants to supply information on a number of socio-demographic variables. Factorial analyses of responses obtained in two preliminary studies, with 59 and 143 Italian participants respectively, showed good factor loadings and subscale reliability and confirmed that perceived loneliness clearly has two components, a social and an emotional one, the latter measured by two subscales: a 7-item 'general' loneliness subscale derived from the UCLA scale and a 6-item 'emotional' scale included in the JGLS. Results further showed that the type and amount of loneliness are related negatively to the frequency of social contacts and positively to sad mood. In a third study, data were obtained from a nation-wide sample of 9,097 Italian subjects, aged 12 to about 70 years, who completed the test online on the Italian website of a large-audience magazine, Focus. The results again confirmed the reliability of the component subscales, namely social, emotional, and 'general' loneliness, and showed that they were highly correlated with each other, especially the latter two. Loneliness scores were significantly predicted by sex, age, education level, sad mood and social contact, and, less so, by other variables, e.g., geographical area and profession. The scale's validity was confirmed by the results of a fourth study, with elderly men and women (N = 105) living at home or in residential care units. The three subscales were significantly related, among others, to depression and to various measures of the extension of, and satisfaction with, social contacts with relatives and friends. Finally, a fifth study with 315 career-starters showed that social and emotional loneliness correlate with life satisfaction and with measures of emotional intelligence. Altogether, the results showed good validity and reliability of the entire scale and of its components in the tested samples.
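
A minimal sketch of the kind of analysis reported (a two-factor solution and subscale internal consistency) is given below in Python; the item responses are randomly generated placeholders and the 7-item/6-item split is only indicative, not the study's data or exact factor procedure.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(1)
    # Hypothetical responses of 143 participants to 13 loneliness items (1-5 Likert).
    items = rng.integers(1, 6, size=(143, 13)).astype(float)

    # Two-factor solution to check the social vs. emotional loneliness structure.
    fa = FactorAnalysis(n_components=2, random_state=0)
    fa.fit(items)
    loadings = fa.components_.T          # item-by-factor loading matrix
    print(loadings.round(2))

    def cronbach_alpha(scale):
        """Internal-consistency reliability of one subscale (items in columns)."""
        k = scale.shape[1]
        item_var = scale.var(axis=0, ddof=1).sum()
        total_var = scale.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    print("alpha (first 7 items):", round(cronbach_alpha(items[:, :7]), 2))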

Keywords: Emotional loneliness, social loneliness, scale development and testing, life span and cultural differences.

PDF Downloads: 2937
107 Streamwise Vorticity in the Wake of a Sliding Bubble

Authors: R. O’Reilly Meehan, D. B. Murray

Abstract:

In many practical situations, bubbles are dispersed in a liquid phase. Understanding these complex bubbly flows is therefore a key issue for applications such as shell and tube heat exchangers, mineral flotation and oxidation in water treatment. Although a large body of work exists for bubbles rising in an unbounded medium, that of bubbles rising in constricted geometries has received less attention. The particular case of a bubble sliding underneath an inclined surface is common to two-phase flow systems. The current study intends to expand this knowledge by performing experiments to quantify the streamwise flow structures associated with a single sliding air bubble under an inclined surface in quiescent water. This is achieved by means of two-dimensional, two-component particle image velocimetry (PIV), performed with a continuous wave laser and high-speed camera. PIV vorticity fields obtained in a plane perpendicular to the sliding surface show that there is significant bulk fluid motion away from the surface. The associated momentum of the bubble means that this wake motion persists for a significant time before viscous dissipation. The magnitude and direction of the flow structures in the streamwise measurement plane are found to depend on the point on its path through which the bubble enters the plane. This entry point, represented by a phase angle, affects the nature and strength of the vortical structures. This study reconstructs the vorticity field in the wake of the bubble, converting the field at different instances in time to slices of a large-scale wake structure. This is, in essence, Taylor’s ”frozen turbulence” hypothesis. Applying this to the vorticity fields provides a pseudo three-dimensional representation from 2-D data, allowing for a more intuitive understanding of the bubble wake. This study provides insights into the complex dynamics of a situation common to many engineering applications, particularly shell and tube heat exchangers in the nucleate boiling regime.
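
The frozen-turbulence reconstruction described above amounts to relabelling the time axis of the measured vorticity slices as a streamwise coordinate scaled by the bubble's convection velocity; the Python sketch below illustrates this with placeholder arrays and an assumed bubble velocity, not the study's PIV data.

    import numpy as np

    # Hypothetical sequence of 2-D vorticity fields (n_frames, ny, nx) measured in a
    # plane perpendicular to the sliding surface, sampled at frame rate fps.
    n_frames, ny, nx = 200, 64, 64
    fps = 500.0                          # camera frame rate in Hz (assumed)
    u_bubble = 0.25                      # assumed bubble (convection) velocity in m/s
    omega = np.zeros((n_frames, ny, nx)) # fill with PIV vorticity data in practice

    # Taylor's frozen-turbulence hypothesis: each time step corresponds to a
    # streamwise slice located x = u_bubble * t behind the measurement plane.
    dt = 1.0 / fps
    x_slices = u_bubble * dt * np.arange(n_frames)

    # Pseudo 3-D wake volume: axis 0 is now a streamwise coordinate, not time.
    wake_volume = omega[::-1]            # reverse so slice 0 is closest to the bubble
    print(wake_volume.shape, x_slices[-1])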

Keywords: Bubbly flow, particle image velocimetry, two-phase flow, wake structures.

PDF Downloads: 1880
106 Assessing Applicability of Kevin Lynch’s Framework of The Image of the City in the Case of the Walled City of Jaipur

Authors: Jay Patel

Abstract:

This research is about investigating the ‘image’ of the city, and asks whether this ‘image’ holds any significance that can be changed. Kevin Lynch in the book ‘The Image of the City’ develops a framework that breaks down the city’s image into five physical elements. These elements (Paths, Edge, Nodes, Districts, and Landmarks), according to Lynch assess the legibility of the urbanscapes, that emerged from his perception-based study in three different cities (New Jersey, Los Angeles, and Boston) in the USA. The aim of this research is to investigate whether Lynch’s framework can be applied within an Indian context or not. If so, what are the possibilities and whether the imageability of Indian cities can be depicted through the Lynch’s physical elements or it demands an extension to the framework by either adding or subtracting a physical attribute. For this research project, the walled city of Jaipur was selected, as it is considered one of the futuristic designed cities of all time in India. The other significant reason for choosing Jaipur was that it is a historically planned city with solid historical, touristic and local importance; allowing an opportunity to understand the application of Lynch's elements to the city's image. In other words, it provides an opportunity to examine how the disadvantages of a city's implicit program (its relics of bygone eras) can be converted into assets by improving the imageability of the city. To obtain data, a structured semi-open ended interview method was chosen. The reason for selecting this method explicitly was to gain qualitative data from the users rather than collecting quantitative data from closed-ended questions. This allowed in-depth understanding and applicability of Kevin Lynch’s framework while assessing what needs to be added. The interviews were conducted in Jaipur that yielded varied inferences that were different from the expected learning outcomes, highlighting the need for extension on Lynch’s physical elements to achieve city’s image. Whilst analyzing the data, there were few attributes found that defined the image of Jaipur. These were categorized into two: a Physical aspect (streets and arcade entities, natural features, temples and temporary/informal activities) and Associational aspects (History, culture and tradition, medium of help in wayfinding, and intangible aspects).

Keywords: Imageability, Kevin Lynch, People’s Perception, associational aspects, physical aspects.

PDF Downloads: 352
105 Discontinuous Spacetime with Vacuum Holes as Explanation for Gravitation, Quantum Mechanics and Teleportation

Authors: Constantin Z. Leshan

Abstract:

Hole Vacuum theory is based on a discontinuous spacetime that contains vacuum holes. Vacuum holes can explain gravitation and some laws of quantum mechanics, and they allow the teleportation of matter. All massive bodies emit a flux of holes which curves spacetime; increasing the concentration of holes leads to length contraction and time dilation, because the holes do not have the properties of extension and duration. In the limiting case where space consists of holes only, the distance between every two points is equal to zero and time stops; outside of the Universe, the extension and duration properties do not exist. For this reason, the vacuum hole is the only particle in physics capable of describing gravitation using its own properties alone. All microscopic particles must 'jump' continually and 'vibrate' due to the appearance of holes (impassable microscopic 'walls' in space), and this is the cause of quantum behavior. Vacuum holes can explain entanglement, non-locality, the wave properties of matter, tunneling, the uncertainty principle and so on. Particles do not have trajectories because spacetime is discontinuous and has impassable microscopic 'walls'; simple mechanical motion is impossible at small-scale distances, and it is impossible to 'trace' a straight line in discontinuous spacetime because it contains impassable holes. Spacetime 'boils' continually due to the appearance of the vacuum holes. For teleportation to be possible, we must send a body outside of the Universe by enveloping it with a closed surface consisting of vacuum holes. Since a material body cannot exist outside of the Universe, it reappears instantaneously at a random point of the Universe. Since a body disappears in one volume and reappears in another random volume without traversing the physical space between them, such a transportation method can be called teleportation (or Hole Teleportation). It is shown that Hole Teleportation does not violate causality and special relativity due to its random nature and other properties. Although Hole Teleportation has a random nature, it can be used for the colonization of extrasolar planets with the help of a method called 'random jumps': after a large number of random teleportation jumps, there is a probability that the spaceship may appear near a habitable planet. We can create vacuum holes experimentally using the method proposed by Descartes: we must remove a body from a vessel without permitting another body to occupy its volume.

Keywords: Border of the universe, causality violation, perfect isolation, quantum jumps.

PDF Downloads: 1190
104 A Piscan Ulcerative Aeromonas Infection

Authors: Ibrahim M. S. Shnawa, Bashar A. H. E. Alsadi, Kalida K. Alniaem

Abstract:

In the immunologic sense, clinical infection is a state of failure of the immune system to combat the pathogenic weapon of the bacteria invading the host. A motile, gram-negative vibrioid organism associated with marked mononuclear and polynuclear cell responses was traced during the examination of clinical material from an infected common carp, Cyprinus carpio. On primary plate culture, growth was shown to be a pure, dense population of an Aeromonas-like colony morphotype. The pure isolate was found to be aerobic, facultatively anaerobic, non-halophilic and oxidase positive; it grew at 0C and 37C, utilized glucose through the fermentative pathway, resisted 0/129 and novobiocin, and produced alanine and lysine decarboxylases but not ornithine dehydrolase. Tests for the in vitro determinants of pathogenicity showed the isolate to be beta-haemolytic on blood agar and a gelatinase, caseinase and amylase producer. Three in vivo determinants of pathogenicity were tested: the lethal dose fifty (LD50), the pathogenesis and the pathogenicity. It was evident that 0.1 milliliter of the causal bacterial cell suspension at a density of 1 x 107 CFU/ml, injected intramuscularly into fish of about 100 g average weight, took an incubation period of five days; at day six, morbidity and mortality were initiated. The LD50 was recorded at day 12 post-infection. Use of LD50 doses to study the pathogenicity revealed mononuclear and polynuclear cell responses on examining stained direct films of clinical material from the experimentally infected fish. Re-isolation tests confirmed that the re-isolate was the same organism. The course of the infection in the natural case showed manifestations of skin ulceration, haemorrhage and descaling. On evisceration, the internal organs showed congestion in the intestines, spleen and air sacs. The induced infection showed a milder form of these manifestations. The organism was graded as virulent, causing a chronic course of infection, as indicated by the pathogenesis and pathogenicity studies. Thus, the infectious bacterium was consistent with Aeromonas hydrophila, and the infection was chronic.

Keywords: Piscan, inflammatory response, pure culture, pathogen, chronic, infection.

PDF Downloads: 1997
103 The Effect of Information vs. Reasoning Gap Tasks on the Frequency of Conversational Strategies and Accuracy in Speaking among Iranian Intermediate EFL Learners

Authors: Hooriya Sadr Dadras, Shiva Seyed Erfani

Abstract:

Speaking skills merit meticulous attention both on the side of the learners and the teachers. In particular, accuracy is a critical component to guarantee the messages to be conveyed through conversation because a wrongful change may adversely alter the content and purpose of the talk. Different types of tasks have served teachers to meet numerous educational objectives. Besides, negotiation of meaning and the use of different strategies have been areas of concern in socio-cultural theories of SLA. Negotiation of meaning is among the conversational processes which have a crucial role in facilitating the understanding and expression of meaning in a given second language. Conversational strategies are used during interaction when there is a breakdown in communication that leads to the interlocutor attempting to remedy the gap through talk. Therefore, this study was an attempt to investigate if there was any significant difference between the effect of reasoning gap tasks and information gap tasks on the frequency of conversational strategies used in negotiation of meaning in classrooms on one hand, and on the accuracy in speaking of Iranian intermediate EFL learners on the other. After a pilot study to check the practicality of the treatments, at the outset of the main study, the Preliminary English Test was administered to ensure the homogeneity of 87 out of 107 participants who attended the intact classes of a 15 session term in one control and two experimental groups. Also, speaking sections of PET were used as pretest and posttest to examine their speaking accuracy. The tests were recorded and transcribed to estimate the percentage of the number of the clauses with no grammatical errors in the total produced clauses to measure the speaking accuracy. In all groups, the grammatical points of accuracy were instructed and the use of conversational strategies was practiced. Then, different kinds of reasoning gap tasks (matchmaking, deciding on the course of action, and working out a time table) and information gap tasks (restoring an incomplete chart, spot the differences, arranging sentences into stories, and guessing game) were manipulated in experimental groups during treatment sessions, and the students were required to practice conversational strategies when doing speaking tasks. The conversations throughout the terms were recorded and transcribed to count the frequency of the conversational strategies used in all groups. The results of statistical analysis demonstrated that applying both the reasoning gap tasks and information gap tasks significantly affected the frequency of conversational strategies through negotiation. In the face of the improvements, the reasoning gap tasks had a more significant impact on encouraging the negotiation of meaning and increasing the number of conversational frequencies every session. The findings also indicated both task types could help learners significantly improve their speaking accuracy. Here, applying the reasoning gap tasks was more effective than the information gap tasks in improving the level of learners’ speaking accuracy.

Keywords: Accuracy in speaking, conversational strategies, information gap tasks, reasoning gap tasks.

PDF Downloads: 1117
102 Compliance Modelling and Optimization of Kerf during WEDM of Al7075/SiCP Metal Matrix Composite

Authors: Thella Babu Rao, A. Gopala Krishna

Abstract:

This investigation presents the formulation of kerf (width of slit) and the optimal control parameter settings of wire electrical discharge machining (WEDM) which result in the minimum possible kerf while machining Al7075/SiCp MMCs. WEDM has proved its efficiency and effectiveness in cutting hard ceramic-reinforced MMCs within the permissible budget. Among the distinct performance measures of the WEDM process, kerf is an important characteristic which determines the dimensional accuracy of the machined component when producing high-precision components. The lack of available machinability information for such advanced MMCs results in more experimentation in the manufacturing industries. Therefore, extensive experimental investigations are essential to provide a database of the effect of various control parameters on the kerf while machining such advanced MMCs by WEDM. The literature revealed the significance of some of the electrical parameters which are prominent for the kerf when machining various conventional materials. However, this work highlights the significance of reinforcement particulate size and volume fraction on kerf while machining MMCs, along with the machining parameters of pulse-on time, pulse-off time and wire tension. Usually, the dimensional tolerances of machined components are decided at the design stage, and a machinist pays attention to producing the required dimensional tolerances by setting appropriate machining control variables. However, it is highly difficult to determine the optimal machining settings for such advanced materials on the shop floor. Therefore, in view of the precision of cut, kerf (cutting width) is considered as the measure of performance for the model. It was found from the literature that machining conditions with higher fractions of large-size SiCp result in less kerf, whereas high values of pulse-on time result in a high kerf. A response surface model is used to predict the relative significance of various control variables on kerf. Consequently, a powerful artificial intelligence technique, the genetic algorithm (GA), is used to determine the best combination of control variable settings. In the next step, a confirmation test was conducted for the optimal parameter settings, and good agreement was found between the GA-predicted kerf and the measured kerf. Hence, this clearly reveals the effectiveness and accuracy of the developed model and program in analyzing the kerf and determining its optimal process parameters. The results obtained in this work state that the resulting optimized parameters are capable of machining the Al7075/SiCp MMCs more efficiently and with better dimensional accuracy.
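
The optimization step can be illustrated with a compact genetic algorithm minimizing a kerf model over the five control variables; the quadratic response-surface coefficients and variable bounds below are hypothetical stand-ins, not the regression model fitted in this work.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical response-surface model for kerf (mm); the real coefficients would
    # come from the regression fitted to the WEDM experiments.
    def kerf(x):
        t_on, t_off, tension, sic_size, sic_frac = x
        return (0.25 + 0.04 * t_on - 0.01 * t_off - 0.008 * tension
                - 0.015 * sic_size - 0.02 * sic_frac + 0.002 * t_on ** 2)

    bounds = np.array([[1.0, 8.0],    # pulse-on time (machine units, assumed range)
                       [2.0, 12.0],   # pulse-off time
                       [4.0, 12.0],   # wire tension
                       [0.5, 2.0],    # normalised SiCp particle size
                       [0.0, 1.0]])   # normalised SiCp volume fraction

    pop_size, n_gen, n_var = 40, 100, len(bounds)
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, n_var))

    for _ in range(n_gen):
        fitness = np.array([kerf(ind) for ind in pop])
        order = np.argsort(fitness)              # lower kerf = fitter
        parents = pop[order[:pop_size // 2]]     # truncation selection
        # Uniform crossover between random parent pairs.
        idx = rng.integers(len(parents), size=(pop_size, 2))
        mask = rng.random((pop_size, n_var)) < 0.5
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        # Gaussian mutation, clipped to the variable bounds.
        children += rng.normal(scale=0.05, size=children.shape) * (bounds[:, 1] - bounds[:, 0])
        pop = np.clip(children, bounds[:, 0], bounds[:, 1])

    best = pop[np.argmin([kerf(ind) for ind in pop])]
    print("optimal settings:", best.round(3), "predicted kerf:", round(kerf(best), 4))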

Keywords: Al7075SiCP MMC, kerf, WEDM, optimization.

PDF Downloads: 1976
101 Potential of High Performance Ring Spinning Based on Superconducting Magnetic Bearing

Authors: M. Hossain, A. Abdkader, C. Cherif, A. Berger, M. Sparing, R. Hühne, L. Schultz, K. Nielsch

Abstract:

Due to the best quality of yarn and the flexibility of the machine, the ring spinning process is the most widely used spinning method for short staple yarn production. However, the productivity of these machines is still much lower in comparison to other spinning systems such as rotor or air-jet spinning process. The main reason for this limitation lies on the twisting mechanism of the ring spinning process. In the ring/traveler twisting system, each rotation of the traveler along with the ring inserts twist in the yarn. The rotation of the traveler at higher speed includes strong frictional forces, which in turn generates heat. Different ring/traveler systems concerning with its geometries, material combinations and coatings have already been implemented to solve the frictional problem. However, such developments can neither completely solve the frictional problem nor increase the productivity. The friction free superconducting magnetic bearing (SMB) system can be a right alternative replacing the existing ring/traveler system. The unique concept of SMB bearings is that they possess a self-stabilizing behavior, i.e. they remain fully passive without any necessity for expensive position sensing and control. Within the framework of a research project funded by German research foundation (DFG), suitable concepts of the SMB-system have been designed, developed, and integrated as a twisting device of ring spinning replacing the existing ring/traveler system. With the help of the developed mathematical model and experimental investigation, the physical limitations of this innovative twisting device in the spinning process have been determined. The interaction among the parameters of the spinning process and the superconducting twisting element has been further evaluated, which derives the concrete information regarding the new spinning process. Moreover, the influence of the implemented SMB twisting system on the yarn quality has been analyzed with respect to different process parameters. The presented work reveals the enormous potential of the innovative twisting mechanism, so that the productivity of the ring spinning process especially in case of thermoplastic materials can be at least doubled for the first time in a hundred years. The SMB ring spinning tester has also been presented in the international fair “International Textile Machinery Association (ITMA) 2015”.

Keywords: Ring spinning, superconducting magnetic bearing, yarn properties, productivity.

PDF Downloads: 879
100 Assessing the Impact of High Fidelity Human Patient Simulation on Teamwork among Nursing, Medicine and Pharmacy Undergraduate Students

Authors: S. MacDonald, A. Manuel, R. Law, N. Bandruak, A. Dubrowski, V. Curran, J. Smith-Young, K. Simmons, A. Warren

Abstract:

High fidelity human patient simulation has been used for many years by health sciences education programs to foster critical thinking, engage learners, improve confidence, improve communication, and enhance psychomotor skills. Unfortunately, there is a paucity of research on the use of high fidelity human patient simulation to foster teamwork among nursing, medicine and pharmacy undergraduate students. This study compared the impact of high fidelity and low fidelity simulation education on teamwork among nursing, medicine and pharmacy students. For the purpose of this study, two innovative teaching scenarios were developed based on the care of an adult patient experiencing acute anaphylaxis: one high fidelity using a human patient simulator and one low fidelity using case based discussions. A within subjects, pretest-posttest, repeated measures design was used with two-treatment levels and random assignment of individual subjects to teams of two or more professions. A convenience sample of twenty-four (n=24) undergraduate students participated, including: nursing (n=11), medicine (n=9), and pharmacy (n=4). The Interprofessional Teamwork Questionnaire was used to assess for changes in students’ perception of their functionality within the team, importance of interprofessional collaboration, comprehension of roles, and confidence in communication and collaboration. Student satisfaction was also assessed. Students reported significant improvements in their understanding of the importance of interprofessional teamwork and of the roles of nursing and medicine on the team after participation in both the high fidelity and the low fidelity simulation. However, only participants in the high fidelity simulation reported a significant improvement in their ability to function effectively as a member of the team. All students reported that both simulations were a meaningful learning experience and all students would recommend both experiences to other students. These findings suggest there is merit in both high fidelity and low fidelity simulation as a teaching and learning approach to foster teamwork among undergraduate nursing, medicine and pharmacy students. However, participation in high fidelity simulation may provide a more realistic opportunity to practice and function as an effective member of the interprofessional health care team.

Keywords: Acute anaphylaxis, high fidelity human patient simulation, low fidelity simulation, interprofessional education.

PDF Downloads: 896
99 An E-Maintenance IoT Sensor Node Designed for Fleets of Diverse Heavy-Duty Vehicles

Authors: George Charkoftakis, Panagiotis Liosatos, Nicolas-Alexander Tatlas, Dimitrios Goustouridis, Stelios M. Potirakis

Abstract:

E-maintenance is a relatively recent concept, generally referring to maintenance management by monitoring assets over the Internet. One of the key links in the chain of an e-maintenance system is data acquisition and transmission. Specifically for the case of a fleet of heavy-duty vehicles, where the main challenge is the diversity of the vehicles and vehicle-embedded self-diagnostic/reporting technologies, the design of the data acquisition and transmission unit is a demanding task. This is clear if one takes into account that a heavy-vehicles fleet assortment may range from vehicles with only a limited number of analog sensors monitored by dashboard light indicators and gauges to vehicles with plethora of sensors monitored by a vehicle computer producing digital reporting. The present work proposes an adaptable internet of things (IoT) sensor node that is capable of addressing this challenge. The proposed sensor node architecture is based on the increasingly popular single-board computer – expansion boards approach. In the proposed solution, the expansion boards undertake the tasks of position identification, cellular connectivity, connectivity to the vehicle computer, and connectivity to analog and digital sensors by means of a specially targeted design of expansion board. Specifically, the latter offers a number of adaptability features to cope with the diverse sensor types employed in different vehicles. In standard mode, the IoT sensor node communicates to the data center through cellular network, transmitting all digital/digitized sensor data, IoT device identity and position. Moreover, the proposed IoT sensor node offers connectivity, through WiFi and an appropriate application, to smart phones or tablets allowing the registration of additional vehicle- and driver-specific information and these data are also forwarded to the data center. All control and communication tasks of the IoT sensor node are performed by dedicated firmware.
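
A minimal sketch of the acquisition-and-transmission loop such a node might run is given below in Python; the field names, stub sensor values and endpoint URL are hypothetical, and the real firmware would be driven by the expansion-board drivers and the cellular modem rather than plain HTTP.

    import json
    import time
    import urllib.request

    DATA_CENTER_URL = "https://example.com/telemetry"   # hypothetical endpoint

    def read_sensors():
        """Collect digitised analog readings and vehicle-computer (OBD) values.
        Stub values stand in for the expansion-board drivers."""
        return {"coolant_temp_c": 82.5, "fuel_level_pct": 63.0, "rpm": 1450}

    def build_record(device_id, position):
        return {
            "device_id": device_id,            # IoT node identity
            "timestamp": int(time.time()),
            "position": position,              # e.g. from the positioning expansion board
            "sensors": read_sensors(),
        }

    def transmit(record):
        payload = json.dumps(record).encode("utf-8")
        req = urllib.request.Request(DATA_CENTER_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status

    # Main acquisition/transmission step (runs on the single-board computer).
    record = build_record("node-001", {"lat": 37.98, "lon": 23.73})
    print(json.dumps(record, indent=2))
    # transmit(record)   # enabled once cellular connectivity is up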

Keywords: IoT sensor nodes, e-maintenance, single-board computers, sensor expansion boards, on-board diagnostics

PDF Downloads: 510
98 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System

Authors: Cheima Ben Soltane, Ittansa Yonas Kelbesa

Abstract:

Speaker Identification (SI) is the task of establishing the identity of an individual based on his/her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still a need for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification (CISI) system based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of Vector Quantization (VQ) and Gaussian Mixture Model (GMM) together with the Expectation Maximization (EM) algorithm for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of feature extraction yields a better and more robust automatic speaker identification system. Also, investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initialization of the GMM, for estimating the underlying parameters in the EM step, improved the convergence rate and the system's performance. The system also uses a relative index as a confidence measure in case of contradiction between the GMM and VQ identification results. Simulation results on the voxforge.org speech database, carried out using MATLAB, highlight the efficacy of the proposed method compared to earlier work.
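
A minimal Python sketch of the GMM part of this pipeline (MFCC extraction plus one EM-trained GMM per speaker, with maximum-likelihood scoring) is shown below; the training file names are hypothetical, and the VQ/LBG initialisation, VAD pre-processing and confidence index of the full system are omitted.

    import numpy as np
    import librosa
    from sklearn.mixture import GaussianMixture

    def mfcc_features(path, n_mfcc=13):
        """Load a speech file and return its MFCC frames (frames x coefficients)."""
        signal, sr = librosa.load(path, sr=16000)
        mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
        return mfcc.T

    # Hypothetical training files, one list of utterances per enrolled speaker.
    train_files = {"speaker_a": ["a1.wav", "a2.wav"],
                   "speaker_b": ["b1.wav", "b2.wav"]}

    # One GMM per speaker, parameters estimated with the EM algorithm.
    models = {}
    for name, files in train_files.items():
        feats = np.vstack([mfcc_features(f) for f in files])
        models[name] = GaussianMixture(n_components=16, covariance_type='diag',
                                       max_iter=200).fit(feats)

    def identify(path):
        """Closed-set identification: pick the speaker model with highest likelihood."""
        feats = mfcc_features(path)
        scores = {name: gmm.score(feats) for name, gmm in models.items()}
        return max(scores, key=scores.get)

    print(identify("unknown.wav"))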

Keywords: Feature Extraction, Speaker Modeling, Feature Matching, Mel Frequency Cepstrum Coefficient (MFCC), Gaussian mixture model (GMM), Vector Quantization (VQ), Linde-Buzo-Gray (LBG), Expectation Maximization (EM), pre-processing, Voice Activity Detection (VAD), Short Time Energy (STE), Background Noise Statistical Modeling, Closed-Set Text-Independent Speaker Identification System (CISI).

PDF Downloads: 1832
97 High Efficiency Solar Thermal Collectors Utilization in Process Heat: A Case Study of Textile Finishing Industry

Authors: Gökçen A. Çiftçioğlu, M. A. Neşet Kadırgan, Figen Kadırgan

Abstract:

Solar energy, since it is available every day, is seen as one of the most valuable renewable energy resources. Thus, the energy of sun should be efficiently used in various applications. The most known applications that use solar energy are heating water and spaces. High efficiency solar collectors need appropriate selective surfaces to absorb the heat. Selective surfaces (Selektif-Sera) used in this study are applied to flat collectors, which are produced by a roll to roll cost effective coating of nano nickel layers, developed in Selektif Teknoloji Co. Inc. Efficiency of flat collectors using Selektif-Sera absorbers are calculated in collaboration with Institute for Solar Technik Rapperswil, Switzerland. The main cause of high energy consumption in industry is mostly caused from low temperature level processes. There is considerable effort in research to minimize the energy use by renewable energy sources such as solar energy. A feasibility study will be presented to obtain the potential of solar thermal energy utilization in the textile industry using these solar collectors. For the feasibility calculations presented in this study, textile dyeing and finishing factory located at Kahramanmaras is selected since the geographic location was an important factor. Kahramanmaras is located in the south east part of Turkey thus has a great potential to have solar illumination much longer. It was observed that, the collector area is limited by the available area in the factory, thus a hybrid heating generating system (lignite/solar thermal) was preferred in the calculations of this study to be more realistic. During the feasibility work, the calculations took into account the preheating process, where well waters heated from 15 °C to 30-40 °C by using the hot waters in heat exchangers. Then the preheated water was heated again by high efficiency solar collectors. Economic comparison between the lignite use and solar thermal collector use was provided to determine the optimal system that can be used efficiently. The optimum design of solar thermal systems was studied depending on the optimum collector area. It was found that the solar thermal system is more economic and efficient than the merely lignite use. Return on investment time is calculated as 5.15 years.
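
The payback figure follows from a simple ratio of collector investment to the annual avoided fuel cost; the Python sketch below illustrates the calculation with hypothetical numbers, not the study's actual cost data.

    # Simple payback estimate for the hybrid lignite/solar-thermal system.
    # All figures below are hypothetical placeholders, not the study's data.
    collector_area_m2 = 1200.0
    collector_cost_per_m2 = 110.0          # installed cost per m^2
    annual_solar_yield_kwh_per_m2 = 700.0  # useful heat delivered per m^2 and year
    lignite_heat_cost_per_kwh = 0.03       # avoided fuel cost per kWh

    investment = collector_area_m2 * collector_cost_per_m2
    annual_saving = collector_area_m2 * annual_solar_yield_kwh_per_m2 * lignite_heat_cost_per_kwh
    payback_years = investment / annual_saving
    print(f"investment = {investment:.0f}, payback = {payback_years:.2f} years")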

Keywords: Solar energy, heating, solar heating.

PDF Downloads: 1193
96 Comparative Study of Sedimentation in Hydraulic Structures using SHARC and SSIIM Software - A Case of the Dez and Hamidieh Intake Structures in Iran

Authors: A.H. Sajedipoor, N. Hedayat , M. Mashal, R. Nazarzadeh

Abstract:

Sedimentation formation is a complex hydraulic phenomenon that has emerged as a major operational and maintenance consideration in modern hydraulic engineering in general and river engineering in particular. Sediment accumulation along the river course and its eventual storage in the form of islands affect water intake in the canal systems fed by the storage reservoirs. Without proper management, sediment transport can lead to major operational challenges in the water distribution systems of arid regions like the Dez and Hamidieh command areas. The paper aims to investigate sedimentation in the Western Canal of the Dez Diversion Weir using the SHARC model and to compare the results with the two intake structures of the Hamidieh dam in Iran using the SSIIM model. The objective was to identify the factors which influence the process, check the reliability of the outcome and provide ways to mitigate the implications for the operation and maintenance of the structures. Results estimated the sand and silt bed load concentrations to be 193 ppm and 827 ppm, respectively. This followed a more or less similar pattern in Hamidieh, where the sediment formation impeded water intake in the canal system. Given the available data on average annual bed loads and average suspended sediment loads of 165 ppm and 837 ppm in the Dez, there was a statistically significant difference (16%) between the sand grains, whereas no significant difference (1.2%) was found in the silt grain sizes. One explanation for this finding is that along the 6 km river course there were considerable meandering effects, which explains the recent shift in the hydraulic behavior along the stream course under investigation. The sand concentration downstream, relative to the present state of the canal, showed a steep descending curve. Sediment trapping, on the other hand, indicated a steep ascending curve. These occurred because the diversion weir was not considered in the simulation model. The comparative study showed very close similarities in the results, which indicates that both software packages can be used as accurate and reliable analytical tools for the simulation of sedimentation in hydraulic engineering.

Keywords: SHARC, SSIIM, sedimentation, Dez diversion weir, Hamidieh dam, Intake structures

PDF Downloads: 1716
95 Production of Pig Iron by Smelting of Blended Pre-Reduced Titaniferous Magnetite Ore and Hematite Ore Using Lean Grade Coal

Authors: Bitan Kumar Sarkar, Akashdeep Agarwal, Rajib Dey, Gopes Chandra Das

Abstract:

The rapid depletion of high-grade iron ore (Fe2O3) has gained attention on the use of other sources of iron ore. Titaniferous magnetite ore (TMO) is a special type of magnetite ore having high titania content (23.23% TiO2 present in this case). Due to high TiO2 content and high density, TMO cannot be treated by the conventional smelting reduction. In this present work, the TMO has been collected from high-grade metamorphic terrain of the Precambrian Chotanagpur gneissic complex situated in the eastern part of India (Shaltora area, Bankura district, West Bengal) and the hematite ore has been collected from Visakhapatnam Steel Plant (VSP), Visakhapatnam. At VSP, iron ore is received from Bailadila mines, Chattisgarh of M/s. National Mineral Development Corporation. The preliminary characterization of TMO and hematite ore (HMO) has been investigated by WDXRF, XRD and FESEM analyses. Similarly, good quality of coal (mainly coking coal) is also getting depleted fast. The basic purpose of this work is to find how lean grade coal can be utilised along with TMO for smelting to produce pig iron. Lean grade coal has been characterised by using TG/DTA, proximate and ultimate analyses. The boiler grade coal has been found to contain 28.08% of fixed carbon and 28.31% of volatile matter. TMO fines (below 75 μm) and HMO fines (below 75 μm) have been separately agglomerated with lean grade coal fines (below 75 μm) in the form of briquettes using binders like bentonite and molasses. These green briquettes are dried first in oven at 423 K for 30 min and then reduced isothermally in tube furnace over the temperature range of 1323 K, 1373 K and 1423 K for 30 min & 60 min. After reduction, the reduced briquettes are characterized by XRD and FESEM analyses. The best reduced TMO and HMO samples are taken and blended in three different weight percentage ratios of 1:4, 1:8 and 1:12 of TMO:HMO. The chemical analysis of three blended samples is carried out and degree of metallisation of iron is found to contain 89.38%, 92.12% and 93.12%, respectively. These three blended samples are briquetted using binder like bentonite and lime. Thereafter these blended briquettes are separately smelted in raising hearth furnace at 1773 K for 30 min. The pig iron formed is characterized using XRD, microscopic analysis. It can be concluded that 90% yield of pig iron can be achieved when the blend ratio of TMO:HMO is 1:4.5. This means for 90% yield, the maximum TMO that could be used in the blend is about 18%.

Keywords: Briquetting reduction, lean grade coal, smelting reduction, TMO.

PDF Downloads: 1883
94 Solid State Drive End to End Reliability Prediction, Characterization and Control

Authors: Mohd Azman Abdul Latif, Erwan Basiron

Abstract:

A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. Therefore, it is important to ensure the required quality of each individual component through qualification testing specified using standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team, from all the eminent stakeholders is embarking on reliability prediction from beginning of new product development, identify critical to reliability parameters, perform full-blown characterization to embed margin into product reliability and establish control to ensure the product reliability is sustainable in the mass production. The paper will discuss a comprehensive development framework, comprehending SSD end to end from design to assembly, in-line inspection, in-line testing and will be able to predict and to validate the product reliability at the early stage of new product development. During the design stage, the SSD will go through intense reliability margin investigation with focus on assembly process attributes, process equipment control, in-process metrology and also comprehending forward looking product roadmap. Once these pillars are completed, the next step is to perform process characterization and build up reliability prediction modeling. Next, for the design validation process, the reliability prediction specifically solder joint simulator will be established. The SSD will be stratified into Non-Operating and Operating tests with focus on solder joint reliability and connectivity/component latent failures by prevention through design intervention and containment through Temperature Cycle Test (TCT). Some of the SSDs will be subjected to the physical solder joint analysis called Dye and Pry (DP) and Cross Section analysis. The result will be feedbacked to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and is proven working, it will be subjected to implementation of the monitor phase whereby Design for Assembly (DFA) rules will be updated. At this stage, the design change, process and equipment parameters are in control. Predictable product reliability at early product development will enable on-time sample qualification delivery to customer and will optimize product development validation, effective development resource and will avoid forced late investment to bandage the end-of-life product failures. Understanding the critical to reliability parameters earlier will allow focus on increasing the product margin that will increase customer confidence to product reliability.

Keywords: e2e reliability prediction, SSD, TCT, Solder Joint Reliability, NUDD, connectivity issues, qualifications, characterization and control.

PDF Downloads: 357
93 Computer-Assisted Management of Building Climate and Microgrid with Model Predictive Control

Authors: Vinko Lešić, Mario Vašak, Anita Martinčević, Marko Gulin, Antonio Starčić, Hrvoje Novak

Abstract:

With 40% of total world energy consumption, building systems are developing into technically complex large energy consumers suitable for application of sophisticated power management approaches to largely increase the energy efficiency and even make them active energy market participants. Centralized control system of building heating and cooling managed by economically-optimal model predictive control shows promising results with estimated 30% of energy efficiency increase. The research is focused on implementation of such a method on a case study performed on two floors of our faculty building with corresponding sensors wireless data acquisition, remote heating/cooling units and central climate controller. Building walls are mathematically modeled with corresponding material types, surface shapes and sizes. Models are then exploited to predict thermal characteristics and changes in different building zones. Exterior influences such as environmental conditions and weather forecast, people behavior and comfort demands are all taken into account for deriving price-optimal climate control. Finally, a DC microgrid with photovoltaics, wind turbine, supercapacitor, batteries and fuel cell stacks is added to make the building a unit capable of active participation in a price-varying energy market. Computational burden of applying model predictive control on such a complex system is relaxed through a hierarchical decomposition of the microgrid and climate control, where the former is designed as higher hierarchical level with pre-calculated price-optimal power flows control, and latter is designed as lower level control responsible to ensure thermal comfort and exploit the optimal supply conditions enabled by microgrid energy flows management. Such an approach is expected to enable the inclusion of more complex building subsystems into consideration in order to further increase the energy efficiency.
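
The lower-level climate controller can be illustrated by a small economic MPC over a one-zone, first-order thermal model; the Python/cvxpy sketch below uses hypothetical model coefficients, prices and comfort limits, not the identified building model or the microgrid layer from this work.

    import numpy as np
    import cvxpy as cp

    # One-zone, first-order thermal model: T[k+1] = a*T[k] + b*u[k] + c*T_out[k].
    # Coefficients, prices and the comfort band below are hypothetical placeholders.
    N = 24                                   # prediction horizon (hours)
    a, b, c = 0.9, 0.5, 0.1
    T_out = 10.0 + 5.0 * np.sin(np.linspace(0, 2 * np.pi, N))   # outdoor forecast
    price = 0.10 + 0.05 * (np.arange(N) % 24 > 7)               # energy price profile

    T = cp.Variable(N + 1)                   # zone temperature trajectory
    u = cp.Variable(N, nonneg=True)          # heating power (kW)

    constraints = [T[0] == 21.0]
    for k in range(N):
        constraints += [T[k + 1] == a * T[k] + b * u[k] + c * T_out[k],
                        T[k + 1] >= 21.0, T[k + 1] <= 24.0,       # comfort band
                        u[k] <= 10.0]                             # heater capacity

    cost = cp.sum(cp.multiply(price, u))     # price-optimal (economic) objective
    problem = cp.Problem(cp.Minimize(cost), constraints)
    problem.solve()
    print("optimal heating plan (kW):", np.round(u.value, 2))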

Keywords: Energy-efficient buildings, Hierarchical model predictive control, Microgrid power flow optimization, Price-optimal building climate control.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1476
92 Achieving Design-Stage Elemental Cost Planning Accuracy: Case Study of New Zealand

Authors: Johnson Adafin, James O. B. Rotimi, Suzanne Wilkinson, Abimbola O. Windapo

Abstract:

An aspect of client expenditure management that requires attention is the level of accuracy achievable in design-stage elemental cost planning. This has been a major concern for construction clients and practitioners in New Zealand (NZ). Pre-tender estimating inaccuracies are significantly influenced by the level of risk information available to estimators. Proper cost planning activities should produce reliable forecasts of a project’s likely construction costs (initial and final), and subsequent cost control activities should prevent the unpleasant consequences of cost overruns, disputes and project abandonment. If risks were properly identified and priced at the design stage, the observed variance between design-stage elemental cost plans (ECPs) and final tender sums (FTS) (initial contract sums) could be reduced. This study investigates the variations between design-stage ECPs and FTS of construction projects, with a view to identifying the risk factors responsible for the observed variance. Data were sourced through interviews, and risk factors were identified using thematic analysis. Access was obtained to project files from the records of the study participants (consultant quantity surveyors), and document analysis was employed to complement the interview responses. The findings revealed discrepancies between ECPs and FTS in the range of -14% to +16%, and the study posits that the identified risk factors were responsible for this variability. The values obtained from the analysis would enable greater accuracy in the forecast of FTS by quantity surveyors. Further, whilst inherent risks in construction project developments are observed globally, these findings have important ramifications for construction projects by expanding existing knowledge on what is needed for reasonable budgetary performance and the successful delivery of construction projects. The findings contribute significantly by providing quantitative confirmation of the theoretical conclusions generated in the literature from around the world, and therefore add to and consolidate existing knowledge.
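
To make the variance measure explicit, the sketch below computes the percentage deviation between a design-stage elemental cost plan and the final tender sum for a few projects; the figures are invented for illustration and are not the study's data.

```python
# Minimal sketch: percentage variance between design-stage elemental cost plans
# (ECP) and final tender sums (FTS). All project figures are hypothetical.
projects = {
    "Project A": {"ecp": 4_200_000, "fts": 4_550_000},
    "Project B": {"ecp": 7_800_000, "fts": 7_100_000},
    "Project C": {"ecp": 2_950_000, "fts": 3_010_000},
}

variances = {}
for name, cost in projects.items():
    # Positive values mean the tender sum exceeded the design-stage estimate
    variances[name] = 100.0 * (cost["fts"] - cost["ecp"]) / cost["ecp"]

for name, v in variances.items():
    print(f"{name}: {v:+.1f}%")

print(f"Range: {min(variances.values()):+.1f}% to {max(variances.values()):+.1f}%")
```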

Keywords: Accuracy, design-stage, elemental cost plan, final tender sum, New Zealand.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1748
91 Optimization of the Characteristic Straight Line Method by a “Best Estimate” of Observed, Normal Orthometric Elevation Differences

Authors: Mahmoud M. S. Albattah

Abstract:

In this paper, to optimize the “Characteristic Straight Line Method”, which is used in soil displacement analysis, a “best estimate” of the geodetic leveling observations has been achieved by taking into account the concept of height systems. This concept, and consequently the concept of “height” itself, is discussed in detail. In landslide dynamic analysis, the soil is considered as a mosaic of rigid blocks. The soil displacement has been monitored and analyzed using the “Characteristic Straight Line Method”, whose characteristic components have been defined and constructed from a “best estimate” of the topometric observations. In the measurement of elevation differences, the most modern leveling equipment available was used, and observational procedures were designed to provide the most effective method of acquiring data. In addition, systematic errors which cannot be sufficiently controlled by instrumentation or observational techniques are minimized by applying appropriate corrections to the observed data: the level collimation correction minimizes the error caused by non-horizontality of the leveling instrument's line of sight for unequal sight lengths; the refraction correction is modeled to minimize the refraction error caused by temperature (density) variation of the air strata; the rod temperature correction accounts for variation in the length of the leveling rod's Invar/LO-VAR® strip resulting from temperature changes; the rod scale correction ensures a uniform scale which conforms to the international length standard; and the concept of height systems is introduced, in which all types of height (orthometric, dynamic, normal, gravity correction, and equipotential surface) are investigated. The “Characteristic Straight Line Method” is slightly more convenient than the “Characteristic Circle Method”: it permits the evaluation of a displacement of very small magnitude, even when the displacement is an infinitesimal quantity. The inclination of the landslide is given by the inverse of the distance from reference point O to the “Characteristic Straight Line”, and its direction by the bearing of the normal directed from point O to the Characteristic Straight Line (Fig. 6). A “best estimate” of the topometric observations was used to measure the elevation of carefully selected points before and after the deformation. Gross errors have been eliminated by statistical analyses and by comparing heights within local neighborhoods. The results of a test in an area where very interesting land surface deformation occurs are reported. Monitoring with different options and a qualitative comparison of results based on a sufficient number of check points are presented.
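
The abstract states that the landslide inclination is the inverse of the distance from a reference point O to the characteristic straight line, and its direction is the bearing of the normal from O to that line. The sketch below illustrates only that geometric step, assuming the line has already been estimated (here by an ordinary least-squares fit to hypothetical points); it is not the authors' full adjustment procedure.

```python
# Minimal sketch: distance and bearing of the normal from a reference point O
# to a fitted characteristic straight line. Input points are hypothetical.
import math
import numpy as np

# Hypothetical planimetric coordinates (easting, northing) defining the line
x = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
y = np.array([12.1, 14.0, 15.8, 18.1, 19.9])

# Least-squares fit y = m*x + q  ->  implicit form  m*x - y + q = 0
m, q = np.polyfit(x, y, 1)
a, b, c = m, -1.0, q

O = (0.0, 0.0)  # reference point O

# Perpendicular (normal) distance from O to the line
d = abs(a * O[0] + b * O[1] + c) / math.hypot(a, b)

# Foot of the normal on the line, used to derive the bearing from O
t = -(a * O[0] + b * O[1] + c) / (a * a + b * b)
foot = (O[0] + a * t, O[1] + b * t)

# Bearing measured clockwise from north (grid convention)
bearing = math.degrees(math.atan2(foot[0] - O[0], foot[1] - O[1])) % 360.0

inclination = 1.0 / d   # as described in the abstract: inverse of the distance
print(f"distance={d:.3f}, inclination={inclination:.5f}, bearing={bearing:.1f} deg")
```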

Keywords: Characteristic straight line method, dynamic height, landslides, orthometric height, systematic errors.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1528
90 A Comparative Study of Cardio Respiratory Efficiency between Aquatic and Track and Field Performers

Authors: Sumanta Daw, Gopal Chandra Saha

Abstract:

The present study was conducted to explore the basic pulmonary functions, which may generally vary according to the bio-physical characteristics of sports performers, including age, height, body weight and environment. Regular and specific training also changes the characteristics of an athlete’s prowess and produces a positive effect on physiological functioning, mostly on cardio-pulmonary efficiency, thereby improving the body mechanism. The objective of the present study was to compare the differences in cardio-respiratory functions between aquatic and track and field performers. As cardio-respiratory functions are influenced by pulse rate and blood pressure (systolic and diastolic), both of these factors were also taken into consideration. The components selected under cardio-respiratory functions for the present study were: i) the FEV1/FVC ratio (forced expiratory volume in one second divided by forced vital capacity, i.e. the percentage of the lung capacity that can be exhaled in one second); ii) FEV1 (the amount of air which can be forced out of the lungs in one second); and iii) FVC (forced vital capacity, the greatest total amount of air that can be forcefully breathed out after breathing in as deeply as possible). All three selected components of cardio-respiratory efficiency were measured by spirometry. Pulse rate was determined manually at the radial artery, which is located on the thumb side of the wrist. Blood pressure was assessed with a sphygmomanometer. All data were taken in the resting condition. 36 subjects were selected for the present study, of whom 18 were water polo players and the rest were sprinters. The age of the subjects ranged between 18 and 23 years. The obtained data, in the form of digital scores, were treated statistically to obtain results and draw conclusions. The mean and standard deviation (SD) were used as descriptive statistics, and the significance of the difference between the two subject groups was assessed with the statistical t-test. It was found that all three components, i.e. the FEV1/FVC ratio (p-value 0.0148 < 0.05), FEV1 (p-value 0.0010 < 0.01) and FVC (p-value 0.0067 < 0.01), differ significantly, with water polo players proving to be better in terms of cardio-respiratory functions than sprinters. Thus the study clearly suggests that the exercise training, as well as the medium of the practice arena associated with water polo, has played an important role in producing better cardio-respiratory efficiency than that of track and field athletes. The outcome of the present study reveals that land-based activities may not have as great an impact on lung function as activities in water.
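
The group comparison described above is a standard independent two-sample t-test. The sketch below runs such a comparison on hypothetical FEV1 values for the two groups; the numbers are invented for illustration and are not the study's measurements.

```python
# Minimal sketch: independent two-sample t-test comparing a spirometry measure
# between two groups. Values below are hypothetical, not the study's data.
import numpy as np
from scipy import stats

fev1_water_polo = np.array([4.1, 4.3, 3.9, 4.5, 4.2, 4.0, 4.4, 4.1, 4.6, 4.3])
fev1_sprinters  = np.array([3.8, 3.6, 4.0, 3.7, 3.9, 3.5, 3.8, 3.6, 3.9, 3.7])

t_stat, p_value = stats.ttest_ind(fev1_water_polo, fev1_sprinters)

print(f"mean water polo = {fev1_water_polo.mean():.2f} L")
print(f"mean sprinters  = {fev1_sprinters.mean():.2f} L")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```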

Keywords: Cardio-respiratory efficiency, spirometry, water polo players, sprinters.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 556
89 The Infiltration Interface Structure of Suburban Landscape Forms in Bimen Township, Anji, Zhejiang Province, China

Authors: Ke Wang, Zhu Wang

Abstract:

Coordinating and promoting urban and rural development has been part of a new round of institutional change in Zhejiang province since 2004. The plan has been fully implemented, and the isolation between urban and rural areas has gradually diminished. Little by little, an infiltration interface that is dynamic, flexible and interactive has formed, and this morphological structure has started to appear in the landscape form of the surrounding villages. In order to study the specific function and formation of this structure in the context of the industrial revolution, Bimen village, located on the interface between Anji Township in Huzhou and Yuhang District in Hangzhou, is taken as the case. Anji Township lies in the cross area between the Yangtze River delta economic circle and the innovation center in Hangzhou. Awarded the title of ‘Chinese beautiful village’, Bimen has witnessed the growing process of infiltration in ecology, economy, technology and culture on the interface. Within this opportunity, Bimen village has undergone internal reform to adapt to the energy exchange with urban areas. In this research, the reform consists of adjusting the industrial structure, upgrading the local specialty bamboo crafts, releasing space for activities, and establishing infrastructure on the interface. The defining characteristic of the interface is its elasticity, achieved by introducing an Internet platform that uses the ‘O2O’ agriculture method to connect cities and farmland. One such platform in Bimen is named ‘Xiao Mei’; in Chinese, ‘Xiao’ means small and ‘Mei’ means beautiful, which indicates the method of refining the landscape form. It turns out that the new agriculture mode strengthens the interface by orienting the third-party platform on the existing dynamic basis and brings new vitality to economic development in Bimen village. The research summarizes the opportunities and challenges generated by the evolution of the infiltration interface, proposes strategies for adapting organically to the urbanization process, and finally demonstrates the effect of increasing flexibility in the landscape forms of the suburbs of Bimen village.

Keywords: Bimen Village, infiltration interface, flexibility, suburban landscape form.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 922
88 Shaping of World-Class Delhi: Politics of Marginalization and Inclusion

Authors: Aparajita Santra

Abstract:

In the context of the government's vision of turning Delhi into a green, privatized and slum-free city with a world-class image on a par with the global cities of the world, this paper investigates the processes and politics behind the definition of spaces in the city and the attribution of an aesthetic image to it. The paper explores two cases that were forged primarily through the forces of one particular type of power relation: the modernist movement adopted by the Nehruvian government post-independence, and special periods such as the Emergency and the Commonwealth Games. The study of these cases helps to reveal the ambivalence embedded in the different rationales adopted by the government and other powerful agencies in order to build world-classness, and to discern how city spaces were reconfigured in the name of 'good governance'. In this process, it also became important to analyze the double nature of law, both as a protector of people’s rights and as a threat to people. What was interesting to note through the study was that, in the process of nation building and creating an image for the city, the government’s policies and programs were mostly aimed at the richer sections of society, while the poorer sections and people from lower income groups kept getting marginalized, subdued, and pushed further away, even geographically. The reconfiguration of city space and the attribution of an aesthetic character to it led to an alteration not only in the way citizens perceived and engaged with these spaces, but also in the way they envisioned their place in the city. Ironically, it was found that every attempt to build a facility for the city’s elite in turn led to an inevitable removal of marginalized sections of society as a necessary step towards a clean, green and world-class city. The paper questions the claim made by the government to be creating a just, equitable city and granting rights to all. An argument is put forth that, in the politics of the redistribution of space, the city that has been designed is meant only for the aspirational middle class and the elite, who are ideally primed to live in world-class cities. Thus, the aim is to study city spaces, urban form and the associated politics and power plays involved, and to understand whether segmented cities are being built in the name of creating sensible, inclusive cities.

Keywords: Aesthetics, ambivalence, governmentality, power, world-class.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 887
87 Robust Batch Process Scheduling in Pharmaceutical Industries: A Case Study

Authors: Tommaso Adamo, Gianpaolo Ghiani, Antonio D. Grieco, Emanuela Guerriero

Abstract:

Batch production plants give rise to a wide range of scheduling problems. In pharmaceutical industries, a batch process is usually described by a recipe, consisting of an ordering of tasks to produce the desired product. In this research work we focused on pharmaceutical production processes requiring the culture of a microorganism population (i.e. bacteria, yeasts or antibiotics). Several sources of uncertainty may influence the yield of the culture processes, including (i) low performance and quality of the cultured microorganism population or (ii) microbial contamination. For these reasons, robustness is a valuable property in the considered application context. In particular, a robust schedule will not collapse immediately when a cell of microorganisms has to be thrown away due to microbial contamination. Instead, a robust schedule should change only locally and in small proportions, and the overall performance measure (e.g. makespan, lateness) should change little if at all. In this research work we formulated a constraint programming optimization (COP) model for the robust planning of antibiotics production. We developed a discrete-time model with a multi-criteria objective, ordering the different criteria and performing a lexicographic optimization. A feasible solution of the proposed COP model is a schedule of a given set of tasks onto the available resources. The schedule has to satisfy task precedence constraints, resource capacity constraints and time constraints; in particular, the time constraints model task due dates and resource availability time windows. To improve schedule robustness, we modeled the concept of (a, b) super-solutions, where a and b are input parameters of the COP model. An (a, b) super-solution is one in which, if a variables (i.e. the completion times of culture tasks) lose their values (i.e. the cultures are contaminated), the solution can be repaired by assigning new values to these variables (i.e. the completion times of backup culture tasks) and by changing at most b other variables (i.e. delaying the completion of at most b other tasks). The efficiency and applicability of the proposed model are demonstrated by solving instances taken from a real-life pharmaceutical company. Computational results show that the determined super-solutions are near-optimal.
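
The paper's lexicographic COP model and (a, b) super-solution repair are not reproduced here, but the basic building blocks of such a constraint programming schedule (interval tasks, recipe precedence, no-overlap on a shared resource, due dates, makespan minimization) can be illustrated with a generic CP solver. The sketch below, using Google OR-Tools CP-SAT with invented task data, is an assumed illustration of those building blocks only, not the authors' model.

```python
# Minimal sketch: constraint programming schedule for a few recipe tasks on one
# shared resource, with precedence, due-date and no-overlap constraints.
# Task data are invented; this is not the paper's lexicographic COP model.
from ortools.sat.python import cp_model

tasks = {              # name: (duration, due_date)
    "prepare_medium": (3, 10),
    "culture":        (8, 20),
    "harvest":        (2, 25),
    "purify":         (5, 30),
}
precedences = [("prepare_medium", "culture"), ("culture", "harvest"),
               ("harvest", "purify")]
horizon = sum(d for d, _ in tasks.values()) + 10

model = cp_model.CpModel()
starts, ends, intervals = {}, {}, {}
for name, (dur, due) in tasks.items():
    starts[name] = model.NewIntVar(0, horizon, f"start_{name}")
    ends[name] = model.NewIntVar(0, horizon, f"end_{name}")
    intervals[name] = model.NewIntervalVar(starts[name], dur, ends[name], name)
    model.Add(ends[name] <= due)                 # due-date (time) constraint

for before, after in precedences:                # recipe ordering
    model.Add(starts[after] >= ends[before])

model.AddNoOverlap(intervals.values())           # single shared unit/resource

makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, ends.values())
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for name in tasks:
        print(name, solver.Value(starts[name]), "->", solver.Value(ends[name]))
    print("makespan:", solver.Value(makespan))
```

A super-solution requirement would be added on top of a model of this shape, for example by reserving backup culture tasks and limiting how many other completion times may shift during repair.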

Keywords: Constraint programming, super-solutions, robust scheduling, batch process, pharmaceutical industries.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1935
86 Funding Innovative Activities in Firms: The Ownership Structure and Governance Linkage - Evidence from Mongolia

Authors: Ernest Nweke, Enkhtuya Bavuudorj

Abstract:

The harsh realities of the scandalous failure of several notable corporations in the past two decades have inextricably resulted in a surge in corporate governance studies. Nevertheless, little or no attention has been paid to corporate governance studies of Mongolian firms, and much less to understanding the correlation among ownership structure, corporate governance mechanisms and the trend of innovative activities. Innovation is the bedrock of enterprise success; however, the funding and support for innovative activities in many firms are to a great extent determined by the incentives provided by the firm’s internal and external governance mechanisms. Mongolia is an East Asian country currently undergoing a fast-paced transition from a socialist to a democratic system, and it is a widely held view that private ownership, as against public ownership, fosters innovation. Hence, following the privatization policy of the Mongolian Government, which has led to the transfer of ownership of hitherto state-controlled and state-directed firms to private individuals and organizations, expectations are high that sufficient motivation will be provided for firm managers to engage in innovative activities. This research focuses on the relationship between ownership structure and corporate governance on the one hand, and the level of innovation on the other. The paper is empirical in nature and derives data from both reliable secondary and primary sources. Secondary data concerned the ownership structure of Mongolian listed firms and innovation trends in Mongolia generally; these were analyzed using tables, charts, bars and percentages. Personal interviews and surveys were held to collect primary data on corporate governance practices in Mongolian firms, using a structured questionnaire. Out of a population of three hundred and twenty (320) companies listed on the Mongolian Stock Exchange (MSE), a sample of thirty (30) randomly selected companies was utilized for the study. Five (5) management-level employees were surveyed in each selected firm, giving a total of one hundred and fifty (150) respondents. Data collected were analyzed and the research hypotheses tested using the Chi-Square test statistic. Research results showed that corporate governance mechanisms were better, and had improved significantly over time, in privately held as opposed to publicly owned firms. Consequently, the levels of innovation in privately held firms were considerably higher. It was concluded that a significant and positive relationship exists between private ownership and good corporate governance on the one hand, and the level of funding provided for innovative activities in Mongolian firms on the other.
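
The hypothesis test named above is a standard chi-square test of independence on a contingency table. The sketch below runs such a test on an invented ownership-by-innovation-level table; the counts are hypothetical and are not the survey's results.

```python
# Minimal sketch: chi-square test of independence between ownership type and
# reported level of innovative activity. Counts are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: ownership type; columns: low / medium / high innovation activity
observed = np.array([
    [30, 45, 50],   # privately held firms
    [40, 20, 15],   # publicly owned firms
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```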

Keywords: Corporate governance, innovation, ownership structure, stock exchange.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1720
85 Determinants of Never Users of Contraception – Results from Pakistan Demographic and Health Survey 2012-13

Authors: Arsalan Jabbar, Wajiha Javed, Nelofer Mehboob, Zahid Memon

Abstract:

Introduction: There are multiple social, individual and cultural factors that influence an individual’s decision to adopt family planning methods, especially among non-users in patriarchal societies like Pakistan. Non-users, if targeted efficiently, can contribute significantly to the country’s contraceptive prevalence rate (CPR). One research study showed that non-users, if convinced to adopt the lactational amenorrhea method, can shift to long-term methods in the future. Research also shows that if non-users are targeted efficiently, a 59% reduction in unintended pregnancies is anticipated in sub-Saharan Africa and South-Central and South-East Asia. Methods: We performed a secondary data analysis on the Pakistan Demographic and Health Survey (2012-13) dataset. Use of contraception (never-use/ever-use) was the outcome variable. At the univariate level, the Chi-square/Fisher exact test was used to assess the relationship of baseline covariates with contraception use. The variables to be incorporated in the model were then checked for multicollinearity, confounding and interaction, and binary logistic regression (with an urban-rural stratification) was carried out to find the relationship between contraception use and baseline demographic and social variables. Results: The multivariate analyses showed that younger women (≤ 29 years) were more likely to be never users than those over 30 years, a trend seen in urban areas (AOR 1.92, CI 1.453-2.536) as well as rural areas (AOR 1.809, CI 1.421-2.303). Looking at regional variation, women from urban Sindh (AOR 1.548, CI 1.142-2.099) and urban Balochistan (AOR 2.403, CI 1.504-3.839) were more often never users than women in other urban regions. Women in the rich wealth quintile were more often never users in both urban and rural localities (urban AOR 1.106, CI 0.753-1.624; rural AOR 1.162, CI 0.887-1.524), although these results were not statistically significant. Women idealizing more children (>4) were more often never users than those idealizing fewer children, in both urban (AOR 1.854, CI 1.275-2.697) and rural areas (AOR 2.101, CI 1.514-2.916). Women who had never lost a pregnancy were more inclined to be non-users in rural areas (AOR 1.394, CI 1.127-1.723). Women familiar with only traditional methods or no method were more often never users in rural areas (AOR 1.717, CI 1.127-1.723), but in urban areas this was not significant. Women unaware of a Lady Health Worker’s presence in their area were more often never users, especially in rural areas (AOR 1.276, CI 1.014-1.607). Women who did not visit any care provider were more often never users (urban AOR 11.738, CI 9.112-15.121; rural AOR 7.832, CI 6.243-9.826). Discussion/Conclusion: This study concluded that the government, policy makers and private-sector family planning programs should focus on the untapped pool of never users (younger women from underserved provinces, in higher wealth quintiles, who desire more children). Catchment areas with fewer Lady Health Workers and fewer providers also need to be covered, as ignorance of modern methods and never having been visited by an LHW are important determinants of never use. This is all in line with previous literature from similar developing countries.
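
The adjusted odds ratios reported above come from a binary logistic regression. The sketch below shows the general shape of such an analysis on a hypothetical DHS-style dataframe; the variable names and data are assumptions made for illustration, not the survey's actual coding.

```python
# Minimal sketch: binary logistic regression for never-use of contraception,
# reported as adjusted odds ratios. Variable names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "never_user": rng.integers(0, 2, n),                       # 1 = never used
    "age_under_30": rng.integers(0, 2, n),
    "rich_quintile": rng.integers(0, 2, n),
    "ideal_children_gt4": rng.integers(0, 2, n),
    "urban": rng.integers(0, 2, n),
})

# Fit the model separately for urban and rural strata, as described above
for stratum, data in df.groupby("urban"):
    model = smf.logit(
        "never_user ~ age_under_30 + rich_quintile + ideal_children_gt4",
        data=data,
    ).fit(disp=False)
    aor = np.exp(model.params)            # adjusted odds ratios
    ci = np.exp(model.conf_int())         # 95% confidence intervals
    print("urban" if stratum == 1 else "rural")
    print(pd.concat([aor.rename("AOR"), ci], axis=1).round(3))
```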

Keywords: Contraception, Demographic and Health Survey, Family Planning, Never users.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2115
84 Achieving Implementable Nature-Based Solutions While Reshaping Architectural Education: A Case Study of URBiNAT and BUILD Solutions

Authors: C. Farinea, A. Conserva, F. Demeur

Abstract:

Nature has often been something humans have fought against. However, with the changing climate and urban challenges such as air pollution and food shortages, to name but a few, it has never been more crucial to work with nature to find solutions that can help us adapt to the current planetary situation and mitigate the challenges we will continue to face in the future. Nature-based solutions (NBS) have been gaining ground as one strategy that can help create more sustainable solutions for our planet while simultaneously providing several ecosystem services. As designers, there are many insights that can be extracted and gained from nature. However, nature is a complex and sometimes unpredictable system, and its implementation in cities requires multidisciplinary knowledge. To keep up with these solutions and equip future generations of architects and designers with the skills to implement NBS, educational systems also have to adapt to the times. Architecture is no longer solely about drawing buildings with beautiful forms, and it is no longer discipline-bound. With input from different disciplines, the implementation of NBS can be significantly more successful. Transdisciplinary strategies can encourage architects and designers to think beyond their discipline and ensure the success and realization of NBS. The paper demonstrates how transdisciplinary teaching methodologies, including participation in processes with experts aimed at gathering local knowledge, can be implemented with architecture master's students to achieve implementable NBS. Through two projects co-funded by the European Union, strategies such as participatory co-design and transdisciplinary start-ups were implemented in seminars that focused on the development of NBS with a transdisciplinary approach. Within the “Design with Living Systems” seminar, students took part in participatory co-design strategies with experts to design solutions that will be implemented in Porto as part of a healthy corridor and that respond to the needs of the users and the site. Within the “Design for Living Systems” seminar, on the other hand, the transdisciplinary start-up approach created start-ups with students of architecture, business and biology, focusing on identifying a problem and designing an NBS as a product. Both seminars proved successful in achieving implementable NBS through strategies of transdisciplinary education and gave the students the skills to work with nature in their future careers.

Keywords: Architectural higher education, digital fabrication, nature-based solutions, transdisciplinary approaches.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 89