Search results for: Large eddy simulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2195

185 The Future of Hospitals: A Systematic Review in the Field of Architectural Design with a Disruptive Research and Development Approach

Authors: María Araya Léon, Ainoa Abella, Aura Murillo, Ricardo Guasch, Laura Clèries

Abstract:

This article aims to examine scientific theory framed within the term 'hospitals of the future' from a multidisciplinary and cross-sectional perspective, to understand the connection that the various cross-sectional areas we studied have with architectural spaces, and to determine the future outlook of the works examined and how they can be classified into the categories of need/solution, evolution/revolution, collective/individual, and preventive/corrective. The changes currently taking place within the context of healthcare demonstrate how important these projects are and the need for companies to face future changes. A systematic review was carried out, focused on what the hospitals of the future will be like in relation to the elements that form part of their use, design, and architectural space experience, using the WOS database from 2016 to 2019. The large number of works about sensing & big data and the scarce amount related to the area of materials are worth highlighting. Furthermore, no growth concerning future issues is envisaged over time. Regarding classifications, the articles we reviewed address evolutionary and collective solutions more, while preventive and corrective solutions were found at a similar level. Although our research focused on the future of hospitals, there is little evidence representing this approach. We also detected that, given the relevance of research on how the built environment influences human health and well-being, these studies should be promoted within the context of healthcare. This article makes it possible to find evidence on the future perspective from within the domain of hospital architecture, in order to create bridges between the productive sector of architecture and scientific theory. This will make it possible to detect R&D opportunities in each analyzed cross-section.

Keywords: Hospitals, trends, architectural space, disruptive approach.

184 Evaluation of the Analytic for Hemodynamic Instability as a Prediction Tool for Early Identification of Patient Deterioration

Authors: Bryce Benson, Sooin Lee, Ashwin Belle

Abstract:

Unrecognized or delayed identification of patient deterioration is a key cause of in-hospital adverse events. Clinicians rely on vital signs monitoring to recognize patient deterioration. However, due to ever-increasing nursing workloads and the manual effort required, vital signs tend to be measured and recorded intermittently and inconsistently, causing large gaps during patient monitoring. Additionally, during deterioration, the body's autonomic nervous system activates compensatory mechanisms, making the vital signs lagging indicators of the underlying hemodynamic decline. This study analyzes the predictive efficacy of the Analytic for Hemodynamic Instability (AHI) system, an automated tool designed to help clinicians identify deteriorating patients early. The lead time analysis in this retrospective observational study assesses how far in advance AHI predicted deterioration prior to an episode of hemodynamic instability (HI) becoming evident through vital signs. Results indicate that of the 362 episodes of HI in this study, 308 episodes (85%) were correctly predicted by the AHI system, with a median lead time of 57 minutes and an average of 4 hours (240.5 minutes). Of the 54 episodes not predicted, AHI detected 45 while the episode of HI was ongoing. Of the 9 undetected, 5 were missed due to either missing or noisy input ECG data during the episode of HI. In total, AHI was able to either predict or detect 98.9% of the episodes of HI in this study (after excluding the 5 episodes with unusable input data). These results suggest that AHI could provide an additional 'pair of eyes' on patients, continuously filling the monitoring gaps and consequently giving the patient care team the ability to be far more proactive in patient monitoring and adverse event management.
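
For transparency, the headline percentages follow from simple episode bookkeeping; a quick back-of-the-envelope check in plain Python (numbers taken from the abstract; the exclusion of the unusable episodes is our reading of how the 98.9% figure arises):

```python
total = 362          # episodes of hemodynamic instability
predicted = 308      # predicted in advance by AHI
detected_late = 45   # not predicted, but detected while ongoing
unusable = 5         # missed because ECG input was missing or noisy

caught = predicted + detected_late            # 353 episodes
print(round(predicted / total, 3))            # 0.851 -> the reported "85%"
print(round(caught / (total - unusable), 3))  # 0.989 -> the reported "98.9%"
```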

Keywords: Clinical deterioration prediction, decision support system, early warning system, hemodynamic status, physiologic monitoring.

183 The Influence of Forest Management Histories on Dead Wood and Habitat Trees in the Old Growth Forest in Northern Iran

Authors: Kiomars Sefidi

Abstract:

Dead wood and habitat trees, such as fallen logs, snags, stumps, and trees with cracks and loose bark, are regarded as important ecological components of forests, on which many forest-dwelling species depend within forest ecosystems. Meanwhile, their relation to management history in the Caspian forest has gone unreported. The aim of this research was to compare the amounts of dead wood and habitat trees in forests with historically different intensities of management, including forests with long-term management (PS) and short-term management (NS), which were compared with a semi-virgin forest (GS). A total of 405 individual dead and habitat trees were recorded and measured at 109 sampling locations. ANOVA revealed that the volume of dead trees in the form and decay classes differed significantly among sites, and that the dead wood volume in the semi-virgin forest was significantly higher than in the managed sites. Comparing the amounts of dead and habitat trees in the three sites showed that dead tree volume was related to management history and differed significantly among the three study sites. Meanwhile, the frequency of habitat trees was significantly different among sites. The highest number of habitat trees, including trees with cavities, cracks and loose bark, and fork-split trees, was recorded in the virgin site, and the lowest in the site with long-term management. It can be concluded that forest management reduces the amount of dead and habitat trees, especially of large size; thus, managing this forest according to ecologically sustainable principles requires a commitment to maintaining stand structures that allow the continued generation of dead trees in a full range of sizes.

Keywords: Cracked trees, forest biodiversity, fork-split trees, nature conservation, sustainable management.

182 Long-Term Durability of Roller-Compacted Concrete Pavement

Authors: Jun Hee Lee, Young Kyu Kim, Seong Jae Hong, Chamroeun Chhorn, Seung Woo Lee

Abstract:

Roller-compacted concrete pavement (RCCP), an environmentally friendly pavement whose load-carrying capacity benefits both from hydration and from the aggregate interlock produced by roller compaction, demonstrates superb structural performance for relatively small water and cement contents. Even though excellent structural performance can be secured, roller-compacted concrete (RCC) needs to be investigated under environmental loading to establish its long-term durability under critical conditions. In order to secure long-term durability, an appropriate internal air-void structure is required for this concrete. In this study, a method for improving the long-term durability of RCCP is suggested by analyzing the internal air-void structure and the corresponding durability of RCC. The method involves measurements of air content, air voids, and air-void spacing factors in RCC mixtures that vary in the type of air-entraining agent and its dosage. The tests were conducted according to the criteria in ASTM C457, ASTM C672, and KS F 2456. It was found that the freezing-thawing and scaling resistances of RCC without any chemical admixture were quite low. Interestingly, an improvement in freezing-thawing and scaling resistance was observed for RCC with an appropriate air-entraining (AE) agent content; the relative dynamic elastic modulus was found to be more than 80% for those mixtures. In the RCC mixtures with AE agent, a large amount of air was distributed within a range of 2% to 3%, and an air-void spacing factor between 200 and 300 μm (close to the 250 μm recommended by PCA) was secured. The long-term durability of RCC has a direct relationship with the air-void spacing factor, and thus it can only be secured by ensuring the air-void spacing factor through the inclusion of an AE agent in the mixture.
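
For reference, the air-void spacing factor discussed above is conventionally computed from linear-traverse measurements using the Powers formulas standardized in ASTM C457; the sketch below is our illustration of those formulas, not the authors' code, and the example inputs are made up:

```python
def spacing_factor(paste_content, air_content, voids_per_length):
    """Powers spacing factor (ASTM C457), in the same length unit
    as 1 / voids_per_length.

    paste_content, air_content: volume fractions (e.g. 0.28, 0.025)
    voids_per_length: voids intersected per unit traverse length
    """
    pa = paste_content / air_content
    alpha = 4.0 * voids_per_length / air_content   # specific surface of voids
    if pa <= 4.342:
        return paste_content / (400.0 * voids_per_length)
    return (3.0 / alpha) * (1.4 * (1.0 + pa) ** (1.0 / 3.0) - 1.0)

# Example: 28% paste, 2.5% air, 0.30 voids/mm -> spacing factor ~0.139 mm (139 um)
print(spacing_factor(0.28, 0.025, 0.30))
```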

Keywords: RCCP, durability, air spacing factor, surface scaling resistance test, freezing and thawing resistance test.

181 Seismic Behaviour of RC Knee Joints in Closing and Opening Actions

Authors: S. Mogili, J. S. Kuang, N. Zhang

Abstract:

Knee joints, the beam-column connections found at the roof level of moment-resisting frame buildings, are inherently different from conventional interior and exterior beam-column connections in the way that forces from adjoining members are transferred into the joint and then resisted by it. A knee connection has two distinct load-resisting mechanisms, one each for closing and opening actions, acting simultaneously under reversed cyclic loading. In spite of many distinct differences in the shear-resistance behaviour of knee joints, there are no special design provisions in the major design codes available across the world, due to a lack of in-depth research on knee connections. To understand the relative importance of opening and closing actions in design, it is imperative to study knee joints under varying shear stresses, especially at higher opening-to-closing shear stress ratios. Three knee joint specimens, under different input shear stresses, were designed to produce a varying ratio of input opening to closing shear stresses. The design was carried out in such a way that the ratio of the flexural strength of beams, with consideration of axial forces, in opening to closing actions was maintained at 0.5, 0.7, and 1.0, thereby producing the required variation of opening-to-closing joint shear stress ratios among the specimens. The behaviour of these specimens was then carefully studied in terms of closing and opening capacities, hysteretic behaviour, and envelope curves, to understand the differences in joint performance, based on which an attempt is made to suggest design guidelines for knee joints that emphasize the relative importance of opening and closing actions. Specimens with relatively higher opening stresses were observed to be more vulnerable under seismic loading.

Keywords: Knee-joints, large-scale testing, opening and closing shear stresses, seismic performance.

180 A Modelling Study of the Photochemical and Particulate Pollution Characteristics above a Typical Southeast Mediterranean Urban Area

Authors: Kiriaki-Maria Fameli, Vasiliki D. Assimakopoulos, Vasiliki Kotroni

Abstract:

The Greater Athens Area (GAA) faces photochemical and particulate pollution episodes as a result of the combined effects of local pollutant emissions, regional pollution transport, synoptic circulation and topographic characteristics. The area has undergone significant changes since the Athens 2004 Olympic Games because of large-scale infrastructure works that led to a shift of population to areas previously characterized as rural, the growth of the traffic fleet and the operation of highways. However, few recent modelling studies have been performed, due to the lack of an accurate, updated emission inventory. The photochemical modelling system MM5/CAMx was applied in order to study the photochemical and particulate pollution characteristics above the GAA for two distinct ten-day periods, in the summers of 2006 and 2010, during which air pollution episodes occurred. A new, updated emission inventory was used, based on official data. Comparison of modelled results with measurements revealed the importance and accuracy of the new Athens emission inventory as compared with previous modelling studies. The model managed to reproduce the local meteorological conditions and the daily ozone and particulate fluctuations at different locations across the GAA. Higher ozone levels were found in suburban and rural areas, as well as over the sea to the south of the basin. Concerning PM10, high concentrations were computed at the city centre and the southeastern suburbs, in agreement with measured data. Source apportionment analysis showed that different sources contribute to the ozone levels, with local sources (traffic, port activities) affecting its formation.

Keywords: Photochemical modelling, urban pollution, greater Athens area, MM5/CAMx.

179 Qualitative Parametric Comparison of Load Balancing Algorithms in Parallel and Distributed Computing Environment

Authors: Amit Chhabra, Gurvinder Singh, Sandeep Singh Waraich, Bhavneet Sidhu, Gaurav Kumar

Abstract:

Decreases in hardware costs and advances in computer networking technologies have led to increased interest in the use of large-scale parallel and distributed computing systems. One of the biggest issues in such systems is the development of effective techniques/algorithms for distributing the processes/load of a parallel program over multiple hosts to achieve goal(s) such as minimizing execution time, minimizing communication delays, maximizing resource utilization and maximizing throughput. Substantive research using queuing analysis, and assuming job arrivals following a Poisson pattern, has shown that in a multi-host system the probability of one host being idle while another host has multiple jobs queued up can be very high. Such imbalances in system load suggest that performance can be improved by either transferring jobs from the currently heavily loaded hosts to the lightly loaded ones or distributing the load evenly/fairly among the hosts. The algorithms that achieve this, known as load balancing algorithms, fall into two basic categories: static and dynamic. Whereas static load balancing (SLB) algorithms make decisions regarding the assignment of tasks to processors at compile time, based on average estimated values of process execution times and communication delays, dynamic load balancing (DLB) algorithms are adaptive to changing situations and make decisions at run time, as illustrated in the sketch below. The objective of this paper is to identify qualitative parameters for the comparison of the above-said algorithms. In the future, this work can be extended to develop an experimental environment to study these load balancing algorithms quantitatively, based on the comparative parameters.
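
To make the static/dynamic distinction concrete, the following minimal sketch (ours, not from the paper) contrasts a static balancer, which fixes the assignment up front from estimated costs, with a dynamic one, which picks the currently least-loaded host at run time:

```python
import heapq

def static_balance(task_costs, n_hosts):
    """SLB sketch: greedy assignment from *estimated* costs, fixed up front
    (longest-processing-time-first heuristic)."""
    heap = [(0.0, h) for h in range(n_hosts)]   # (estimated load, host)
    heapq.heapify(heap)
    plan = {}
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        load, host = heapq.heappop(heap)        # currently least-loaded estimate
        plan[task] = host
        heapq.heappush(heap, (load + cost, host))
    return plan

def dynamic_pick(current_loads):
    """DLB sketch: at run time, send the next job to the least-loaded host."""
    return min(range(len(current_loads)), key=lambda h: current_loads[h])

print(static_balance({"t1": 5.0, "t2": 3.0, "t3": 2.0}, 2))  # fixed plan
print(dynamic_pick([0.7, 0.2, 0.9]))                         # -> host 1, right now
```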

Keywords: SLB, DLB, Host, Algorithm and Load.

178 Identification of Promiscuous Epitopes for Cellular Immune Responses in the Major Antigenic Protein Rv3873 Encoded by Region of Difference 1 of Mycobacterium tuberculosis

Authors: Abu Salim Mustafa

Abstract:

Rv3873 is a relatively large protein (371 amino acids in length), and its gene is located in the immunodominant genomic region of difference (RD)1, which is present in the genome of Mycobacterium tuberculosis but deleted from the genomes of all vaccine strains of Bacillus Calmette-Guerin (BCG) and most other mycobacteria. However, when tested for cellular immune responses using peripheral blood mononuclear cells from tuberculosis patients and BCG-vaccinated healthy subjects, this protein was found to be a major stimulator of cell-mediated immune responses in both groups of subjects. In order to further identify the sequences of the immunodominant epitopes and explore their Human Leukocyte Antigen (HLA) restriction in epitope recognition, 24 peptides (25-mers overlapping neighboring peptides by 10 residues) covering the sequence of Rv3873 were synthesized chemically using fluorenylmethyloxycarbonyl chemistry and tested in cell-mediated immune responses. The results of these experiments helped in the identification of an immunodominant peptide, P9, that was recognized by people expressing varying HLA-DR types. Furthermore, it was predicted to be a promiscuous binder with multiple epitopes for binding to the HLA-DR, HLA-DP and HLA-DQ alleles of HLA class II molecules, which present antigens to T helper cells, and to HLA class I molecules, which present antigens to cytotoxic T cells. In addition, the evaluation of peptide P9 using an immunogenicity-prediction server yielded a high score (0.94), indicating a high probability that this peptide elicits a protective cellular immune response. In conclusion, P9, a peptide with multiple epitopes and the ability to bind several HLA class I and class II molecules for presentation to cells of the cellular immune response, may be useful as a peptide-based vaccine against tuberculosis.
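
The peptide-scanning layout is easy to reproduce: 25-mers stepping by 15 residues (25 minus the 10-residue overlap) yield 24 full-length peptides over a 371-residue protein. A small illustration (ours; a dummy sequence stands in for Rv3873):

```python
def overlapping_peptides(seq, length=25, overlap=10):
    """All full-length windows of `length`, stepping by (length - overlap)."""
    step = length - overlap                     # 15-residue step
    return [seq[s:s + length] for s in range(0, len(seq) - length + 1, step)]

dummy = "A" * 371            # stand-in for the 371-residue Rv3873 sequence
peps = overlapping_peptides(dummy)
print(len(peps))             # -> 24 peptides, covering residues 1-370
# (in practice the final peptide is shifted to include the C-terminal residue)
```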

Keywords: Mycobacterium tuberculosis, Rv3873, peptides, vaccine.

177 A Hybrid Multi-Criteria Hotel Recommender System Using Explicit and Implicit Feedbacks

Authors: Ashkan Ebadi, Adam Krzyzak

Abstract:

Recommender systems, also known as recommender engines, have become an important research area and are now being applied in various fields. In addition, the techniques behind recommender systems have improved over time. In general, such systems help users to find the products or services they require (e.g. books, music) by analyzing and aggregating other users' activities and behavior, mainly in the form of reviews, and making the best recommendations. The recommendations can facilitate the user's decision-making process. Despite the wide literature on the topic, the use of multiple data sources of different types as the input has not been widely studied. Recommender systems can benefit from the high availability of digital data to collect input data of different types, which implicitly or explicitly help the system to improve its accuracy. Moreover, most of the existing research in this area is based on single rating measures, in which a single rating is used to link users to items. This paper proposes a highly accurate hotel recommender system, implemented in various layers. Using a multi-aspect rating system and benefitting from large-scale data of different types, the recommender system suggests hotels that are personalized and tailored to the given user. The system employs natural language processing and topic modelling techniques to assess the sentiment of the users' reviews and extract implicit features. The entire recommender engine comprises multiple sub-systems, namely user clustering, a matrix factorization module, and a hybrid recommender system. Each sub-system contributes to the final composite set of recommendations by covering a specific aspect of the problem. The accuracy of the proposed recommender system has been tested intensively, and the results confirm the high performance of the system.
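
As one concrete ingredient, the matrix factorization module in such a system is commonly trained by stochastic gradient descent on observed (user, item, rating) triples; the following is a minimal sketch of that idea (ours, with made-up data and dimensions, not the authors' implementation):

```python
import numpy as np

def factorize(ratings, n_users, n_items, k=8, lr=0.01, reg=0.1, epochs=50):
    """Plain SGD matrix factorization on (user, item, rating) triples."""
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(n_users, k))   # user latent factors
    V = rng.normal(scale=0.1, size=(n_items, k))   # item latent factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]                  # prediction error
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * U[u] - reg * V[i])
    return U, V

ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0)]
U, V = factorize(ratings, n_users=2, n_items=3)
print(U[1] @ V[1])   # predicted rating of user 1 for unseen item 1
```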

Keywords: Tourism, hotel recommender system, hybrid, implicit features.

176 A Compact Via-less Ultra-Wideband Microstrip Filter by Utilizing Open-Circuit Quarter Wavelength Stubs

Authors: Muhammad Yasir Wadood, Fatemeh Babaeian

Abstract:

With the development of ultra-wideband (UWB) systems, there is a high demand for UWB filters with low insertion loss and wide bandwidth that have a planar structure compatible with the other components of a UWB system. A microstrip interdigital filter is a great option for designing UWB filters. However, the presence of via holes in this structure creates difficulties in the fabrication procedure of the filter. Especially in the higher frequency band, any misalignment of the drilled via hole with the microstrip stubs causes large discrepancies between the measured and desired results. Moreover, in high-frequency designs the line width of the stubs is very narrow, so highly precise small via holes are required, which increases the cost of fabrication significantly and carries a risk of fabrication errors. To combat this issue, in this paper a via-less UWB microstrip filter is proposed, designed as a modification of a conventional interdigital bandpass filter. The novel approaches in this filter design are: 1) replacement of each via hole with a quarter-wavelength open-circuit stub to avoid manufacturing complexity, 2) use of a bend structure to reduce unwanted coupling effects, and 3) minimisation of the size. Using the proposed structure, a UWB filter operating in the frequency band of 3.9-6.6 GHz (1-dB bandwidth) was designed and fabricated. The promising simulation and measurement results are presented in this paper. The substrate selected for these designs was Rogers RO4003 with a thickness of 20 mils, a common substrate in most industrial projects. The compact size of the proposed filter is highly beneficial for applications that require very miniature hardware.
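
For orientation, the physical length of a quarter-wavelength open stub follows from the guided wavelength, lambda_g = c / (f * sqrt(eps_eff)); the quick estimate below is ours, and the effective-permittivity value is an assumption rather than a figure from the paper:

```python
c = 299_792_458.0            # speed of light, m/s

def quarter_wave_stub_mm(f_hz, eps_eff):
    """Quarter of the guided wavelength, in millimetres."""
    lam_g = c / (f_hz * eps_eff ** 0.5)
    return 1e3 * lam_g / 4.0

# Mid-band of the reported 3.9-6.6 GHz passband; eps_eff ~ 2.6 is a rough
# assumption for a 50-ohm microstrip line on Rogers RO4003 (eps_r ~ 3.38).
print(quarter_wave_stub_mm(5.25e9, 2.6))   # ~8.9 mm
```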

Keywords: Band-pass filters, inter-digital filter, microstrip, via-less.

175 Factors Affecting M-Government Deployment and Adoption

Authors: Saif Obaid Alkaabi, Nabil Ayad

Abstract:

Governments constantly seek to offer faster, more secure, efficient and effective services to their citizens. Recent changes and developments in communication services and technologies, mainly due to the Internet, have led to immense improvements in the way governments of advanced countries carry out their internal operations. Therefore, advances in e-government services have been broadly adopted and used in various developed countries, as well as being adapted to developing countries. The implementation of these advances depends on the utilization of the most innovative structures of data techniques, mainly in web-dependent applications, to enhance the main functions of governments. These functions, in turn, have spread to mobile and wireless techniques, generating a new advanced direction called m-government. This paper discusses a selection of available m-government applications and several business modules and frameworks in various fields. Practically, m-government models, techniques and methods have become the improved version of e-government. M-government offers the potential for applications which will work better, providing citizens with services that utilize mobile communication and data models incorporating several government entities. Developing countries can benefit greatly from this innovation, due to the fact that a large percentage of their population is young and can adapt to new technology, and to the fact that mobile computing devices are more affordable. The use of models of mobile transactions encourages effective participation through the use of mobile portals by businesses, various organizations, and individual citizens. Although the application of m-government has great potential, it does have major limitations. These include the implementation of wireless networks and related communications, the encouragement of mobile diffusion, the administration of complicated tasks concerning the protection of security (including the ability to offer privacy of information), and the management of the legal issues concerning mobile applications and the utilization of services.

Keywords: E-government, m-government, system dependability, system security, trust.

174 Morphemic Analysis Awareness: A Boon or Bane on ESL Students’ Vocabulary Learning Strategy

Authors: Chandrakala Varatharajoo, Adelina Binti Asmawi, Nabeel Abdallah Mohammad Abedalaziz

Abstract:

This study investigated the impact of inflectional and derivational morphemic analysis awareness on the vocabulary learning strategy of ESL secondary school students. The quasi-experimental study was conducted with 106 low-proficiency secondary school students in two experimental groups (inflectional and derivational) and one control group. The students' vocabulary acquisition was assessed through two measures, a Morphemic Analysis Test and a Vocabulary-Morphemic Test, administered as pretest and posttest before and after an intervention programme. Results of ANCOVA revealed that both experimental groups achieved significant scores on the Morphemic Analysis Test and the Vocabulary-Morphemic Test. However, the inflectional group obtained a fairly higher score than the derivational group. Thus, the results indicated that ESL low-proficiency secondary school students performed better with inflectional morphemic awareness than with derivational awareness. The results also showed that awareness of inflectional morphology contributed more to vocabulary acquisition. Importantly, learning inflectional morphology can help ESL low-proficiency secondary school students to develop both morphemic awareness and vocabulary gains. Theoretically, these findings show that not all morphemes are equally useful to students for their language development. Practically, these findings indicate that morphological instruction should at least be included in remediation and instructional efforts with struggling learners across all grade levels, allowing them to focus on meaning within the word before they attempt the text at large for better comprehension. Also, methodologically, by conducting individualized intervention and assessment, this study provided fresh empirical evidence to support the existing literature on morphemic analysis awareness and vocabulary learning strategies. Thus, a major pedagogical implication of the study is that the morphemic analysis awareness strategy is a definite boon for ESL secondary school students in learning English vocabulary.

Keywords: ESL, instruction, morphemic analysis, vocabulary.

173 High Securing Cover-File of Hidden Data Using Statistical Technique and AES Encryption Algorithm

Authors: A. A. Zaidan, Anas Majeed, B. B. Zaidan

Abstract:

Nowadays, the rapid development of multimedia and the internet allows for wide distribution of digital media data. It has become much easier to edit, modify and duplicate digital information; besides that, digital documents are easy to copy and distribute, and therefore they face many threats. This is a big security and privacy issue: with the large flood of information and the development of digital formats, it has become necessary to find appropriate protection, given the significance, accuracy and sensitivity of the information. Nowadays, protection systems can be classified more specifically as information hiding, information encryption, and combinations of hiding and encryption that increase information security. The strength of the information-hiding science lies in the non-existence of standard algorithms for hiding secret messages. There is also randomness in hiding methods, such as combining several media (covers) with different methods to pass a secret message. In addition, there are no formal methods to be followed to discover hidden data. For these reasons, the task of this research becomes difficult. In this paper, a new information hiding system is presented. The proposed system aims to hide information (a data file) in any execution file (EXE) and to detect the hidden file; we describe the implementation of a steganography system which embeds information in an execution file, and (EXE) files have been investigated for this purpose. The system tries to solve the problem of the size of the cover file and to make the result undetectable by anti-virus software. The system includes two main functions: the first is the hiding of the information in a Portable Executable file (EXE), through the execution of four processes (specify the cover file, specify the information file, encrypt the information, and hide the information); the second is the extraction of the hidden information through three processes (specify the stego file, extract the information, and decrypt the information). The system achieved its main goals, such as making the size of the cover file independent of the size of the information, and producing a result file that does not cause any conflict with anti-virus software.
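
The four-process hide / three-process extract pipeline can be illustrated with the simplest possible embedding: appending an encrypted payload after the executable image, which the loader ignores at run time. This toy sketch conveys the general idea only; the paper's actual embedding and its anti-virus considerations are not reproduced, and the XOR cipher is a stand-in for the AES encryption named in the title:

```python
MAGIC = b"STEG0"   # hypothetical marker separating cover from payload

def xor(data: bytes, key: bytes) -> bytes:
    # Stand-in cipher; a real system would use AES as the abstract says.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def hide(cover_exe, info_file, stego_exe, key=b"k3y"):
    with open(cover_exe, "rb") as f: cover = f.read()   # 1) cover file
    with open(info_file, "rb") as f: info = f.read()    # 2) information file
    with open(stego_exe, "wb") as f:                    # 3) encrypt, 4) hide
        f.write(cover + MAGIC + xor(info, key))         # appended bytes are ignored

def extract(stego_exe, out_file, key=b"k3y"):
    with open(stego_exe, "rb") as f: blob = f.read()    # 1) stego file
    payload = blob.split(MAGIC, 1)[1]                   # 2) extract
    with open(out_file, "wb") as f:
        f.write(xor(payload, key))                      # 3) decrypt
```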

Keywords: Cryptography, steganography, portable executable file.

172 Prioritization Assessment of Housing Development Risk Factors: A Fuzzy Hierarchical Process-Based Approach

Authors: Yusuf Garba Baba

Abstract:

The construction industry and the housing subsector are fraught with risks that have the potential to negatively impact the achievement of project objectives. The success or otherwise of most construction projects depends to a large extent on how well these risks have been managed. The recent paradigm shift by the subsector towards a formal risk management approach, in contrast to the hitherto-used rules of thumb, means that risks must not only be identified but also properly assessed and responded to in a systematic manner. The study focused on identifying risks associated with housing development projects and on a prioritization assessment of the identified risks, in order to provide a basis for informed decisions. The study used a three-step identification framework: review of the literature on similar projects, expert consultation, and a questionnaire-based survey to identify potential risk factors. The Delphi survey method was employed in carrying out the relative prioritization assessment of the risk factors using computer-based Analytic Hierarchy Process (AHP) software. The results show that 19 out of the 50 risks significantly impact housing development projects. The study concludes that although a significant number of risk factors have been identified as relevant to and impacting housing construction projects, the economic risk group and, in particular, 'changes in demand for houses' is prioritised by most developers as posing a threat to the achievement of their housing development objectives. Unless these risks are carefully managed, their effects will continue to impede success in these projects. The study recommends the adoption and use of the combination of a multi-technique identification framework and the AHP prioritization assessment methodology as a suitable model for the assessment of risks in housing development projects.
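
At its core, the AHP step reduces to computing the principal eigenvector of a pairwise-comparison matrix and checking its consistency ratio; a compact sketch (ours, with an arbitrary 3x3 example rather than the study's 50-risk hierarchy):

```python
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}   # Saaty's random indices

def ahp_priorities(A):
    """Priority weights = principal eigenvector; CR = consistency ratio."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)                # consistency index
    return w, ci / RI[n]

# Pairwise judgments on Saaty's 1-9 scale: risk 1 vs risk 2 vs risk 3
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_priorities(A)
print(w, "CR =", cr)   # CR < 0.1 -> judgments acceptably consistent
```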

Keywords: Risk identification, risk assessment, analytical hierarchical process, multi-criteria decision.

171 Evaluating the Capability of the Flux-Limiter Schemes in Capturing the Turbulence Structures in a Fully Developed Channel Flow

Authors: Mohamed Elghorab, Vendra C. Madhav Rao, Jennifer X. Wen

Abstract:

Turbulence modelling is still evolving, and efforts are under way to improve and develop numerical methods that simulate real turbulence structures by using empirical and experimental information. Monotonically integrated large eddy simulation (MILES) is an attractive approach for modelling turbulence in high-Re flows; it is based on solving the unfiltered flow equations with no explicit sub-grid scale (SGS) model. In the current work, this approach has been used, with the action of the SGS model included implicitly through the intrinsic nonlinear high-frequency filters built into the convection discretization schemes. The MILES solver is developed using the open-source CFD OpenFOAM libraries. The role of flux limiter schemes, namely Gamma, superBee, van Albada and van Leer, is studied in predicting turbulence statistics for a fully developed channel flow with a friction Reynolds number Reτ = 180, and the numerical predictions are compared with well-established Direct Numerical Simulation (DNS) results for wall-generated turbulence. It is inferred from the numerical predictions that the Gamma, van Leer and van Albada limiters produce more diffusion and overpredict the velocity profiles, while the superBee scheme reproduces velocity profiles and turbulence statistics in good agreement with the reference DNS data in the streamwise direction, although it deviates slightly in the spanwise and wall-normal directions. The simulation results are further discussed in terms of the turbulence intensities and Reynolds stresses averaged in time and space, to draw conclusions on the performance of the flux limiter schemes in the OpenFOAM context.
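
The limiters named above are standard TVD functions ψ(r) of the ratio r of consecutive solution gradients; for reference, a sketch of their usual textbook definitions (ours; the Gamma scheme is omitted because it is defined through a blending parameter rather than a classic ψ(r) form):

```python
def superbee(r):
    # psi(r) = max(0, min(2r, 1), min(r, 2))
    return max(0.0, min(2.0 * r, 1.0), min(r, 2.0))

def van_leer(r):
    # psi(r) = (r + |r|) / (1 + |r|)
    return (r + abs(r)) / (1.0 + abs(r))

def van_albada(r):
    # psi(r) = (r^2 + r) / (r^2 + 1) for r > 0, else 0
    return max(0.0, (r * r + r) / (r * r + 1.0))

for r in (0.5, 1.0, 2.0):
    print(r, superbee(r), van_leer(r), van_albada(r))
```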

Keywords: Flux limiters, MILES, OpenFOAM, turbulence structures, TVD schemes.

170 Preparation, Characterisation, and Measurement of the in vitro Cytotoxicity of Mesoporous Silica Nanoparticles Loaded with Cytotoxic Pt(II) Oxadiazoline Complexes

Authors: G. Wagner, R. Herrmann

Abstract:

Cytotoxic platinum compounds play a major role in the chemotherapy of a large number of human cancers. However, due to the severe side effects for the patient and other problems associated with their use, there is a need for the development of more efficient drugs and new methods for their selective delivery to tumours. One way to achieve the latter could be the use of nanoparticular substrates that can adsorb or chemically bind the drug. In the cell, the drug is supposed to be slowly released, either by physical desorption or by dissolution of the particle framework. Ideally, the cytotoxic properties of the platinum drug unfold only then, in the cancer cell, and over a longer period of time due to the gradual release. In this paper, we report on our first steps in this direction. The binding properties of a series of cytotoxic Pt(II) oxadiazoline compounds to mesoporous silica particles have been studied by NMR and UV/vis spectroscopy. High loadings were achieved when the Pt(II) compound was relatively polar and had been dissolved in a relatively nonpolar solvent before the silica was added. Typically, 6-10 hours were required for complete equilibration, suggesting that adsorption occurred not only on the outer surface but also in the interior of the pores. The untreated and Pt(II)-loaded particles were characterised by C, H, N combustion analysis, BET/BJH nitrogen sorption, electron microscopy (SEM and TEM) and EDX. With the latter methods we were able to demonstrate the homogeneous distribution of the Pt(II) compound on and in the silica particles; no bulk Pt(II) precipitate had formed. The in vitro cytotoxicity in a human cancer cell line (HeLa) has been determined for one of the new platinum compounds adsorbed onto mesoporous silica particles of different sizes, and compared with that of the corresponding compound in solution. The IC50 data are similar in all cases, suggesting that the release of the Pt(II) compound was relatively fast and possibly occurred before the particles reached the cells. Overall, the platinum drug is chemically stable on silica and retains its activity upon prolonged storage.

Keywords: Cytotoxicity, mesoporous silica, nanoparticles platinum compounds.

169 Introductory Design Optimisation of a Machine Tool using a Virtual Machine Concept

Authors: Johan Wall, Johan Fredin, Anders Jönsson, Göran Broman

Abstract:

Designing modern machine tools is a complex task. A simulation tool to aid the design work, a virtual machine, has therefore been developed in earlier work. The virtual machine considers the interaction between the mechanics of the machine (including structural flexibility) and the control system. This paper exemplifies the usefulness of the virtual machine as a tool for product development. An optimisation study is conducted, aiming at improving the existing design of a machine tool regarding weight and manufacturing accuracy at maintained manufacturing speed. The problem can be categorised as constrained multidisciplinary multiobjective multivariable optimisation. Parameters of the control system and geometric quantities of the machine are used as design variables. This results in a mix of continuous and discrete variables, and an optimisation approach using a genetic algorithm is therefore deployed. The accuracy objective is evaluated according to international standards. The complete system model shows non-deterministic behaviour; a strategy to handle this, based on statistical analysis, is suggested. The weight of the main moving parts is reduced by more than 30 per cent and the manufacturing accuracy is improved by more than 60 per cent compared to the original design, with no reduction in manufacturing speed. It is also shown that interaction effects exist between the mechanics and the control, i.e. this improvement would most likely not have been possible with a conventional sequential design approach within the same time, cost and general resource frame. This indicates the potential of the virtual machine concept for contributing to improved efficiency of both complex products and the development process for such products. Companies incorporating such advanced simulation tools in their product development could thus improve their own competitiveness as well as contribute to improved resource efficiency of society at large.
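
Because the design variables mix continuous geometry with discrete control settings, a genetic algorithm is a natural fit; the toy sketch below shows only the encoding and loop (ours, with a made-up scalarized fitness, not the authors' virtual machine model or their multiobjective handling):

```python
import random

random.seed(1)
GAINS = [10, 20, 50]   # discrete controller setting (hypothetical levels)

def random_ind():
    # individual = [continuous geometric variable, discrete control variable]
    return [random.uniform(0.1, 1.0), random.choice(GAINS)]

def fitness(ind):
    t, g = ind
    weight, error = t, 1.0 / (g * t)   # stand-ins for mass / inaccuracy
    return weight + 5.0 * error        # scalarized bi-objective (lower is better)

def mutate(ind):
    return [min(1.0, max(0.1, ind[0] + random.gauss(0, 0.05))),
            random.choice(GAINS) if random.random() < 0.2 else ind[1]]

pop = [random_ind() for _ in range(20)]
for _ in range(30):                    # elitist generational loop
    pop.sort(key=fitness)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
print(min(pop, key=fitness))
```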

Keywords: Machine tools, Mechatronics, Non-deterministic, Optimisation, Product development, Virtual machine

168 Influence of Dynamic Loads in the Structural Integrity of Underground Rooms

Authors: M. Inmaculada Alvarez-Fernández, Celestino González-Nicieza, M. Belén Prendes-Gero, Fernando López-Gayarre

Abstract:

Among the many factors affecting the stability of mining excavations, rock-bursts and tremors play a special role. These dynamic loads occur practically always and have different sources of generation. The most important of them is the commonly used mining technique, which not only disintegrates the rock mass in the area of the planned mining but also creates waves that travel significantly beyond this area, affecting the structural elements. In this work, the consequences of dynamic loads on the structural elements of an underground room-and-pillar mine are analysed in order to avoid roof instabilities. To this end, dynamic loads were evaluated through in situ and laboratory tests and simulated with numerical modelling. Initially, the geotechnical characterization of all materials was carried out by means of large-scale tests. Then, drill holes were made in the roof of the mine and monitored to determine possible discontinuities in it. Three seismic stations and a triaxial accelerometer were employed to measure the vibrations from blasting tests, establish the dynamic behaviour of the roof and pillars, and develop the transmission laws. Finally, computer simulations with the FLAC3D software were performed to check the effect of the vibrations on the stability of the roofs. The study shows that in situ tests have greater reliability than laboratory samples because they eliminate the effect of heterogeneities, that the pillars work by decreasing the amplitude of the vibration around them, and that the tensile strength of a beam, depending on its span, can be exceeded by waves arriving in phase and delayed. The obtained transmission law allows the design of blasting that guarantees safety and prevents the risk of future failures.
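
Transmission (attenuation) laws of this kind are conventionally fitted as a power law of the scaled distance, PPV = K · (D/√Q)^(−β), which is linear in log-log form; a minimal fitting sketch (ours, with synthetic numbers, not the paper's monitoring data):

```python
import numpy as np

# Synthetic monitoring records: distance D (m), charge Q (kg), PPV (mm/s)
D = np.array([20.0, 40.0, 80.0, 160.0])
Q = np.array([10.0, 10.0, 10.0, 10.0])
ppv = np.array([60.0, 22.0, 8.5, 3.1])

sd = D / np.sqrt(Q)                                  # scaled distance
beta, logK = np.polyfit(np.log(sd), np.log(ppv), 1)  # linear fit in log-log space
K = np.exp(logK)
print(f"PPV = {K:.1f} * SD^({beta:.2f})")            # fitted attenuation law
```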

Keywords: Dynamic modelling, long term instability risks, room and pillar, seismic collapse.

167 'Performance-Based' Seismic Methodology and Its Application in Seismic Design of Reinforced Concrete Structures

Authors: Jelena R. Pejović, Nina N. Serdar

Abstract:

This paper presents an analysis of the 'performance-based' seismic design method, intended to overcome the perceived disadvantages and limitations of the existing force-based seismic design approach in engineering practice. Bearing in mind the specificity of the earthquake as a load, and the fact that the seismic resistance of a structure depends solely on its behaviour in the nonlinear range, the traditional force-based seismic design approach with linear analysis is not adequate. The 'performance-based' seismic design method is based on nonlinear analysis and can be used in everyday engineering practice. This paper presents the application of this method to an eight-story reinforced concrete building with a combined structural system (a reinforced concrete frame system in one direction and a reinforced concrete ductile wall system in the other). The nonlinear time-history analysis is performed on a spatial model of the structure using the program Perform 3D, where the structure is subjected to forty real earthquake records. For the considered building, a large number of results was obtained. It was concluded that using this method we can, with a high degree of reliability, evaluate structural behaviour under earthquakes. Significant differences were obtained in the response of the structure to the various earthquake records. The analysis also showed that the frame structural system did not perform well under earthquake records on soils like sand and gravel, while the ductile wall system behaved satisfactorily on different types of soils.

Keywords: Ductile wall, frame system, nonlinear time-history analysis, performance-based methodology, RC building.

166 Data Centers’ Temperature Profile Simulation Optimized by Finite Elements and Discretization Methods

Authors: José Alberto García Fernández, Zhimin Du, Xinqiao Jin

Abstract:

Nowadays, the data center industry faces strong challenges in increasing its speed and data processing capacities while at the same time trying to keep its devices at a suitable working temperature without penalizing that capacity. Consequently, the cooling systems of this kind of facility use a large amount of energy to dissipate the heat generated inside the servers, and developing new cooling techniques or perfecting those already existing would be a great advance in this industry. The installation of a temperature sensor matrix distributed through the structure of each server would provide the information required for obtaining an instantaneous temperature profile inside them. However, the number of temperature probes required to obtain temperature profiles with sufficient accuracy is very high, and they are expensive. Therefore, other less intrusive techniques are employed, in which each point that characterizes the server temperature profile is obtained by solving differential equations through simulation methods, simplifying data collection but increasing the time to obtain results. In order to reduce these calculation times, complicated and slow computational fluid dynamics simulations are replaced by simpler and faster finite element method simulations, which solve Burgers' equations by backward, forward and central discretization techniques after simplifying the energy and enthalpy conservation differential equations. The discretization methods employed for solving the first- and second-order derivatives of the resulting Burgers' equation are the key to obtaining results of greater or lesser accuracy, as governed by the characteristic truncation error.
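
As a reference point for the discretization choices mentioned, here is a minimal explicit finite-difference solver for the 1-D viscous Burgers' equation, u_t + u·u_x = ν·u_xx, using a backward (upwind) first derivative and a central second derivative (our illustration, not the paper's finite element formulation):

```python
import numpy as np

nx, nu = 101, 0.07                      # grid points, viscosity
dx, dt = 2 * np.pi / (nx - 1), 1e-3     # mesh spacing, time step
x = np.linspace(0.0, 2 * np.pi, nx)
u = np.sin(x) + 2.0                     # smooth, everywhere-positive profile

for _ in range(500):
    un = u.copy()
    # backward (upwind) difference for u*u_x, central difference for u_xx
    u[1:-1] = (un[1:-1]
               - un[1:-1] * dt / dx * (un[1:-1] - un[:-2])
               + nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))
    u[0], u[-1] = u[-2], u[1]           # periodic boundary conditions
print(u.min(), u.max())
```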

Keywords: Burgers’ equations, CFD simulation, data center, discretization methods, FEM simulation, temperature profile.

165 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite

Authors: F. Lazzeri, I. Reiter

Abstract:

Energy production optimization has traditionally been very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there are a large number of relevant variables that must be considered, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements have to adjust their performance depending on future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R, and web services built and deployed with different components of Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that makes it easy to build, deploy, and share predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and Power BI, a suite of business analytics tools for analyzing data and sharing insights. Our results show that Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful for predicting hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity and dew point) can play a crucial role in improving the accuracy of the forecasting solution. The data cleaning and feature engineering methods performed in R, and the different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile and ARIMA), are presented, and the results and performance metrics discussed.
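
To illustrate the modelling setup, the sketch below is a compact scikit-learn analogue of the boosted-tree experiment, with lag and weather features feeding a gradient-boosted regressor (ours, on synthetic data; the paper's experiments were built in R and Azure Machine Learning):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)                       # 60 days of hourly data
temp = 20 + 8 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)
load = (50 + 0.8 * temp
        + 10 * np.sin(2 * np.pi * hours / 24 + 1)
        + rng.normal(0, 2, hours.size))

# Feature engineering: hour of day, temperature, and a 24-hour lag of load
X = np.column_stack([(hours % 24)[24:], temp[24:], load[:-24]])
y = load[24:]

model = GradientBoostingRegressor().fit(X[:-24], y[:-24])
print(model.predict(X[-24:]))                    # hold-out: last day, hour by hour
```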

Keywords: Time-series, feature engineering methods for forecasting, energy demand forecasting, Azure Machine Learning.

164 Jatropha curcas L. Oil Selectivity in Froth Flotation

Authors: André C. Silva, Izabela L. A. Moraes, Elenice M. S. Silva, Carlos M. Silva Filho

Abstract:

In Brazil, most soils are acidic and low in the essential nutrients required for the growth and development of plants, making fertilizers essential for agriculture. As the biggest producer of soy in the world and a major producer of coffee, sugar cane and citrus fruits, Brazil is a large consumer of phosphate. Brazilian phosphate ores are predominantly from igneous rocks, showing a complex mineralogy associated with carbonatites and oxides, typically of iron, silicon and barium. The industrial concentration circuit adopted for this type of ore is a mix of magnetic separation (both low and high field), to remove the magnetic fraction, and a froth flotation circuit composed of a reverse flotation of apatite (barite flotation) followed by a direct flotation circuit (rougher, cleaner and scavenger). Since the 1970s, fatty acids obtained from vegetable oils have been widely used as lower-cost collectors in apatite froth flotation. This is a very effective approach for the apatite family of minerals, as this type of collector is both selective and efficient (high recovery). This paper presents Jatropha curcas L. oil (JCO) as a renewable and sustainable source of fatty acids with high selectivity in the froth flotation of apatite. JCO is considerably rich in fatty acids such as linoleic, oleic and palmitic acid. The experimental campaign involved 216 tests using a modified Hallimond tube and two different minerals (apatite and quartz). In order to be used as a collector, the oil was saponified. The results were compared with those for the synthetic collector Flotigam 5806, produced by Clariant, which is composed mainly of soy oil. JCO showed the highest selectivity for apatite flotation with cold saponification at pH 8 and a concentration of 2.5 mg/L. In this case, the mineral recovery was around 95%.

Keywords: Froth flotation, Jatropha curcas L., microflotation, selectivity.

163 LTE Performance Analysis in the City of Bogota Northern Zone for Two Different Mobile Broadband Operators over Qualipoc

Authors: Víctor D. Rodríguez, Edith P. Estupiñán, Juan C. Martínez

Abstract:

The evolution of mobile broadband technologies has made it possible to increase users' download rates for current services. The evaluation of technical parameters at the link level is of vital importance to validate the quality and veracity of the connection, thus avoiding large losses of data, time and productivity. Some of these failures may occur between the eNodeB (Evolved Node B) and the user equipment (UE), so the link between the end device and the base station must be observed. LTE (Long Term Evolution) is considered one of the IP-oriented mobile broadband technologies that work stably for data, and for VoIP (Voice over IP) on devices that support it. This research presents a technical analysis of the connection and channeling processes between the UE and the eNodeB using the TAC (Tracking Area Code) variables, together with an analysis of performance variables (throughput, Signal to Interference and Noise Ratio (SINR)). Three measurement scenarios were proposed in the city of Bogotá using QualiPoc, in which two operators were evaluated (Operator 1 and Operator 2). Once the data were obtained, an analysis of the variables was performed, determining that the data obtained in the transmission modes vary depending on the parameters BLER (Block Error Rate), throughput and SNR (Signal-to-Noise Ratio). For both operators, differences in transmission modes were detected, and this is reflected in the quality of the signal. In addition, because the two operators work on different frequencies, it can be seen that Operator 1, despite having spectrum in Band 7 (2600 MHz) together with Operator 2, is reassigning users to a lower band, AWS (1700 MHz); the difference in signal quality with respect to the data connection established by Operator 2, and the difference in the transmission modes determined by the eNodeB for Operator 1, are remarkable.

Keywords: BLER, LTE, Network, Qualipoc, SNR.

162 Effect of Environmental Parameters on the Water Solubility of the Polycyclic Aromatic Hydrocarbons and Derivatives Using Taguchi Experimental Design Methodology

Authors: P. Pimsee, C. Sablayrolles, P. de Caro, J. Guyomarch, N. Lesage, M. Montréjaud-Vignoles

Abstract:

The MIGR'HYCAR research project was initiated to provide decision tools for risks connected to oil spill drifts in continental waters. These tools aim to serve in the decision-making process once oil spill pollution occurs, and/or as reference tools to study scenarios of potential impacts of pollution on a given site. This paper focuses on the study of the distribution of polycyclic aromatic hydrocarbons (PAHs) and derivatives from oil spills in water as a function of environmental parameters. Eight petroleum oils covering a representative range of commercially available products were tested. 41 polycyclic aromatic hydrocarbons (PAHs) and derivatives, among them the 16 EPA priority pollutants, were studied in dynamic tests at laboratory scale. The chemical profile of the water-soluble fraction differed from the parent oil profile, due to the varying water solubility of the oil components. Semi-volatile compounds (naphthalenes) constitute the major part of the water-soluble fraction. A large variation in the composition of the water-soluble fraction was highlighted, depending on oil type. Moreover, four environmental parameters (temperature, suspended solid quantity, salinity and oil:water surface ratio) were investigated with the Taguchi experimental design methodology. The results showed that the oils are divided into three groups: the solubility of domestic fuel and Jet A1 presented a high sensitivity to the parameters studied, meaning they must be taken into account; for gasoline (SP95-E10) and diesel fuel, a medium sensitivity to the parameters was observed; and the four other oils showed low sensitivity to the parameters studied. Finally, three parameters were found to be significant with respect to the water-soluble fraction.
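
With four parameters at three levels, a Taguchi screening typically uses the L9(3^4) orthogonal array, nine runs instead of 81 full-factorial combinations, and ranks parameters by their effect on a signal-to-noise ratio; a small sketch (ours, with placeholder response values, not the project's measurements):

```python
import numpy as np

# L9 orthogonal array: 9 runs x 4 factors, 3 levels each (coded 0, 1, 2)
L9 = np.array([[0,0,0,0],[0,1,1,1],[0,2,2,2],
               [1,0,1,2],[1,1,2,0],[1,2,0,1],
               [2,0,2,1],[2,1,0,2],[2,2,1,0]])

y = np.array([3.1, 4.0, 5.2, 3.5, 4.8, 4.1, 4.4, 3.9, 5.0])  # placeholder responses
sn = 20 * np.log10(y)   # "larger is better" S/N for a single observation per run

factors = ["temperature", "suspended solids", "salinity", "oil:water ratio"]
for j, name in enumerate(factors):
    level_means = [sn[L9[:, j] == lvl].mean() for lvl in range(3)]
    print(name, "effect range:", max(level_means) - min(level_means))
```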

Keywords: Monitoring, PAHs, SBSE, water soluble fraction, Taguchi experimental design.

161 Identification of Complex Sense-antisense Gene's Module on 17q11.2 Associated with Breast Cancer Aggressiveness and Patient's Survival

Authors: O. Grinchuk, E. Motakis, V. Kuznetsov

Abstract:

A sense-antisense gene pair (SAGP) is a pair of two oppositely transcribed genes sharing a common region on a chromosome. In mammalian genomes, SAGPs can be organized in more complex sense-antisense gene architectures (CSAGA), in which at least one gene shares loci with two or more antisense partners. Many dozens of CSAGAs can be found in the human genome. However, CSAGAs have not been systematically identified and characterized in the context of their role in human diseases, including cancers. In this work we characterize the structural-functional properties of a cluster of five genes, TMEM97, IFT20, TNFAIP1, POLDIP2 and TMEM199, termed the TNFAIP1/POLDIP2 module. This cluster is organized as a CSAGA in cytoband 17q11.2. Affymetrix U133A&B expression data of two large cohorts of breast cancer patients (410 patients in total) and patient survival data were used. For both studied cohorts, we demonstrate (i) strong and reproducible transcriptional co-regulatory patterns of the genes of the TNFAIP1/POLDIP2 module in breast cancer cell subtypes, and significant associations of the TNFAIP1/POLDIP2 CSAGA with (ii) amplification of the CSAGA region in breast cancer, (iii) cancer aggressiveness (e.g. genetic grades) and (iv) disease-free patient survival. Moreover, gene pairs of this module demonstrate a strong synergetic effect in the prognosis of the time of breast cancer relapse. We suggest that the TNFAIP1/POLDIP2 cluster can be considered a novel type of structural-functional gene module in the human genome.

Keywords: Sense-antisense gene pair, complex genome architecture, TMEM97, IFT20, TNFAIP1, POLDIP2, TMEM199, 17q11.2, breast cancer, transcription regulation, survival analysis, prognosis.

160 Probiotic Potential and Antimicrobial Activity of Enterococcus faecium Isolated from Chicken Caecal and Fecal Samples

Authors: Salma H. Abu Hafsa, A. Mendonca, B. Brehm-Stecher, A. A. Hassan, S. A. Ibrahim

Abstract:

Enterococci are important inhabitants of the animal intestine and are widely used in probiotic products. A probiotic strain is expected to possess several desirable properties in order to exert beneficial effects. Therefore, the objective of this study was to isolate, characterize and identify Enterococcus sp. from chicken cecal and fecal samples and to determine their potential probiotic properties. Enterococci were isolated from the ceca and feces of thirty-three clinically healthy chickens from a local farm. In vitro studies were performed to assess the antibacterial activity of the isolated LAB (using agar well diffusion and cell-free supernatant broth techniques against Salmonella enterica serotype Enteritidis), survival in acidic conditions, resistance to bile salts, and survival in simulated gastric juice at pH 2.5. Isolates were identified by biochemical carbohydrate fermentation patterns, using API 50 CHL and API ZYM kits, and by sequencing of 16S rDNA. An isolate belonging to the E. faecium species exhibited an inhibitory effect against S. Enteritidis, producing a clear zone as large as 10.30 mm or greater, and was able to grow in the co-culture medium while inhibiting the growth of S. Enteritidis. In addition, E. faecium exhibited significant resistance under highly acidic conditions at pH 2.5 for 8 h, survived well in bile salt at 0.2% for 24 h, and showed the ability to survive in simulated gastric juice at pH 2.5. Based on these results, the E. faecium isolate fulfills some of the criteria to be considered a probiotic strain and could therefore be used as a feed additive with good potential for controlling S. Enteritidis in chickens. However, in vivo studies are needed to determine the safety of the strain.

Keywords: Acid tolerance, antimicrobial activity, Enterococcus faecium, probiotic.

159 Spectral Mixture Model Applied to Cannabis Parcel Determination

Authors: Levent Basayigit, Sinan Demir, Yusuf Ucar, Burhan Kara

Abstract:

Many research projects require accurate delineation of the different land cover types of an agricultural area. This is critically important for the identification of specific plants such as cannabis. However, the complexity of vegetation stand structure, the abundance of vegetation species, and the smooth transition between different successional stages make vegetation classification difficult when using traditional approaches such as the maximum likelihood classifier. Most of the time, classification distinguishes only between trees, annuals, or grain crops, and it has been difficult to accurately identify cannabis mixed with other plants. In this paper, a mixture distribution model approach is applied to classify pure and mixed cannabis parcels using WorldView-2 imagery in the Lakes region of Turkey. Five different land use types (including sunflower, maize, bare soil, and cannabis) were identified in the image. A constrained Gaussian mixture discriminant analysis (GMDA) was used to unmix the image. In the study, 255 reflectance ratios derived from spectral signatures of seven bands (Blue, Green, Yellow, Red, Red-edge, NIR1, NIR2) were randomly split into 80% training and 20% test data. The Gaussian mixture distribution model approach proved to be an effective and convenient way to exploit very high spatial resolution imagery for distinguishing cannabis vegetation. Based on the overall classification accuracies, the Gaussian mixture distribution model was found to be very successful at the image classification task. This approach is sensitive enough to capture illegal cannabis planting areas in large plains, and can also be used for monitoring and detecting illegal cannabis planting areas from their spectral reflectance.
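
The constrained GMDA used by the authors is not specified in enough detail here to reproduce, but the core idea of mixture discriminant analysis (fit one Gaussian mixture per land use class, then assign each pixel to the class with the highest posterior score) can be sketched with scikit-learn. The input files, the number of mixture components and the label encoding below are assumptions for illustration only.

import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

# Hypothetical inputs: n_samples x n_features band reflectance ratios and
# integer class labels (sunflower, maize, bare soil, pure/mixed cannabis).
X, y = np.load("ratios.npy"), np.load("labels.npy")
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# fit one Gaussian mixture per class (mixture discriminant analysis)
classes = np.unique(y_tr)
models = {c: GaussianMixture(n_components=3, covariance_type="full",
                             random_state=0).fit(X_tr[y_tr == c])
          for c in classes}
priors = {c: float(np.mean(y_tr == c)) for c in classes}

# assign each test pixel to the class maximizing log p(x|c) + log p(c)
scores = np.column_stack([models[c].score_samples(X_te) + np.log(priors[c])
                          for c in classes])
y_pred = classes[np.argmax(scores, axis=1)]
print("overall accuracy:", np.mean(y_pred == y_te))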

Keywords: Gaussian mixture discriminant analysis, spectral mixture model, WorldView-2, land parcels.

158 Adverse Curing Conditions and Performance of Concrete: Bangladesh Perspective

Authors: T. Manzur

Abstract:

Concrete is the predominant construction material in Bangladesh. In large projects, stringent quality control procedures are usually followed under the supervision of experienced engineers and skilled laborers. However, in the case of small projects, and particularly at locations distant from major cities, proper quality control is often an issue. Experience shows that such quality-related issues mainly arise from inappropriate proportioning of concrete mixes and improper curing conditions. In most cases an external curing method is followed, which requires a supply of an adequate quantity of water along with proper protection against evaporation. These conditions are often missing on general construction sites, which eventually leads to production of weaker concrete in terms of both strength and durability. In this study, an attempt has been made to investigate the performance of general concreting works of the country when subjected to several adverse curing conditions that are quite common on various small to medium construction sites. A total of six different types of adverse curing conditions were simulated in the laboratory, and samples were kept under those conditions for several days. A set of samples was also submerged under normal curing conditions with a proper supply of curing water. Performance of concrete was evaluated in terms of compressive strength, tensile strength, chloride permeability and drying shrinkage. Reductions of about 37% and 25% in 28-day compressive and tensile strength, respectively, were observed for samples subjected to the most adverse curing condition as compared to the samples under normal curing conditions. Normally cured concrete exhibited moderate permeability (close to low permeability), whereas concrete under adverse curing conditions showed very high permeability values. Similar results were also obtained from the shrinkage tests. This study will thus assist concerned engineers and supervisors in understanding the importance of quality assurance during the curing period of concrete.
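
The strength reductions quoted above are plain relative differences against the normally cured control, as the short Python helper below makes explicit. The strength values are hypothetical, chosen only to reproduce a reduction of the reported magnitude.

def reduction_pct(control: float, treated: float) -> float:
    """Relative loss versus the control sample, in percent."""
    return 100.0 * (control - treated) / control

# hypothetical 28-day compressive strengths (MPa): normal vs. worst adverse curing
print(f"compressive strength reduction: {reduction_pct(30.0, 18.9):.0f}%")  # ~37%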

Keywords: Adverse, concrete, curing, compressive strength, drying shrinkage, permeability, tensile strength.

157 An Evaluation of the Effectiveness of a 3D Printed Composite Compression Mold

Authors: Peng Hao Wang, Garam Kim, Ronald Sterkenburg

Abstract:

The applications of composite materials within the aviation industry have been increasing at a rapid pace. However, the growing applications of composite materials have also led to growing demand for tooling to support their manufacturing processes. Tooling and tooling maintenance represent a large portion of the composite manufacturing process and cost. Therefore, the industry's adaptability to new techniques for fabricating high quality tools quickly and inexpensively will play a crucial role in composite materials' growing popularity in the aviation industry. One popular tool fabrication technique currently being developed involves additive manufacturing, such as 3D printing. Although additive manufacturing and 3D printing are not entirely new concepts, the technique has been gaining popularity due to its ability to fabricate components quickly, with low material waste and at low cost. In this study, a team of Purdue University School of Aviation and Transportation Technology (SATT) faculty and students investigated the effectiveness of a 3D printed composite compression mold. A 3D printed composite compression mold was fabricated by 3D scanning a steel valve cover of an aircraft reciprocating engine, and was then used to fabricate carbon fiber versions of the valve cover. The 3D printed composite compression mold was evaluated for its performance, durability, and dimensional stability, while the fabricated carbon fiber valve covers were evaluated for their accuracy and quality. The results and data gathered from this study will determine the effectiveness of the 3D printed composite compression mold in a mass production environment and provide valuable information for future understanding, improvements, and design considerations of 3D printed composite molds.

Keywords: Additive manufacturing, carbon fiber, composite tooling, molds.

156 Vitamin Content of Swordfish (Xiphias gladius) Affected by Salting and Frying

Authors: L. Piñeiro, N. Cobas, L. Gómez-Limia, S. Martínez, I. Franco

Abstract:

The swordfish (Xiphias gladius) is a large oceanic fish of high commercial value, widely distributed in the waters of the world's oceans. Swordfish are considered an important source of high quality proteins, vitamins and essential fatty acids, although only half of the population follows the recommendation of nutritionists to consume fish at least twice a week. Swordfish is consumed worldwide because of its low fat content and high protein content, and is generally sold fresh, frozen, and as pieces or slices. The aim of this study was to evaluate the effect of salting and frying on the content of the water-soluble vitamins (B2, B3, B9 and B12) and fat-soluble vitamins (A, D, and E) of swordfish. Three loins of swordfish from the Pacific Ocean were analyzed. All fish weighed between 50 and 70 kg and were transported to the laboratory frozen (-18 ºC). Before processing, they were defrosted at 4 ºC. Each loin was sliced and salted in brine. After cleaning, the slices were divided into portions (10×2 cm) and fried in olive oil. The identification and quantification of vitamins were carried out by high-performance liquid chromatography (HPLC), using methanol and 0.010% trifluoroacetic acid as mobile phases at a flow rate of 0.7 mL min-1. A UV-Vis detector was used for the detection of the water-soluble vitamins and the fat-soluble vitamins A and D, and a fluorescence detector for vitamin E. During salting, the water- and fat-soluble vitamin contents remained largely constant, although an evident decrease was observed in vitamin B2; the diffusion of salt into the interior of the pieces and the loss of constitution water that occur during this stage would account for this significant decrease. After frying, the water-soluble and fat-soluble vitamins generally showed high retention, with values between 50% and 100%. Vitamin B3 exhibited the highest retention, with values close to 100%, whereas vitamin B9 presented the highest losses, with a retention of less than 20%.
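
Retention after cooking is commonly reported as "true retention", which corrects the per-gram vitamin ratio for the weight change of the portion during frying. A minimal Python sketch of that calculation follows; the vitamin contents and portion weights are hypothetical, not values measured in this study.

def true_retention(n_cooked: float, w_cooked: float,
                   n_raw: float, w_raw: float) -> float:
    """True retention (%): vitamin per g cooked x cooked weight,
    over vitamin per g raw x raw weight."""
    return 100.0 * (n_cooked * w_cooked) / (n_raw * w_raw)

# hypothetical vitamin B3 contents (mg/100 g) and portion weights (g)
print(f"B3 retention: {true_retention(9.5, 85.0, 8.2, 100.0):.0f}%")  # ~98%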

Keywords: Frying, HPLC, salting, swordfish, vitamins.
