Search results for: Evolutionary Optimization Techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4177


217 Q-Map: Clinical Concept Mining from Clinical Documents

Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala

Abstract:

Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis, image analytics, etc. Most of the data in the field are well-structured and available in numerical or categorical formats which can be used for experiments directly. But on the opposite end of the spectrum, there exists a wide expanse of data that is intractable for direct analysis owing to its unstructured nature: discharge summaries, clinical notes, and procedural notes written in human narrative format, which have neither a relational model nor any standard grammatical structure. An important step in utilizing these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data using information retrieval and data mining techniques. To address this problem, the authors present Q-Map, a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique, based on a string matching algorithm indexed on curated knowledge sources, that is both fast and configurable. The authors also briefly examine its comparative performance with MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.
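
The mining step described here, dictionary-backed longest-match scanning over free text, is compact enough to sketch. A minimal illustration in Python, with a hypothetical term dictionary standing in for the curated UMLS-derived knowledge sources (the concept IDs and entries below are illustrative, not Q-Map's actual index):

```python
import re

# Hypothetical curated knowledge source: surface form -> concept ID.
# A real system would index UMLS terms; these entries are illustrative.
CONCEPTS = {
    "congestive heart failure": "C0018802",
    "heart failure": "C0018801",
    "diabetes mellitus": "C0011849",
    "hypertension": "C0020538",
}

def mine_concepts(text: str) -> list[tuple[str, str]]:
    """Scan free text and return (matched phrase, concept ID) pairs,
    preferring the longest dictionary phrase at each position."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    max_len = max(len(term.split()) for term in CONCEPTS)
    hits, i = [], 0
    while i < len(tokens):
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            phrase = " ".join(tokens[i:i + n])
            if phrase in CONCEPTS:
                hits.append((phrase, CONCEPTS[phrase]))
                i += n  # the longest match consumes its tokens
                break
        else:
            i += 1
    return hits

print(mine_concepts("Pt has congestive heart failure and hypertension."))
# [('congestive heart failure', 'C0018802'), ('hypertension', 'C0020538')]
```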

Keywords: Information retrieval (IR), unified medical language system (UMLS), syntax-based analysis, natural language processing (NLP), medical informatics.

216 Assessment of Urban Heat Island through Remote Sensing in Nagpur Urban Area Using Landsat 7 ETM+ Satellite Images

Authors: Meenal Surawar, Rajashree Kotharkar

Abstract:

The Urban Heat Island (UHI) effect is an increasingly pronounced environmental concern in developing cities. To study the UHI effect in the Indian context, the Nagpur urban area has been explored in this paper using Landsat 7 ETM+ satellite images through remote sensing and GIS techniques. This paper studies the effect of the LU/LC pattern on daytime Land Surface Temperature (LST) variation, which contributes to UHI formation within the Nagpur urban area. Supervised LU/LC area classification was carried out to study urban change detection using ENVI 5. Change detection was studied by computing the Normalized Difference Vegetation Index (NDVI) to understand the proportion of vegetative cover with respect to the built-up ratio. Spectral radiance from the thermal band of the satellite images was processed to calibrate LST. Representative areas, selected on the basis of urban built-up and vegetation classification, were used for observation of point LST. Across the entire Nagpur urban area, as building density increases and vegetation cover decreases, LST increases, thereby causing the UHI effect. UHI intensity gradually increased by 0.7°C from 2000 to 2006; however, a drastic increase of 1.8°C was observed during the period 2006 to 2013. Within the Nagpur urban area, the UHI effect was formed due to the increase in building density and decrease in vegetative cover.
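
For reference, the two raster computations named in this abstract are short. A sketch in Python/NumPy using the standard NDVI definition (ETM+ bands 4 and 3) and the published Landsat 7 ETM+ thermal calibration constants; the emissivity correction shown is one common single-channel approximation, not necessarily the authors' exact workflow:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red); ETM+ bands 4 and 3."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + 1e-10)

def lst_kelvin(thermal_radiance, emissivity=0.95):
    """Brightness temperature from ETM+ band 6 radiance (W/(m^2 sr um)),
    followed by a simple single-channel emissivity correction."""
    K1, K2 = 666.09, 1282.71  # ETM+ thermal calibration constants
    t_bright = K2 / np.log(K1 / thermal_radiance + 1.0)
    wavelength, rho = 11.45e-6, 1.438e-2  # band-6 center (m), h*c/sigma (m K)
    return t_bright / (1 + (wavelength * t_bright / rho) * np.log(emissivity))

nir = np.array([[0.30, 0.05]]); red = np.array([[0.05, 0.04]])
print(ndvi(nir, red))                 # high value = vegetation, low = built-up
print(lst_kelvin(np.array([10.0])))   # ~304 K for a 10 W/(m^2 sr um) pixel
```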

Keywords: Land use, land cover, land surface temperature, remote sensing, urban heat island.

215 Deployment of Beyond 4G Wireless Communication Networks with Carrier Aggregation

Authors: Bahram Khan, Anderson Rocha Ramos, Rui R. Paulo, Fernando J. Velez

Abstract:

With the growing demand for a new blend of applications, users' dependency on the internet is increasing day by day. Mobile internet users are giving more attention to their own experience, especially in terms of communication reliability, high data rates, and service stability on the move. This increasing demand is saturating the existing radio frequency bands. To address these challenges, researchers are investigating the best approaches; Carrier Aggregation (CA) is one of the newest innovations, which seems to fulfill the demands of the future spectrum, and it is one of the most important features of Long Term Evolution - Advanced (LTE-Advanced). To meet the upcoming International Mobile Telecommunication Advanced (IMT-Advanced) mobile requirement of a 1 Gb/s peak data rate, the CA scheme was introduced by 3GPP; it sustains a high data rate using widespread frequency bandwidth of up to 100 MHz. Technical issues such as the aggregation structure, its implementations, deployment scenarios, control signal techniques, and challenges for the CA technique in LTE-Advanced, with consideration of backward compatibility, are highlighted in this paper. A performance evaluation in macro-cellular scenarios through a simulation approach is also presented, which shows the benefits of applying CA and low-complexity multi-band schedulers to service quality and system capacity enhancement. It is concluded that the enhanced multi-band scheduler is less complex than the general multi-band scheduler and performs better for a cell radius longer than 1800 m (with a PLR threshold of 2%).

Keywords: Component carrier, carrier aggregation, LTE-Advanced, scheduling, spectrum management.

214 Investigating the Effectiveness of a 3D Printed Composite Mold

Authors: Peng Hao Wang, Garam Kim, Ronald Sterkenburg

Abstract:

In composite manufacturing, the fabrication of tooling and tooling maintenance contributes a large portion of the total cost. However, as the applications of composite materials continue to increase, there is also a growing demand for more tooling. This demand places heavy emphasis on the industry's ability to fabricate high quality tools while maintaining cost effectiveness. One popular tool fabrication technique currently being developed utilizes additive manufacturing technology, known as 3D printing, whose popularity is due to its low material waste, low cost, and quick fabrication time. In this study, a team of Purdue University School of Aviation and Transportation Technology (SATT) faculty and students investigated the effectiveness of a 3D printed composite mold. A steel valve cover from an aircraft reciprocating engine was modeled utilizing 3D scanning and computer-aided design (CAD) to create a 3D printed composite mold. The mold was used to fabricate carbon fiber versions of the aircraft reciprocating engine valve cover. The carbon fiber valve covers were evaluated for dimensional accuracy and quality, while the 3D printed composite mold was evaluated for durability and dimensional stability. The data collected from this study provided valuable information on the understanding of 3D printed composite molds, potential improvements for the molds, and considerations for future tooling design.

Keywords: Additive manufacturing, carbon fiber, composite tooling, molds.

213 MinRoot and CMesh: Interconnection Architectures for Network-on-Chip Systems

Authors: Mohammad Ali Jabraeil Jamali, Ahmad Khademzadeh

Abstract:

The success of an electronic system in a System-on-Chip is highly dependent on the efficiency of its interconnection network, which is constructed from routers and channels (the routers move data across the channels between nodes). Since neither classical bus-based nor point-to-point architectures can provide scalable solutions and satisfy the tight power and performance requirements of future applications, the Network-on-Chip (NoC) approach has recently been proposed as a promising solution. Indeed, in contrast to the traditional solutions, the NoC approach can provide large bandwidth with moderate area overhead. The selected topology of the component interconnects plays a prime role in the performance of a NoC architecture, as do the routing and switching techniques that can be used. In this paper, we present two generic NoC architectures that can be customized to the specific communication needs of an application in order to reduce the area with minimal degradation of the latency of the system. An experimental study is performed to compare these structures with basic NoC topologies represented by the 2D mesh, Butterfly-Fat Tree (BFT), and SPIN. It is shown that the Cluster mesh (CMesh) and MinRoot schemes achieve significant improvements in network latency and energy consumption, with only negligible area overhead and complexity, over existing architectures. Compared with the basic NoC topologies, the CMesh and MinRoot schemes provide substantial savings in area as well, because they require fewer routers. The simulation results show that CMesh and MinRoot networks outperform MESH, BFT, and SPIN in the main performance metrics.

Keywords: MinRoot, CMesh, NoC, topology, performance evaluation.

212 A Perceptually Optimized Foveation Based Wavelet Embedded Zero Tree Image Coding

Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf

Abstract:

In this paper, we propose a Perceptually Optimized Foveation based Embedded ZeroTree Image Coder (POEFIC) that applies perceptual weighting to wavelet coefficients prior to SPIHT encoding in order to reach a targeted bit rate with improved perceptual quality around a given fixation point, which determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS), which plays an important role in our POEFIC quality assessment. Our POEFIC coder is based on a vision model that incorporates various masking effects of HVS perception. Thus, our coder weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) foveation masking, to remove or reduce considerable high frequencies from peripheral regions; 2) luminance and contrast masking; and 3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique. However, the experimental results show that our coder demonstrates very good performance in terms of quality measurement.
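
The coefficient-weighting step that drives such a coder can be illustrated with PyWavelets, whose bior4.4 filter corresponds to the linear-phase 9/7 pair named in the keywords. The per-subband weights below are placeholders, not the CSF/foveation/masking values derived in the paper:

```python
import numpy as np
import pywt

img = np.random.rand(64, 64)  # stand-in for the input image

# 3-level DWT with the CDF 9/7 (linear-phase) filter bank.
coeffs = pywt.wavedec2(img, wavelet="bior4.4", level=3)

# Illustrative perceptual weights per level, coarse to fine; a real coder
# would derive these from the CSF, foveation and masking models.
weights = [1.0, 0.8, 0.5, 0.3]

coeffs[0] = coeffs[0] * weights[0]  # approximation subband
for lvl, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    w = weights[lvl]
    coeffs[lvl] = (cH * w, cV * w, cD * w)

# The weighted coefficient pyramid is what a SPIHT coder would then encode;
# waverec2 is shown only as a sanity check that the transform is invertible.
recon = pywt.waverec2(coeffs, wavelet="bior4.4")
print(recon.shape)
```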

Keywords: DWT, linear-phase 9/7 filter, foveation filtering, CSF implementation approaches, 9/7 wavelet JND thresholds, wavelet error sensitivity (WES), luminance and contrast masking, standard SPIHT, objective quality measure, probability score (PS).

211 Comparative Study Using Weka for Red Blood Cells Classification

Authors: Jameela Ali Alkrimi, Hamid A. Jalab, Loay E. George, Abdul Rahim Ahmad, Azizah Suliman, Karim Al-Jashamy

Abstract:

Red blood cells (RBCs) are the most common type of blood cell and are the most intensively studied in cell biology. A deficiency of RBCs, in which the hemoglobin level is lower than normal, is referred to as anemia. Abnormalities in RBCs will affect the exchange of oxygen. This paper presents a comparative study of various techniques for classifying RBCs as normal or abnormal (anemic) using WEKA. WEKA is an open-source suite of machine learning algorithms for data mining applications. The algorithms tested are the Radial Basis Function neural network, the Support Vector Machine, and the K-Nearest Neighbors algorithm. Two sets of combined features were utilized for classification of the blood cell images. The first set, consisting exclusively of geometrical features, was used to identify whether the tested blood cell is spherical or non-spherical. The second set, consisting mainly of textural features, was used to recognize the types of the spherical cells. We provide an evaluation based on applying these classification methods to our RBC image dataset, obtained from Serdang Hospital, Malaysia, and measuring the accuracy of the test results. The best achieved classification rates are 97%, 98%, and 79% for Support Vector Machines, the Radial Basis Function neural network, and the K-Nearest Neighbors algorithm, respectively.
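
The same classifier comparison can be reproduced outside WEKA. A minimal sketch using scikit-learn as a stand-in, with synthetic features in place of the hospital image data and an RBF-kernel SVM standing in for the RBF neural network (which scikit-learn does not ship):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for combined geometrical + textural RBC features.
X, y = make_classification(n_samples=300, n_features=12, n_informative=6,
                           random_state=0)

models = {
    "SVM (RBF kernel)": SVC(kernel="rbf", C=1.0),
    "k-NN (k=5)": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)  # scale features first
    scores = cross_val_score(pipe, X, y, cv=10)  # 10-fold CV accuracy
    print(f"{name}: {scores.mean():.3f}")
```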

Keywords: K-Nearest Neighbors, Neural Network, Radial Basis Function, Red blood cells, Support vector machine.

210 Applying p-Balanced Energy Technique to Solve Liouville-Type Problems in Calculus

Authors: Lina Wu, Ye Li, Jia Liu

Abstract:

We are interested in solving Liouville-type problems to explore constancy properties for maps or differential forms on Riemannian manifolds. Geometric structures on manifolds, the existence of constancy properties for maps or differential forms, and energy growth for maps or differential forms are intertwined. In this article, we concentrate on the discovery of solutions to Liouville-type problems where the manifolds are Euclidean spaces (i.e. flat Riemannian manifolds) and the maps become real-valued functions. Liouville-type results on vanishing properties for functions are obtained. The original contribution of our research is to extend the q-energy for a function from finite in Lq space to infinite in non-Lq space by applying the p-balanced technique, where q = p = 2. Calculation techniques such as Hölder's inequality and tests for series have been used to evaluate limits and integrations for the function energy. The calculation ideas and computational techniques for solving Liouville-type problems shown in this article, which are applied in Euclidean spaces, can be generalized as a successful algorithm that works for both maps and differential forms on Riemannian manifolds. This algorithm has a far-reaching impact on the research work of solving Liouville-type problems in general settings involving infinite energy. The p-balanced technique in this algorithm provides a clue to success on the road of q-energy extension from finite to infinite.
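
For reference, the inequality invoked here, in the integral form typically used to bound energy terms:

```latex
% Hölder's inequality on a measure space (M, \mu), for conjugate
% exponents 1/p + 1/q = 1 with p, q > 1:
\int_M |f g| \, d\mu
  \;\le\;
  \left( \int_M |f|^p \, d\mu \right)^{1/p}
  \left( \int_M |g|^q \, d\mu \right)^{1/q}
```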

Keywords: Differential Forms, Hölder Inequality, Liouville-type problems, p-balanced growth, p-harmonic maps, q-energy growth, tests for series.

209 The Impact of Regulatory Changes on the Development of Mobile Medical Apps

Authors: M. McHugh, D. Lillis

Abstract:

Mobile applications are being used to perform a wide variety of tasks in day-to-day life, ranging from checking email to controlling your home heating. Application developers have recognized the potential to transform a smart device, i.e. a mobile phone or a tablet, into a medical device by using a mobile medical application. When initially conceived, these mobile medical applications performed basic functions, e.g. BMI calculation or accessing reference material; however, increasing complexity offers clinicians and patients a wide range of functionality. As this complexity and functionality increase, so too does the potential risk associated with using such an application. Examples include applications that provide the ability to inflate and deflate blood pressure cuffs, as well as applications that use patient-specific parameters to calculate dosage or create a dosage plan for radiation therapy. If an unapproved mobile medical application is marketed by a medical device organization, the organization faces significant penalties, such as receiving an FDA warning letter to cease the prohibited activity, fines, and the possibility of a criminal conviction. Regulatory bodies have finalized guidance intended to help mobile application developers establish whether their applications are subject to regulatory scrutiny. However, regulatory controls appear to contradict the approaches taken by mobile application developers, who generally work with short development cycles and very little documentation; as such, there is the potential for these regulations to stifle further improvements. The research presented in this paper details how, by adopting development techniques such as agile software development, mobile medical application developers can meet regulatory requirements whilst still fostering innovation.

Keywords: Medical, mobile, applications, software engineering, FDA, standards, regulations, agile.

208 Impact of Vehicle Travel Characteristics on Level of Service: A Comparative Analysis of Rural and Urban Freeways

Authors: Anwaar Ahmed, Muhammad Bilal Khurshid, Samuel Labi

Abstract:

The effect of trucks on the level of service is determined by considering passenger car equivalents (PCE) of trucks. The current version of the Highway Capacity Manual (HCM) uses a single PCE value for all trucks combined. However, the composition of truck traffic varies from location to location; therefore, a single PCE value for all trucks may not correctly represent the impact of truck traffic at specific locations. Consequently, the present study developed separate PCE values for single-unit and combination trucks to replace the single value provided in the HCM on different freeways. Site-specific PCE values were developed using the concept of spatial lagging headways (the distance between the rear bumpers of two successive vehicles in a traffic stream) measured from field traffic data. The study used data from four locations on a single urban freeway and three different rural freeways in Indiana. Three-stage least squares (3SLS) regression techniques were used to generate models that predicted lagging headways for passenger cars, single-unit trucks (SUT), and combination trucks (CT). The estimated PCE values for single-unit and combination trucks for basic urban freeways (level terrain) were 1.35 and 1.60, respectively. For rural freeways, the estimated PCE values for single-unit and combination trucks were 1.30 and 1.45, respectively. As expected, traffic variables such as vehicle flow rates and speed have significant impacts on vehicle headways. The study results revealed that the use of separate PCE values for different truck classes can have a significant influence on LOS estimation.
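
One common headway-ratio formulation of PCE (a simplification for illustration; the study itself estimates headways with 3SLS regression models rather than raw means) expresses the PCE of a truck class as the ratio of its mean lagging headway to that of passenger cars:

```latex
% PCE of truck class T from spatial lagging headways, where \bar{h}_T and
% \bar{h}_{PC} are the mean headways of that truck class and of passenger cars:
\mathrm{PCE}_T = \frac{\bar{h}_T}{\bar{h}_{PC}}
% e.g. a mean SUT headway 1.35 times the passenger-car headway would
% reproduce the urban single-unit value reported above.
```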

Keywords: Level of service, capacity analysis, lagging headway.

207 Effect of Muscle Energy Technique on Anterior Pelvic Tilt in Lumbar Spondylosis Patients

Authors: Enas Elsayed Abutaleb, Mohamed Taher Eldesoky, Shahenda Abd El Rasol

Abstract:

Background: Muscle Energy Techniques (MET) have been widely used by manual therapists over the past years, but limited research has validated their use, and there is limited evidence to substantiate the theories used to explain their effects. Objective: To investigate the effect of MET on anterior pelvic tilt in patients with lumbar spondylosis. Design: Randomized controlled trial. Subjects: Thirty patients with anterior pelvic tilt, of both sexes and aged 35 to 50 years, were divided into MET and control groups with 15 patients in each. Methods: All patients received 3 sessions/week for 4 weeks; the study group received MET, ultrasound (US) and infrared (IR), and the control group received US and IR only. The pelvic angle was measured by palpation meter, pain severity by the visual analogue scale, and functional disability by the Oswestry disability index. Results: Both groups showed significant improvement in all measured variables. The MET group was significantly better than the control group in pelvic angle, pain severity, and functional disability, with p-values of 0.001, 0.0001, and 0.0001, respectively. Conclusion and implication: The study group achieved greater improvement in all measured variables than the control group, which implies that the application of MET in combination with US and IR is more effective in improving pelvic tilt angle, pain severity, and functional disability than using electrotherapy only.

Keywords: Anterior pelvic tilt, lumbar spondylosis, muscle energy technique exercise, palpation meter.

206 Vibration Analysis of Gas Turbine SIEMENS 162MW - V94.2 Related to Iran Power Plant Industry in Fars Province

Authors: Omid A. Zargar

Abstract:

Vibration analysis of the most critical equipment is considered one of the most challenging activities in preventive maintenance. Utilities are the heart of the process in big industrial plants like petrochemical zones. Vibration analysis methods and condition monitoring systems for this kind of equipment have developed considerably in recent years. On the other hand, there are many operating factors, such as inlet and outlet pressures and temperatures, that should be monitored. In this paper, some of the most effective concepts and techniques related to gas turbine vibration analysis are discussed. In addition, a vibration case history of a gas turbine SIEMENS 162MW - V94.2, related to the Iran power industry in Fars province, is explained. The vibration monitoring system and machinery technical specification are introduced. Besides, absolute and relative vibration trends, turbine and compressor orbits, Fast Fourier Transform (FFT) of absolute vibrations, vibration modal analysis, turbine and compressor start-up and shut-down conditions, Bode diagrams for relative vibrations, Nyquist diagrams, and waterfall or three-dimensional FFT diagrams in start-up and trip conditions are discussed with the relevant graphs. Furthermore, split resonance in gas turbines is discussed in detail. Moreover, some updated vibration monitoring systems, blade manufacturing techniques, and modern damping mechanisms are discussed in this paper.
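
The FFT step mentioned throughout reduces to a few lines of code. A minimal sketch in Python, with a synthetic signal standing in for the plant's casing and proximity-probe data (the sampling rate and frequencies are assumed):

```python
import numpy as np

fs = 2048                          # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
# Synthetic signal: 50 Hz running speed (1X) plus a 2X harmonic and noise.
x = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 100 * t) \
    + 0.05 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(x)) * 2 / x.size   # single-sided amplitude
freqs = np.fft.rfftfreq(x.size, 1 / fs)

for k in np.argsort(spectrum)[-2:][::-1]:        # two dominant peaks
    print(f"{freqs[k]:6.1f} Hz  amplitude {spectrum[k]:.2f}")
# Expected: ~50 Hz (1X) and ~100 Hz (2X) components.
```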

Keywords: Gas turbine, turbine compressor, vibration data collector, utility, condition monitoring, non-contact probe, relative vibration, absolute vibration, split resonance, time wave form (TWF), Fast Fourier Transform (FFT).

205 Experimental Correlation for Erythrocyte Aggregation Rate in Population Balance Modeling

Authors: Erfan Niazi, Marianne Fenech

Abstract:

Red Blood Cells (RBCs), or erythrocytes, tend to form chain-like aggregates called rouleaux under low shear rates. This is a reversible process, and rouleaux disaggregate at high shear rates. Therefore, RBC aggregation occurs in the microcirculation, where low shear rates are present, but does not occur under normal physiological conditions in large arteries. Numerical modeling of RBC interactions is fundamental in analytical models of blood flow in the microcirculation. Population Balance Modeling (PBM) is particularly useful for studying problems where particles agglomerate and break in two-phase flow systems to find flow characteristics. In this method, the elementary particles lose their individual identity due to continuous destruction and recreation by break-up and agglomeration. The aim of this study is to find RBC aggregation in a dynamic situation. A simplified PBM was used previously to find the aggregation rate from a static observation of RBC aggregation in a drop of blood under the microscope. To find the aggregation rate in a dynamic situation, we propose an experimental set-up testing RBC sedimentation. In this test, RBCs interact and aggregate to form rouleaux. In this configuration, disaggregation can be neglected due to the low shear stress. A high-speed camera is used to acquire video-microscopic pictures of the process. The sizes of the aggregates and the sedimentation velocity are extracted using image processing techniques. Based on the data collected from 5 healthy human blood samples, the aggregation rate was estimated as 2.7×10³ (±0.3×10³) 1/s.
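
In a population balance model of this kind, the measured constant enters as the kernel of a Smoluchowski coagulation system. A minimal sketch in Python with a constant aggregation kernel and breakup neglected, as in the sedimentation test; the normalized concentrations and time window are illustrative, not the paper's parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp

K = 2.7e3    # aggregation rate constant, 1/s (value reported in the abstract)
kmax = 10    # track aggregates of 1..kmax cells

def smoluchowski(t, n):
    """dn_k/dt = 0.5*K*sum_{i+j=k} n_i n_j - K*n_k*sum_j n_j (no breakup)."""
    dn = np.zeros_like(n)
    total = n.sum()
    for k in range(kmax):
        birth = 0.5 * sum(n[i] * n[k - 1 - i] for i in range(k))
        dn[k] = K * birth - K * n[k] * total
    return dn

n0 = np.zeros(kmax)
n0[0] = 1.0                                   # normalized singlet concentration
sol = solve_ivp(smoluchowski, (0, 1e-3), n0, t_eval=[0, 5e-4, 1e-3])
print(sol.y[:3, -1])   # singlets, doublets, triplets after 1 ms of aggregation
```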

Keywords: Red blood cell, Rouleaux, microfluidics, image processing, population balance modeling.

204 Self-Healing Phenomenon Evaluation in Cementitious Matrix with Different Water/Cement Ratios and Crack Opening Age

Authors: V. G. Cappellesso, D. M. G. da Silva, J. A. Arndt, N. dos Santos Petry, A. B. Masuero, D. C. C. Dal Molin

Abstract:

Concrete elements are subject to cracking, and cracks can be an access point for deleterious agents that trigger pathological manifestations, reducing the service life of these structures. Finding ways to minimize or eliminate the effects of these aggressive agents' penetration, such as the sealing of the cracks, is a way of contributing to the durability of these structures. The cementitious self-healing phenomenon can be classified into two different processes: autogenous self-healing, a natural process in which the sealing of cracks occurs without the stimulation of external agents, meaning without different materials being added to the mixture; and autonomous self-healing, which depends on the insertion of a specific engineered material into the cement matrix in order to promote its recovery. This work aims to evaluate the autogenous self-healing of concretes produced with different water/cement ratios and exposed to wet/dry cycles, considering two crack opening ages: 3 days and 28 days. The self-healing phenomenon was evaluated using two techniques: crack healing measurement using ultrasonic waves, and image analysis performed with an optical microscope. By both methods, it is possible to observe the self-healing of the cracks. For young crack opening ages and lower water/cement ratios, the self-healing capacity is higher than at advanced crack opening ages and higher water/cement ratios. Regardless of the crack opening age, these concretes were found to stabilize the self-healing processes after 80 to 90 days.

Keywords: Self-healing, autogenous, water/cement ratio, curing cycles, test methods.

203 Investigation of VMAT Algorithms and Dosimetry

Authors: A. Taqaddas

Abstract:

Purpose: The planning and dosimetry of different VMAT algorithms (SmartArc, Ergo++, Autobeam) are compared with IMRT for head and neck cancer patients. Modelling was performed to rule out the causes of discrepancies between planned and delivered dose. Methods: Five HNC patients previously treated with IMRT were re-planned with SmartArc (SA), Ergo++ and Autobeam. Plans were compared with each other and against IMRT and evaluated using DVHs for PTVs and OARs, delivery time, monitor units (MU) and dosimetric accuracy. Modelling of control point (CP) spacing, leaf-end separation and MLC/aperture shape was performed to rule out causes of discrepancies between planned and delivered doses. Additionally, estimated arc delivery times, overall plan generation times and the effect of CP spacing and number of arcs on plan generation times were recorded. Results: Single arc SmartArc plans (SA4d) were generally better than IMRT and double arc plans (SA2Arcs) in terms of homogeneity and target coverage. Double arc plans seemed to have a positive role in achieving an improved Conformity Index (CI) and better sparing of some Organs at Risk (OARs) compared to step-and-shoot IMRT (ss-IMRT) and SA4d. Overall, Ergo++ plans achieved the best CI for both PTVs. Dosimetric validation of all VMAT plans without modelling was found to be lower than ss-IMRT. Total MUs required for delivery were on average 19%, 30%, 10.6% and 6.5% lower than ss-IMRT for SA4d, SA2d (single arc with 2° gantry spacing), SA2Arcs and Autobeam plans, respectively. Autobeam was the most efficient in terms of actual treatment delivery times, whereas Ergo++ plans took the longest to deliver. Conclusion: Overall, SA single arc plans on average achieved the best target coverage and homogeneity for both PTVs. SA2Arc plans showed improved CI and sparing of some OARs. Very good dosimetric results were achieved with modelling. Ergo++ plans achieved the best CI. Autobeam resulted in the fastest treatment delivery times.

Keywords: Dosimetry, Intensity Modulated Radiotherapy, Optimization Algorithms, Volumetric Modulated Arc Therapy.

202 Multiaxial Fatigue Analysis of a High Performance Nickel-Based Superalloy

Authors: P. Selva, B. Lorrain, J. Alexis, A. Seror, A. Longuet, C. Mary, F. Denard

Abstract:

Over the past four decades, the fatigue behavior of nickel-based alloys has been widely studied. However, in recent years, significant advances in the fabrication process leading to grain size reduction have been made in order to improve the fatigue properties of aircraft turbine discs. Indeed, a change in particle size affects the initiation mode of fatigue cracks as well as the fatigue life of the material. The present study aims to investigate the fatigue behavior of a newly developed nickel-based superalloy under biaxial-planar loading. Low Cycle Fatigue (LCF) tests are performed at different stress ratios so as to study the influence of the multiaxial stress state on the fatigue life of the material. Full-field displacement and strain measurements as well as crack initiation detection are obtained using Digital Image Correlation (DIC) techniques. The aim of this presentation is first to provide an in-depth description of both the experimental set-up and protocol: the multiaxial testing machine, the specific design of the cruciform specimen, and the performance of the DIC code are introduced. Second, results for sixteen specimens related to different load ratios are presented. Crack detection, strain amplitude, and number of cycles to crack initiation vs. triaxial stress ratio for each loading case are given. Third, from fractographic investigations by scanning electron microscopy, it is found that the mechanism of fatigue crack initiation does not depend on the triaxial stress ratio and that most fatigue cracks initiate from subsurface carbides.

Keywords: Cruciform specimen, multiaxial fatigue, nickel-based superalloy.

201 Comparison of Polynomial and Radial Basis Kernel Functions based SVR and MLR in Modeling Mass Transfer by Vertical and Inclined Multiple Plunging Jets

Authors: S. Deswal, M. Pal

Abstract:

Presently, various computational techniques are used in modeling and analyzing environmental engineering data. In the present study, an intra-comparison of polynomial and radial basis kernel functions based Support Vector Regression and, in turn, an inter-comparison with Multi Linear Regression have been attempted in modeling the mass transfer capacity of vertical (θ = 90°) and inclined multiple plunging jets (varying from 1 to 16 in number). The data set used in this study consists of four input parameters with a total of eighty-eight cases, forty-four each for vertical and inclined multiple plunging jets. For testing, tenfold cross validation was used. Correlation coefficient values of 0.971 and 0.981, along with corresponding root mean square error values of 0.0025 and 0.0020, were achieved by using polynomial and radial basis kernel functions based Support Vector Regression, respectively. The intra-comparison suggests improved performance by the radial basis kernel function in comparison to the polynomial kernel based Support Vector Regression. Further, the inter-comparison with Multi Linear Regression (correlation coefficient = 0.973 and root mean square error = 0.0024) reveals that radial basis kernel function based Support Vector Regression performs better in modeling and estimating mass transfer by multiple plunging jets.
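
The model comparison described here maps directly onto a few lines of scikit-learn, sketched below with synthetic stand-ins for the four jet parameters and the measured mass-transfer response:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(size=(88, 4))   # 4 jet parameters, 88 cases (synthetic)
y = X @ [0.4, 0.3, 0.2, 0.1] + 0.05 * rng.standard_normal(88)

models = {
    "SVR (polynomial)": SVR(kernel="poly", degree=2, C=10.0),
    "SVR (RBF)": SVR(kernel="rbf", C=10.0),
    "MLR": LinearRegression(),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)
    r2 = cross_val_score(pipe, X, y, cv=10, scoring="r2")  # tenfold CV
    print(f"{name}: mean R^2 = {r2.mean():.3f}")
```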

Keywords: Mass transfer, multiple plunging jets, polynomial and radial basis kernel functions, Support Vector Regression.

200 Interactive Effects in Blended Learning Mode: Exploring Hybrid Data Sources and Iterative Linkages

Authors: Hock Chuan Lim

Abstract:

This paper presents an approach for identifying interactive effects using Network Science (NS) supported by Social Network Analysis (SNA) techniques. Based on the general observation that learning processes and behaviors are shaped by social relationships and influenced by the learning environment, the central idea was to understand both the human and non-human interactive effects for a blended learning mode of delivery of computer science modules. Important findings include (a) the importance of non-human nodes in influencing centrality and transfer, and (b) the degree to which non-human and human connectivity impacts learning. This project reveals that the NS pattern and connectivity, as measured by node relationships, offer an alternative approach to hypothesis generation and the design of qualitative data collection. An iterative process further reinforces the analysis. Whereas experimental simulation is itself an interesting alternative option, a hybrid combination of experimental simulation and qualitative data collection presents itself as a promising and viable means to study complex scenarios such as the blended learning delivery mode. The primary value of this paper lies in the design of the approach for studying interactive effects of human (social nodes) and non-human (learning/study environment, Information and Communication Technologies (ICT) infrastructure nodes) components. In conclusion, this project adds to the understanding and use of SNA to model and study interactive effects in blended social learning.
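
The node-relationship measures underlying this analysis can be illustrated with NetworkX. A toy sketch, with invented student and resource names for the human and non-human nodes:

```python
import networkx as nx

# Toy blended-learning interaction graph: human nodes (students) and
# non-human nodes (LMS forum, lab environment). Names are illustrative.
G = nx.Graph()
G.add_edges_from([
    ("alice", "bob"), ("alice", "carol"),          # social ties
    ("alice", "LMS_forum"), ("bob", "LMS_forum"),  # human <-> non-human
    ("carol", "lab_env"), ("bob", "lab_env"),
])

centrality = nx.betweenness_centrality(G)
for node, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node:10s} betweenness = {c:.3f}")
# Non-human nodes often rank high, mirroring finding (a): they influence
# centrality and transfer in the learning network.
```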

Keywords: Blended learning, network science, social learning, social network analysis, study environment.

199 Automated Natural Hazard Zonation System with Internet-SMS Warning: Distributed GIS for Sustainable Societies Creating Schema & Interface for Mapping & Communication

Authors: Devanjan Bhattacharya, Jitka Komarkova

Abstract:

This research describes the implementation of a novel, stand-alone system for dynamic hazard warning. The system uses existing infrastructure already in place, such as mobile networks and a laptop/PC, plus a small software installation. The geospatial datasets are maps of a region, which are likewise inexpensive; hence there is little need for investment, and the system reaches everyone with a mobile phone. A novel architecture for hazard assessment and warning is introduced, in which major ICT technologies are interfaced to give a unique WebGIS-based dynamic, real-time geohazard warning communication system; this is a new architectural approach to integrating WebGIS with telecommunication technology. Existing technologies are interfaced in a novel architectural design to address a neglected domain through dynamically updatable WebGIS-based warning communication. The work presents this new architecture and its novelty in addressing hazard warning techniques in a sustainable and user-friendly manner. The coupling of hazard zonation and hazard warning procedures into a single system is shown, and a generalized architecture for deciphering a range of geo-hazards has been developed. Hence, the developmental work presented here can be summarized as: the development of an internet-SMS based automated geo-hazard warning communication system; the integration of a warning communication system with a hazard evaluation system; the interfacing of different open-source technologies towards the design and development of a warning system; the modularization of different technologies towards the development of a warning communication system; and automated data creation, transformation and dissemination over different interfaces. The architecture of the developed warning system has been functionally automated and generalized enough that it can be used for any hazard, and the setup requirement has been kept to a minimum.

Keywords: Geospatial, web-based GIS, geohazard, warning system.

198 Improved Fuzzy Neural Modeling for Underwater Vehicles

Authors: O. Hassanein, Sreenatha G. Anavatti, Tapabrata Ray

Abstract:

The dynamics of Autonomous Underwater Vehicles (AUVs) are highly nonlinear and time-varying, and the hydrodynamic coefficients of vehicles are difficult to estimate accurately because of the variation of these coefficients with different navigation conditions and external disturbances. This study presents on-line system identification of AUV dynamics to obtain the coupled nonlinear dynamic model of the AUV as a black box. This black box has an input-output relationship based upon on-line adaptive fuzzy model and adaptive neural fuzzy network (ANFN) model techniques, to overcome uncertain external disturbances and the difficulties of modelling the hydrodynamic forces of AUVs, instead of using a mathematical model with hydrodynamic parameter estimation. The models' parameters are adapted according to the back propagation algorithm, based upon the error between the identified model and the actual output of the plant. The proposed ANFN model adopts a functional link neural network (FLNN) as the consequent part of the fuzzy rules. Thus, the consequent part of the ANFN model is a nonlinear combination of input variables. A fuzzy control system is applied to guide and control the AUV using both the adaptive models and the mathematical model. Simulation results show the superiority of the proposed ANFN model in accurately tracking the behavior of the AUV, even in the presence of noise and disturbance.

Keywords: AUV, AUV dynamic model, fuzzy control, fuzzy modelling, adaptive fuzzy control, back propagation, system identification, neural fuzzy model, FLNN.

197 Surfactant Stabilized Nanoemulsion: Characterization and Application in Enhanced Oil Recovery

Authors: Ajay Mandal, Achinta Bera

Abstract:

Nanoemulsions are a class of emulsions with a droplet size in the range of 50–500 nm, and they have attracted a great deal of attention in recent years because of their unique characteristics. The physicochemical properties of nanoemulsions suggest that they can be successfully used to recover the residual oil which is trapped in the fine pores of reservoir rock by capillary forces after primary and secondary recovery. Oil-in-water nanoemulsions, which can be formed by high-energy emulsification techniques using specific surfactants, can reduce oil-water interfacial tension (IFT) by 3-4 orders of magnitude. The present work is aimed at the characterization of oil-in-water nanoemulsions in terms of their phase behavior, morphology, interfacial energy, and ability to reduce interfacial tension, and at understanding the mechanisms of mobilization and displacement of entrapped oil blobs by lowering interfacial tension at both the macroscopic and microscopic levels. In order to investigate the efficiency of oil-in-water nanoemulsions in enhanced oil recovery (EOR), experiments were performed to characterize the emulsions in terms of their physicochemical properties and the size distribution of the dispersed oil droplets in the water phase. Synthetic mineral oil and a series of surfactants were used to prepare the oil-in-water emulsions. Characterization shows that the emulsion follows pseudo-plastic behaviour and that the drop size of the dispersed oil phase follows a lognormal distribution. Flooding experiments were also carried out in a sandpack system to evaluate the effectiveness of the nanoemulsion as a displacing fluid for enhanced oil recovery. Substantial additional recoveries (more than 25% of the original oil in place) over conventional water flooding were obtained in the present investigation.

Keywords: Nanoemulsion, characterization, enhanced oil recovery, particle size distribution.

196 Rare Earth Elements in Soils of Jharia Coal Field

Authors: R. E. Masto, L. C. Ram, S. K. Verma, V. A. Selvi, J. George, R. C. Tripathi, N. K. Srivastava, D. Mohanty, S. K. Jha, A. K. Sinha, A. Sinha

Abstract:

There are many sources through which soils get enriched in and contaminated with REEs. The determination of REEs in environmental samples has been limited because of the lack of sensitive analytical techniques. Soil samples were collected from four sites, including an open cast coal mine, a natural coal burning site, a coal washery, and a control, in the coal field located in Dhanbad, India. Total concentrations of rare earth elements (REEs) were determined using inductively coupled plasma atomic absorption spectrometry in order to assess enrichment status in the coal field. Results showed that the mean concentrations of La, Pr, Eu, Tb, Ho, and Tm in the open cast mine and natural coal burning sites were elevated compared to the reference concentrations, while Ce, Nd, Sm, and Gd were elevated in the coal washery site. When compared to the reference soil, heavy REEs (HREEs) were enriched in soils affected by open cast mining and natural coal burning; however, the HREEs were depleted in the coal washery site. The chondrite-normalization diagram, though, showed significant enrichment of light REEs (LREEs) in all the soils. The high concentrations of Pr, Eu, Tb, Ho, Tm, and Lu in the coal mining and coal burning sites may pose human health risks. Factor analysis showed that the distribution and relative abundance of REEs at the coal washery site are comparable with the control. Consequently, washing or cleaning of coal could significantly decrease the emission of REEs from coal into the environment.

Keywords: Rare earth elements, coal, soil, factor analysis.

195 Windphil Poetic in Architecture: Energy Efficient Strategies in Modern Buildings of Iran

Authors: Sepideh Samadzadehyazdi, Mohammad Javad Khalili, Sarvenaz Samadzadehyazdi, Mohammad Javad Mahdavinejad

Abstract:

The term ‘Windphil Architecture’ refers to buildings that facilitate natural ventilation through architectural elements. Natural ventilation uses the natural forces of wind pressure and the stack effect to direct the movement of air through buildings. It is increasingly being used in contemporary buildings to minimize the consumption of non-renewable energy, and it is an effective way to improve indoor air quality. The main objective of this paper is to identify strategies for using natural ventilation in Iranian modern buildings. In this regard, the research method is ‘descriptive-analytical’, based on comparative techniques. To simulate wind flow in the interior spaces of the case studies, FLUENT software has been used. The results show that it is possible to use natural ventilation to create a thermally comfortable indoor environment. The natural ventilation strategies can be classified into two groups: environmental characteristics, such as public space structure; and architectural characteristics, including building form and orientation, openings, central courtyards, wind catchers, roofs, wall wings, semi-open spaces, and the heat capacity of materials. The investigation of modern buildings in Iran shows that innovative elements like wind catchers and wall wings are used less than in traditional architecture. Instead, passive ventilation strategies such as the roof structure and openings have received more consideration in building design.

Keywords: Natural ventilation strategies, wind catchers, wind flow, Iranian modern buildings.

194 Selecting the Best Sub-Region Indexing the Images in the Case of Weak Segmentation Based On Local Color Histograms

Authors: Mawloud Mosbah, Bachir Boucheham

Abstract:

The color histogram is considered the oldest method used by CBIR systems for indexing images. However, global histograms do not include spatial information; this is why later techniques have attempted to overcome this limitation by involving a segmentation task as a preprocessing step. Weak segmentation is employed by local histograms, while other methods, such as CCV (Color Coherence Vector), are based on strong segmentation. Indexing based on local histograms consists of splitting the image into N overlapping blocks or sub-regions and then computing the histogram of each block. The dissimilarity between two images is thereby reduced to computing the distances between the N local histograms of the two images, resulting in N*N values; generally, the lowest value is taken into account to rank images, meaning that the lowest value designates which sub-region is used to index the images of the collection being queried. In this paper, we examine the local histogram indexing method, comparing its results against those given by the global histogram. We also address another noteworthy issue when relying on local histograms: namely, which value among the N*N values to trust when comparing images; in other words, which sub-region comparison to base the indexing on. Based on the results achieved here, it seems that relying on local histograms, which imposes extra overhead on the system by involving another preprocessing step, namely segmentation, does not necessarily produce better results. In addition, we propose some ideas for selecting the local histogram used to encode the image, rather than relying on the local histogram having the lowest distance to the query histograms.
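
The indexing scheme under discussion is easy to make concrete. A sketch in NumPy that splits each image into a grid of sub-regions (overlap omitted for brevity), computes one color histogram per sub-region, and takes the minimum pairwise Euclidean distance as the ranking value:

```python
import numpy as np

def local_histograms(img, n=2, bins=8):
    """Split an RGB image into an n x n grid of sub-regions and return one
    normalized color histogram per sub-region (overlap omitted for brevity)."""
    h, w, _ = img.shape
    hists = []
    for i in range(n):
        for j in range(n):
            block = img[i*h//n:(i+1)*h//n, j*w//n:(j+1)*w//n]
            hist, _ = np.histogramdd(block.reshape(-1, 3),
                                     bins=(bins,) * 3, range=((0, 256),) * 3)
            hists.append(hist.ravel() / hist.sum())
    return np.array(hists)

def dissimilarity(img_a, img_b, n=2):
    """All pairwise Euclidean distances between the two images' local
    histograms (N*N values); the minimum is the usual ranking value."""
    ha, hb = local_histograms(img_a, n), local_histograms(img_b, n)
    d = np.linalg.norm(ha[:, None, :] - hb[None, :, :], axis=2)
    return d.min(), d

img1 = np.random.randint(0, 256, (64, 64, 3))
img2 = np.random.randint(0, 256, (64, 64, 3))
print(dissimilarity(img1, img2)[0])
```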

Keywords: CBIR, global color histogram, local color histogram, weak segmentation, Euclidean distance.

193 Bio-Surfactant Production and Its Application in Microbial EOR

Authors: A. Rajesh Kanna, G. Suresh Kumar, Sathyanaryana N. Gummadi

Abstract:

There are various sources of energy available worldwide, and among them, crude oil plays a vital role. Oil recovery is achieved using conventional primary and secondary recovery methods. In order to recover the remaining residual oil, technologies like Enhanced Oil Recovery (EOR), also known as tertiary recovery, are utilized. Among EOR methods, microbial enhanced oil recovery (MEOR) is a technique that improves oil recovery by injection of bio-surfactant produced by microorganisms. Bio-surfactant can retrieve unrecoverable oil from the cap rock that is held by high capillary forces. Bio-surfactant is a surface active agent which can reduce the interfacial tension and reduce the viscosity of oil, so that oil can be recovered to the surface as its mobility is increased. Research in this area has shown promising results; besides, the method is eco-friendly and cost effective compared with other EOR techniques. In our research, on a laboratory scale, we produced bio-surfactant using the strain Pseudomonas putida (MTCC 2467) and injected it into a simple sand packed column designed to resemble an actual petroleum reservoir. The experiment was conducted in order to determine the efficiency of the produced bio-surfactant in oil recovery. The column was made of plastic, 10 cm in length and 2.5 cm in diameter, and was packed with fine sand. The sand was saturated with brine initially, followed by oil saturation. Water flooding followed by bio-surfactant injection was carried out to determine the amount of oil recovered. Further, the injected bio-surfactant volume was varied to check how effectively oil recovery could be achieved. A comparative study was also done by injecting Triton X-100, a chemical surfactant. Since the bio-surfactant reduced surface and interfacial tension, oil could be easily recovered from the porous sand packed column.

Keywords: Bio-surfactant, bacteria, interfacial tension, sand column.

192 Perception of Neighbourhood-Level Built Environment in Relation to Youth Physical Activity in Malaysia

Authors: A. Abdullah, N. Faghih Mirzaei, S. Hany Haron

Abstract:

This study examines the effect of neighbourhood environment walkability on the reported physical activity (PA) levels of students of Universiti Sains Malaysia (USM) in Malaysia. Compared with previous generations, today's young people spend less time playing outdoors and have lower participation rates in PA. Research suggests that negative perceptions of neighbourhood walkability may be a potential barrier to adolescents' PA. The sample consisted of 200 USM students (up to 24 years old) who live outside the main campus and engage in PA in the sport halls and sport fields of USM. The data were analysed using the t-test, binary logistic regression, and discriminant analysis techniques. The present study found that youth PA was affected by neighbourhood environment walkability factors, including neighbourhood infrastructure, neighbourhood safety (crime), and recreation facilities, as well as street characteristics and neighbourhood design variables such as facades of sidewalks, roadside trees, green spaces, and aesthetics. The findings also illustrated that active students were influenced by street connectivity, neighbourhood infrastructure, recreation facilities, facades of sidewalks, and aesthetics, whereas students in the less active group were affected by access to destinations, neighbourhood safety (crime), and roadside trees and green spaces. These results indicate which factors of the built environment have more effect on youth PA and convey a message to the public to create more awareness of the benefits of PA for youth health.

Keywords: Fear of crime, neighbourhood built environment, physical activities, street characteristics design.

191 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: G. Candel, D. Naccache

Abstract:

t-SNE is an embedding method that the data science community has widely used. It helps with two main tasks: displaying results by coloring items according to the item class or feature value; and, for forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, where all neighbors in high dimensional space cannot be represented correctly in low dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the cluster area is proportional to its size in number, and relationships between clusters are materialized by closeness in the embedding. The algorithm is non-parametric: the transformation from high to low dimensional space is described but not learned, and two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together. However, this process is costly, as the complexity of t-SNE is quadratic, and it would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped at the exact same position, making them indistinguishable, and this type of model would be unable to adapt to new outliers or concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the support embedding match. The embedding-with-support process can be repeated more than once with the newly obtained embedding, and the successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing observation of the birth, evolution, and death of clusters. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets' dynamics.
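
The paper's exact cost terms are not reproduced here, but the core idea of anchoring a new embedding to a reference one can be sketched. The version below approximates the support-match constraint by initializing each new point at its nearest reference neighbor's embedded position before a t-SNE run; this is an approximation of the idea, not the authors' implementation:

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_ref = rng.standard_normal((300, 20))         # reference subset
X_new = rng.standard_normal((300, 20)) + 0.1   # later snapshot of the data

# 1) Embed the reference subset once.
ref_emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X_ref)

# 2) Place every new point at its nearest reference neighbor's embedded
#    position (small jitter avoids exact overlaps), then let t-SNE refine
#    from that initialization so cluster positions stay coherent.
nn = NearestNeighbors(n_neighbors=1).fit(X_ref)
_, idx = nn.kneighbors(X_new)
init = ref_emb[idx[:, 0]] + 1e-4 * rng.standard_normal((len(X_new), 2))

new_emb = TSNE(n_components=2, init=init, random_state=0).fit_transform(X_new)
```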

Keywords: Concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning.

190 Constraint Based Frequent Pattern Mining Technique for Solving GCS Problem

Authors: G. M. Karthik, Ramachandra V. Pujeri

Abstract:

The Generalized Center String (GCS) problem is a generalization of the Common Approximate Substring and Common Substring problems. GCS is known to be NP-hard; the difficulty lies in the explosion of potential candidates, since the longest center string must be found without knowing in advance which sequences may not contain any motifs in a particular biological gene process. GCS can be solved by frequent pattern mining techniques and is known to be fixed-parameter tractable for fixed input sequence length and symbol set size. Efficient methods known as the Bpriori algorithms can solve GCS with reasonable time/space complexities. The Bpriori 2 and Bpriori 3-2 algorithms have been proposed to find center strings of any length, together with the positions of all their instances in the input sequences. In this paper, we reduce the time/space complexity of the Bpriori algorithm with a Constraint Based Frequent Pattern mining (CBFP) technique, which integrates the idea of constraint based mining with FP-tree mining. The CBFP mining technique solves the GCS problem not only for center strings of any length, but also for the positions of all their mutated copies in the input sequences. The CBFP mining technique constructs a trie-like FP-tree to represent the mutated copies of center strings of any length, along with constraints to restrain the growth of the consensus tree. The complexity analysis for the CBFP mining technique and the Bpriori algorithm is done based on worst case and average case approaches. The algorithm's correctness is demonstrated by comparison with the Bpriori algorithm on artificial data.
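
The trie-like FP-tree at the heart of CBFP can be sketched compactly. A minimal insertion-and-pruning routine in Python, illustrative only; the paper's constraint handling and consensus-tree details are omitted:

```python
class FPNode:
    """One node of a trie-like FP-tree: a symbol, a support count, children."""
    def __init__(self, symbol=None):
        self.symbol = symbol
        self.count = 0
        self.children = {}

def insert(root, sequence):
    """Insert one (ordered) string, incrementing counts along its path."""
    node = root
    for sym in sequence:
        node = node.children.setdefault(sym, FPNode(sym))
        node.count += 1

def frequent_paths(node, min_support, prefix=""):
    """Yield all root paths whose support meets the constraint; branches
    below the threshold are pruned, restraining the tree's growth."""
    for child in node.children.values():
        if child.count >= min_support:
            path = prefix + child.symbol
            yield path, child.count
            yield from frequent_paths(child, min_support, path)

root = FPNode()
for s in ["ACGT", "ACGA", "ACTT"]:   # toy DNA-like input strings
    insert(root, s)
print(dict(frequent_paths(root, min_support=2)))
# {'A': 3, 'AC': 3, 'ACG': 2}
```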

Keywords: Constraint based mining, FP-tree, data mining, GCS problem, CBFP mining technique.

189 Cirrhosis Mortality Prediction as Classification Using Frequent Subgraph Mining

Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride

Abstract:

In this work, we use machine learning and data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models were applied to predict one-year mortality. A comprehensive feature space including demographic information, comorbidities, clinical procedures and laboratory tests is analyzed. A temporal pattern mining technique called Frequent Subgraph Mining (FSM) is used. The Model for End-stage Liver Disease (MELD) prediction of mortality is used as a comparator. All of our models statistically significantly outperform the MELD-score model and show an average 10% improvement in the area under the curve (AUC). The FSM technique by itself does not improve the model significantly, owing to the sparsity of the temporal information it needs, but FSM together with a machine learning technique called an ensemble further improves model performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk of higher mortality. Our work applies modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients and builds a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.

Keywords: Machine learning, liver cirrhosis, subgraph mining, supervised learning.

188 Optimization of Mechanical Properties of Alginate Hydrogel for 3D Bio-Printing Self-Standing Scaffold Architecture for Tissue Engineering Applications

Authors: Ibtisam A. Abbas Al-Darkazly

Abstract:

In this study, the mechanical properties of alginate hydrogel for self-standing 3D scaffold architecture with proper shape fidelity are investigated. An in-lab built, extrusion-based 3D bio-printer is utilized to fabricate 3D alginate scaffold constructs. The pressure, needle speed and stage speed are varied using a computer-controlled system. The experimental results indicate that the concentration of the alginate solution, the calcium chloride (CaCl2) cross-linking concentration and the cross-linking ratios lead to the formation of alginate hydrogel with various gelation states. In addition, the gelling conditions, such as cross-linking reaction time and temperature, also have a significant effect on the mechanical properties of the alginate hydrogel. Various experimental tests, such as material gelation, material spreading and printability tests for filament collapse, as well as a swelling test, were conducted to evaluate the fabricated 3D scaffold constructs. The results indicate that the 3D scaffold fabricated from a 3.5 wt% alginate solution prepared in DI water and a 1 wt% CaCl2 solution, with a cross-linking ratio of 7:3, shows good printability and sustains good shape fidelity for more than 20 days, compared to alginate hydrogel prepared in phosphate buffered saline (PBS). The fabricated self-standing 3D scaffold constructs, measuring 30 mm × 30 mm and consisting of 4 layers (n = 4), show good pore geometry and a clear grid structure after printing. In addition, the percentage change of the swelling degree exhibits high swelling capability with respect to time. The swelling test shows that the geometry of the 3D alginate-scaffold construct and of the macro-pores barely changed, which indicates the capability of holding shape fidelity during the incubation period. This study demonstrated that the mechanical and physical properties of alginate hydrogel can be tuned for a 3D bio-printing extrusion-based system to fabricate self-standing 3D scaffold soft structures. This 3D bioengineered scaffold provides the natural microenvironment present in the extracellular matrix of the tissue, which can be seeded with biological cells to generate the desired 3D live tissue model for in vitro and in vivo tissue engineering applications.

Keywords: Biomaterial, calcium chloride, 3D bio-printing, extrusion, scaffold, sodium alginate, tissue engineering.
