Search results for: number of iteration
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3700


490 Changing the Way South Africa Thinks about Parking Provision at Tertiary Institutions

Authors: M. C. Venter, G. Hitge, S. C. Krygsman, J. Thiart

Abstract:

For decades, South Africa has planned transportation systems from a supply-side, rather than a demand-side, perspective. In terms of parking, this translates into minimum parking provisions enforced by city officials. Newer insight is starting to indicate that South Africa needs to re-think this philosophy in light of a new policy environment that desires a different outcome. Urban policies have shifted from reliance on the private car for access to employing a wide range of alternative modes. Car-dominated travel is influenced by various parameters, of which the availability and location of parking play a significant role. The question is therefore what the right strategy is to achieve the desired transport outcomes for South Africa. This paper assesses the issue with regard to parking provision, specifically at a tertiary institution. A parking audit was conducted at the Stellenbosch campus of Stellenbosch University, monitoring occupancy at all 60 parking areas every hour during business hours over a five-day period. The data from this survey were compared with the number of parking bays prescribed by the Stellenbosch Municipality zoning scheme (a minimum of 0.4 bays per student). The analysis shows that, at a provision of 0.09 bays per student, the maximum total daily occupation of all the parking areas did not exceed an 80% occupation rate. It is concluded that the prevailing parking standards are not supportive of the new urban and transport policy environment and are extremely conservative from a practical demand point of view.
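The comparison between the prescribed and observed supply reduces to simple arithmetic, sketched below in Python for a hypothetical enrolment figure (the abstract does not report the campus student count).

```python
# Hypothetical illustration of the prescribed vs. observed parking ratios above;
# the enrolment figure is assumed, not taken from the study.
students = 30000                 # assumed enrolment
zoning_ratio = 0.4               # minimum bays per student in the zoning scheme
observed_ratio = 0.09            # supply at which peak occupancy stayed below 80%

print("bays required by the zoning scheme:", int(students * zoning_ratio))
print("bays meeting the observed peak demand:", int(students * observed_ratio))
print("oversupply factor:", round(zoning_ratio / observed_ratio, 1))
```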

Keywords: Parking provision, parking requirements, travel behaviour, travel demand management.

489 Investigation of the Effects of Layer Thickness and Surface Roughness on the Aerodynamic Coefficients of Wind Tunnel RP Models

Authors: S. Daneshmand, A. Ahmadi Nadooshan, C. Aghanajafi

Abstract:

Traditional wind tunnel models are meticulously machined from metal in a process that can take several months. While very precise, the manufacturing process is too slow to assess a new design's feasibility quickly. Rapid prototyping technology makes a concurrent study of air vehicle concepts via computer simulation and in the wind tunnel possible. This paper describes the effect of layer thickness on the aerodynamic coefficients of wind tunnel testing models produced by rapid prototyping. Three models were evaluated: the first with a 0.05 mm layer thickness and a horizontal-plane surface roughness of 0.1 μm (Ra), the second with a 0.125 mm layer thickness and 0.22 μm (Ra), and the third with a 0.15 mm layer thickness and 4.6 μm (Ra). These models were fabricated from Somos 18420 by stereolithography (SLA). A wing-body-tail configuration was chosen for the study. Testing covered the range of Mach 0.3 to Mach 0.9 at an angle-of-attack range of -2° to +12° at zero sideslip. Coefficients of normal force, axial force, pitching moment, and lift over drag are shown at each of these Mach numbers. Results from this study show that layer thickness does have an effect on the aerodynamic characteristics in general, although the data differ between the three models by less than 5%. Layer thickness has a greater effect on the aerodynamic characteristics as the Mach number decreases, and its largest effect is on the axial force and its derivative coefficients.

Keywords: Aerodynamic characteristics, stereolithography, layer thickness, Rapid prototyping, surface finish.

488 Computational Fluid Dynamics Simulation Approach for Developing a Powder Dispensing Device

Authors: Rallapalli Revanth, Shivakumar Bhavi, Vijay Kumar Turaga

Abstract:

Dispensing powders manually can be difficult, as it requires gradually pouring the powder while checking the dispensed amount on a scale. Current systems are manual, non-continuous and user dependent, and powder dispensation is difficult to control. Recurrent dosing of powdered medicines in precise amounts, quickly and accurately, has been a long-standing challenge. Various powder dispensing mechanisms are being designed to overcome these challenges. A battery-operated screw conveyor mechanism is being developed to overcome the above problems. Such concepts are evaluated numerically at the concept-development stage by employing Computational Fluid Dynamics (CFD) simulations of gas-solid multiphase flow. CFD has been very helpful in the development of such devices, saving time and money by reducing the number of prototypes and tests. In this study, powder dispensation from the trocar's end is simulated using the Dense Discrete Phase Model combined with the Kinetic Theory of Granular Flow (DDPM-KTGF), in which the powder is treated as a secondary phase in air. With the powder volume fraction taken as 50%, transport is driven by rotation of the screw conveyor. The performance is calculated for a 1 s time frame in an unsteady (transient) computation. This methodology will help designers develop design concepts to improve dispensation and the effective area within a quick turnaround time.

Keywords: Multiphase flow, screw conveyor, transient, DDPM-KTGF.

487 Enhanced Differentiation of Stromal Cells and Embryonic Stem Cells with Vitamin D3

Authors: Mayada Alqaisi, Nasser Al-Shanti, Quiyu Wang, William S. Gilmore

Abstract:

In-vitro mouse co-culture of E14 embryonic stem cells (ESCs) and OP9 stromal cells can recapitulate the earliest stages of haematopoietic development, not accessible in human embryos, supporting both haemogenic precursors and their primitive haematopoietic progeny. 1α, 25-Dihydroxy-vitamin D3 (VD3) has been demonstrated to be a powerful differentiation inducer for a wide variety of neoplastic cells, and could enhance early differentiation of ESCs into blood cells in E14/OP9 co-culture. This study aims to ascertain whether VD3 is key in promoting differentiation and suppressing proliferation, by separately investigating the effects of VD3 on the proliferation phase of the E14 cell line and on stromal OP9 cells. The results showed that VD3 inhibited the proliferation of the cells in a dose-dependent manner, quantitatively by decreased cell number, and qualitatively by alkaline-phosphatase staining, which revealed significant differences between VD3-treated and untreated cells, characterised by decreased enzyme expression (colourless cells). Propidium-iodide cell-cycle analyses showed no significant percentage change in VD3-treated E14 and OP9 cells within their G and S-phases compared to the untreated controls, despite the increased percentage of the G-phase relative to the S-phase in a dose-dependent manner. These results with E14 and OP9 cells indicate that an adequate VD3 concentration enhances cellular differentiation and inhibits proliferation. The results also suggest that if E14 and OP9 cells were co-cultured and VD3-treated, there would be further enhanced differentiation of ESCs into blood cells.

Keywords: Differentiation, embryonic stem cells, OP9 stromal cells, 1α, 25-dihydroxy-vitamin D3.

486 Context Aware Lightweight Energy Efficient Framework

Authors: D. Sathan, A. Meetoo, R. K. Subramaniam

Abstract:

Context awareness is a capability whereby mobile computing devices can sense their physical environment and adapt their behavior accordingly. The term context-awareness, in ubiquitous computing, was introduced by Schilit in 1994 and has become one of the most exciting concepts in early 21st-century computing, fueled by recent developments in pervasive computing (i.e. mobile and ubiquitous computing). These include computing devices worn by users, embedded devices, smart appliances, sensors surrounding users and a variety of wireless networking technologies. Context-aware applications use context information to adapt interfaces, tailor the set of application-relevant data, increase the precision of information retrieval, discover services, make the user interaction implicit, or build smart environments. For example, a context-aware mobile phone will know that the user is currently in a meeting room and reject any unimportant calls. One of the major challenges in providing users with context-aware services lies in continuously monitoring their contexts based on numerous sensors connected to the context-aware system through wireless communication. A number of sensor-based context-aware frameworks have been proposed, but many of them neglect the fact that monitoring with sensors imposes heavy workloads on ubiquitous devices with limited computing power and battery. In this paper, we present CALEEF, a lightweight and energy-efficient context-aware framework for resource-limited ubiquitous devices.

Keywords: Context-Aware, Energy-Efficient, Lightweight, Ubiquitous Devices.

485 Web Proxy Detection via Bipartite Graphs and One-Mode Projections

Authors: Zhipeng Chen, Peng Zhang, Qingyun Liu, Li Guo

Abstract:

With the Internet becoming the dominant channel for business and life, many IPs are increasingly masked using web proxies for illegal purposes such as propagating malware, impersonating phishing pages to steal sensitive data, or redirecting victims to other malicious targets. Moreover, as Internet traffic continues to grow in size and complexity, detecting proxy services has become an increasingly challenging task due to their dynamic updates and high anonymity. In this paper, we present an approach based on behavioral graph analysis to study the behavior similarity of web proxy users. Specifically, we use bipartite graphs to model host communications from network traffic and build one-mode projections of the bipartite graphs for discovering the social-behavior similarity of web proxy users. Based on the similarity matrices of end-users derived from the one-mode projection graphs, we apply a simple yet effective spectral clustering algorithm to discover the inherent behavior clusters of web proxy users. The web proxy URL may vary from time to time, but the users' inherent interests do not. Based on this intuition, and using our private tools implemented with WebDriver, we examine whether the top URLs visited by the web proxy users are themselves web proxies. Our experimental results on real datasets show that the behavior clusters not only reduce the number of URLs to analyze but also provide an effective way to detect web proxies, especially previously unknown ones.
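The graph construction and clustering pipeline can be sketched in a few lines. The snippet below is an illustrative toy example using networkx and scikit-learn with invented host names; it is not the paper's tooling or data.

```python
# Toy sketch: bipartite client-server graph, one-mode projection onto clients,
# and spectral clustering of the resulting end-user similarity matrix.
import networkx as nx
import numpy as np
from networkx.algorithms import bipartite
from sklearn.cluster import SpectralClustering

# Hypothetical (client, server) flow records.
flows = [("clientA", "proxy1"), ("clientA", "hub2"),
         ("clientB", "proxy1"), ("clientB", "hub2"),
         ("clientC", "news1"), ("clientC", "news2"),
         ("clientD", "news1"), ("clientD", "news2")]

G = nx.Graph()
clients = sorted({c for c, _ in flows})
G.add_nodes_from(clients, bipartite=0)
G.add_nodes_from({s for _, s in flows}, bipartite=1)
G.add_edges_from(flows)

# One-mode projection: edge weight = number of servers two clients share.
P = bipartite.weighted_projected_graph(G, clients)
A = nx.to_numpy_array(P, nodelist=clients) + 1e-3   # similarity matrix (small offset keeps it connected)

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(A)
print(dict(zip(clients, labels)))
```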

Keywords: Bipartite graph, clustering, one-mode projection, web proxy detection.

484 Effect of Shell Dimensions on Buckling Behavior and Entropy Generation of Thin Welded Shells

Authors: Sima Ziaee, Khosro Jafarpur

Abstract:

Among all mechanical joining processes, welding has been employed for its advantages of design flexibility, cost saving, reduced overall weight and enhanced structural performance. However, for structures made of relatively thin components, welding can introduce significant buckling distortion, which causes loss of dimensional control and structural integrity and increases fabrication costs. Different parameters can affect the buckling behavior of welded thin structures, such as heat input, welding sequence and the dimensions of the structure. In this work, a 3-D thermo-elastic-viscoplastic finite element analysis technique is applied to evaluate the effect of shell dimensions on the buckling behavior and entropy generation of welded thin shells. In addition, the approximate longitudinal transient stresses produced at each time step are applied to a 3D eigenvalue analysis to verify the predicted buckling time and the corresponding eigenmode. The possibility of predicting buckling from entropy generation at each time step is also investigated, and it is found that the time of buckling can be predicted by plotting entropy generation versus out-of-plane deformation. The results of the finite element analysis show that the length, span and thickness of welded thin shells affect the number of local buckling modes, the mode shape of global buckling and the post-buckling behavior of welded thin shells.

Keywords: Buckling behavior, Elastic viscoplastic model, Entropy generation, Finite element method, Shell dimensions.

483 Awareness and Attitudes of Primary Grade Teachers (1st-4th Grade) towards Inclusive Education

Authors: P. Maheshwari, M. Shapurkar

Abstract:

The present research aimed at studying the awareness and attitudes of teachers towards inclusive education. The sample consisted of 60 teachers teaching in the primary section (1st-4th grade) of regular schools affiliated to the SSC board in Mumbai, selected by a multi-stage cluster sampling technique. A semi-structured, self-constructed interview schedule and a self-constructed attitude scale were used to study teachers' awareness of disability and inclusive education, and their attitudes towards inclusive education, respectively. Themes were extracted from the interview data, and quantitative data were analyzed using the SPSS package. Results revealed that teachers had some awareness but an inadequate amount of information on disabilities and inclusive education. Disability to most (37) teachers meant "an inability to do something". The difference between disability and handicap was stated by most as the former being cognitive while the latter is physical in nature. With regard to inclusive education, a large number (46) stated that they were unaware of the term and did not know what it meant. The majority (52) perceived maximum challenges for themselves in an inclusive set-up, and emphasized the role of teacher training courses in providing knowledge (49) and training in teaching methodology (53). Although 83.3% of teachers held a moderately positive attitude towards inclusive education, a large percentage (61.6%) of participants felt that being in an inclusive set-up would be very challenging for both children with and without special needs. Most (49) of the teachers stated that children with special needs should be educated in regular classrooms, but they further clarified that only those with physical impairments of mild or moderate degree should be in a regular classroom.

Keywords: Attitudes, awareness, inclusive education, teachers.

482 A Static Android Malware Detection Based on Actually Used Permissions Combination and API Calls

Authors: Xiaoqing Wang, Junfeng Wang, Xiaolan Zhu

Abstract:

The Android operating system has been embraced by application developers because of its openness and compatibility, which has greatly enriched the categories of applications. However, it has become a target of malware attackers due to the lack of strict security supervision mechanisms, which has led to the rapid growth of malware and brings serious safety hazards to users. Therefore, it is critical to detect Android malware effectively. Generally, the permissions declared in AndroidManifest.xml reflect the function and behavior of an application to a large extent. Since the current Android system places no restrictions on the number of permissions an application can request, developers tend to apply for more permissions than actually needed in order to ensure the application runs successfully, which results in the abuse of permissions. However, some traditional detection methods only consider the requested permissions and ignore whether they are actually used, which leads to the incorrect identification of some malware. Therefore, a machine learning detection method based on the actually used permission combinations and API calls is put forward in this paper. Several experiments are conducted to evaluate the methodology. The results show that it can detect unknown malware effectively, with a higher true positive rate and accuracy while maintaining a low false positive rate. The AdaBoostM1 (J48) classification algorithm combined with an information-gain feature selection algorithm gives the best detection result, achieving an accuracy of 99.8%, a true positive rate of 99.6% and a lowest false positive rate of 0.
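A scaled-down version of this pipeline (information-gain-style feature selection followed by boosted decision trees, the scikit-learn analogue of AdaBoostM1 with J48) can be sketched as follows; the feature matrix here is synthetic, not the paper's dataset.

```python
# Sketch on synthetic data: select features by mutual information (an
# information-gain analogue), then train AdaBoost over shallow decision trees.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Rows: apps; columns: binary flags for actually-used permissions / API calls.
X = rng.integers(0, 2, size=(500, 40))
y = ((X[:, 0] & X[:, 3]) | X[:, 7]).astype(int)      # toy "malicious" label

X_sel = SelectKBest(mutual_info_classif, k=10).fit_transform(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, random_state=0)

clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                         n_estimators=50, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```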

Keywords: Android, permissions combination, API calls, machine learning.

481 Image Transmission via Iterative Cellular-Turbo System

Authors: Ersin Gose, Kenan Buyukatak, Onur Osman, Osman N. Ucan

Abstract:

To compress, improve bit error performance and also enhance 2D images, a new scheme called the Iterative Cellular-Turbo System (IC-TS) is introduced. In IC-TS, the original image is partitioned into 2^N quantization levels, where N is the number of bit planes. Each of the N bit planes is then coded by a Turbo encoder and transmitted over an Additive White Gaussian Noise (AWGN) channel. At the receiver side, the bit planes are re-assembled taking into account the neighborhood relationships of pixels in 2-D images. Each of the noisy bit-plane values of the image is evaluated iteratively using the IC-TS structure, which is composed of an equalization block, the Iterative Cellular Image Processing Algorithm (ICIPA) and a Turbo decoder. In IC-TS, there is an iterative feedback link between ICIPA and the Turbo decoder. ICIPA uses the mean and standard deviation of the estimated values of each pixel neighborhood. It yields highly satisfactory results in both Bit Error Rate (BER) and image enhancement performance for Signal-to-Noise Ratio (SNR) values below -1 dB, compared to a traditional turbo coding scheme and 2-D filtering applied separately. Compression can also be achieved with IC-TS: less memory storage is used and the data rate is increased by up to N-1 times by simply choosing a smaller number of bit slices, sacrificing resolution. Hence, it is concluded that the IC-TS system is a promising approach for 2-D image transmission, recovery of noisy signals and image compression.
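The bit-plane partitioning step (the only part of IC-TS sketched here; the Turbo coding and ICIPA stages are not shown) amounts to splitting each 8-bit pixel into its binary digits and optionally dropping low-order planes for compression, as in the following illustrative Python snippet.

```python
# Sketch of bit-plane decomposition and reassembly for an 8-bit image.
import numpy as np

def to_bit_planes(img, n_bits=8):
    """Return an array of shape (n_bits, H, W); plane 0 is the least significant bit."""
    return np.stack([(img >> b) & 1 for b in range(n_bits)])

def from_bit_planes(planes, keep=None):
    """Reassemble the image; 'keep' lists the plane indices to retain (compression)."""
    keep = range(planes.shape[0]) if keep is None else keep
    return sum(planes[b].astype(np.uint16) << b for b in keep).astype(np.uint8)

img = np.arange(16, dtype=np.uint8).reshape(4, 4) * 16
planes = to_bit_planes(img)
print(np.array_equal(from_bit_planes(planes), img))   # True: lossless round trip
print(from_bit_planes(planes, keep=range(4, 8)))      # top 4 planes only
```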

Keywords: Iterative Cellular Image Processing Algorithm (ICIPA), Turbo Coding, Iterative Cellular Turbo System (IC-TS), Image Compression.

480 Urban Renewal from the Perspective of Industrial Heritage Protection: Taking the Qiaokou District of Wuhan as an Example

Authors: Yue Sun, Yuan Wang

Abstract:

Most of the earliest national industries in Wuhan are located along the Hanjiang River, and Qiaokou is considered a gathering place of the Dahankou old industrial base. Zongguan Waterworks, the Pacific Soap Factory, Fuxin Flour Factory, Nanyang Tobacco Factory and other hundred-year-old factories are located along the Hanjiang River in Qiaokou District, particularly in the Gutian Industrial Zone, which was listed as one of the 156 national restoration projects in the early years of the People's Republic of China. After decades of development, Qiaokou became a gathering place for the chemical industry and secondary industry, causing damage to the city and serious pollution and turning it into a marginalized area forgotten by the central city. In recent years, with the accelerating pace of urban renewal, Qiaokou has been continuously reforming and innovating, and has begun drastic changes in the transformation of old districts and the development of new ones. These factories have been listed as key reconstruction projects, and a large amount of industrial heritage with historical value and rich urban memory has been relocated, demolished or reformed, with only a few factory buildings preserved. Through the methods of industrial archaeology, image analysis, typology and field investigation, this paper analyzes and summarizes the spatial characteristics of the industrial heritage in Qiaokou District, explores urban renewal from the perspective of industrial heritage protection, and provides design strategies for the regeneration of urban industrial sites and industrial heritage.

Keywords: Industrial heritage, urban renewal, protection, urban memory.

479 The Effects of Food Deprivation on Hematological Indices and Blood Indicators of Liver Function in Oxyleotris marmorata

Authors: N. Sridee, S. Boonanuntanasarn

Abstract:

Oxyleotris marmorata is considered an undomesticated fish, and its culture occasionally faces the problem of food deprivation. The present study aims to evaluate the alteration of hematological indices and blood chemistry associated with liver function during 4 weeks of fasting. Non-linear relationships between fasting days and hematological parameters were demonstrated: red blood cell number, y = -0.002x² + 0.041x + 1.249 (R² = 0.915, P < 0.05); hemoglobin, y = -0.002x² + 0.030x + 3.470 (R² = 0.460, P > 0.05); mean corpuscular volume, y = -0.180x² + 2.183x + 149.61 (R² = 0.732, P > 0.05); mean corpuscular hemoglobin, y = -0.041x² + 0.862x + 29.864 (R² = 0.818, P > 0.05); and mean corpuscular hemoglobin concentration, y = -0.044x² + 0.711x + 21.580 (R² = 0.730, P > 0.05). A significant change in hematocrit (Ht) during the fasting period was observed: Ht rose sharply in the first week of fasting, and elevated Ht was also detected during weeks 2-4. A significant reduction of the hepatosomatic index was observed (y = -0.007x² - 0.096x + 1.414; R² = 0.968, P < 0.05). Moreover, the alteration of enzymes associated with liver function was evaluated during the 4 weeks of fasting: alkaline phosphatase, y = -0.026x² - 0.935x + 12.188 (R² = 0.737, P > 0.05); serum glutamic oxaloacetic transaminase, y = 0.005x³ - 0.201x² + 1.297x + 33.256 (R² = 1, P < 0.01); serum glutamic pyruvic transaminase, y = 0.007x³ - 0.274x² + 2.277x + 25.257 (R² = 0.807, P > 0.05). Taken together, prolonged fasting has deleterious effects on hematological indices, liver mass and enzymes associated with liver function, with marked adverse effects occurring after the first week of fasting.
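Quadratic fits of this kind, together with their R² values, can be reproduced with a few lines of NumPy; the data points below are invented for illustration and are not the study's measurements.

```python
# Sketch: fit a quadratic of a blood index against fasting days and compute R².
import numpy as np

days = np.array([0, 7, 14, 21, 28])
rbc = np.array([1.25, 1.44, 1.40, 1.20, 0.85])     # illustrative values only

coeffs = np.polyfit(days, rbc, deg=2)              # y = a*x^2 + b*x + c
pred = np.polyval(coeffs, days)
r2 = 1 - np.sum((rbc - pred) ** 2) / np.sum((rbc - rbc.mean()) ** 2)
print(np.round(coeffs, 4), round(r2, 3))
```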

Keywords: food deprivation, Oxyleotris marmorata, hematology, alkaline phosphatase, SGOT, SGPT

478 Structural Analysis of Stiffened FGM Thick Walled Cylinders by Application of a New Cylindrical Super Element

Authors: S. A. Moeini, M. T. Ahmadian

Abstract:

The structural behavior of ring-stiffened thick-walled cylinders made of functionally graded materials (FGMs) is investigated in this paper. Functionally graded materials are inhomogeneous composites usually made from a mixture of metal and ceramic. The gradual compositional variation of the constituents from one surface to the other provides an elegant solution to the problem of high transverse shear stresses that are induced when two dissimilar materials with large differences in properties are bonded. The FGM composition of the cylinder is modeled by a power-law exponent, and the material characteristics are assumed to vary in the radial direction. A finite element formulation is derived for the analysis. Because of the property variation of the constituent materials through the wall thickness, it is not convenient to use conventional elements to model and analyze the structure of stiffened FGM cylinders. In this paper, a new cylindrical super-element is used to build the finite element formulation and analyze the static and modal behavior of stiffened FGM thick-walled cylinders. By using this super-element, the number of elements needed for modeling is reduced significantly and the processing time is lower than with conventional finite element formulations. Results for static and modal analysis are evaluated and verified by comparison with a finite element formulation using conventional elements; the comparison indicates good agreement between the results.
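As an illustration of the power-law grading mentioned above, one commonly used rule (assumed here for illustration; the paper's exact grading law is not reproduced) interpolates a property between the inner- and outer-surface values:

```python
# Sketch of a generic power-law radial grading rule for an FGM cylinder wall:
# P(r) = (P_outer - P_inner) * ((r - r_i) / (r_o - r_i))**n + P_inner
import numpy as np

def fgm_property(r, r_inner, r_outer, p_inner, p_outer, n):
    """Effective material property at radius r for a metal/ceramic FGM wall."""
    xi = (r - r_inner) / (r_outer - r_inner)
    return (p_outer - p_inner) * xi ** n + p_inner

radii = np.linspace(0.5, 0.6, 5)                        # hypothetical wall, m
E = fgm_property(radii, 0.5, 0.6, 200e9, 380e9, n=2.0)  # e.g. metal -> ceramic, Pa
print(np.round(E / 1e9, 1))                             # Young's modulus in GPa
```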

Keywords: FGMs, Modal analysis, Static analysis, Stiffened cylinders.

477 Preliminary Results of In-Vitro Skin Tissue Soldering using Gold Nanoshells and ICG Combination

Authors: M. S. Nourbakhsh, M. E. Khosroshahi

Abstract:

Laser soldering is based on applying a soldering material (albumin) onto the approximated edges of a cut and heating the solder (and the underlying tissues) with a laser beam. Endogenous and exogenous materials such as indocyanine green (ICG) are often added to solders to enhance light absorption. Gold nanoshells are new materials whose optical response is dictated by plasmon resonance. The wavelength at which the resonance occurs depends on the core and shell sizes, allowing nanoshells to be tailored for particular applications. The purpose of this study was to use a combination of ICG and different concentrations of gold nanoshells for skin tissue soldering, and to examine the effect of laser soldering parameters on the properties of the repaired skin. Two mixtures of albumin solder with different combinations of ICG and gold nanoshells were prepared. A full-thickness incision of 2×20 mm² was made on the surface and, after addition of the mixtures, was irradiated by an 810 nm diode laser at different power densities. The changes of tensile strength σt due to temperature rise, number of scans (Ns) and scan velocity (Vs) were investigated. The results showed that at constant laser power density (I), σt of repaired incisions increases with increasing gold nanoshell concentration in the solder and Ns, and with decreasing Vs. It is therefore important to consider the trade-off between the scan velocity and the surface temperature for achieving an optimum operating condition. In our case this corresponds to σt = 1800 g/cm² at I ≈ 47 W/cm², T ≈ 85 °C, Ns = 10 and Vs = 0.3 mm/s.

Keywords: Tissue soldering, gold nanoshells, indocyanine green, combination, tensile strength.

476 Statistical Modeling of Accelerated Pavement Failure Using Response Surface Methodology

Authors: Anshu Manik, Kasthurirangan Gopalakrishnan, Siddhartha K. Khaitan

Abstract:

Rutting is one of the major load-related distresses in airport flexible pavements. Rutting in paving materials develops gradually with an increasing number of load applications, usually appearing as longitudinal depressions in the wheel paths that may be accompanied by small upheavals to the sides. Significant research has been conducted to determine the factors that affect rutting and how they can be controlled. Using experimental design concepts, a series of tests can be conducted while varying the levels of the different parameters that could cause rutting in airport flexible pavements. If a proper experimental design is used, the results of these tests can give better insight into the causes of rutting and into the interactions and synergisms among the system variables that influence it. Although laboratory experiments are traditionally conducted in a controlled fashion to understand the statistical interaction of variables in such situations, this study attempts to identify the critical system variables influencing airport flexible pavement rut depth from a statistical DoE perspective using real field data from a full-scale test facility. The test results strongly indicate that the response (rut depth) contains too much noise to allow determination of a good model. From a statistical DoE perspective, two major changes are proposed for this experiment: (1) actual replication of the tests is definitely required, and (2) nuisance variables need to be identified and blocked properly. Further investigation is necessary to determine possible sources of noise in the experiment.

Keywords: Airport Pavement, Design of Experiments, Rutting, NAPTF.

475 Using Environmental Sensitivity Index (ESI) to Assess and Manage Environmental Risks of Pipelines in GIS Environment: A Case Study of a Pipeline Located Near a Coastline and Fragile Ecosystem

Authors: Jahangir Jafari, Nematollah Khorasani, Afshin Danehkar

Abstract:

With a very large number of pipelines across the country, Iran encompasses various ecosystems with differing degrees of fragility and robustness, as well as diverse geographical conditions. This study presents a method to estimate the environmental risks of pipelines through rational equations, including FES, URAS, SRS, RRS, DRS, LURS and IRS, as well as FRS, to calculate the risks. The study was carried out with a relative, semi-quantitative approach based on land uses and HVAs (High-Value Areas). GIS was used as a tool to create maps of the environmental risks, land uses and distances. The formulas are based mainly on distance-based approaches, ESI values and spatial intersections. Summarizing the results, a geographical risk map based on the ESIs and the final risk score (FRS) was created. The results show that the most sensitive, and therefore highest-risk, area is the mangrove forest located in the neighborhood of the pipeline, while salty lands were the most robust land-use units in the event of pipeline failure. The study also shows that mapping pipeline risks with the applied method is more reliable, convenient and comprehensive than existing non-holistic methods for assessing the environmental risks of pipelines. The focus of the present study is "assessment" rather than "management". It is suggested that new policies be implemented to reduce the negative effects of the pipeline, which has not yet been fully constructed.

Keywords: ERM, ESI, ERA, Pipeline, Assalouyeh

474 A Simplified and Effective Algorithm Used to Mine Similar Processes: An Illustrated Example

Authors: Min-Hsun Kuo, Yun-Shiow Chen

Abstract:

The running logs of a process hold valuable information about its executed activity behavior and the activity logic structure it generates. These informative logs can be extracted, analyzed and utilized to improve the efficiency of the process's execution and conduct. One of the techniques used to accomplish such process improvement is called process mining, and mining similar processes is one such improvement task. Rather than directly mining similar processes using a single comparison coefficient or a complicated fitness function, this paper presents a simplified heuristic process mining algorithm with two similarity comparisons that relatively conform the activity logic sequences (traces) of the mined processes with those of a normalized (regularized) one. The relative process conformance identifies which of the mined processes match the required activity sequences and relationships, for subsequent application of the mined processes to process improvements. One similarity is defined by the relationships in terms of the number of similar activity sequences existing in different processes; the other expresses the degree of similar (identical) activity sequences among the conforming processes. Since these two similarities refer to typical behavior (activity sequences) occurring in an entire process, common problems such as the inappropriateness of an absolute comparison and the inability to elicit intrinsic information, which often appear in other process conformance techniques, can be avoided by the relative process comparison presented in this paper. To demonstrate the potential of the proposed algorithm, a numerical example is illustrated.
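A toy illustration of the first kind of similarity (counting activity sequences shared between a candidate trace and a normalized reference trace) is sketched below; the coefficient and traces are invented for illustration, not the paper's definitions.

```python
# Sketch: fraction of length-2 activity sequences that a candidate trace shares
# with a reference (normalized) trace.
def subsequences(trace, length=2):
    return {tuple(trace[i:i + length]) for i in range(len(trace) - length + 1)}

def shared_ratio(reference, candidate, length=2):
    ref = subsequences(reference, length)
    return len(ref & subsequences(candidate, length)) / max(len(ref), 1)

reference = ["receive", "check", "approve", "archive"]
candidate = ["receive", "check", "reject", "archive"]
print(shared_ratio(reference, candidate))   # 1 of 3 reference 2-grams is shared
```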

Keywords: process mining, process similarity, artificial intelligence, process conformance.

473 Expert Based System Design for Integrated Waste Management

Authors: A. Buruzs, M. F. Hatwágner, A. Torma, L. T. Kóczy

Abstract:

Recently, an increasing number of researchers have been focusing on working out realistic solutions to sustainability problems. As sustainability issues gain higher importance for organisations, the management of such decisions becomes critical. Knowledge representation is a fundamental issue of complex knowledge-based systems, and many types of sustainability problems would benefit from models based on experts' knowledge. Cognitive maps have been used for analyzing and aiding decision making, and a cognitive map can be made of almost any system or problem. A fuzzy cognitive map (FCM) can successfully represent knowledge and human experience, introducing concepts to represent the essential elements and the cause-and-effect relationships among them to model the behaviour of any system. Integrated waste management systems (IWMS) are complex systems that can be decomposed into related and non-related subsystems and elements, where many factors have to be taken into consideration that may be complementary, contradictory or competitive; these factors influence each other and determine the overall decision process of the system. The goal of the present paper is to construct an efficient IWMS that considers these various factors. The authors' intention is to propose an expert-based system design approach for implementing expert decision support in the area of IWMSs and to introduce an appropriate methodology for the development and analysis of group FCMs. A framework for such a methodology, consisting of development and application phases, is presented.
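For readers unfamiliar with FCMs, the core computation is a simple iterative state update in which each concept's activation is a thresholded weighted sum of the activations of the concepts influencing it. The sketch below uses one common sigmoid-threshold formulation and invented weights, not the authors' IWMS model.

```python
# Sketch of a fuzzy cognitive map simulation: A(t+1) = f(W^T A(t)) with a sigmoid.
import numpy as np

def fcm_simulate(W, a0, steps=100, lam=1.0, tol=1e-6):
    """W[i, j] = causal weight from concept i to concept j; a0 = initial activations."""
    a = np.asarray(a0, dtype=float)
    for _ in range(steps):
        a_next = 1.0 / (1.0 + np.exp(-lam * (W.T @ a)))   # sigmoid threshold
        if np.allclose(a_next, a, atol=tol):
            break
        a = a_next
    return a

# Three toy concepts, e.g. waste volume -> collection load -> treatment cost.
W = np.array([[0.0, 0.7, 0.0],
              [0.0, 0.0, 0.6],
              [0.0, 0.0, 0.0]])
print(np.round(fcm_simulate(W, [0.8, 0.2, 0.1]), 3))
```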

Keywords: Factors, fuzzy cognitive map, group decision, integrated waste management system.

472 Concrete Mix Design Using Neural Network

Authors: Rama Shanker, Anil Kumar Sachan

Abstract:

The basic ingredients of concrete are cement, fine aggregate, coarse aggregate and water. To produce concrete with certain specified properties, optimum proportions of these ingredients are mixed. The important factors that govern the mix design are the grade of concrete, the type of cement, and the size, shape and grading of the aggregates. Conventional concrete mix design methods are based on experimentally derived empirical relationships between these factors. Their basic drawbacks are that they do not always produce the desired strength, the calculations are cumbersome, and a number of tables must be consulted to arrive at a trial mix proportion; moreover, the attained strength varies and may even fall below the target strength. To address this problem, a large number of cubes of standard grades were prepared and their 28-day strength determined for different combinations of cement, fine aggregate, coarse aggregate and water. An artificial neural network (ANN) was built using these data. The inputs of the ANN were the grade of concrete, the type of cement, and the size, shape and grading of the aggregates, and the outputs were the proportions of the various ingredients. With these inputs and outputs, the ANN was trained using a feed-forward back-propagation model. Finally, the trained ANN was validated and was found to give results with a maximum error of 4 to 5%. Hence, a specific type of concrete can be designed from given material properties, and the proportions of these materials can be quickly evaluated using the proposed ANN.
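A minimal stand-in for such a network is sketched below with scikit-learn's feed-forward MLPRegressor; the feature encoding, data and mix relation are all synthetic placeholders rather than the authors' experimental dataset.

```python
# Sketch: feed-forward back-propagation network mapping mix-design inputs to
# ingredient proportions, trained on a synthetic toy relation.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Inputs: [target grade (MPa), cement type code, max aggregate size (mm)]
X = rng.uniform([20, 0, 10], [50, 2, 40], size=(300, 3))
# Outputs: [cement, fine agg., coarse agg., water] in kg/m^3 (toy relation only)
y = np.column_stack([8 * X[:, 0] + 100,
                     650 - 3 * X[:, 2],
                     1100 + 5 * X[:, 2],
                     160 + 0.5 * X[:, 0]])

net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 16),
                                 max_iter=5000, random_state=0))
net.fit(X, y)
print(np.round(net.predict([[30, 1, 20]]), 1))   # proportions for a hypothetical mix
```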

Keywords: Aggregate Proportions, Artificial Neural Network, Concrete Grade, Concrete Mix Design.

471 Patient’s Knowledge and Use of Sublingual Glyceryl Trinitrate Therapy in Taiping Hospital, Malaysia

Authors: Wan Azuati Wan Omar, Selva Rani John Jasudass, Siti Rohaiza Md Saad

Abstract:

Background: The objectives of this study were to assess patients' knowledge of appropriate sublingual glyceryl trinitrate (GTN) use and to investigate how patients commonly store and carry their sublingual GTN tablets. Methodology: This was a cross-sectional survey using a validated, researcher-administered questionnaire. The study involved cardiac patients receiving sublingual GTN attending the outpatient and inpatient departments of Taiping Hospital, a non-academic public care hospital. The minimum calculated sample size was 92, but 100 patients were conveniently sampled. Respondents were interviewed in three areas: demographic data, knowledge and use of sublingual GTN. Eight items were used to calculate each subject's knowledge score and six items were used to calculate the use score. Results: Of the 96 patients who consented to participate, the majority (96.9%) were well aware of the indication of sublingual GTN. With regard to the mechanism of action, 73 (76%) patients did not know how the medication works. The majority of the patients (66.7%) knew about the proper storage of the tablet. Regarding the maximum number of sublingual GTN tablets that can be taken during each angina episode, 36.5% did not know that up to 3 tablets can be taken. Fifty-four (56.2%) patients were not aware that they need to replace sublingual GTN every 8 weeks after receiving the tablets. The majority (69.8%) of the patients demonstrated a lack of knowledge with regard to the use of sublingual GTN for the prevention of chest pain. Conclusion: Overall, patients' knowledge regarding the self-administration of sublingual GTN is still inadequate. The findings support the need for more frequent reinforcement of patient education, especially in the areas of preventive use, storage and drug stability.

Keywords: Glyceryl trinitrate, knowledge, adherence.

470 Evolutionary Approach for Automated Discovery of Censored Production Rules

Authors: Kamal K. Bharadwaj, Basheer M. Al-Maqaleh

Abstract:

In the recent past, there has been increasing interest in applying evolutionary methods to Knowledge Discovery in Databases (KDD), and a number of successful applications of Genetic Algorithms (GA) and Genetic Programming (GP) to KDD have been demonstrated. The predominant representation of the discovered knowledge is the standard Production Rule (PR) of the form If P Then D. PRs, however, are unable to handle exceptions and do not exhibit variable precision. Censored Production Rules (CPRs), an extension of PRs proposed by Michalski and Winston, exhibit variable precision and support an efficient mechanism for handling exceptions. A CPR is an augmented production rule of the form If P Then D Unless C, where C (the censor) is an exception to the rule. Such rules are employed in situations in which the conditional statement 'If P Then D' holds frequently and the assertion C holds rarely. By using a rule of this type we are free to ignore the exception condition when the resources needed to establish its presence are tight or there is simply no information available as to whether it holds or not. Thus, the 'If P Then D' part of the CPR expresses important information, while the Unless C part acts only as a switch that changes the polarity of D to ~D. This paper presents a classification algorithm based on an evolutionary approach that discovers comprehensible rules with exceptions in the form of CPRs. The proposed approach has a flexible chromosome encoding in which each chromosome corresponds to a CPR. Appropriate genetic operators are suggested and a fitness function is proposed that incorporates the basic constraints on CPRs. Experimental results are presented to demonstrate the performance of the proposed algorithm.
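The Unless-C semantics can be made concrete with a small evaluator (the GA encoding and operators themselves are not shown here; the attribute names and the rule are hypothetical):

```python
# Sketch of CPR semantics: "If P Then D Unless C" predicts D when the premise P
# holds, flips the polarity of D when the censor C also holds, and abstains
# when P does not hold.
def apply_cpr(record, premise, decision, censor):
    if not all(record.get(k) == v for k, v in premise.items()):
        return None                       # rule does not fire
    if censor and all(record.get(k) == v for k, v in censor.items()):
        return ("not", decision)          # exception: polarity of D reversed
    return decision

rule = ({"has_wings": True}, "can_fly", {"species": "penguin"})
print(apply_cpr({"has_wings": True, "species": "sparrow"}, *rule))   # 'can_fly'
print(apply_cpr({"has_wings": True, "species": "penguin"}, *rule))   # ('not', 'can_fly')
```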

Keywords: Censored Production Rule, Data Mining, Machine Learning, Evolutionary Algorithms.

469 Natural Ventilation for the Sustainable Tall Office Buildings of the Future

Authors: Ayşin Sev, Görkem Aslan

Abstract:

Sustainable tall buildings that provide comfortable, healthy and efficient indoor environments are clearly desirable as the densification of living and working space for the world's increasing population proceeds. For environmental reasons, these buildings must also be energy efficient. One component of these tasks is the provision of indoor air quality and thermal comfort, which can be enhanced by natural ventilation through the supply of fresh air. Working spaces can only be naturally ventilated through connections to the outdoors utilizing operable windows, double facades, ventilation stacks, balconies, patios, terraces and sky gardens. Large amounts of fresh air can thus be provided to indoor spaces without the mechanical air-conditioning systems widely employed in contemporary tall buildings. This paper presents the concept of natural ventilation for sustainable tall office buildings in order to achieve healthy and comfortable working spaces as well as energy-efficient environments. Initially, the historical evolution of ventilation strategies for tall buildings is presented, beginning with natural ventilation and continuing with the introduction of mechanical air-conditioning systems. Then, the re-emergence of natural ventilation in tall buildings, driven by health and environmental concerns, is discussed together with the strategies for implementing it. In the next section, a number of case studies that utilize this strategy are investigated. Finally, how tall office buildings can benefit from this strategy is discussed.

Keywords: Tall office building, natural ventilation, energy efficiency, double-skin façade, stack ventilation, air conditioning.

468 A Machine Learning Based Framework for Education Levelling in Multicultural Countries: UAE as a Case Study

Authors: Shatha Ghareeb, Rawaa Al-Jumeily, Thar Baker

Abstract:

In Abu Dhabi there are many different education curriculums, and the private schools and quality assurance sector supervises many private schools serving many nationalities. As these curriculums are designed to meet expats' needs, they have different requirements for registration and success, different starting ages, and each curriculum has a different number of years, assessment techniques, reassessment rules and exam boards. Currently, students who transfer between curriculums are not being placed in the right year group because the start and end dates of each academic year, and the date-of-birth cut-offs for each year group, differ between curriculums. As a result, students end up either younger or older than their year group, which creates gaps in their learning and performance. In addition, there is no way of storing student data throughout their academic journey so that schools can track the student learning process. In this paper, we propose to develop a computational framework applicable in multicultural countries, such as the UAE, in which multiple education systems are implemented. The ultimate goal is to use cloud and fog computing technology integrated with Artificial Intelligence techniques of Machine Learning to aid in a smooth transition when assigning students to their year groups, and to provide levelling and differentiation information on students who relocate from one education curriculum to another, whilst also having the ability to store and access student data from anywhere throughout their academic journey.

Keywords: Admissions, algorithms, cloud computing, differentiation, fog computing, leveling, machine learning.

467 A Nodal Transmission Pricing Model based on Newly Developed Expressions of Real and Reactive Power Marginal Prices in Competitive Electricity Markets

Authors: Ashish Saini, A.K. Saxena

Abstract:

In competitive electricity markets all over the world, the adoption of a suitable transmission pricing model is a problem, as the transmission segment still operates as a monopoly. Transmission pricing is an important tool to promote investment in various transmission services in order to provide economic, secure and reliable electricity to bulk and retail customers. Nodal pricing based on Short Run Marginal Cost (SRMC) has been found extremely useful by researchers for sending correct economic signals. The marginal prices must be determined as part of the solution to an optimization problem, i.e., maximizing social welfare. The need to maximize social welfare subject to a number of system operational constraints is a major challenge from both computational and societal points of view. The purpose of this paper is to present a nodal transmission pricing model based on SRMC by developing new mathematical expressions for real and reactive power marginal prices using a GA-Fuzzy based optimal power flow framework. The impacts of selecting different social welfare functions on power marginal prices are analyzed and verified against results reported in the literature. Network revenues for two different power systems are determined using the expressions derived for real and reactive power marginal prices in this paper.
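In the standard SRMC formulation that this approach builds on, the real and reactive nodal prices emerge as the Lagrange multipliers of the bus power balance constraints of the welfare-maximizing optimal power flow. A generic statement of that formulation (notation assumed here for illustration; the paper's GA-Fuzzy OPF specifics are not reproduced) is:

```latex
% Generic SRMC welfare-maximizing OPF whose multipliers give the nodal prices.
\begin{align*}
\max_{P_G,\,P_D,\,V,\,\theta}\quad & SW = \sum_{d} B_d(P_{Dd}) - \sum_{g} C_g(P_{Gg})\\
\text{s.t.}\quad & P_{Gi} - P_{Di} - P_i(V,\theta) = 0 \qquad (\lambda_{p,i})\\
                 & Q_{Gi} - Q_{Di} - Q_i(V,\theta) = 0 \qquad (\lambda_{q,i})\\
\text{nodal prices:}\quad & \rho_{p,i} = \lambda_{p,i}, \qquad \rho_{q,i} = \lambda_{q,i}
\end{align*}
```

Here B_d and C_g are demand benefit and generation cost functions, and the multipliers λ of the real and reactive bus balance equations give the real and reactive power marginal prices at each node.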

Keywords: Deregulation, electricity markets, nodal pricing, social welfare function, short run marginal cost.

466 A Comparative Study of Rigid and Modified Simplex Methods for Optimal Parameter Settings of ACO for Noisy Non-Linear Surfaces

Authors: Seksan Chunothaisawat, Pongchanun Luangpaiboon

Abstract:

There are two common types of operational research techniques: optimisation and metaheuristic methods. The latter may be defined as a sequential process that intelligently performs exploration and exploitation, inspired by natural intelligence, through several iterative searches. The aim is to determine near-optimal solutions in a solution space effectively. In this work, a metaheuristic called Ant Colony Optimisation (ACO), inspired by the foraging behaviour of ants, was adapted to find optimal solutions of eight non-linear continuous mathematical models. Within a solution space restricted to a specified region of each model, sub-solutions may contain the global optimum or multiple local optima. The algorithm has several common parameters (the number of ants, moves and iterations), which act as its drivers. A series of computational experiments for initialising these parameters was conducted using the Rigid Simplex (RS) and Modified Simplex (MSM) methods. Experimental results were analysed in terms of the best-so-far solutions, mean and standard deviation, leading to a recommendation of proper level settings of the ACO parameters for all eight functions. These parameter settings can be applied as a guideline for future uses of ACO and promote its ease of use in real industrial processes. It was found that the results obtained from MSM were quite similar to those gained from RS. However, when results with noise standard deviations of 1 and 3 are compared, MSM reaches optimal solutions more efficiently than RS in terms of speed of convergence.

Keywords: Ant colony optimisation, metaheuristics, modified simplex, non-linear, rigid simplex.

465 Utilizing Ontologies Using Ontology Editor for Creating Initial Unified Modeling Language (UML) Object Model

Authors: Waralak Vongdoiwang Siricharoen

Abstract:

One problem in object-oriented software development is the difficulty of finding appropriate and suitable objects with which to start the system. In this work, ontologies are used to support object discovery in the initial stage of object-oriented software development. Much research has sought to demonstrate the great potential between object models and ontologies. Constructing an ontology from an object model, called ontology engineering, can be done; this research, on the other hand, aims to support the idea that building an object model from an ontology is also promising and practical. Ontology classes are available online in many specific areas and can be found with semantic search engines. There are also many supporting tools; the ones used in this research are the Protégé ontology editor and Visual Paradigm, and putting them together gives a good outcome. This research shows how the approach works efficiently through a real case study using ontology classes in the travel/tourism domain. Classes, properties and relationships from more than two ontologies need to be combined in order to generate the object model. This paper presents a simple methodology framework that explains the process of discovering objects. The results show that the framework has great value and is open to expansion. Reusing existing ontologies offers a much cheaper alternative than building new ones from scratch. More ontologies are becoming available on the web, and online ontology libraries for storing and indexing ontologies are increasing in number and demand. Semantic and ontology search engines have also started to appear, to facilitate the search and retrieval of online ontologies.

Keywords: Software Developing, Ontology, Ontology Library, Artificial Intelligent, Protégé, Object Model.

464 Performance Comparison of Different Regression Methods for a Polymerization Process with Adaptive Sampling

Authors: Florin Leon, Silvia Curteanu

Abstract:

Developing complete mechanistic models for polymerization reactors is not easy, because complex reactions occur simultaneously, a large number of kinetic parameters are involved, and the chemical and physical phenomena of mixtures involving polymers are sometimes poorly understood. To overcome these difficulties, empirical models based on sampled data can be used instead, namely the regression methods typical of the machine learning field. They have the ability to learn the trends of a process without any knowledge of its particular physical and chemical laws. Therefore, they are useful for modeling complex processes, such as the free radical polymerization of methyl methacrylate achieved in a batch bulk process. The goal is to generate accurate predictions of monomer conversion, the numerical average molecular weight and the gravimetrical average molecular weight. This process is associated with non-linear gel and glass effects. For this purpose, an adaptive sampling technique is presented, which can select more samples around the regions where the values show higher variation. Several machine learning methods are used for the modeling and their performance is compared: support vector machines, k-nearest neighbor and random forest, as well as an original algorithm, large margin nearest neighbor regression. The suggested method provides very good results compared to the other well-known regression algorithms.
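A compact way to run this kind of comparison is to evaluate several regressors by cross-validation on the same sampled points; the sketch below uses a synthetic conversion-like curve and scikit-learn models as stand-ins for the methods compared in the paper.

```python
# Sketch: compare a few regression methods on the same synthetic data set.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 1))                 # e.g. normalised reaction time
y = 1 / (1 + np.exp(-12 * (X[:, 0] - 0.6)))          # conversion-like S-curve
y += rng.normal(0, 0.02, size=200)                   # measurement noise

models = {"SVR": SVR(C=10, gamma="scale"),
          "k-NN": KNeighborsRegressor(n_neighbors=5),
          "Random forest": RandomForestRegressor(n_estimators=200, random_state=0)}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {r2:.3f}")
```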

Keywords: Adaptive sampling, batch bulk methyl methacrylate polymerization, large margin nearest neighbor regression, machine learning.

463 Performance Based Design of Masonry Infilled Reinforced Concrete Frames for Near-Field Earthquakes Using Energy Methods

Authors: Alok Madan, Arshad K. Hashmi

Abstract:

Performance based design (PBD) is an iterative exercise in which a preliminary trial design of the building structure is selected and then revised if it does not conform to the desired performance objective. In this context, the development of a fundamental approach for performance based seismic design of masonry infilled frames requiring a minimum number of trials is an important objective. The paper presents a plastic design procedure based on the energy balance concept for the PBD of multi-story, multi-bay masonry infilled reinforced concrete (R/C) frames subjected to near-field earthquakes. The proposed energy based plastic design procedure was implemented for trial performance based seismic designs of representative masonry infilled reinforced concrete frames with various practically relevant distributions of masonry infill panels over the frame elevation. Non-linear dynamic analyses of the trial PBDs of masonry infilled R/C frames were performed under the action of near-field earthquake ground motions. The results of the non-linear dynamic analyses demonstrate that the proposed energy method is effective for the performance based design of masonry infilled R/C frames under near-field as well as far-field earthquakes.

Keywords: Masonry Infilled Frame, Energy Methods, Near-fault Ground Motions, Pushover Analysis, Nonlinear Dynamic Analysis, Seismic Demand.

462 Micromechanics Modeling of 3D Network Smart Orthotropic Structures

Authors: E. M. Hassan, A. L. Kalamkarov

Abstract:

Two micromechanical models for a 3D smart composite with an embedded periodic or nearly periodic network of generally orthotropic reinforcements and actuators are developed and applied to cubic structures with unidirectional orientation of constituents. Analytical formulas for the effective piezothermoelastic coefficients are derived using the Asymptotic Homogenization Method (AHM). A Finite Element Analysis (FEA) is subsequently developed and used to examine the aforementioned periodic 3D network reinforced smart structures; the deformation responses from the FE simulations are used to extract effective coefficients, and the results from both techniques are compared. This work considers piezoelectric materials that respond linearly to changes in electric field, electric displacement, mechanical stress and strain, and thermal effects. This combination of electric fields and thermo-mechanical response in smart composite structures is characterized by piezoelectric and thermal expansion coefficients. The problem is represented by a unit cell, and the models are developed using the AHM and the FEA to determine the effective piezoelectric and thermal expansion coefficients. Each unit cell contains a number of orthotropic inclusions in the form of structural reinforcements and actuators. Using a matrix representation of the coupled response of the unit cell, the effective piezoelectric and thermal expansion coefficients are calculated and compared with the results of the asymptotic homogenization method. A very good agreement is shown between these two approaches.

Keywords: Asymptotic Homogenization Method, Effective Piezothermoelastic Coefficients, Finite Element Analysis, 3D Smart Network Composite Structures.

461 The Applications of Four Fingers Theory: The Proof of 66 Acupoints under the Human Elbow and Knee

Authors: Chih-I. Tsai, Yu-Chien. Lin

Abstract:

Through clinical practice, it has been observed that locations on the body four fingerbreadths above and below the joints are the points at which muscles connect to tendons. Since muscles and tendons possess opposite characteristics (muscles are full of blood but lack qi, while tendons are full of qi but lack blood), these points on the body become easily blocked. It is proposed that applying acupuncture, or localized pressure with an elastic bandage, to the areas four fingerbreadths above and below the joints could help the energy, also known as qi, to flow smoothly in the body and further improve health. Based on the Four Fingers Theory, human height corresponds to 22 four-fingerbreadth units. In addition, qi and blood travel through the 24 meridians 50 times each day, flowing through 6 cun with every breath, and the average human heart beats 75 times per minute. The function of the qi-blood circulation system in Traditional Chinese Medicine corresponds to the blood circulation described in Western medical science. Informed by the Four Fingers Theory, this study further examined its applications in acupuncture practice. The research question is how the Four Fingers Theory proves the statement in the Nei Jing that there are 66 acupoints under the human elbow and knee. In answering this question, the theory supports the existence of these 66 acupoints and facilitated the creation of an acupuncture naming and teaching system, which is expected to serve as an approachable and effective way to deliver knowledge of acupuncture to the public worldwide.

Keywords: Four Fingers theory, Meridians circulation, 66 Acupoints under a human’s elbow and knee, acupuncture.
