Search results for: coordinate modal assurance criterion

1287 A Techno-Economic Simulation Model to Reveal the Relevance of Construction Process Impact Factors for External Thermal Insulation Composite System (ETICS)

Authors: Virgo Sulakatko

Abstract:

The reduction of energy consumption of the built environment has been one of the topics tackled by the European Commission during the last decade. Increased energy efficiency requirements have increased the renovation rate of apartment buildings covered with External Thermal Insulation Composite System (ETICS). Due to the fast and optimized application process, quality assurance depends to a large extent on the specific activities of artisans and is often not controlled. The on-site degradation factors (DF) technically affect the façade and cause future costs to the owner. Besides thermal conductivity, the building envelope needs to ensure mechanical resistance and stability, fire, noise, corrosion and weather protection, and long-term durability. As the shortcomings of the construction phase become problematic after some years, the overall value of the renovation is reduced. Previous work on the subject has identified and rated the relevance of DFs to the technical requirements and developed a method to reveal the economic value of repair works. The future costs can be traded off against increased quality assurance during the construction process. The proposed framework describes the joint simulation of the technical importance and economic value of the on-site DFs of ETICS. The model provides new knowledge for improving resource allocation during the construction process by making it possible to identify and diminish the most relevant degradation factors and increase the economic value to the owner.

Keywords: ETICS, construction technology, construction management, life cycle costing

Procedia PDF Downloads 419
1286 Sustainability as a Criterion in the Reconstruction of Libya’s Public Transport Infrastructure

Authors: Haitam Emhemad, Brian Agnew, David Greenwood

Abstract:

Amongst the many priorities facing Libya following the 2011 uprising is the provision of a transport infrastructure that will meet the nation’s needs and not undermine its prospects for economic prosperity. As with many developing economies, non-technical issues such as management, planning and financing are the major barriers to the efficient and effective provision of transport infrastructure. This is particularly true in the case of the effective incorporation of sustainability criteria, and the research upon which this paper is based involves the examination of alternative ways of approaching this problem. It is probably fair to say that criteria that relate to sustainability have not, historically, featured strongly in Libya’s approach to the development of its transport infrastructure. However, the current reappraisal of how best to redevelop the country’s transport infrastructure, prompted by recent events, may offer the opportunity to alter this. The research examines recent case studies from a number of countries to explore ways in which sustainability has been included as a criterion for planning and procurement decisions. There will also be an in-depth investigation into the Libyan planning and legislative context to examine the feasibility of introducing such sustainability criteria into the process of planning and procurement of Libya’s transport infrastructure.

Keywords: Libya reconstruction, sustainability criteria, transport infrastructure, public transport

Procedia PDF Downloads 343
1285 Transformation of Periodic Fuzzy Membership Function to Discrete Polygon on Circular Polar Coordinates

Authors: Takashi Mitsuishi

Abstract:

Fuzzy logic has gained acceptance in recent years in the fields of social sciences and humanities, such as psychology and linguistics, because it can manage the fuzziness of words and human subjectivity in a logical manner. However, the major field of application of fuzzy logic is control engineering, as it is part of set theory and mathematical logic. The Mamdani method, the most popular technique for approximate reasoning in the field of fuzzy control, is one of the ways to numerically represent the control afforded by human language and sensitivity, and has been applied in various practical control plants. Fuzzy logic has been gradually developing as an artificial intelligence technique in different applications such as neural networks, expert systems, and operations research. The objects of inference vary for different application fields. Some of these include time, angle, color, symptom and medical condition, whose fuzzy membership functions are periodic functions. In the defuzzification stage, the domain of the membership function should be unique in order to obtain a unique defuzzified value. However, if the domain of the periodic membership function is forced to be unique, an unintuitive defuzzified value may be obtained as the inference result using the center of gravity method. Therefore, the authors propose a method of circular-polar-coordinate transformation and defuzzification of periodic membership functions in this study. The transformation to circular polar coordinates simplifies the domain of the periodic membership function. The defuzzified value in circular polar coordinates is an argument (angle). Furthermore, the argument must be calculated from a closed plane figure, which is the periodic membership function on the circular polar coordinates. If the closed plane figure is kept continuous, following the continuity of the membership function, a significant amount of computation is required. Therefore, to simplify the practical example and significantly reduce the computational complexity, we discretize the continuous interval and the membership function in this study. The following three methods are proposed to determine the argument from the discrete polygon into which the continuous plane figure is transformed. The first method provides the argument of a straight line passing through the origin and the arithmetic mean of the vertex coordinates of the polygon (the physical center of gravity). The second provides the argument of a straight line passing through the origin and the geometric center of gravity (centroid) of the polygon. The third provides the argument of a straight line passing through the origin that bisects the perimeter of the polygon (or of the closed continuous plane figure).
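
As a sketch of the three proposed methods, the following hypothetical Python fragment (not the authors' code; an ordered, closed polygon on circular polar coordinates is assumed) computes the three candidate arguments:

```python
import numpy as np

def arg_vertex_mean(pts):
    """Method 1: angle of the arithmetic mean of the polygon vertices."""
    c = pts.mean(axis=0)
    return np.arctan2(c[1], c[0])

def arg_area_centroid(pts):
    """Method 2: angle of the geometric (area) centroid of the polygon."""
    x, y = pts[:, 0], pts[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    cross = x * yn - xn * y
    a = cross.sum() / 2.0                       # signed polygon area
    cx = ((x + xn) * cross).sum() / (6.0 * a)
    cy = ((y + yn) * cross).sum() / (6.0 * a)
    return np.arctan2(cy, cx)

def arg_perimeter_bisector(pts):
    """Method 3: angle of the point that bisects the polygon perimeter."""
    seg = np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1)
    s = np.concatenate(([0.0], np.cumsum(seg)))
    half = s[-1] / 2.0
    i = np.searchsorted(s, half) - 1            # segment holding the midpoint
    t = (half - s[i]) / seg[i]
    p = pts[i] + t * (pts[(i + 1) % len(pts)] - pts[i])
    return np.arctan2(p[1], p[0])
```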

Keywords: defuzzification, fuzzy membership function, periodic function, polar coordinates transformation

Procedia PDF Downloads 363
1284 The Application of Gel Dosimeters and Comparison with Other Dosimeters in Radiotherapy: A Literature Review

Authors: Sujan Mahamud

Abstract:

Purpose: A major challenge in radiotherapy treatment is to deliver a precise dose of radiation to the tumor with minimum dose to the healthy normal tissues. Recently, gel dosimetry has emerged as a powerful tool to measure three-dimensional (3D) dose distributions for complex delivery verification and quality assurance. These dosimeters act both as a phantom and as a detector, confirming the versatility of the technique. The aim of the study is to review the application of gel dosimeters in radiotherapy and compare them with 1D and 2D dosimeters. Methods and Materials: The study is based on the gel dosimetry literature. Secondary data and images have been collected from different sources such as guidelines, books, and the internet. Result: Analysis, verification, and comparison of data from the treatment planning system (TPS) show that the gel dosimeter is an excellent tool for measuring three-dimensional (3D) dose distributions. The TPS-calculated data were in very good agreement with the dose distribution measured by the ferrous gel. The overall uncertainty in the ferrous-gel dose determination was considerably reduced using an optimized MRI acquisition protocol and a new MRI scanner. The method developed for comparing measured gel data with calculated treatment plans, the gel dosimetry method, proved useful for radiation treatment planning verification. For the 1D and 2D film dosimeters, the RMSD values for the depth dose and lateral profiles are 1.8% and 2%, and the maximum deviations max(Di−Dj) are 2.5% and 8%, respectively. For the 2D+ (3D) film gel and plan gel comparisons, RMSDstruct and RMSDstoch are 2.3% and 3.6%, and 1% and 1%, with systematic deviations of −0.6% and 2.5%. These results indicate that the 2D+ (3D) film dosimeter performs better than the 1D and 2D dosimeters. Discussion: Gel dosimeters are a quality control and quality assurance tool that will find use in future clinical applications.
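
For reference, the RMSD quoted above is presumably the usual root-mean-square deviation between two dose distributions Di and Dj evaluated at N points (an assumption; the abstract does not define it):

```latex
\mathrm{RMSD} = \sqrt{\frac{1}{N}\sum_{k=1}^{N}\bigl(D_i(k) - D_j(k)\bigr)^{2}}
```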

Keywords: gel dosimeters, phantom, RMSD, QC, detector

Procedia PDF Downloads 151
1283 Prevalence of Anaemia Amongst Antenatal Clinic Attendees at Booking: A Nigerian Study

Authors: S Eli, DGB Kalio, BOA Altraide, P Kua, DA MacPepple, FE Okonofua

Abstract:

Background: Anaemia in pregnancy is a worrisome morbidity encountered by obstetricians and gynaecologists in the developing countries of the world. It is an indirect cause of maternal mortality and also a cause of perinatal mortality. Aim: The study aimed to ascertain the prevalence of anaemia amongst antenatal clinic (ANC) attendees at booking at the Rivers State University Teaching Hospital (RSUTH), Port Harcourt, Rivers State, Nigeria. Method: This was a cross-sectional study of ANC attendees at booking at RSUTH. The WHO cut-off for anaemia used for this study was a packed cell volume (PCV) of less than 33%. A simple randomized sampling method was used. Information was analyzed using SPSS version 25. Result: A total of 500 questionnaires were distributed, and 488 were retrieved. The mean age of the ANC attendees was 31.44 years, and the modal parity was 0. Three hundred and fifty-seven (73.2%) of the respondents had a tertiary level of education, 126 (25.8%) had a secondary level of education, while 5 (1%) had a primary level of education. Five (1%) of the respondents did not volunteer their educational status. The modal packed cell volume was 32%. Three hundred and eighty-two (78.3%) of the ANC attendees had a PCV of less than 33%, compared to 106 (21.7%) who had a PCV equal to or greater than 33%. Conclusion: The study revealed that the prevalence of anaemia in pregnancy amongst ANC attendees at the RSUTH was high, at 78.3% of the subjects. Anaemia was common amongst multiparas (38.5%). The importance of malaria prophylaxis, compliance with routine antenatal drugs, and counseling on the right diet during pregnancy cannot be overemphasized. In addition, women should use family planning for child spacing to allow them to recover from previous pregnancies.

Keywords: anaemia, ANC attendees, Nigeria, prevalence

Procedia PDF Downloads 120
1282 Time Effective Structural Frequency Response Testing with Oblique Impact

Authors: Khoo Shin Yee, Lian Yee Cheng, Ong Zhi Chao, Zubaidah Ismail, Siamak Noroozi

Abstract:

Structural frequency response testing is accurate in identifying the dynamic characteristics of a machinery structure. From a practical perspective, conventional structural frequency response testing, such as experimental modal analysis with the impulse technique (also known as “impulse testing”), has limitations, especially its long acquisition time. The high acquisition time is mainly due to the redundant procedure whereby the engineer has to repeatedly perform the test in 3 directions, namely the axial, horizontal and vertical axes, in order to comprehensively define the dynamic behavior of a 3D structure. This is unfavorable to numerous industries where the downtime cost is high. This study proposes to reduce the testing time by using oblique impact. Theoretically, a single oblique impact can induce significant vibration responses and vibration modes in all 3 directions. Hence, the acquisition time with the oblique impulse technique can be reduced by a factor of three (i.e., for a 3D dynamic system). This study initiates an experimental investigation of impulse testing with oblique excitation. A motor-driven test rig has been used for the testing purpose. Its dynamic characteristics have been identified using impulse testing with the conventional normal impact and the proposed oblique impact, respectively. The results show that the proposed oblique impulse testing is able to obtain all the desired natural frequencies in all 3 directions, thus providing a feasible solution for a fast and time-effective way of conducting impulse testing.
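
As background, impulse testing estimates the frequency response function (FRF) from the measured impact force and response; a minimal sketch using the standard H1 estimator is given below (hypothetical signal names; not the authors' processing chain):

```python
import numpy as np
from scipy.signal import csd, welch

def h1_frf(force, accel, fs, nperseg=4096):
    """H1 FRF estimator: cross-spectrum of force and response divided
    by the auto-spectrum of the force (averaged over segments)."""
    f, s_xy = csd(force, accel, fs=fs, nperseg=nperseg)
    _, s_xx = welch(force, fs=fs, nperseg=nperseg)
    return f, s_xy / s_xx

# Natural frequencies appear as peaks of |H(f)|. With an oblique impact,
# a single hammer hit excites responses in all three measurement
# directions, so three FRFs can be estimated from one test.
```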

Keywords: frequency response function, impact testing, modal analysis, oblique angle, oblique impact

Procedia PDF Downloads 501
1281 Investigation of Ductile Failure Mechanisms in SA508 Grade 3 Steel via X-Ray Computed Tomography and Fractography Analysis

Authors: Suleyman Karabal, Timothy L. Burnett, Egemen Avcu, Andrew H. Sherry, Philip J. Withers

Abstract:

SA508 Grade 3 steel is widely used in the construction of nuclear pressure vessels, where its fracture toughness plays a critical role in ensuring operational safety and reliability. Understanding the ductile failure mechanisms in this steel grade is crucial for designing robust pressure vessels that can withstand severe nuclear environment conditions. In the present study, round bar specimens of SA508 Grade 3 steel with four distinct notch geometries were subjected to tensile loading while continuous 2D images were captured at 5-second intervals to monitor changes in specimen geometry and construct true stress-strain curves. 3D reconstructions of high-resolution X-ray computed tomography (CT) images (a spatial resolution of 0.82 μm) allowed for a comprehensive assessment of the influence of second-phase particles (i.e., manganese sulfide inclusions and cementite particles) on ductile failure initiation as a function of applied plastic strain. Additionally, based on the 2D and 3D images, plasticity modeling was executed, and the results were compared to the experimental data. A specific ‘two-parameter criterion’ was established and calibrated based on the correlation between stress triaxiality and equivalent plastic strain at failure initiation. The proposed criterion demonstrated substantial agreement with the experimental results, thus enhancing our knowledge of ductile fracture behavior in this steel grade. The implementation of X-ray CT and fractography analysis provided new insights into the diverse roles played by different populations of second-phase particles in fracture initiation under varying stress triaxiality conditions.
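
Two-parameter criteria of this kind are commonly written as an exponential decay of the equivalent plastic strain at failure with stress triaxiality; one illustrative form (an assumption, since the abstract does not give the authors' expression) is:

```latex
\bar{\varepsilon}_f(\eta) = D_1 \exp\!\left(-D_2\,\eta\right),
\qquad \eta = \frac{\sigma_m}{\bar{\sigma}},
```

where the triaxiality \(\eta\) is the ratio of the mean stress to the equivalent stress, and \(D_1\), \(D_2\) are the two parameters calibrated from the notched-bar tests.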

Keywords: ductile fracture, two-parameter criterion, x-ray computed tomography, stress triaxiality

Procedia PDF Downloads 92
1280 TARF: Web Toolkit for Annotating RNA-Related Genomic Features

Authors: Jialin Ma, Jia Meng

Abstract:

Genomic features, i.e., features given in genome-based coordinates, are commonly used for the representation of biological features such as genes, RNA transcripts and transcription factor binding sites. For the analysis of RNA-related genomic features, such as RNA modification sites, a common task is to correlate these features with transcript components (5'UTR, CDS, 3'UTR) to explore their distribution characteristics in transcriptomic coordinates, e.g., to examine whether a specific type of biological feature is enriched near transcription start sites. Existing approaches for performing these tasks involve the manipulation of a gene database, conversion from genome-based to transcript-based coordinates, and visualization methods capable of showing RNA transcript components and the distribution of the features. These steps are complicated and time-consuming, especially for researchers who are not familiar with the relevant tools. To overcome this obstacle, we developed a dedicated web app, TARF, a web toolkit for annotating RNA-related genomic features. The TARF web tool is intended to provide a web-based way to easily annotate and visualize RNA-related genomic features. Once a user has uploaded features in BED format and specified a built-in transcript database, or uploaded a customized gene database in GTF format, the tool fulfills its three main functions. First, it adds annotations on gene and RNA transcript components. For every feature provided by the user, overlaps with RNA transcript components are identified, and the information is combined into one table, which is available for copy and download. Summary statistics on ambiguous assignments are also provided. Second, the tool provides a convenient visualization of the features at the single gene/transcript level. For a selected gene, the tool shows the features together with the gene model in a genome-based view, and also maps the features to transcript-based coordinates and shows their distribution along a single spliced RNA transcript. Third, a global transcriptomic view of the genomic features is generated using the Guitar R/Bioconductor package. The distribution of features on RNA transcripts is normalized with respect to RNA transcript landmarks, and the enrichment of the features on different RNA transcript components is demonstrated. We tested the newly developed TARF toolkit with 3 different types of genomic features related to chromatin H3K4me3, RNA N6-methyladenosine (m6A) and RNA 5-methylcytosine (m5C), obtained from ChIP-Seq, MeRIP-Seq and RNA BS-Seq data, respectively. TARF successfully revealed their respective distribution characteristics, i.e., H3K4me3, m6A and m5C are enriched near transcription start sites, stop codons and 5’UTRs, respectively. Overall, TARF is a useful web toolkit for the annotation and visualization of RNA-related genomic features and should help simplify the analysis of various RNA-related genomic features, especially those related to RNA modifications.
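
The core conversion TARF performs, from genome-based to transcript-based coordinates, can be sketched as follows (a hypothetical minimal implementation for a plus-strand transcript; strand handling and error cases are simplified):

```python
from typing import List, Tuple

def genome_to_transcript(pos: int, exons: List[Tuple[int, int]]) -> int:
    """Map a genomic position to a transcript coordinate.

    `exons` are (start, end) genomic intervals of a +strand transcript,
    sorted 5'->3'; coordinates are 0-based, end-exclusive (BED-like).
    Returns the offset of `pos` along the spliced transcript."""
    offset = 0
    for start, end in exons:
        if start <= pos < end:
            return offset + (pos - start)
        offset += end - start
    raise ValueError("position falls in an intron or outside the transcript")

# Example: a feature at genomic position 1250 in a transcript with
# exons [(1000, 1100), (1200, 1400)] maps to 100 + 50 = 150.
print(genome_to_transcript(1250, [(1000, 1100), (1200, 1400)]))
```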

Keywords: RNA-related genomic features, annotation, visualization, web server

Procedia PDF Downloads 207
1279 Periodic Topology and Size Optimization Design of Tower Crane Boom

Authors: Wu Qinglong, Zhou Qicai, Xiong Xiaolei, Zhang Richeng

Abstract:

In order to achieve the layout and size optimization of the web members of a tower crane boom, a truss topology and cross-section size optimization method based on the continuum is proposed, considering three typical working conditions. Firstly, the optimization model is established by replacing the web members with web plates. The web plates are divided into several sub-domains so that the periodic soft kill option (SKO) method can be carried out for topology optimization of the slender boom. After obtaining the optimized topology of the web plates, the optimized layout of the web members is formed by extracting the principal stress distribution. Finally, using the web member radius as the design variable, the boom compliance as the objective and the material volume of the boom as the constraint, the cross-section size optimization mathematical model is established. The size optimization criterion is deduced from the mathematical model by the Lagrange multiplier method and the Kuhn-Tucker condition. Comparison of the original boom with the optimized boom shows that this optimization method can effectively lighten the boom and improve its performance.
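
In generic form (the abstract does not give the equations, so this is only the standard statement of such a problem), the size optimization stage reads:

```latex
\min_{r_1,\dots,r_n} \; C(\mathbf{r}) = \mathbf{f}^{T}\mathbf{u}(\mathbf{r})
\quad \text{s.t.} \quad V(\mathbf{r}) \le \bar{V},
```

with the Lagrangian \(L = C + \lambda\,(V - \bar{V})\); the Kuhn-Tucker stationarity condition \(\partial C/\partial r_i + \lambda\,\partial V/\partial r_i = 0\) then yields a resizing criterion for the web member radii that is iterated until convergence.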

Keywords: tower crane boom, topology optimization, size optimization, periodic, SKO, optimization criterion

Procedia PDF Downloads 554
1278 Calibration of 2D and 3D Optical Measuring Instruments in Industrial Environments at Submillimeter Range

Authors: Alberto Mínguez-Martínez, Jesús de Vicente y Oliva

Abstract:

Modern manufacturing processes have led to the miniaturization of systems and, as a result, parts at the micro- and nanoscale are produced. This trend seems set to become increasingly important in the near future. Besides, as a requirement of Industry 4.0, the digitalization of production models and processes makes it very important to ensure that the dimensions of newly manufactured parts meet the specifications of the models. It is thus possible to reduce scrap and the cost of non-conformities while ensuring the stability of production. To ensure the quality of manufactured parts, it becomes necessary to carry out traceable measurements at scales below one millimeter. Providing adequate traceability to the SI unit of length (the meter) for 2D and 3D measurements at this scale is a problem that does not have a unique solution in industrial environments. Researchers in the field of dimensional metrology all around the world are working on this issue. A solution for industrial environments, even if incomplete, would enable working with some traceability. At this point, we believe that the study of surfaces could provide a first approximation to a solution. Among the different options proposed in the literature, areal topography methods may be the most relevant because they can be compared to measurements performed using Coordinate Measuring Machines (CMMs). These measuring methods give (x, y, z) coordinates for each point, expressing the z coordinate in two different ways: either as a function of x, denoted z(x), for each Y-axis coordinate, or as a function of the x and y coordinates, denoted z(x, y). Among others, optical measuring instruments, mainly microscopes, are extensively used to carry out measurements at scales below one millimeter because they are non-destructive. In this paper, the authors propose a calibration procedure for the scales of optical measuring instruments, particularized for a confocal microscope, using material standards that are easy to find and calibrate in metrology and quality laboratories in industrial environments. Confocal microscopes are measuring instruments capable of filtering out the out-of-focus reflected light so that, when the light reaches the detector, it is possible to take pictures of the part of the surface that is in focus. By varying the focus and taking pictures at different Z levels, specialized software interpolates between the different planes and reconstructs the surface geometry as a 3D model. As is easy to deduce, it is necessary to give traceability to each axis. As a complementary result, the roughness parameter Ra will be traced to the reference. Although the solution is designed for a confocal microscope, it may be used for the calibration of other optical measuring instruments with minor changes.
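
A scale calibration of this kind typically reduces to comparing indicated lengths against the certified values of a material standard and fitting a scale correction; a hypothetical sketch (names and data are illustrative, not from the paper):

```python
import numpy as np

# Certified lengths of the material standard (mm) and the corresponding
# lengths indicated by the microscope axis under calibration.
certified = np.array([0.100, 0.200, 0.400, 0.800])
indicated = np.array([0.1004, 0.2007, 0.4013, 0.8028])

# Least-squares fit indicated = a * certified + b: 'a' is the axis
# amplification (scale) coefficient, 'b' a zero offset.
a, b = np.polyfit(certified, indicated, 1)
residuals = indicated - (a * certified + b)
print(f"scale factor a = {a:.5f}, offset b = {b * 1e3:.2f} um")
print("max residual (um):", 1e3 * np.abs(residuals).max())
```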

Keywords: industrial environment, confocal microscope, optical measuring instrument, traceability

Procedia PDF Downloads 155
1277 The Influence of Service Quality on Customer Satisfaction and Customer Loyalty at a Telecommunication Company in Malaysia

Authors: Noor Azlina Mohamed Yunus, Baharom Abd Rahman, Abdul Kadir Othman, Narehan Hassan, Rohana Mat Som, Ibhrahim Zakaria

Abstract:

Customer satisfaction and customer loyalty are the most important outcomes of marketing, in which both elements serve various stages of consumer buying behavior. Excellent service quality has become a major corporate goal as more companies strive for quality in their products and services. Therefore, the main purpose of this study is to investigate the influence of service quality on customer satisfaction and customer loyalty at one telecommunication company in Malaysia, namely Telekom Malaysia. The scope of this research is to evaluate satisfaction with the products or services at TMpoint Bukit Raja, Malaysia. The data were gathered through the distribution of questionnaires to a total of 306 respondents who visited and used the products or services. Using correlation and multiple regression analyses, the results revealed a positive and significant relationship between service quality and customer satisfaction. The most influential factor on customer satisfaction was empathy, followed by reliability, assurance and tangibles. However, there was no significant influence of responsiveness on customer satisfaction. The results also showed a positive and significant relationship between service quality and customer loyalty. The most influential factor on customer loyalty was assurance, followed by reliability and tangibles. TMpoint Bukit Raja is recommended to devise excellent strategies to satisfy customers’ needs and to adopt an action-oriented approach by focusing on what the customers want. It is also recommended that similar studies be carried out in other industries using different methodologies, such as a longitudinal method, a larger sample size, or a qualitative approach.

Keywords: customer satisfaction, customer loyalty, service quality, telecommunication company

Procedia PDF Downloads 453
1276 Feature Extraction of MFCC Based on Fisher-Ratio and Correlated Distance Criterion for Underwater Target Signal

Authors: Han Xue, Zhang Lanyue

Abstract:

In order to seek more effective feature extraction technology, a feature extraction method based on MFCC combined with a vector hydrophone is presented in this paper. The sound pressure signal and particle velocity signal of two kinds of ships are processed using MFCC and its evolved forms, and the extracted features are fused using the Fisher ratio and the correlated distance criterion. The features are then identified by a BP neural network. The results showed that MFCC, first-order differential MFCC and second-order differential MFCC features can be used as effective features for the recognition of underwater targets, and that the fused feature can improve the recognition rate. Moreover, the results also showed that the recognition rate for the particle velocity signal is higher than that for the sound pressure signal, which reflects the superiority of vector signal processing.
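
A minimal sketch of this pipeline, extracting MFCC and differential MFCC features and ranking dimensions by a Fisher ratio (hypothetical; librosa is assumed for the MFCC step, and the fusion rule here is a simple Fisher-score selection rather than the authors' exact criterion):

```python
import numpy as np
import librosa

def mfcc_features(y, sr, n_mfcc=13):
    """MFCC plus first- and second-order differential MFCC, stacked,
    averaged over frames to one vector per signal."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    d1 = librosa.feature.delta(mfcc)             # first-order differential
    d2 = librosa.feature.delta(mfcc, order=2)    # second-order differential
    return np.vstack([mfcc, d1, d2]).mean(axis=1)

def fisher_ratio(class_a, class_b):
    """Per-dimension Fisher ratio: between-class scatter over
    within-class scatter, for two classes of feature vectors."""
    ma, mb = class_a.mean(axis=0), class_b.mean(axis=0)
    va, vb = class_a.var(axis=0), class_b.var(axis=0)
    return (ma - mb) ** 2 / (va + vb + 1e-12)

# Dimensions with large Fisher ratios discriminate the two ship types
# best and are kept for the fused feature fed to the BP neural network.
```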

Keywords: vector information, MFCC, differential MFCC, fusion feature, BP neural network

Procedia PDF Downloads 529
1275 Listening to Circles, Playing Lights: A Study of Cross-Modal Perception in Music

Authors: Roni Granot, Erica Polini

Abstract:

Music is often described in terms of non-auditory adjectives, such as a rising melody, a bright sound, or a zigzagged contour. Such cross-modal associations have been studied with simple isolated musical parameters, but only rarely in rich musical contexts. The current study probes cross-sensory associations with polarity-based dimensions by means of pairings of 10 adjectives: blunt-sharp, relaxed-tense, heavy-light, low (in space)-high, low (pitch)-high, big-small, hard-soft, active-passive, bright-dark, sad-happy. 30 participants (randomly assigned to one of two groups) were asked to rate each of 27 short saxophone improvisations on a 1-to-6 scale, where 1 and 6 correspond to the opposite poles of each dimension. The 27 improvisations included three exemplars for each of three dimensions (size, brightness, sharpness), played by three different players. Here we focus on whether improvisations were consistently rated as intended on the scale corresponding to their musical dimension (e.g., music improvised to represent a white circle rated as bright, in contrast with music improvised to represent a dark circle rated as dark). Overall, the average scores by dimension showed an upward trend on the equivalent verbal scale, with low ratings for small, bright and sharp musical improvisations and higher scores for large, dark and blunt improvisations. Friedman tests indicate a statistically significant difference for the brightness (χ2 (2) = 19.704, p = .000) and sharpness dimensions (χ2 (2) = 15.750, p = .000), but not for size (χ2 (2) = 1.444, p = .486). Post hoc analysis with Wilcoxon signed-rank tests within the brightness dimension showed significant differences among all possible pairings: ‘bright’ vs. ‘dark’ (Z = -3.310, p = .001), ‘bright’ vs. ‘medium’ (Z = -2.438, p = .015) and ‘dark’ vs. ‘medium’ music (Z = -2.714, p = .007); within the sharpness dimension, only the extreme contrasts differed: ‘sharp’ vs. ‘blunt’ music (Z = -3.147, p = .002) and ‘sharp’ vs. ‘medium’ music rated on the sharpness scale (Z = -3.054, p = .002), but not ‘medium’ vs. ‘blunt’ music (Z = -.982, p = .326). In summary, our study suggests a privileged link between music and the perceptual and semantic domain of brightness. In contrast, size seems to be very difficult to convey in music, whereas sharpness seems to be mapped onto the two extremes (sharp vs. blunt) rather than continuously. This is nicely reflected in the musical literature, in titles and texts which stress the association between music and concepts of light or darkness rather than sharpness or size.

Keywords: audiovisual, brightness, cross-modal perception, cross-sensory correspondences, size, visual angularity

Procedia PDF Downloads 220
1274 Using Geo-Statistical Techniques and Machine Learning Algorithms to Model the Spatiotemporal Heterogeneity of Land Surface Temperature and its Relationship with Land Use Land Cover

Authors: Javed Mallick

Abstract:

In metropolitan areas, rapid changes in land use and land cover (LULC) have ecological and environmental consequences. Saudi Arabia's cities have experienced tremendous urban growth since the 1990s, resulting in urban heat islands, groundwater depletion, air pollution, loss of ecosystem services, and so on. This study examines the variance and heterogeneity in land surface temperature (LST) caused by LULC changes in Abha-Khamis Mushyet, Saudi Arabia, from 1990 to 2020. LULC was mapped using the support vector machine (SVM). The mono-window algorithm was used to calculate the land surface temperature (LST). To identify LST clusters, the local indicator of spatial association (LISA) model was applied to the spatiotemporal LST maps. In addition, the parallel coordinate plot (PCP) method was used to investigate the relationship between LST clusters and urban biophysical variables as a proxy for LULC. According to the LULC maps, urban areas increased by more than 330% between 1990 and 2018. Between 1990 and 2018, built-up areas had an 83.6% transitional probability. Furthermore, between 1990 and 2020, vegetation and agricultural land were converted into built-up areas at rates of 17.9% and 21.8%, respectively. Uneven LULC changes in built-up areas result in more LST hotspots. LST hotspots were associated with high NDBI but not with NDWI or NDVI. This study could assist policymakers in developing mitigation strategies for urban heat islands.
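
The LISA statistic used to delineate LST clusters is the local Moran's I; a compact sketch of its computation on a standardized LST vector with a row-standardized spatial weights matrix (an illustrative implementation, not the authors' code):

```python
import numpy as np

def local_morans_i(z, w):
    """Local Moran's I for each observation.

    z : 1-D array of LST values; w : (n, n) row-standardized spatial
    weights matrix. A large positive I_i with a high z_i flags a hot
    spot (a high value surrounded by high values)."""
    z = (z - z.mean()) / z.std()
    lag = w @ z               # spatially lagged standardized LST
    return z * lag

# Example with 4 pixels on a line, each neighboring the adjacent ones:
z = np.array([30.0, 31.0, 45.0, 46.0])
w = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])
print(local_morans_i(z, w))   # positive values at clustered extremes
```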

Keywords: land use land cover mapping, land surface temperature, support vector machine, LISA model, parallel coordinate plot

Procedia PDF Downloads 78
1273 Supply Chain Analysis with Product Returns: Pricing and Quality Decisions

Authors: Mingming Leng

Abstract:

Wal-Mart has allocated considerable human resources to its quality assurance program, in which the largest retailer serves its supply chains as a quality gatekeeper. Asda Stores Ltd., the second largest supermarket chain in Britain, is now investing £27m in significantly increasing the frequency of quality control checks in its supply chains, thus enhancing quality across its fresh food business. Moreover, Tesco, the largest British supermarket chain, has already constructed a quality assessment center to carry out its gatekeeping responsibility. Motivated by the above practices, we consider a supply chain in which a retailer plays the gatekeeping role in quality assurance by identifying defects among a manufacturer's products prior to selling them to consumers. The impact of a retailer's gatekeeping activity on pricing and quality assurance in a supply chain has not been investigated in the operations management area. We draw a number of managerial insights that are expected to help practitioners judiciously consider the quality gatekeeping effort at the retail level. As in practice, when the retailer identifies a defective product, she immediately returns it to the manufacturer, who then replaces the defect with a good quality product and pays a penalty to the retailer. If the retailer does not recognize a defect but sells it to a consumer, then the consumer will identify the defect and return it to the retailer, who then passes the returned 'unidentified' defect to the manufacturer. The manufacturer again incurs a penalty cost. Accordingly, we analyze a two-stage pricing and quality decision problem, in which the manufacturer and the retailer bargain over the manufacturer's average defective rate and wholesale price at the first stage, and the retailer decides on her optimal retail price and gatekeeping intensity at the second stage. We also compare the results when the retailer performs quality gatekeeping with those when the retailer does not. Our supply chain analysis exposes some important managerial insights. For example, the retailer's quality gatekeeping can effectively reduce the channel-wide defective rate if her penalty charge for each identified defect is larger than or equal to the market penalty for each unidentified defect. When the retailer implements quality gatekeeping, the change in the negotiated wholesale price depends only on the manufacturer's 'individual' benefit, and the change in the retailer's optimal retail price is related only to the channel-wide benefit. The retailer is willing to take on the quality gatekeeping responsibility when the impact of quality relative to retail price on demand is high and/or the retailer has strong bargaining power. We conclude that the retailer's quality gatekeeping can help reduce the defective rate for consumers, and this becomes more significant when the retailer's bargaining position in her supply chain is stronger. Retailers with stronger bargaining powers can benefit more from their quality gatekeeping in supply chains.

Keywords: bargaining, game theory, pricing, quality, supply chain

Procedia PDF Downloads 277
1272 Nonlinear Vibration of FGM Plates Subjected to Acoustic Load in Thermal Environment Using Finite Element Modal Reduction Method

Authors: Hassan Parandvar, Mehrdad Farid

Abstract:

In this paper, a finite element model is presented for the large amplitude vibration of functionally graded material (FGM) plates subjected to combined random pressure and thermal load. The material properties of the plates are assumed to vary continuously in the thickness direction according to a simple power law distribution in terms of the volume fractions of the constituents. The material properties depend on the temperature, whose distribution along the thickness can be expressed explicitly. The von Kármán large-deflection strain-displacement relations and the extended Hamilton's principle are used to obtain the governing system of equations of motion in structural node degrees of freedom (DOF) using the finite element method. A three-node triangular Mindlin plate element with a shear correction factor is used. The nonlinear equations of motion in structural degrees of freedom are reduced using the modal reduction method. The reduced equations of motion are solved numerically by a 4th-order Runge-Kutta scheme. In this study, the random pressure is generated using the Monte Carlo method. The modeling is verified, and the nonlinear dynamic response of FGM plates is studied for various values of the volume fraction and sound pressure level under different thermal loads. Snap-through type behavior of the FGM plates is studied as well.
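
The power-law grading referred to above is conventionally written as follows (a standard form, assumed here since the abstract does not spell it out), with P any effective property, subscripts c and m for the ceramic and metal constituents, h the plate thickness and n the volume fraction exponent:

```latex
P(z) = \left(P_c - P_m\right)\left(\frac{z}{h} + \frac{1}{2}\right)^{n} + P_m,
\qquad -\frac{h}{2} \le z \le \frac{h}{2}
```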

Keywords: nonlinear vibration, finite element method, functionally graded material (FGM) plates, snap-through, random vibration, thermal effect

Procedia PDF Downloads 262
1271 The Validation of RadCalc for Clinical Use: An Independent Monitor Unit Verification Software

Authors: Junior Akunzi

Abstract:

In patient treatment planning quality assurance for 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT or RapidArc), the independent monitor unit verification calculation (MUVC) is an indispensable part of the process. For 3D-CRT treatment planning, the MUVC can be performed manually by applying the standard ESTRO formalism. However, due to the complex shapes and the number of beams in advanced treatment planning techniques such as RapidArc, manual independent MUVC is inadequate. Therefore, commercially available software such as RadCalc can be used to perform the MUVC for complex treatment plans. Indeed, RadCalc (version 6.3, LifeLine Inc.) uses a simplified Clarkson algorithm to compute the dose contribution of individual RapidArc fields to the isocenter. The purpose of this project is the validation of RadCalc for 3D-CRT and RapidArc treatment planning dosimetry quality assurance at the Antoine Lacassagne center (Nice, France). Firstly, the interfaces between RadCalc and our treatment planning systems (TPS), Isogray (version 4.2) and Eclipse (version 13.6), were checked for data transfer accuracy. Secondly, we created test plans in both Isogray and Eclipse featuring open fields, wedged fields, and irregular MLC fields. These test plans were transferred from the TPSs, following the DICOM RT radiotherapy protocol, to RadCalc and to the linac via Mosaiq (version 2.5). Measurements were performed in a water phantom using a PTW cylindrical semiflex ionisation chamber (0.3 cm³, 31010) and compared with the TPS and RadCalc calculations. Finally, 30 3D-CRT plans and 40 RapidArc plans created from patient CT scans were recalculated using the CT scan of a solid PMMA water-equivalent phantom for 3D-CRT and the Octavius II phantom (PTW) CT scan for RapidArc. We then measured the doses delivered to these phantoms for each plan with a 0.3 cm³ PTW 31010 cylindrical semiflex ionisation chamber (3D-CRT) and a 0.015 cm³ PTW PinPoint ionisation chamber (RapidArc). For our test plans, good agreement was found between calculation (RadCalc and TPSs) and measurement (mean: 1.3%; standard deviation: ±0.8%). Regarding the patient plans, the measured doses were compared to the calculations in RadCalc and in our TPSs. Moreover, the RadCalc calculations were compared to the Isogray and Eclipse ones. Agreement better than (2.8%; ±1.2%) was found between RadCalc and the TPSs. As for the comparison between calculation and measurement, the agreement for all of our plans was better than (2.3%; ±1.1%). The independent MU verification calculation software RadCalc has thus been validated for clinical use for both the 3D-CRT and RapidArc techniques. Perspectives for this project include the validation of RadCalc for the Tomotherapy machine installed at the centre Antoine Lacassagne.
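
For orientation, a manual MUVC in the ESTRO-style formalism amounts to inverting the dose equation for a point at depth d on the beam axis; schematically (an illustrative form only, as the exact factor set depends on the local protocol):

```latex
MU = \frac{D(d, s)}{\dot{D}_{ref}\; S_c(c)\; S_p(s)\; TPR(d, s)\; WF\; TF\; ISF},
```

where \(\dot{D}_{ref}\) is the reference dose rate per MU, \(S_c\) and \(S_p\) the collimator and phantom scatter factors for collimator setting c and field size s, \(TPR\) the tissue-phantom ratio, \(WF\) and \(TF\) the wedge and tray factors, and \(ISF\) the inverse-square correction for the treatment distance.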

Keywords: 3D conformational radiotherapy, intensity modulated radiotherapy, monitor unit calculation, dosimetry quality assurance

Procedia PDF Downloads 216
1270 Higher Education Accreditation: Foreign Experience for Ukraine

Authors: Dmytro Symak

Abstract:

Experience in other countries shows that the role of accreditation of higher education, as one type of quality assurance process for the provision of educational services, is increasing. This has been the experience of highly developed countries such as the USA, Canada, France and Germany, because without a proper quality assurance process it is impossible to achieve a successful future for the nation and the state. In most countries, the function of higher education accreditation is performed by public authorities, in particular the Ministry of Education. In the US, however, the quality assurance process is independent of the government and implemented by a private non-governmental organization, the Council for Higher Education Accreditation. In France, the main body that carries out accreditation of higher education is the Ministry of National Education. Part of the Bologna process is the mutual recognition and accreditation of degrees. While higher education institutions issue diplomas, the ministry awards the title; this is the main level of accreditation, awarded automatically to state universities. Beyond it, the major levels of higher education accreditation in France are: accreditation for a visa (second-level accreditation) and recognition of accreditation (third-level accreditation). In some areas of education, the ministry adopts formal recommendations from specific bodies for accreditation, but there are also some exceptions. Thus, some French educational institutions, mainly the large business schools, seek non-French accreditation. These include, for example, the Association to Advance Collegiate Schools of Business, the Association of MBAs, the European Foundation for Management Development, and the European Quality Improvement System, a prestigious EFMD programme accreditation system. The German accreditation system is also noteworthy. The primary body here is the Conference of Ministers of Education and Culture of the Länder in the Federal Republic of Germany (Kultusministerkonferenz, KMK), established in 1948 by agreement between the states of the Federal Republic of Germany. Among its main responsibilities is ensuring quality and continuity of development in higher education. In Germany, bachelor's and master's programs must be accredited in accordance with the resolutions of the Kultusministerkonferenz. In Ukraine, higher education accreditation is carried out by the Ministry of Education, Youth and Sports of Ukraine at four main levels. Ukraine's legislation on higher education is based on the Constitution of Ukraine and consists of the laws of Ukraine 'On Education', 'On Scientific and Technical Activity', 'On Higher Education' and other legal acts, and lies entirely within the competence of the state. This leads to considerable centralization and bureaucratization of the process. Thus, the analysis of foreign experience leads to the conclusion that reforming the system of accreditation and quality of higher education in Ukraine, with a view to its integration into the global space, requires solving a number of problems in the following areas: improving the system of state certification and licensing; optimizing the network of higher education institutions; and creating both governmental and non-governmental organizations to monitor the process of higher education in Ukraine.

Keywords: higher education, accreditation, decentralization, education institutions

Procedia PDF Downloads 337
1269 Rigorous Photogrammetric Push-Broom Sensor Modeling for Lunar and Planetary Image Processing

Authors: Ahmed Elaksher, Islam Omar

Abstract:

Accurate geometric relation algorithms are imperative in Earth and planetary satellite and aerial image processing, particularly for high-resolution images used for topographic mapping. Most of these satellites carry push-broom sensors: optical scanners equipped with linear arrays of CCDs, which have been deployed on most Earth observation satellites. In addition, the LROC is equipped with two push-broom NACs that provide 0.5-meter-scale panchromatic images over a 5 km swath of the Moon. The HiRISE carried by the MRO and the HRSC carried by MEX are examples of push-broom sensors that produce images of the surface of Mars. Sensor models developed in photogrammetry relate image space coordinates in two or more images with the 3D coordinates of ground features. Rigorous sensor models use the actual interior and exterior orientation parameters of the camera, unlike approximate models. In this research, we generate a generic push-broom sensor model to process imagery acquired by linear array cameras and investigate its performance, advantages, and disadvantages in generating topographic models for the Earth, Mars, and the Moon. We also compare and contrast the utilization, effectiveness, and applicability of available photogrammetric techniques and softcopies with the developed model. We start by defining an image reference coordinate system to unify the image coordinates from all three arrays. The transformation from an image coordinate system to the reference coordinate system involves a translation and three rotations. For any image point within the linear array, its image reference coordinates, the coordinates of the exposure center of the array in the ground coordinate system at the imaging epoch (t), and the corresponding ground point coordinates are related through the collinearity condition, which states that all three points must lie on the same line. The rotation angles for each CCD array at epoch t are defined and included in the transformation model. The exterior orientation parameters of an image line, i.e., the coordinates of the exposure station and the rotation angles, are computed by polynomial interpolation functions in time (t). The parameter t is the time at a certain epoch from a certain orbit position. Depending on the types of observations, coordinates and parameters may be treated as knowns or unknowns in various situations. The unknown coefficients are determined in a bundle adjustment. The orientation process starts by extracting the sensor position, orientation, and raw images from the PDS. The parameters of each image line are then estimated and imported into the push-broom sensor model. We also define tie points between image pairs to aid the bundle adjustment model, determine the refined camera parameters, and generate highly accurate topographic maps. The model was tested on different satellite images such as IKONOS, QuickBird, WorldView-2, and HiRISE. It was found that the accuracy of our model is comparable to that of commercial and open-source software, the computational efficiency of the developed model is high, and the model can be used in different environments with various sensors, although the implementation process remains cost- and effort-consuming.
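
For a push-broom line image, the collinearity condition referred to above takes the following standard form (notation assumed here for illustration): each ground point (X, Y, Z) imaged at epoch t satisfies

```latex
\begin{aligned}
0 &= -f\,\frac{r_{11}(t)(X - X_S(t)) + r_{21}(t)(Y - Y_S(t)) + r_{31}(t)(Z - Z_S(t))}
             {r_{13}(t)(X - X_S(t)) + r_{23}(t)(Y - Y_S(t)) + r_{33}(t)(Z - Z_S(t))},\\[4pt]
y &= -f\,\frac{r_{12}(t)(X - X_S(t)) + r_{22}(t)(Y - Y_S(t)) + r_{32}(t)(Z - Z_S(t))}
             {r_{13}(t)(X - X_S(t)) + r_{23}(t)(Y - Y_S(t)) + r_{33}(t)(Z - Z_S(t))},
\end{aligned}
```

where the along-track image coordinate is zero (each line is a one-dimensional perspective image), \((X_S(t), Y_S(t), Z_S(t))\) and the rotation matrix elements \(r_{ij}(t)\) are the exterior orientation parameters modeled as low-order polynomials in t, and f is the focal length.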

Keywords: photogrammetry, push-broom sensors, IKONOS, HiRISE, collinearity condition

Procedia PDF Downloads 63
1268 Discriminating Between Energy Drinks and Sports Drinks Based on Their Chemical Properties Using Chemometric Methods

Authors: Robert Cazar, Nathaly Maza

Abstract:

Energy drinks and sports drinks are quite popular among young adults and teenagers worldwide. Some concerns regarding their health effects, particularly those of the energy drinks, have been raised based on scientific findings. Differentiating between these two types of drinks by means of their chemical properties seems to be an instructive task, and chemometrics provides the most appropriate strategy to do so. In this study, a discrimination analysis of energy and sports drinks has been carried out applying chemometric methods. A set of eleven samples of available commercial brands of drinks, seven energy drinks and four sports drinks, was collected. Each sample was characterized by eight chemical variables (carbohydrates, energy, sugar, sodium, pH, degrees Brix, density, and citric acid). The data set was standardized and examined by exploratory chemometric techniques such as clustering and principal component analysis. As a preliminary step, a variable selection was carried out by inspecting the variable correlation matrix. It was detected that some variables are redundant, so they can be safely removed, leaving only five variables that are sufficient for this analysis: sugar, sodium, pH, density, and citric acid. Then, a hierarchical clustering employing the average-linkage criterion and the Euclidean distance metric was performed. It perfectly separates the two types of drinks, since the resulting dendrogram, cut at the 25% similarity level, sorts the samples into two well-defined groups, one containing the energy drinks and the other the sports drinks. Further assurance of the complete discrimination is provided by the principal component analysis. The projection of the data set onto the first two principal components, which retain 71% of the data information, permits visualization of the distribution of the samples in the two groups identified in the clustering stage. Since the first principal component is the discriminating one, inspection of its loadings allows characterization of these groups. The energy drinks group possesses medium to high values of density, citric acid, and sugar. The sports drinks group, on the other hand, exhibits low values of those variables. In conclusion, the application of chemometric methods to a data set featuring the chemical properties of a number of energy and sports drinks provides an accurate, dependable way to discriminate between these two types of beverages.
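
A sketch of this workflow on a standardized matrix X of shape (11 samples × 5 variables) could look like this (hypothetical; the data file and values are illustrative, not reproduced from the study):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# X: rows = drink samples, columns = sugar, sodium, pH, density, citric acid
X = np.loadtxt("drinks.csv", delimiter=",")   # hypothetical data file
Xs = StandardScaler().fit_transform(X)        # autoscale the variables

# Hierarchical clustering: average linkage with Euclidean distances
Z = linkage(Xs, method="average", metric="euclidean")
labels = fcluster(Z, t=2, criterion="maxclust")  # expect 2 groups

# PCA: project onto the first two principal components
pca = PCA(n_components=2)
scores = pca.fit_transform(Xs)
print("explained variance:", pca.explained_variance_ratio_.sum())
print("PC1 loadings:", pca.components_[0])    # characterizes the groups
```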

Keywords: chemometrics, clustering, energy drinks, principal component analysis, sports drinks

Procedia PDF Downloads 108
1267 An Improved K-Means Algorithm for Gene Expression Data Clustering

Authors: Billel Kenidra, Mohamed Benmohammed

Abstract:

Data mining techniques used in the field of clustering are a subject of active research that assists in biological pattern recognition and the extraction of new knowledge from raw data. Clustering is the act of partitioning an unlabeled dataset into groups of similar objects. Each group, called a cluster, consists of objects that are similar to one another and dissimilar to objects of other groups. Several clustering methods are based on partitional clustering. This category attempts to directly decompose the dataset into a set of disjoint clusters, leading to an integer number of clusters that optimizes a given criterion function. The criterion function may emphasize a local or a global structure of the data, and its optimization is an iterative relocation procedure. The K-Means algorithm is one of the most widely used partitional clustering techniques. Since K-Means is extremely sensitive to the initial choice of centers, and a poor choice of centers may lead to a local optimum quite inferior to the global optimum, we propose a strategy to initialize the K-Means centers. The improved K-Means algorithm is compared with the original K-Means, and the results show how significantly the efficiency has been improved.
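
The abstract does not detail the proposed initialization, but a representative distance-based strategy of this kind (k-means++-style seeding, shown purely as an illustration, not as the authors' method) spreads the initial centers apart:

```python
import numpy as np

def seed_centers(X, k, rng=np.random.default_rng(0)):
    """Distance-based center initialization (k-means++-style):
    each new center is drawn with probability proportional to the
    squared distance from the nearest center chosen so far."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

# Passing these seeds to standard K-Means (e.g., scikit-learn's
# KMeans(init=seed_centers(X, k), n_init=1)) typically avoids the poor
# local optima that purely random initialization can produce.
```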

Keywords: microarray data mining, biological pattern recognition, partitional clustering, k-means algorithm, centroid initialization

Procedia PDF Downloads 190
1266 Analysis of Seismic Waves Generated by Blasting Operations and the Response of Buildings

Authors: S. Ziaran, M. Musil, M. Cekan, O. Chlebo

Abstract:

The paper analyzes the response of buildings and industrial structures to seismic waves (low frequency mechanical vibration) generated by blasting operations. The principles of seismic analysis can be applied to different kinds of excitation, such as earthquakes, wind, explosions, random excitation from local transportation, periodic excitation from large rotating machines and/or machines with reciprocating motion, metal forming processes such as forging, shearing and stamping, chemical reactions, construction and earth moving work, and other strong deterministic and random energy sources caused by human activities. The article deals with the response of a residential home to seismic, low-frequency mechanical vibrations generated by nearby blasting operations. The goal was to determine the fundamental natural frequencies of the measured structure; it is therefore important to determine the resonant frequencies in order to design suitable modal damping. The article also analyzes the package of seismic waves generated by blasting (primary P-waves and secondary S-waves) and investigates the transfer regions. For the detection of seismic waves resulting from an explosion, the Fast Fourier Transform (FFT) and modal analysis are used in the frequency domain, and the signal was also acquired and analyzed in the time domain. In the conclusions, the measured results of seismic waves caused by blasting in a nearby quarry and their effect on a nearby structure (a house) are analyzed. The response of the house, including the fundamental natural frequency and possible fatigue damage, is also assessed.
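
As an indication of the frequency-domain step, a minimal sketch (with a hypothetical signal and sampling rate, not the paper's data) that extracts the dominant spectral peak of a recorded blast vibration:

```python
import numpy as np

fs = 1000.0                        # sampling rate (Hz), assumed
t = np.arange(0, 4.0, 1 / fs)
# Stand-in for a measured velocity record of a blast event:
signal = (np.exp(-2 * t) * np.sin(2 * np.pi * 8.5 * t)
          + 0.02 * np.random.default_rng(1).standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(signal * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(spectrum)]
print(f"dominant frequency: {peak:.2f} Hz")
# A peak that persists across measurements is a candidate fundamental
# natural frequency of the structure.
```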

Keywords: building structure, seismic waves, spectral analysis, structural response

Procedia PDF Downloads 400
1265 Accuracy/Precision Evaluation of Excalibur I: A Neurosurgery-Specific Haptic Hand Controller

Authors: Hamidreza Hoshyarmanesh, Benjamin Durante, Alex Irwin, Sanju Lama, Kourosh Zareinia, Garnette R. Sutherland

Abstract:

This study reports on a proposed method to evaluate the accuracy and precision of Excalibur I, a neurosurgery-specific haptic hand controller designed and developed at Project neuroArm. Efficient and successful robot-assisted telesurgery is considerably contingent on how accurately and precisely a haptic hand controller (master/local robot) can convey the kinematic indices of motion, i.e., position and orientation, from the surgeon’s upper limb to the slave/remote robot. A proposed test rig was designed and manufactured according to standard ASTM F2554-10 to determine the accuracy and precision range of Excalibur I at four different locations within its workspace: the central workspace, extreme forward, far left and far right. The test rig was metrologically characterized by a coordinate measuring machine (accuracy and repeatability < ±5 µm). Only the serial linkage of the haptic device was examined, due to the use of the Structural Length Index (SLI). The results indicate that accuracy decreases when moving from the central area of the workspace towards its borders. In a comparative study, Excalibur I performs on par with the PHANToM Premium™ 3.0 and more accurately/precisely than the PHANToM Premium™ 1.5. The error in the Cartesian coordinate system shows a dominant component in one direction (δx, δy or δz) for movements on horizontal, vertical and inclined surfaces. The average error magnitude over three attempts was recorded, considering all three error components. This research is a first promising step toward quantifying the kinematic performance of Excalibur I.
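
In such evaluations, accuracy and precision are typically separated as the mean and the spread of the positioning error against the metrology reference; a small sketch of that bookkeeping (illustrative numbers, not the study's actual data pipeline):

```python
import numpy as np

# commanded: reference poses from the CMM-characterized test rig (mm)
# measured: corresponding positions reported by the hand controller (mm)
commanded = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [10.0, 10.0, 5.0]])
measured = np.array([[0.02, -0.01, 0.00],
                     [10.05, 0.02, -0.01],
                     [10.04, 10.06, 5.03]])

err = measured - commanded                    # per-axis components dx, dy, dz
magnitude = np.linalg.norm(err, axis=1)
accuracy = magnitude.mean()                   # systematic offset (accuracy)
precision = magnitude.std(ddof=1)             # repeatability spread (precision)
dominant = np.abs(err).mean(axis=0).argmax()  # dominant error axis
print(f"accuracy={accuracy:.3f} mm, precision={precision:.3f} mm, "
      f"dominant axis={'xyz'[dominant]}")
```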

Keywords: accuracy, advanced metrology, hand controller, precision, robot-assisted surgery, tele-operation, workspace

Procedia PDF Downloads 336
1264 An In-Depth Experimental Study of Wax Deposition in Pipelines

Authors: Arias M. L., D’Adamo J., Novosad M. N., Raffo P. A., Burbridge H. P., Artana G.

Abstract:

Shale oils are highly paraffinic and, consequently, can create wax deposits that foul pipelines during transportation. Several factors must be considered when designing pipelines or treatment programs that prevent wax deposition, including the chemical species in the crude oil, flow rates, pipe diameters and temperature. This paper describes the wax deposition study carried out within the framework of Y-TEC's flow assurance projects, as part of the process of achieving a better understanding of wax deposition issues. Laboratory experiments were performed on a medium-size wax deposition loop, 1 inch in diameter and 15 m long, equipped with a solid detector system, an online microscope to visualize crystals, and temperature and pressure sensors along the loop pipe. A baseline test was performed with diesel with no paraffin or additive content. Tests were undertaken at different temperatures of the circulating and cooling fluids under different flow conditions. Then, a solution formed by adding paraffin to the diesel was considered, and tests varying the flow rate and cooling rate were run again. Viscosity, density, WAT (Wax Appearance Temperature) by DSC (Differential Scanning Calorimetry), pour point and cold finger measurements were carried out to determine the physical properties of the working fluids. The results obtained in the loop were analyzed through momentum balance and heat transfer models. To determine possible paraffin deposition scenarios, the temperature and pressure output signals of the loop were studied and compared with static laboratory WAT methods. Finally, we scrutinized the effect of adding a chemical inhibitor to the working fluid on the dynamics of the wax deposition process in the loop.

Keywords: paraffin deposition, flow assurance, chemical inhibitors, flow loop

Procedia PDF Downloads 105
1263 Risk Measurement and Management Strategies in Poultry Farm Enterprises in Imo State, Nigeria

Authors: Donatus Otuiheoma Ohajianya, Augusta Onyekachi Unamba

Abstract:

This study analyzed risk among poultry farm enterprises in Imo State, Nigeria. Specifically, it examined the sources of risk, the major risks associated with poultry farm enterprises, and the risk-reducing strategies among the poultry farm enterprises in the study area. Primary data collected in 2015 with a validated questionnaire from 120 proportionately and randomly selected poultry farm enterprises were used for the study. The data were analyzed with descriptive statistics and the W-statistic, which was validated with the Pearson criterion (χ²). The results showed that the major risk sources affecting poultry farm enterprises were production, marketing, financial and political risks, in that order. A W-statistic value of 0.789 was found, which was verified with the Pearson criterion to obtain a calculated χ² value of 4.65, lower than the critical χ² value of 11.07 at the 5% significance level. The risk-reducing strategies were found to be diversification, savings, cooperative marketing, borrowing, and insurance. It is recommended that government and donor agencies make policies aimed at encouraging poultry farm enterprises to adopt the highlighted risk-reducing strategies in risk management, in order to improve their productivity and farm income.
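
The W-statistic here is presumably Kendall's coefficient of concordance (an assumption; the abstract does not name it explicitly), which for m respondents ranking n items is

```latex
W = \frac{12\,S}{m^{2}\left(n^{3} - n\right)}, \qquad
\chi^{2}_{calc} = m\,(n - 1)\,W,
```

where S is the sum of squared deviations of the rank totals from their mean, and the calculated χ² is compared with the critical χ² at n − 1 degrees of freedom.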

Keywords: risk, measurement, management, poultry farm, Imo State

Procedia PDF Downloads 300
1262 Distance Learning in Vocational Mass Communication Courses during COVID-19 in Kuwait: A Media Richness Perspective of Students’ Perceptions

Authors: Husain A. Murad, Ali A. Dashti, Ali Al-Kandari

Abstract:

The outbreak of the coronavirus during the spring semester of 2020 brought new challenges to the teaching of vocational mass communication courses at universities in Kuwait. Using media richness theory (MRT), this study examines the responses of 252 university students on mass communication programs to a questionnaire about their perceptions and preferences concerning modes of online instruction for vocational courses, focusing on the four factors of MRT: immediacy of feedback, capacity for personal focus, conveyance of multiple cues, and variety of language. The outcomes show that immediacy of feedback predicted all of the criterion variables: the suitability of distance learning (DL) for teaching vocational courses, students’ sentiments toward DL, perceived ease of evaluation of DL coursework, and the likelihood of retaking DL courses. Capacity for personal focus was another positive predictor; it predicted students’ sentiments toward DL and the likelihood of retaking DL courses. The outcomes are discussed in relation to the implications for using DL, as well as an agenda for DL research.
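
The language of “predictors” and “criterion variables” above suggests a regression-style analysis; a minimal sketch of such an analysis is given below. The data, coefficients and variable names are synthetic placeholders, not the study’s survey responses or reported effect sizes.

```python
import numpy as np

# Hypothetical OLS illustration: predicting one criterion variable (e.g., the
# suitability of DL) from the four MRT factors, with synthetic survey scores.
rng = np.random.default_rng(0)
n = 252                                       # sample size as in the study
X = rng.uniform(1, 5, size=(n, 4))            # immediacy, focus, cues, language
beta_true = np.array([0.6, 0.3, 0.1, 0.05])   # assumed effect sizes for the demo
y = 1.0 + X @ beta_true + rng.normal(0, 0.5, n)

X_design = np.column_stack([np.ones(n), X])   # prepend an intercept column
coef, *_ = np.linalg.lstsq(X_design, y, rcond=None)
labels = ["intercept", "immediacy", "personal focus", "multiple cues", "language"]
for name, b in zip(labels, coef):
    print(f"{name:>15}: {b:+.3f}")
```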

Keywords: distance learning, media richness theory, traditional learning, vocational media courses

Procedia PDF Downloads 74
1261 Addressing Supply Chain Data Risk with Data Security Assurance

Authors: Anna Fowler

Abstract:

When considering assets that may need protection, the mind turns to homes, cars, and investment funds; in most cases, those assets can be covered by security systems and insurance. Data is rarely the first thought, even though data is at the core of most supply chain operations. It includes trade secrets, personally identifiable information (PII), and consumer data that can be used to enhance the overall experience. Data is a critical element of supply chain success and should be one of the most carefully protected. In the supply chain industry, there are two major misconceptions about protecting data: (i) “We do not manage or store confidential/personally identifiable information (PII)”, and (ii) reliance on third-party vendor security. These misconceptions can significantly derail organizational efforts to protect data adequately across environments. The first misconception is dangerous because it implies the organization lacks data literacy: employees zero in on PII while neglecting trade secret theft and the complete breakdown of information sharing. The second forges an ideology that reliance on third-party vendor security will absolve the company of security risk; in fact, third-party risk has grown over the last two years and is one of the major causes of data security breaches. It is important to understand that protecting data requires a holistic approach, not merely the purchase of a data loss prevention (DLP) tool; a tool is not a solution. To protect supply chain data, start by providing data literacy training to all employees and negotiating the security component of vendor contracts to require data literacy training for individuals and teams that may access company data. It is also important to understand the origin of the data and its movement, including risk identification; to ensure processes effectively incorporate data security principles; and to evaluate and select DLP solutions that address specific concerns and use cases in conjunction with data visibility. These approaches are part of a broader solutions framework called Data Security Assurance (DSA). The DSA framework examines all of the processes across the supply chain, including their corresponding architecture and workflows, employee data literacy, governance and controls, integration between third- and fourth-party vendors, DLP as a solution concept, and policies related to data residency. Within cloud environments, this framework is crucial for the supply chain industry to avoid regulatory implications and third/fourth-party risk.

Keywords: security by design, data security architecture, cybersecurity framework, data security assurance

Procedia PDF Downloads 88
1260 Assessing Walkability in New Cities around Cairo

Authors: Lobna Ahmed Galal

Abstract:

Modal integration has been given minimal consideration in the cities of developing countries, alongside the declining dominance of public transport and the predominance of informal transport: the modal share of informal taxis in Greater Cairo increased from 6% in 1987 to 37% in 2001 and has since risen even higher. Informal and non-motorized modes of transport act as gap fillers by feeding other modes, not by design or choice but for lack of accessible and affordable public transport. Yet non-motorized transport remains peripheral, with minimal priority in urban planning and investment and no strong policies to support it. For authorities, development is associated with technology and motorized transport, so promoting non-motorized transport may be seen as contrary to development; a social stigma also attaches to non-motorized transport, which is viewed as a travel mode for the poor. As a city of a developing country, Cairo has poor-quality infrastructure for non-motorized transport: dedicated corridors are absent or, where they exist, often encroached upon for commercial purposes; traffic lanes are widened at the expense of sidewalks; footpaths are missing or overcrowded; and poor lighting makes walking unsafe. Financial support for such facilities is also lacking, as it is often considered beyond the city’s means. This paper deals with the objective measurement of the built environment as it relates to walking in some neighborhoods of the new cities around Cairo, and compares the results of those objective measures with the results of a self-reported survey. The paper’s first objective is to show how the ‘walkability of community neighborhoods’ index works in the context of neighborhoods of new cities around Cairo. The objective measurement procedure has high potential to be carried out using GIS.
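
One common way to operationalize such a neighborhood walkability index is as a weighted sum of standardized built-environment measures; the sketch below follows the widely used Frank et al.-style formulation. The variables, weights and neighborhood values are assumptions for illustration, not this study’s actual index specification.

```python
import numpy as np

def walkability_index(density, mix, intersections, retail_far):
    """Composite walkability per neighborhood as a weighted sum of z-scores
    (intersection density double-weighted, as in common formulations)."""
    z = lambda v: (v - v.mean()) / v.std()     # standardize each measure
    d, m, i, r = (z(np.asarray(v, float))
                  for v in (density, mix, intersections, retail_far))
    return 2 * i + d + m + r

# Illustrative values for three hypothetical neighborhoods
scores = walkability_index(density=[60, 120, 45],       # dwellings per hectare
                           mix=[0.3, 0.7, 0.2],         # land-use entropy, 0-1
                           intersections=[40, 90, 30],  # intersections per km^2
                           retail_far=[0.1, 0.4, 0.05]) # retail floor-area ratio
print(scores)  # higher score = more walkable
```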

Keywords: assessing, built environment, Cairo, walkability

Procedia PDF Downloads 383
1259 Constitutive Modeling of Different Types of Concrete under Uniaxial Compression

Authors: Mostafa Jafarian Abyaneh, Khashayar Jafari, Vahab Toufigh

Abstract:

The cost of experiments on different types of concrete has raised the demand for predicting their behavior with numerical analysis. In this research, an advanced numerical model is presented to predict the complete elastic-plastic behavior of polymer concrete (PC), high-strength concrete (HSC) and high-performance concrete (HPC), with different steel fiber contents, under uniaxial compression. The accuracy of the numerical response was satisfactory compared with conventional simple models such as Mohr-Coulomb and Drucker-Prager. In order to predict the complete elastic-plastic behavior of the specimens, including softening behavior, the disturbed state concept (DSC) was implemented through nonlinear finite element analysis (NFEA) with the hierarchical single surface (HISS) failure criterion, a failure surface without any singularity.
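
The essence of the DSC is a disturbance function D that blends a “relative intact” (RI) response with a residual “fully adjusted” (FA) response, so that post-peak softening emerges as D saturates. The one-dimensional sketch below is illustrative only: the material parameters are invented, and the paper’s actual model couples the DSC with the HISS surface inside a nonlinear finite element code.

```python
import numpy as np

def dsc_stress(strain, E=30e3, sigma_fa=30.0, Du=0.99, A=1500.0, Z=1.0):
    """Disturbed state concept in 1D: observed stress interpolates between a
    linear-elastic RI response and a residual FA stress via the disturbance
    D = Du * (1 - exp(-A * strain**Z)). All parameter values are assumed."""
    strain = np.asarray(strain, dtype=float)
    sigma_ri = E * strain                        # relative intact response, MPa
    D = Du * (1.0 - np.exp(-A * strain**Z))      # disturbance grows with strain
    return (1.0 - D) * sigma_ri + D * sigma_fa   # observed (blended) response

eps = np.linspace(0.0, 0.004, 9)
for e, s in zip(eps, dsc_stress(eps)):
    print(f"strain {e:.4f} -> stress {s:6.2f} MPa")  # peak, then gentle softening
```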

Keywords: disturbed state concept (DSC), hierarchical single surface (HISS) failure criterion, high performance concrete (HPC), high-strength concrete (HSC), nonlinear finite element analysis (NFEA), polymer concrete (PC), steel fibers, uniaxial compression test

Procedia PDF Downloads 311
1258 Application of a Universal Distortion Correction Method in Stereo-Based Digital Image Correlation Measurement

Authors: Hu Zhenxing, Gao Jianxin

Abstract:

Stereo-based digital image correlation (also referred to as three-dimensional (3D) digital image correlation (DIC)) is a technique for measuring both the 3D shape and the surface deformation of a component, and it has found increasing application in academia and industry. The accuracy of the reconstructed coordinates depends on many factors, such as the configuration of the setup, stereo-matching and distortion. Most of these factors have been investigated in the literature. For instance, the configuration of a binocular vision system determines the systematic errors, while stereo-matching errors depend on the speckle quality and the matching algorithm and can be controlled only within a limited range. Distortion, moreover, is non-linear, particularly in complex image acquisition systems, so distortion correction must be considered carefully. The distortion function is also difficult to formulate with conventional models in complex image acquisition systems involving microscopes and other complex lenses, and errors in the distortion correction propagate to the reconstructed 3D coordinates. To address this problem, an accurate mapping method based on 2D B-spline functions is proposed in this study. The mapping functions convert the distorted coordinates into an ideal plane free of distortion. This approach is suitable for any image acquisition distortion model. It is used as a prior step that converts each distorted coordinate to its ideal position, enabling the camera to conform to the pin-hole model. A procedure for applying this approach to stereo-based DIC is presented. Using 3D speckle image generation, numerical simulations were carried out to compare the accuracy of the conventional method and the proposed approach.
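
To make the mapping idea concrete, the sketch below fits one 2D B-spline per coordinate to convert distorted image coordinates back to an ideal (pin-hole) plane, using a synthetic, radially distorted calibration grid. The smoothing-spline choice and the synthetic distortion model are assumptions for illustration, not the authors’ exact formulation.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Calibration data: ideal grid positions paired with their (synthetically)
# distorted image coordinates, e.g. as obtained from a dot target.
xi, yi = np.meshgrid(np.linspace(-1, 1, 15), np.linspace(-1, 1, 15))
xi, yi = xi.ravel(), yi.ravel()
r2 = xi**2 + yi**2
k = 0.05                                  # assumed radial distortion strength
xd, yd = xi * (1 + k * r2), yi * (1 + k * r2)

# One 2D B-spline per coordinate: maps distorted -> ideal plane
fx = SmoothBivariateSpline(xd, yd, xi, kx=3, ky=3)
fy = SmoothBivariateSpline(xd, yd, yi, kx=3, ky=3)

# Undistort an arbitrary measured point before stereo reconstruction
px, py = 0.52, -0.37
cx, cy = float(fx.ev(px, py)), float(fy.ev(px, py))
print(f"corrected: ({cx:.4f}, {cy:.4f})")
```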

Keywords: distortion, stereo-based digital image correlation, B-spline, 3D, 2D

Procedia PDF Downloads 498