Search results for: differential quadrature (GDQ) method
18982 Engineering Method to Measure the Impact Sound Improvement with Floor Coverings
Authors: Katarzyna Baruch, Agata Szelag, Jaroslaw Rubacha, Bartlomiej Chojnacki, Tadeusz Kamisinski
Abstract:
The methodology for measuring the reduction of transmitted impact sound by floor coverings placed on a massive floor is described in ISO 10140-3:2010. To carry out such tests, a standardised reverberation room separated from a second measuring room by a standard floor is required. The need for a special laboratory results in high cost and low accessibility of this measurement. The authors propose their own engineering method to measure the impact sound improvement of floor coverings. This method does not require standard rooms or a standard floor. This paper describes the measurement procedure of the proposed engineering method. Furthermore, verification tests were performed. Validation of the proposed method was based on an analytical model, a Statistical Energy Analysis (SEA) model, and empirical measurements. The results obtained were compared with the corresponding ones from ISO 10140-3:2010 measurements. The study confirmed the usefulness of the engineering method.
Keywords: building acoustics, impact noise, impact sound insulation, impact sound transmission, reduction of impact sound
18981 Stabilization of Rotational Motion of Spacecrafts Using Quantized Two Torque Inputs Based on Random Dither
Authors: Yusuke Kuramitsu, Tomoaki Hashimoto, Hirokazu Tahara
Abstract:
The control problem of underactuated spacecraft has attracted a considerable amount of interest. A control method for a spacecraft equipped with fewer than three control torques is useful when one of the three control torques has failed. On the other hand, the quantized control of systems is one of the important research topics of recent years. The random dither quantization method, which transforms a given continuous signal into a discrete signal by adding artificial random noise to the continuous signal before quantization, has also attracted a considerable amount of interest. The objective of this study is to develop a control method based on random dither quantization for stabilizing the rotational motion of a rigid spacecraft with two control inputs. In this paper, the effectiveness of the random dither quantization control method for the stabilization of the rotational motion of spacecraft with two torque inputs is verified by numerical simulations.
Keywords: spacecraft control, quantized control, nonlinear control, random dither method
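A minimal sketch of the random dither quantization step described in the abstract above: artificial uniform noise is added to the continuous signal before a uniform quantizer with step size Δ rounds it. The test signal, step size, and dither distribution are illustrative assumptions, not the torque commands or quantizer used in the paper.

```python
import numpy as np

def dithered_quantize(signal, step, rng):
    """Quantize a continuous signal with a uniform quantizer of step `step`,
    adding uniform random dither in [-step/2, step/2) before rounding."""
    dither = rng.uniform(-step / 2.0, step / 2.0, size=signal.shape)
    return step * np.round((signal + dither) / step)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
u = np.sin(2 * np.pi * t)                 # example continuous control signal
uq = dithered_quantize(u, step=0.25, rng=rng)

# On average the dithered quantizer reproduces the input (error has zero mean).
print("mean quantization error:", np.mean(uq - u))
```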
18980 Using the Cluster Computing to Improve the Computational Speed of the Modular Exponentiation in RSA Cryptography System
Authors: Te-Jen Chang, Ping-Sheng Huang, Shan-Ten Cheng, Chih-Lin Lin, I-Hui Pan, Tsung- Hsien Lin
Abstract:
The RSA system is a major contribution to encryption and decryption. It is based on modular exponentiation, which involves calculations on very large numbers. Operating on such large numbers is a very heavy burden for the CPU. To increase the computational speed, in addition to improving the algorithms themselves, such as the binary method, the sliding window method, the addition chain method, and so on, a computer cluster can be used. The cluster system is composed of laboratory computers on which MPICH2 is installed. The parallel procedures of modular exponentiation can be processed by combining the sliding window method with the addition chain method. This significantly reduces the computational time of modular exponentiation whose operands are more than 512 bits and even more than 1024 bits long.
Keywords: cluster system, modular exponentiation, sliding window, addition chain
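For reference, a minimal sketch of the binary (square-and-multiply) method mentioned above, the baseline that the sliding window and addition chain methods improve upon; the operand values are illustrative only.

```python
def modexp_binary(base, exponent, modulus):
    """Right-to-left binary (square-and-multiply) modular exponentiation."""
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                    # multiply when the current bit is 1
            result = (result * base) % modulus
        base = (base * base) % modulus      # square for the next bit
        exponent >>= 1
    return result

# 1024-bit style toy check against Python's built-in pow()
m = (1 << 1024) - 159   # an arbitrary large odd modulus, for illustration only
assert modexp_binary(7, 65537, m) == pow(7, 65537, m)
```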
18979 An Approach for Determining and Reducing Vehicle Turnaround Time for Outbound Logistics by Using Critical Path Method
Authors: Prajakta M. Wazat, D. N. Raut
Abstract:
The study concerns a fast moving consumer goods (FMCG) beverage company, in which the portion of the supply chain that deals with outbound logistics is taken up for improvement in order to reduce its logistics cost by using the critical path method (CPM). Logistics is a major portion of the supply chain for which customers are not willing to pay, as it adds cost to the product without adding value. In this study, it is necessary to ensure that products are delivered to clients at the right time while preserving high-quality standards from the beginning to the end of the supply chain. CPM is a logical sequencing method wherein the most efficient route is achieved by arranging the series of events. CPM enables the identification of critical activities in order to minimize delays and interruptions by providing a feasible solution.
Keywords: FMCG, supply chain, outbound logistics, vehicle turnaround time, critical path method, cost reduction
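A minimal sketch of the critical path calculation itself: a forward pass for earliest times, a backward pass for latest times, and zero-float activities forming the critical path. The activity names and durations below are hypothetical placeholders, not data from the case study.

```python
from collections import defaultdict

# Hypothetical outbound-logistics activities: name -> (duration, predecessors)
activities = {
    "load":     (2, []),
    "document": (1, []),
    "dispatch": (1, ["load", "document"]),
    "transit":  (6, ["dispatch"]),
    "unload":   (2, ["transit"]),
}

# Forward pass: earliest start/finish (dict order is topological here)
ES, EF = {}, {}
for name in activities:
    dur, preds = activities[name]
    ES[name] = max((EF[p] for p in preds), default=0)
    EF[name] = ES[name] + dur

project_end = max(EF.values())

# Backward pass: latest finish/start
successors = defaultdict(list)
for name, (_, preds) in activities.items():
    for p in preds:
        successors[p].append(name)

LF, LS = {}, {}
for name in reversed(list(activities)):
    dur, _ = activities[name]
    LF[name] = min((LS[s] for s in successors[name]), default=project_end)
    LS[name] = LF[name] - dur

critical_path = [n for n in activities if LS[n] - ES[n] == 0]
print("turnaround time:", project_end, "critical path:", critical_path)
```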
18978 A Comparative Study of the Physicochemical and Structural Properties of Quinoa Protein Isolate and Yellow Squat Shrimp Byproduct Protein Isolate through pH-Shifting Modification
Authors: María José Bugueño, Natalia Jaime, Cristian Castro, Diego Naranjo, Guido Trautmann, Mario Pérez-Won, Vilbett Briones-Labarca
Abstract:
Proteins play a crucial role in various prepared foods, including dairy products, drinks, emulsions, and ready meals. These food proteins are also naturally present in food waste and byproducts. The alkaline extraction and acid precipitation method is commonly used to extract proteins from plants and animals due to its product stability, cost-effectiveness, and ease of use. This study aimed to investigate the impact of pH-shifting at two different pH levels on the conformational changes affecting the physicochemical and functional properties of quinoa protein isolate (QPI) and yellow shrimp byproduct protein isolate (YSPI). The QPI and YSPI were extracted using the alkaline extraction-isoelectric precipitation method. The dispersions were adjusted to pH 4 or 12, stirred for 2 hours at 20°C to achieve a uniform dispersion, and then freeze-dried. Various analyses were conducted, including flexibility (F), free sulfhydryl content (Ho), emulsifying activity (EA), emulsifying capacity (EC), water holding capacity (WHC), oil holding capacity (OHC), intrinsic fluorescence, ultraviolet spectroscopy, differential scanning calorimetry (DSC), and Fourier transform infrared spectroscopy (FTIR) to assess the properties of the protein isolates. pH-shifting at pH 11 and 12 for QPI and YSPI, respectively, significantly improved the protein properties, while property modification of the samples treated under acidic conditions was less pronounced. The pH 11 and 12 treatments significantly improved F, Ho, EA, WHC, and OHC and produced clear changes in the intrinsic fluorescence, ultraviolet, DSC, and FTIR profiles. The increase in Ho was due to disulfide bond disruption, which produced more protein sub-units than the other treatments for both proteins. This study provides theoretical support for comprehensively elucidating the functional properties of protein isolates, promoting the application of plant proteins and marine byproducts. The pH-shifting process effectively improves the emulsifying properties and stability of QPI and YSPI, which can be considered potential plant-based or marine byproduct-based emulsifiers for use in the food industry.
Keywords: quinoa protein, yellow shrimp by-product protein, physicochemical properties, structural properties
18977 Element-Independent Implementation for Method of Lagrange Multipliers
Authors: Gil-Eon Jeong, Sung-Kie Youn, K. C. Park
Abstract:
The treatment of non-matching interfaces is an important computational issue. To handle this problem, the method of Lagrange multipliers, in both its classical and localized versions, is the most popular technique. It essentially imposes the interface compatibility conditions by introducing Lagrange multipliers. However, the numerical system becomes unstable and inefficient due to the Lagrange multipliers. An interface element-independent formulation that does not include the Lagrange multipliers can be obtained by modifying the independent variables mathematically. Through this modification, a more efficient and stable system can be achieved with accuracy equivalent to that of the conventional method. A numerical example is conducted to verify the validity of the presented method.
Keywords: element-independent formulation, interface coupling, methods of Lagrange multipliers, non-matching interface
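For context, a minimal sketch of the classical method of Lagrange multipliers for interface compatibility on a toy problem: two single-DOF substructures tied together by one constraint, giving the saddle-point system [[K, Bᵀ], [B, 0]]. The stiffnesses, load, and constraint are illustrative and stand in for a real interface coupling; this is not the element-independent formulation proposed in the paper.

```python
import numpy as np

# Two independent 1-DOF "substructures" (unit-stiffness springs) loaded by f,
# coupled by the compatibility constraint u1 - u2 = 0 via one Lagrange multiplier.
K = np.diag([1.0, 1.0])          # block-diagonal stiffness of the two subdomains
f = np.array([1.0, 0.0])         # load applied to the first subdomain only
B = np.array([[1.0, -1.0]])      # constraint matrix enforcing u1 = u2

# Saddle-point (KKT) system: [[K, B^T], [B, 0]] [u; lam] = [f; 0]
A = np.block([[K, B.T], [B, np.zeros((1, 1))]])
rhs = np.concatenate([f, [0.0]])
sol = np.linalg.solve(A, rhs)
u, lam = sol[:2], sol[2]

print("displacements:", u)                   # both DOFs move together: [0.5, 0.5]
print("interface force (multiplier):", lam)  # 0.5, the force tying the DOFs together
```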
18976 CFD Modeling of Boiling in a Microchannel Based On Phase-Field Method
Authors: Rahim Jafari, Tuba Okutucu-Özyurt
Abstract:
The hydrodynamics and heat transfer characteristics of a vaporized elongated bubble in a rectangular microchannel have been simulated based on the Cahn-Hilliard phase-field method. In the simulations, the initially nucleated bubble starts growing as it comes in contact with superheated water. The evolving shape of the bubble is compared with the available experimental data in the literature.
Keywords: microchannel, boiling, Cahn-Hilliard method, simulation
18975 Assessing Significance of Correlation with Binomial Distribution
Authors: Vijay Kumar Singh, Pooja Kushwaha, Prabhat Ranjan, Krishna Kumar Ojha, Jitendra Kumar
Abstract:
Present-day high-throughput genomic technologies, NGS and microarrays, are producing large volumes of data that require improved analysis methods to make sense of the data. The correlation between genes and samples has been regularly used to gain insight into many biological phenomena including, but not limited to, co-expression/co-regulation, gene regulatory networks, clustering, and pattern identification. However, the presence of outliers and violation of the assumptions underlying Pearson correlation are frequent and may distort the actual correlation between genes and lead to spurious conclusions. Here, we report a method to measure the strength of association between genes. The method assumes that the expression values of a gene are Bernoulli random variables whose outcome depends on the sample being probed. The method considers two genes as uncorrelated if the number of samples with the same outcome for both genes (Ns) is equal to the expected number (Es). The extent of correlation depends on how far Ns deviates from Es. The method does not assume normality for the parent population, is fairly unaffected by the presence of outliers, can be applied to qualitative data, and uses the binomial distribution to assess the significance of association. At this stage, we do not claim superiority of the method over other existing correlation methods, but our method could be another way of calculating correlation in addition to the existing ones. The method uses the binomial distribution, which has not been used for this purpose before, to assess the significance of association between two variables. We are evaluating the performance of our method on NGS/microarray data, which is noisy and riddled with outliers, to see if our method can differentiate between spurious and actual correlation. While working with the method, it has not escaped our notice that the method could also be generalized to measure the association of more than two variables, which has proven difficult with existing methods.
Keywords: binomial distribution, correlation, microarray, outliers, transcriptome
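A minimal sketch of the idea described above, under assumptions the abstract does not specify: each gene is binarized at its own median to obtain Bernoulli outcomes, the agreement count Ns is compared to the count expected under independence (Es), and scipy's exact binomial test supplies the significance. The thresholding rule and test details are therefore illustrative, not the authors' exact procedure.

```python
import numpy as np
from scipy.stats import binomtest

def binomial_association(x, y):
    """Assess association between two genes' expression vectors by treating
    the per-sample outcome (above/below that gene's median) as Bernoulli,
    counting samples where both genes agree (Ns), and testing Ns against the
    agreement probability expected under independence with a binomial test."""
    bx = x > np.median(x)
    by = y > np.median(y)
    n = len(x)
    ns = int(np.sum(bx == by))                       # observed agreements
    p1, p2 = bx.mean(), by.mean()
    p_agree = p1 * p2 + (1 - p1) * (1 - p2)          # expected under independence
    es = n * p_agree
    result = binomtest(ns, n, p_agree, alternative="two-sided")
    return ns, es, result.pvalue

rng = np.random.default_rng(1)
g1 = rng.normal(size=100)
g2 = g1 + rng.normal(scale=0.5, size=100)            # correlated partner gene
print(binomial_association(g1, g2))
```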
18974 Computer Aided Diagnosis Bringing Changes in Breast Cancer Detection
Authors: Devadrita Dey Sarkar
Abstract:
Regardless of the many technological advances of the past decade, increased training and experience, and the obvious benefits of uniform standards, the false-negative rate in screening mammography remains unacceptably high. A computer-aided neural network classification of regions of suspicion (ROS) on digitized mammograms is presented in this abstract, which employs features extracted by a new technique based on independent component analysis. CAD is a concept established by taking into account equally the roles of physicians and computers, whereas automated computer diagnosis is a concept based on computer algorithms only. With CAD, the performance of computers does not have to be comparable to or better than that of physicians, but needs to be complementary to it. In fact, a large number of CAD systems have been employed for assisting physicians in the early detection of breast cancers on mammograms. A CAD scheme that makes use of lateral breast images has the potential to improve the overall performance in the detection of breast lumps. Because breast lumps can be detected reliably by computer on lateral breast mammographs, radiologists' accuracy in the detection of breast lumps would be improved by the use of CAD, and thus early diagnosis of breast cancer would become possible. In the future, many CAD schemes could be assembled as packages and implemented as a part of PACS. For example, the package for breast CAD may include the computerized detection of breast nodules, as well as the computerized classification of benign and malignant nodules. In order to assist in the differential diagnosis, it would be possible to search for and retrieve images (or lesions) with these CAD systems, which would be a reliable and useful method for quantifying the similarity of a pair of images for visual comparison by radiologists.
Keywords: CAD (computer-aided diagnosis), lesions, neural network, ROS (region of suspicion)
18973 Performance of High Efficiency Video Codec over Wireless Channels
Authors: Mohd Ayyub Khan, Nadeem Akhtar
Abstract:
Due to recent advances in wireless communication technologies and hand-held devices, there is a huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook, and interactive video games. However, raw video possesses a very high bandwidth, which makes compression a must before its transmission over wireless channels. The High Efficiency Video Codec (HEVC), also called H.265, is the latest state-of-the-art video coding standard, developed jointly by the ITU-T and ISO/IEC teams. HEVC targets high-resolution videos, such as 4K or 8K, that can fulfil the recent demands for video services. The compression ratio achieved by HEVC is twice that of its predecessor H.264/AVC at the same quality level. The compression efficiency is generally increased by removing more correlation between frames/pixels using complex techniques such as extensive intra and inter prediction. As more correlation is removed, the interdependency among coded bits increases. Thus, bit errors may have a large effect on the reconstructed video; sometimes even a single bit error can lead to catastrophic failure of the reconstructed video. In this paper, we study the performance of the HEVC bitstream over an additive white Gaussian noise (AWGN) channel. Moreover, HEVC over Quadrature Amplitude Modulation (QAM) combined with forward error correction (FEC) schemes is also explored over the noisy channel. The video is encoded using HEVC, and the coded bitstream is channel coded to provide some redundancy. The channel coded bitstream is then modulated using QAM and transmitted over the AWGN channel. At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream. The bitstream is then used to reconstruct the video using the HEVC decoder. It is observed that as the signal-to-noise ratio of the channel decreases, the quality of the reconstructed video decreases drastically. Using proper FEC codes, the quality of the video can be restored to a certain extent. Thus, the performance analysis of HEVC presented in this paper may assist in designing the optimized code rate of the FEC such that the quality of the reconstructed video is maximized over wireless channels.
Keywords: AWGN, forward error correction, HEVC, video coding, QAM
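A minimal physical-layer sketch of the kind of link described above, restricted to Gray-mapped QPSK (4-QAM) over AWGN with hard decisions and no channel coding or HEVC, just to show how the bit error rate grows as Eb/N0 falls. The modulation order, bit counts, and SNR points are illustrative assumptions, not the simulation setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def qpsk_ber_over_awgn(n_bits, ebn0_db):
    """Monte-Carlo bit error rate of Gray-mapped QPSK over an AWGN channel."""
    bits = rng.integers(0, 2, size=n_bits)
    b = bits.reshape(-1, 2)
    # Gray mapping: bit 0 -> +1, bit 1 -> -1 on each of I and Q; symbol energy Es = 1
    symbols = ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2.0)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    n0 = 1.0 / (2.0 * ebn0)                       # Es = 1 and 2 bits/symbol -> Eb = 1/2
    noise = np.sqrt(n0 / 2.0) * (rng.normal(size=symbols.shape)
                                 + 1j * rng.normal(size=symbols.shape))
    r = symbols + noise
    b_hat = np.column_stack([(r.real < 0), (r.imag < 0)]).astype(int)
    return np.mean(b_hat.ravel() != bits)

for snr in (2, 4, 6, 8):
    print(f"Eb/N0 = {snr} dB  ->  BER ~ {qpsk_ber_over_awgn(200_000, snr):.4f}")
```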
18972 Study of the Electromagnetic Resonances of a Cavity with an Aperture Using Numerical Method and Equivalent Circuit Method
Authors: Ming-Chu Yin, Ping-An Du
Abstract:
The shielding ability of a shielding cavity is greatly affected by its resonances, which include both resonance modes and resonance frequencies. In this paper, the equivalent circuit method and the numerical transmission line matrix (TLM) method are used to analyze the effect of aperture-cavity coupling on the electromagnetic resonances of a cavity with an aperture. Both theoretical and numerical results show that the resonance modes of a shielding cavity with an aperture can be considered as the combination of the inherent resonance modes of the cavity and the aperture, with shifted resonance frequencies, and the reason for this shift is aperture-cavity coupling. Because the aperture dimensions are important parameters for aperture-cavity coupling, the variation of the electromagnetic resonances of a shielding cavity with its aperture dimensions is given, which will be useful for the design of shielding cavities.
Keywords: aperture-cavity coupling, equivalent circuit method, resonances, shielding equipment
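For reference, a minimal sketch of the inherent resonances of the closed cavity alone, the textbook rectangular-cavity formula f_mnp = (c/2)·sqrt((m/a)² + (n/b)² + (p/d)²), which the aperture coupling then shifts. The enclosure dimensions below are illustrative, not those studied in the paper.

```python
import numpy as np
from itertools import product

C0 = 299792458.0  # speed of light, m/s

def cavity_modes(a, b, d, max_index=3):
    """Resonance frequencies (Hz) of an empty closed rectangular cavity,
    f_mnp = (c/2) * sqrt((m/a)^2 + (n/b)^2 + (p/d)^2), keeping only index
    triples with at least two nonzero entries."""
    modes = []
    for m, n, p in product(range(max_index + 1), repeat=3):
        if sum(i == 0 for i in (m, n, p)) >= 2:
            continue                      # at least two nonzero indices required
        f = 0.5 * C0 * np.sqrt((m / a) ** 2 + (n / b) ** 2 + (p / d) ** 2)
        modes.append(((m, n, p), f))
    return sorted(modes, key=lambda mf: mf[1])

# Illustrative 0.3 m x 0.12 m x 0.3 m shielding enclosure
for (m, n, p), f in cavity_modes(0.3, 0.12, 0.3)[:5]:
    print(f"mode ({m},{n},{p}): {f/1e9:.3f} GHz")
```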
18971 A Bayesian Parameter Identification Method for Thermorheological Complex Materials
Authors: Michael Anton Kraus, Miriam Schuster, Geralt Siebert, Jens Schneider
Abstract:
Polymers have gained increasing interest as construction materials in civil engineering applications over recent years. Polymeric materials typically show time- and temperature-dependent material behavior, which is accounted for in the context of the theory of linear viscoelasticity. Within the context of this paper, the authors show that some polymeric interlayers for laminated glass cannot be considered thermorheologically simple, as they do not follow a single TTSP; thus a methodology for identifying the thermorheologically complex constitutive behavior is needed. 'Dynamical-Mechanical-Thermal-Analysis' (DMTA) tests in tensile and shear mode as well as 'Differential Scanning Calorimetry' (DSC) tests are carried out on the interlayer material 'Ethylene-vinyl acetate' (EVA). A novel Bayesian framework for the master curving process as well as for the detection and parameter identification of the TTSPs, along with their associated Prony series, is derived and applied to the EVA material data. To the best of our knowledge, this is the first time an uncertainty quantification of the Prony series is shown in a Bayesian context. Within this paper, we successfully apply the derived Bayesian methodology to the EVA material data to obtain meaningful master curves and TTSPs. Uncertainties occurring in this process can be well quantified. We found that EVA needs two TTSPs with two associated generalized Maxwell models. As the methodology is kept general, the derived framework could also be applied to other thermorheologically complex polymers for parameter identification purposes.
Keywords: Bayesian parameter identification, generalized Maxwell model, linear viscoelasticity, thermorheological complex
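A minimal sketch of the building blocks named above: a generalized Maxwell (Prony series) relaxation modulus evaluated at a reduced time obtained from a WLF-type shift factor. The Prony pairs and WLF constants are generic illustrative values, not the EVA parameters identified by the Bayesian procedure.

```python
import numpy as np

def relaxation_modulus(t, g_inf, g_i, tau_i):
    """Generalized Maxwell (Prony series) relaxation modulus
    G(t) = G_inf + sum_i G_i * exp(-t / tau_i)."""
    t = np.asarray(t, dtype=float)[:, None]
    return g_inf + np.sum(g_i * np.exp(-t / tau_i), axis=1)

def wlf_shift(T, T_ref, c1, c2):
    """WLF shift factor: log10(a_T) = -c1 (T - T_ref) / (c2 + T - T_ref)."""
    return 10.0 ** (-c1 * (T - T_ref) / (c2 + (T - T_ref)))

# Illustrative Prony pairs and "universal" WLF constants (not the EVA values)
g_inf, g_i = 0.5, np.array([2.0, 1.0, 0.5])         # MPa
tau_i = np.array([1e-2, 1.0, 1e2])                  # s, at the reference temperature
t = np.logspace(-3, 4, 8)                           # s

a_T = wlf_shift(T=40.0, T_ref=20.0, c1=8.86, c2=101.6)
print(relaxation_modulus(t / a_T, g_inf, g_i, tau_i))  # modulus at 40 C via reduced time
```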
18970 A Quantification Method of Attractiveness of Stations and an Estimation Method of Number of Passengers Taking into Consideration the Attractiveness of the Station
Authors: Naoya Ozaki, Takuya Watanabe, Ryosuke Matsumoto, Noriko Fukasawa
Abstract:
In the metropolitan areas of Japan, shopping areas are set up in many stations, and escalators and elevators are installed to make the stations barrier-free. Further, many areas around the stations are being redeveloped. Railway business operators want to know how much effect these circumstances have on the attractiveness of a station or the number of passengers using it. We therefore performed a questionnaire survey of station users in the metropolitan areas to find factors that affect the attractiveness of stations. Then, based on the analysis of the survey, we developed a method to quantitatively evaluate the attractiveness of stations. We also developed an estimation method for the number of passengers based on a combination of the quantitatively evaluated attractiveness of the station and the residential and labor population around the station. From these, we derived precise linear regression models estimating the attractiveness of a station and its number of passengers.
Keywords: attractiveness of the station, estimation method, number of passengers of the station, redevelopment around the station, renovation of the station
18969 Re-identification Risk and Mitigation in Federated Learning: Human Activity Recognition Use Case
Authors: Besma Khalfoun
Abstract:
In many current Human Activity Recognition (HAR) applications, users' data is frequently shared and centrally stored by third parties, posing a significant privacy risk. This practice makes these entities attractive targets for extracting sensitive information about users, including their identity, health status, and location, thereby directly violating users' privacy. To tackle the issue of centralized data storage, a relatively recent paradigm known as federated learning has emerged. In this approach, users' raw data remains on their smartphones, where they train the HAR model locally. However, users still share updates of their local models originating from raw data. These updates are vulnerable to several attacks designed to extract sensitive information, such as determining whether a data sample is used in the training process, recovering the training data with inversion attacks, or inferring a specific attribute or property from the training data. In this paper, we first introduce PUR-Attack, a parameter-based user re-identification attack developed for HAR applications within a federated learning setting. It involves associating anonymous model updates (i.e., local models' weights or parameters) with the originating user's identity using background knowledge. PUR-Attack relies on a simple yet effective machine learning classifier and produces promising results. Specifically, we have found that by considering the weights of a given layer in a HAR model, we can uniquely re-identify users with an attack success rate of almost 100%. This result holds when considering a small attack training set and various data splitting strategies in the HAR model training. Thus, it is crucial to investigate protection methods to mitigate this privacy threat. Along this path, we propose SAFER, a privacy-preserving mechanism based on adaptive local differential privacy. Before sharing the model updates with the FL server, SAFER adds the optimal noise based on the re-identification risk assessment. Our approach can achieve a promising tradeoff between privacy, in terms of reducing re-identification risk, and utility, in terms of maintaining acceptable accuracy for the HAR model.
Keywords: federated learning, privacy risk assessment, re-identification risk, privacy preserving mechanisms, local differential privacy, human activity recognition
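A minimal sketch of the generic local-differential-privacy protection idea referenced above: clip a client's flattened model update to bound its sensitivity, then add Laplace noise scaled by sensitivity/ε before sharing. This is a plain baseline under assumed clipping and sensitivity choices, not SAFER's adaptive, risk-driven noise calibration.

```python
import numpy as np

def privatize_update(weights, clip_norm, epsilon, rng):
    """Clip a flattened model update to bound its L1 norm, then add Laplace
    noise with scale sensitivity/epsilon (basic local DP, per shared update)."""
    w = np.asarray(weights, dtype=float)
    l1 = np.sum(np.abs(w))
    if l1 > clip_norm:
        w = w * (clip_norm / l1)          # L1 clipping
    sensitivity = 2.0 * clip_norm         # swapping one clipped update changes the L1 norm by at most 2*clip
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=w.shape)
    return w + noise

rng = np.random.default_rng(42)
update = rng.normal(size=128) * 0.01      # stand-in for one client's HAR-layer weights
protected = privatize_update(update, clip_norm=1.0, epsilon=1.0, rng=rng)
```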
18968 The Use of Ward Linkage in Cluster Integration with a Path Analysis Approach
Authors: Adji Achmad Rinaldo Fernandes
Abstract:
Path analysis is an analytical technique for studying the causal relationships between independent and dependent variables. In this study, cluster integration using the Ward linkage method was applied with path analysis for various numbers of clusters. The variables used are character (x₁), capacity (x₂), capital (x₃), collateral (x₄), and condition of economy (x₅), affecting on-time payment (y₂) through the variable willingness to pay (y₁). The purpose of this study was to compare Ward linkage cluster integration with path analysis across various numbers of clusters in order to classify willingness to pay (y₁). The data used are primary data from questionnaires filled out by customers of Bank X, selected by purposive sampling. The measurement method used is the average score method. The results showed that Ward linkage cluster integration with path analysis using two clusters is the best method, based on a comparison of the coefficients of determination. Character (x₁), capacity (x₂), capital (x₃), collateral (x₄), and condition of economy (x₅) explain 58.3% of on-time payment (y₂) through willingness to pay (y₁), while the remaining 41.7% is explained by variables outside the model.
Keywords: cluster integration, linkage, path analysis, compliant paying behavior
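A minimal sketch of the mechanics involved: Ward-linkage hierarchical clustering followed by a per-cluster regression whose R² values are compared across cluster counts. The data are synthetic stand-ins for the questionnaire scores, and a single ordinary least-squares fit replaces the full path model for brevity.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 5))                       # stand-ins for x1..x5 (the five C's scores)
y1 = X @ np.array([0.5, 0.4, 0.3, 0.2, 0.1]) + rng.normal(scale=0.5, size=n)  # willingness to pay

Z = linkage(X, method="ward")                     # Ward linkage on the predictor scores

def r_squared(X, y):
    Xd = np.column_stack([np.ones(len(y)), X])    # add intercept
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return 1.0 - resid.var() / y.var()

for k in (2, 3, 4):                               # compare cluster counts by fit quality
    labels = fcluster(Z, t=k, criterion="maxclust")
    r2 = [r_squared(X[labels == c], y1[labels == c]) for c in range(1, k + 1)]
    print(k, "clusters, per-cluster R^2:", np.round(r2, 3))
```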
18967 Assessment of the Energy Balance Method in the Case of Masonry Domes
Authors: M. M. Sadeghi, S. Vahdani
Abstract:
Masonry dome structures were widely used for covering large spans in the past. The seismic assessment of these historical structures is very complicated due to the nonlinear behavior of the material, their rigidity, and their particular stability configuration. An assessment method based on the energy balance concept, as well as standard pushover analysis, is used to evaluate the effectiveness of these methods in the case of masonry dome structures. The Soltanieh dome building is used as an example to which the two methods are applied. The performance points are obtained by superimposing the capacity and demand curves in Acceleration-Displacement Response Spectra (ADRS) and energy coordinates, and are compared with nonlinear time history analysis as the exact result. The results show good agreement between the dynamic analysis and the energy balance method, but the standard pushover method does not provide an acceptable estimation.
Keywords: energy balance method, pushover analysis, time history analysis, masonry dome
18966 Increasing the Apparent Time Resolution of Tc-99m Diethylenetriamine Pentaacetic Acid Galactosyl Human Serum Albumin Dynamic SPECT by Use of an 180-Degree Interpolation Method
Authors: Yasuyuki Takahashi, Maya Yamashita, Kyoko Saito
Abstract:
In general, dynamic SPECT data acquisition needs a few minutes for one rotation. Thus, the time-activity curve (TAC) derived from dynamic SPECT is relatively coarse. In order to effectively shorten the interval between data points, we adopted a 180-degree interpolation method. This method is already used for the reconstruction of X-ray CT data. In this study, we applied this 180-degree interpolation method to SPECT and investigated its effectiveness. To briefly describe the 180-degree interpolation method: the 180-degree data in the second half of one rotation are combined with the 180-degree data in the first half of the next rotation to generate a 360-degree data set appropriate for the time halfway between the first and second rotations. In both a phantom and a patient study, the data points from the interpolated images were in good agreement with the data points tracking the accumulation of 99mTc activity over time for the appropriate regions of interest. We conclude that data derived from interpolated images improve the apparent time resolution of dynamic SPECT.
Keywords: dynamic SPECT, time resolution, 180-degree interpolation method, 99mTc-GSA
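A minimal sketch of the combination rule described above, assuming the dynamic data are stored as one 360-degree sinogram per rotation with the views in angular order; the array sizes are illustrative.

```python
import numpy as np

def interpolate_rotations(sinograms):
    """Given dynamic SPECT data as an array (n_rotations, n_angles, n_bins)
    covering 360 degrees per rotation, build intermediate data sets by pairing
    the second half (180-360 deg) of rotation r with the first half (0-180 deg)
    of rotation r+1, doubling the apparent temporal sampling."""
    n_rot, n_ang, _ = sinograms.shape
    half = n_ang // 2
    interp = []
    for r in range(n_rot - 1):
        combined = np.concatenate(
            [sinograms[r + 1, :half], sinograms[r, half:]], axis=0
        )  # 0-180 deg from the later rotation, 180-360 deg from the earlier one
        interp.append(combined)
    return np.stack(interp)                # (n_rotations - 1, n_angles, n_bins)

data = np.random.rand(6, 120, 64)          # 6 rotations, 120 views, 64 bins (illustrative)
extra_frames = interpolate_rotations(data)
print(extra_frames.shape)                  # (5, 120, 64)
```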
18965 Effectiveness of Earthing System in Vertical Configurations
Authors: S. Yunus, A. Suratman, N. Mohamad Nor, M. Othman
Abstract:
This paper presents measurement and Finite Element Method (FEM) simulation results for the earth resistance (RDC) of interconnected vertical ground rod configurations. The soil resistivity was measured using the Wenner four-pin method, and RDC was measured using the Fall of Potential (FOP) method, as outlined in the standard. A Genetic Algorithm (GA) is employed to interpret the measured soil resistivity in terms of a two-layer soil model. The same soil resistivity data obtained by the Wenner four-pin method were used in the FEM simulation. This paper compares the RDC obtained by FEM simulation with the real measurements at the field site. Good agreement was seen between the RDC obtained by measurement and by FEM. This shows that FEM is a reliable tool for the design of earthing systems. It is also found that the parallel rod system has a better performance than a similar setup using a grid layout.
Keywords: earthing system, earth electrodes, finite element method, genetic algorithm, earth resistances
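For reference, a minimal sketch of the Wenner four-pin conversion that produces the apparent resistivity fed into the two-layer interpretation, ρ = 2πaR for probe spacing a and measured resistance R; the readings below are illustrative, not the site data.

```python
import math

def wenner_resistivity(spacing_m, resistance_ohm):
    """Apparent soil resistivity from the Wenner four-pin method:
    rho = 2 * pi * a * R, with probe spacing a (m) and measured resistance R (ohm),
    assuming the pin depth is small compared to the spacing."""
    return 2.0 * math.pi * spacing_m * resistance_ohm

# Illustrative field readings: spacing (m) -> measured resistance (ohm)
readings = {1.0: 95.0, 2.0: 41.0, 4.0: 16.0, 8.0: 6.5}
for a, r in readings.items():
    print(f"a = {a:>4.1f} m  ->  rho_apparent = {wenner_resistivity(a, r):7.1f} ohm-m")
```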
18964 Probabilistic Modeling Laser Transmitter
Authors: H. S. Kang
Abstract:
A coupled electrical and optical model for the conversion of electrical energy into coherent optical energy by a solid-state device for a transmitter-receiver link is presented. Probability distributions for the switching time intervals of the travelling laser beam and for the number of switchings in a time interval are obtained. Selector function mapping is employed to regulate the optical data transmission speed. It is established that regulated laser transmission from the PhotoActive Laser transmitter follows the principle of invariance. This considerably simplifies the design of PhotoActive Laser transmission networks.
Keywords: computational mathematics, finite difference Markov chain methods, sequence spaces, singularly perturbed differential equations
18963 Improvement of Ride Comfort of Turning Electric Vehicle Using Optimal Speed Control
Authors: Yingyi Zhou, Tohru Kawabe
Abstract:
With the spread of EVs (Electric Vehicles), ride comfort has been gaining a lot of attention. The influence of lateral acceleration, as well as longitudinal acceleration, is important for improving the ride comfort of EVs, especially when the vehicle is turning. Therefore, this paper proposes a practical optimal speed control method to greatly improve ride comfort in vehicle turning situations. To construct this method, an effective criterion that can appropriately evaluate the deterioration of ride comfort is derived. The method reduces the influence of both longitudinal and lateral speed changes to provide a comfortable ride. Several simulation results show that the method can prevent aggravation of ride comfort by suppressing the influence of longitudinal speed changes in turning situations. Hence, the effectiveness of the method is confirmed.
Keywords: electric vehicle, speed control, ride comfort, optimal control theory, driving support system
18962 Urban Land Use Type Analysis Based on Land Subsidence Areas Using X-Band Satellite Image of Jakarta Metropolitan City, Indonesia
Authors: Ratih Fitria Putri, Josaphat Tetuko Sri Sumantyo, Hiroaki Kuze
Abstract:
Jakarta Metropolitan City is located on the northwest coast of West Java province, at geographical coordinates between 106°33'00"-107°00'00"E longitude and 5°48'30"-6°24'00"S latitude. The Jakarta urban area has suffered from land subsidence in several land use types, such as trading, industrial, and settlement areas. Land subsidence hazard is one of the consequences of urban development in Jakarta. This hazard is caused by intensive groundwater extraction and land use mismanagement. Geologically, the Jakarta urban area is mostly dominated by alluvial fan sediment. The objective of this research is to analyze Jakarta's urban land use types within the land subsidence zones. Producing safer land use and settlements in the land subsidence areas is very important, and the spatial distribution of detected land subsidence is a necessary tool for land use management planning. For this purpose, the Differential Synthetic Aperture Radar Interferometry (DInSAR) method is used. DInSAR is complementary to ground-based methods such as leveling and global positioning system (GPS) measurements, yielding information over a wide coverage area even when the area is inaccessible. The data were fine-tuned using X-band satellite image data from 2010 to 2013 and land use mapping data. Our land use analysis shows that land subsidence occurred in the northern part of Jakarta Metropolitan City, varying from 7.5 to 17.5 cm/year, in industrial and settlement land use areas.
Keywords: land use analysis, land subsidence mapping, urban area, X-band satellite image
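A minimal sketch of the core DInSAR conversion from unwrapped differential interferometric phase to line-of-sight displacement, d = -λΔφ/(4π), using an X-band wavelength of about 3.1 cm; the phase value and sign convention are illustrative, not the processing chain of the study.

```python
import numpy as np

WAVELENGTH_X_BAND = 0.031  # m, ~3.1 cm for an X-band SAR sensor

def los_displacement(delta_phase_rad, wavelength=WAVELENGTH_X_BAND):
    """Line-of-sight displacement from unwrapped differential phase:
    d = -lambda * delta_phi / (4 * pi). The sign convention (motion toward or
    away from the sensor) depends on the processor and look geometry."""
    return -wavelength * np.asarray(delta_phase_rad) / (4.0 * np.pi)

# One full fringe (2*pi of differential phase) corresponds to lambda/2 of motion
print(los_displacement(2.0 * np.pi) * 100.0, "cm")   # about -1.55 cm
```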
18961 Method for Assessing Potential in Distribution Logistics
Authors: B. Groß, P. Fronia, P. Nyhuis
Abstract:
In addition to production, which is already frequently optimized, improving distribution logistics also opens up tremendous potential for increasing an enterprise's competitiveness. Here too, though, numerous interactions need to be taken into account; enterprises thus need to be able to identify and weigh different potentials for economically efficient optimization. In order to assess potentials, enterprises require a suitable method. This paper first briefly presents the need for this research before introducing the procedure that will be used to develop an appropriate method that not only considers interactions but can also be quickly and easily implemented.
Keywords: distribution logistics, evaluation of potential, methods, model
18960 ISME: Integrated Style Motion Editor for 3D Humanoid Character
Authors: Ismahafezi Ismail, Mohd Shahrizal Sunar
Abstract:
The motion of a realistic 3D humanoid character is very important, especially for the industries developing computer animations and games. However, this type of motion involves very complex, high-dimensional data describing body position, orientation, and joint rotations. The Integrated Style Motion Editor (ISME), on the other hand, is a method used to alter the 3D humanoid motion capture data utilised in computer animation and game development. Therefore, this study was carried out with the purpose of demonstrating a method that is able to manipulate and deform different motion styles by integrating the Key Pose Deformation Technique and the Trajectory Control Technique. This motion editing method allows the user to generate new motions from the original motion capture data using a simple interface control. Unlike previous methods, our method produces realistic humanoid motion styles in real time.
Keywords: computer animation, humanoid motion, motion capture, motion editing
18959 Analytical Solution of Blasius Equation Using the Kourosh Method
Authors: Mohammad Reza Shahnazari, Reza Kazemi, Ali Saberi
Abstract:
Most engineering problems are nonlinear. Nonlinear boundary layer problems defined on infinite intervals contain specific complexities, especially in conforming to the boundary layer conditions. As an example of these complex nonlinear problems, the well-known Blasius equation can be mentioned, which is itself one of the classic boundary layer problems. No analytical solution has been proposed yet for the Blasius equation due to its complexity. In this paper, an analytical method, namely the Kourosh method, based on the singular perturbation method and Liao's homotopy analysis, is utilized to solve the Blasius problem. In this method, an inner solution is developed in the [0,1] interval to expedite the solution convergence. The magnitude of f″(0), an essential quantity for determining the physical parameters, is directly calculated from the solution of the boundary value problem. The advantages of this solution are that it does not need any numerical solution, it has a closed form, and its validity is shown over the entire [0,∞) interval. Furthermore, all of the desirable parameters can be extracted through a series of simple analytical operations from the final solution. This solution also satisfies the continuity conditions, which is one of the main contributions of this paper in comparison with most of the other analytical solutions available in the literature. Comparison with numerical solutions reveals that the proposed method is highly accurate and convenient for application.
Keywords: Blasius equation, boundary layer, Kourosh method, analytical solution
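As a numerical cross-check (not the Kourosh method itself), a minimal shooting-method sketch for the Blasius problem f''' + ½ f f'' = 0 with f(0) = f'(0) = 0 and f'(∞) = 1, which recovers the classical value f''(0) ≈ 0.332 that analytical solutions are usually validated against; the integration span and bracketing interval are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def blasius_rhs(eta, y):
    # y = [f, f', f''];  f''' = -0.5 * f * f''
    return [y[1], y[2], -0.5 * y[0] * y[2]]

def fprime_at_infinity(fpp0, eta_max=10.0):
    sol = solve_ivp(blasius_rhs, (0.0, eta_max), [0.0, 0.0, fpp0],
                    rtol=1e-9, atol=1e-9)
    return sol.y[1, -1] - 1.0          # residual of the far-field condition f'(inf) = 1

fpp0 = brentq(fprime_at_infinity, 0.1, 1.0)   # shoot on f''(0)
print(f"f''(0) = {fpp0:.5f}")                  # ~0.33206
```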
18958 Research on Strategies of Building a Child Friendly City in Wuhan
Authors: Tianyue Wan
Abstract:
Building a child-friendly city (CFC) contributes to improving the quality of urbanization. It also forms a local system committed to fulfilling children's rights and development. Yet the work related to CFCs is still at an initial stage in China. Therefore, taking Wuhan, the most populous city in central China, as the pilot city would offer a useful reference for other cities. Based on the analysis of theories and practical examples, this study puts forward the challenges of building a child-friendly city under the particular circumstances of China's national conditions. To handle these challenges, this study uses four methods to collect status data: literature research, site observation, research inquiry, and semantic differential (SD). It adopts three data analysis methods: case analysis, geographic information system (GIS) analysis, and the analytic hierarchy process (AHP). Through data analysis, this study identifies the evaluation system and appraises the current situation of Wuhan. According to the status of Wuhan as a child-friendly city, this study proposes three strategies: 1) construct the evaluation system; 2) establish a child-friendly space system integrating 'point-line-surface'; 3) build a digitalized service platform. At the same time, this study suggests building a long-term mechanism for children's participation and multi-subject supervision covering laws, medical treatment, education, safety protection, social welfare, and other aspects. Finally, conclusions on CFC strategies are drawn to promote the highest quality of life for all citizens in Wuhan.
Keywords: action plan, child friendly city, construction strategy, urban space
18957 Feasibility of Simulating External Vehicle Aerodynamics Using Spalart-Allmaras Turbulence Model with Adjoint Method in OpenFOAM and Fluent
Authors: Arpit Panwar, Arvind Deshpande
Abstract:
A study of external vehicle aerodynamics using the Spalart-Allmaras turbulence model with the adjoint method was conducted. The accessibility and ease of working with the Fluent module of ANSYS and with OpenFOAM were considered. The objective of the study was to understand and analyze the possibility of bringing high-level aerodynamic simulation to the average consumer vehicle. A form factor of the BMW M6 vehicle was designed in SolidWorks and analyzed in OpenFOAM and Fluent. The turbulence model, being a single-equation model, provides a much faster convergence rate when combined with the adjoint method. Fluent, being commercial software, still does not allow the Spalart-Allmaras turbulence model to be solved using the adjoint method; hence, the turbulence model was solved using the SIMPLE method in Fluent. OpenFOAM, being open source, provides flexibility in simulation but is not user-friendly; it supports solving the defined turbulence model with the adjoint method. The results generated from the simulation give acceptable drag values when validated against the percentage errors in drag values for a notch-back vehicle model from an extensive simulation presented at the 6th ANSA and μETA conference, Greece. The success of this approach will allow more aerodynamic vehicle body design to be brought to all segments of the automobile market, not limiting it to just high-end sports cars.
Keywords: Spalart-Allmaras turbulence model, OpenFOAM, adjoint method, SIMPLE method, vehicle aerodynamic design
18956 Spectrophotometric Determination of 5-Aminosalicylic Acid in Pharmaceutical Samples
Authors: Chand Pasha
Abstract:
A simple, accurate, and precise spectrophotometric method for the quantitative determination of 5-aminosalicylic acid is described. The method is based on the reaction of 5-aminosalicylic acid with nitrite in acid medium to form a diazonium ion, which is coupled with acetylacetone in basic medium to form an azo dye showing an absorption maximum at 470 nm. The method obeys Beer's law in the concentration range of 0.5-11.2 µg ml⁻¹ of 5-aminosalicylic acid with acetylacetone. The molar absorptivity and Sandell's sensitivity of the 5-aminosalicylic acid-acetylacetone azo dye are 2.672 × 10⁴ l mol⁻¹ cm⁻¹ and 5.731 × 10⁻³ µg cm⁻², respectively. The dye formed is stable for 10 hours. The optimum reaction conditions and other analytical parameters are evaluated. Interference due to foreign organic compounds has been investigated. The method has been successfully applied to the determination of 5-aminosalicylic acid in pharmaceutical samples.
Keywords: spectrophotometry, diazotization, mesalazine, nitrite, acetylacetone
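A minimal sketch of the Beer-Lambert calculation underlying such a calibration, using the molar absorptivity reported above and the molar mass of 5-aminosalicylic acid (about 153.14 g/mol); the absorbance reading and the 1 cm path length are illustrative assumptions.

```python
MOLAR_ABSORPTIVITY = 2.672e4   # L mol^-1 cm^-1 at 470 nm (from the abstract)
MOLAR_MASS_5ASA = 153.14       # g/mol for 5-aminosalicylic acid
PATH_LENGTH_CM = 1.0

def concentration_ug_per_ml(absorbance):
    """Beer-Lambert law: A = epsilon * l * c  ->  c = A / (epsilon * l),
    then convert mol/L to ug/mL via the molar mass."""
    c_mol_per_l = absorbance / (MOLAR_ABSORPTIVITY * PATH_LENGTH_CM)
    return c_mol_per_l * MOLAR_MASS_5ASA * 1e6 / 1000.0   # g/L -> ug/mL

print(f"{concentration_ug_per_ml(0.85):.2f} ug/mL")   # illustrative absorbance reading
```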
18955 Efficiency of Wood Vinegar Mixed with Some Plants Extract against the Housefly (Musca domestica L.)
Authors: U. Pangnakorn, S. Kanlaya
Abstract:
The efficiency of wood vinegar mixed with each of three plant extracts, namely citronella grass (Cymbopogon nardus), neem seed (Azadirachta indica A. Juss), and yam bean seed (Pachyrhizus erosus Urb.), was tested against second instar larvae of the housefly (Musca domestica L.). Steam distillation was used to extract the citronella grass, while the neem and yam bean seeds were simply extracted by fermentation with ethyl alcohol. Toxicity was evaluated in the laboratory using two larvicidal bioassay methods: the topical application method (contact poison) and the feeding method (stomach poison). Larval mortality was observed daily, and larval survival was recorded until the surviving larvae developed into pupae and adults. The study showed that the treatment of wood vinegar mixed with citronella grass gave the highest larval mortality by the topical application method (50.0%) and by the feeding method (80.0%). However, the treatment of wood vinegar mixed with neem seed showed the longest pupal duration, 25 days for the topical application method and 32 days for the feeding method. In addition, the larval duration of treated M. domestica larvae was extended to 13 days for the topical application method and 11 days for the feeding method. Thus, the feeding method gave higher efficiency compared with the topical application method.
Keywords: housefly (Musca domestica L.), neem seed (Azadirachta indica), citronella grass (Cymbopogon nardus), yam bean seed (Pachyrhizus erosus), mortality
18954 Enhancement of 2, 4-Dichlorophenoxyacetic Acid Solubility via Solid Dispersion Technique
Authors: Tamer M. Shehata, Heba S. Elsewedy, Mashel Al Dosary, Alaa Elshehry, Mohamed A. Khedr, Maged E. Mohamed
Abstract:
Objective: 2,4-Dichlorophenoxyacetic acid (2,4-D) is a well-known herbicide widely used as a weed killer. Recently, 2,4-D was rediscovered as a new anti-inflammatory agent through in silico as well as in vivo experiments. However, the poor solubility of 2,4-D could represent a problem during pharmaceutical development, in addition to lowering bioavailability. Solid dispersion (SD) refers to a group of solid products consisting of at least two different components, usually a hydrophobic drug and a hydrophilic matrix. It is a well-known technique for enhancing drug solubility. Therefore, selecting SD as a tool for enhancing 2,4-D solubility could be of great interest to the formulator. Method: In our project, several polymers were investigated (such as PEG, HPMC, citric acid, and others) together with drug-polymer ratios and their effect on solubility. Drug-polymer interaction was evaluated through both Fourier Transform Infrared (FTIR) spectroscopy and Differential Scanning Calorimetry (DSC). Finally, in vivo evaluation of the best selected preparation was performed using the carrageenan-induced rat hind paw edema model. Results: The results indicated that citric acid and 2,4-D in a ratio of 0.75:1 modified the dissolution profile of the drug. The FTIR results indicated no significant chemical interaction; however, DSC showed a shift of the drug melting point. Finally, carrageenan-induced rat hind paw edema was reduced significantly more by the drug solid dispersion than by the pure drug, indicating rapid and complete absorption of the drug in solid dispersion form. Conclusion: Solid dispersion technology can be utilized efficiently to enhance the solubility of 2,4-D.
Keywords: solid dispersion, 2,4-D solubility, carrageenan-induced edema
18953 First Formaldehyde Retrieval Using the Raw Data Obtained from Pandora in Seoul: Investigation of the Temporal Characteristics and Comparison with Ozone Monitoring Instrument Measurement
Abstract:
In the present study, for the first time, we retrieved the formaldehyde (HCHO) Vertical Column Density (HCHOVCD) using Pandora instruments in Seoul, a megacity in northeast Asia, for the period between 2012 and 2014, and investigated the temporal characteristics of HCHOVCD. The HCHO Slant Column Density (HCHOSCD) was obtained using the Differential Optical Absorption Spectroscopy (DOAS) method. HCHOSCD was converted to HCHOVCD using the geometric Air Mass Factor (AMFG), as Pandora is a direct-sun measurement. The HCHOVCD is low at 12:00 Local Time (LT) and high in the morning (10:00 LT) and late afternoon (16:00 LT), except in winter. The maximum (minimum) values of Pandora HCHOVCD are 2.68×10¹⁶ (1.63×10¹⁶), 3.19×10¹⁶ (2.23×10¹⁶), 2.00×10¹⁶ (1.26×10¹⁶), and 1.63×10¹⁶ (0.82×10¹⁶) molecules cm⁻² in spring, summer, autumn, and winter, respectively. In terms of seasonal variations, HCHOVCD is high in summer and low in winter, which implies that photo-oxidation plays an important role in HCHO production in Seoul. In comparison with the Ozone Monitoring Instrument (OMI) measurements, the HCHOVCDs from OMI are lower than those from Pandora. The correlation coefficient (R) between monthly HCHOVCD values from Pandora and OMI is 0.61, with a slope of 0.35. Furthermore, to understand the HCHO mixing ratio within the Planetary Boundary Layer (PBL) in Seoul, we converted Pandora HCHOVCDs to HCHO mixing ratios in the PBL using meteorological input data from the Atmospheric InfraRed Sounder (AIRS). The seasonal HCHO mixing ratios in the PBL converted from Pandora (OMI) HCHOVCDs are estimated to be 6.57 (5.17), 7.08 (6.68), 7.60 (4.70), and 5.00 (4.76) ppbv in spring, summer, autumn, and winter, respectively.
Keywords: formaldehyde, OMI, Pandora, remote sensing
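A minimal sketch of the direct-sun conversion used above: the geometric air mass factor is approximately sec(SZA) (a plane-parallel approximation neglecting refraction and Earth curvature), and VCD = SCD / AMF_G; the slant column value and solar zenith angle below are illustrative.

```python
import numpy as np

def geometric_amf(sza_deg):
    """Geometric air mass factor for direct-sun observations: AMF_G = sec(SZA).
    Plane-parallel approximation, adequate for moderate solar zenith angles."""
    return 1.0 / np.cos(np.deg2rad(sza_deg))

def scd_to_vcd(scd, sza_deg):
    """Convert a slant column density to a vertical column density."""
    return scd / geometric_amf(sza_deg)

scd = 5.0e16            # molecules cm^-2, illustrative HCHO slant column
print(f"VCD = {scd_to_vcd(scd, sza_deg=60.0):.2e} molecules cm^-2")   # 2.50e+16
```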