Search results for: full-potential KKR-Green’s function method
21303 Critical Activity Effect on Project Duration in Precedence Diagram Method
Authors: Salman Ali Nisar, Koshi Suzuki
Abstract:
The Precedence Diagram Method (PDM), with its additional relationships between activities (i.e., start-to-start, finish-to-finish, and start-to-finish), provides a more flexible schedule than the traditional Critical Path Method (CPM). However, changing the duration of critical activities in a PDM network can have anomalous effects on the critical path. Researchers have proposed classifications of these critical activity effects. In this paper, we study these classifications further and describe them in more detail. Furthermore, we determine the maximum amount of time for each class of critical activity effect by which project managers can control the dynamic behavior (shortening/lengthening) of critical activities and the project duration more efficiently.
Keywords: construction project management, critical path method, project scheduling, precedence diagram method
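As a minimal illustration of how PDM relationships constrain early start times, the sketch below runs a forward pass over a small, hypothetical activity network with FS/SS/FF/SF links; the activities, durations, and lags are invented for illustration and are not taken from the paper.

```python
# Forward pass for a small PDM network (hypothetical data, for illustration only).
durations = {"A": 5, "B": 4, "C": 6, "D": 3}
# (predecessor, successor, relation, lag); relation is one of FS, SS, FF, SF
links = [("A", "B", "FS", 0), ("A", "C", "SS", 2), ("B", "D", "FF", 1), ("C", "D", "FS", 0)]

early_start = {a: 0 for a in durations}
for _ in durations:  # simple fixed-point iteration; enough passes for this acyclic network
    for pred, succ, rel, lag in links:
        es_p, d_p, d_s = early_start[pred], durations[pred], durations[succ]
        if rel == "FS":   bound = es_p + d_p + lag        # successor starts after predecessor finishes
        elif rel == "SS": bound = es_p + lag              # start-to-start
        elif rel == "FF": bound = es_p + d_p + lag - d_s  # finish-to-finish
        else:             bound = es_p + lag - d_s        # SF: successor finish after predecessor start

        early_start[succ] = max(early_start[succ], bound)

project_duration = max(early_start[a] + durations[a] for a in durations)
print(early_start, project_duration)
```

Shortening or lengthening a critical activity's duration and re-running such a pass shows directly how the project duration reacts, which is the effect classified in the paper.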
Procedia PDF Downloads 511
21302 The Bernstein Expansion for Exponentials in Taylor Functions: Approximation of Fixed Points
Authors: Tareq Hamadneh, Jochen Merker, Hassan Al-Zoubi
Abstract:
Bernstein's expansion for exponentials in Taylor functions provides lower and upper optimization values for the range of the original function. These values converge to the original function if the degree is elevated or the domain is subdivided. A Taylor polynomial can be applied so that the exponential becomes a polynomial of finite degree over a given domain. The Bernstein basis has two main properties: its sum equals 1, and it is positive for all x ∈ (0, 1). In this work, we prove the existence of fixed points for exponential functions in a given domain using the Bernstein optimization values. The Bernstein basis of finite degree T over a domain D is defined non-negatively. Any polynomial p of degree t can be expanded into the Bernstein form of maximum degree t ≤ T, where only the Bernstein coefficients need to be computed in order to bound the original polynomial. The main property is that p(x) is enclosed by the minimum and maximum Bernstein coefficients (the Bernstein bound). If this bound is contained in the given domain, we say that p(x) has fixed points in the same domain.
Keywords: Bernstein polynomials, stability of control functions, numerical optimization, Taylor function
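A minimal sketch of the Bernstein enclosure on [0, 1]: the power-basis coefficients below come from a truncated Taylor polynomial of exp(x), used purely as an illustrative example; the conversion b_i = Σ_{k≤i} [C(i,k)/C(n,k)] a_k is the standard degree-n Bernstein form on the unit interval.

```python
from math import comb, factorial

# Degree-4 Taylor polynomial of exp(x) about 0, in the power basis a_0..a_n (illustrative).
a = [1 / factorial(k) for k in range(5)]
n = len(a) - 1

# Bernstein coefficients on [0, 1]: b_i = sum_{k<=i} C(i,k)/C(n,k) * a_k
b = [sum(comb(i, k) / comb(n, k) * a[k] for k in range(i + 1)) for i in range(n + 1)]

lower, upper = min(b), max(b)   # Bernstein bound: lower <= p(x) <= upper on [0, 1]
print(b, lower, upper)
# If the interval [lower, upper] is contained in the domain, the enclosure certifies that
# p maps the domain into itself, which is the fixed-point criterion used in the abstract.
```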
Procedia PDF Downloads 135
21301 A Heuristic Approach for the General Flowshop Scheduling Problem to Minimize the Makespan
Authors: Mohsen Ziaee
Abstract:
Almost all existing research on flowshop scheduling problems focuses on permutation schedules, and few studies in the literature are dedicated to the general flowshop scheduling problem, since modeling and solving the general problem is more difficult than the permutation case, especially for large problem instances. This paper considers the general flowshop scheduling problem with the makespan objective (F//Cmax). We first find the optimal solution of the problem by solving a mixed integer linear programming model. An efficient heuristic method is then presented to solve the problem. An ant colony optimization algorithm is also proposed for the problem. In order to evaluate the performance of the methods, computational experiments are designed and performed. Numerical results show that the heuristic algorithm can produce reasonable solutions with low computational effort and even reaches optimal solutions in some cases.
Keywords: scheduling, general flow shop scheduling problem, makespan, heuristic
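For readers unfamiliar with the objective, the sketch below computes the makespan (Cmax) of a given job sequence in a flowshop; the processing-time matrix is made up for illustration, and the brute-force search only stands in for the paper's MILP/heuristic on this tiny instance.

```python
from itertools import permutations

# processing_times[job][machine], hypothetical 4-job, 3-machine instance
p = [[3, 6, 2], [5, 2, 4], [2, 4, 6], [4, 3, 3]]
m = len(p[0])

def makespan(seq):
    # completion[k] = completion time of the most recently scheduled job on machine k
    completion = [0] * m
    for job in seq:
        for k in range(m):
            start = max(completion[k], completion[k - 1] if k else 0)
            completion[k] = start + p[job][k]
    return completion[-1]

best = min(permutations(range(len(p))), key=makespan)
print(best, makespan(best))
```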
Procedia PDF Downloads 207
21300 Arginase Enzyme Activity in Human Serum as a Marker of Cognitive Function: The Role of Inositol in Combination with Arginine Silicate
Authors: Katie Emerson, Sara Perez-Ojalvo, Jim Komorowski, Danielle Greenberg
Abstract:
The purpose of this study was to evaluate arginase activity levels in response to combinations of an inositol-stabilized arginine silicate (ASI; Nitrosigine®), L-arginine, and Inositol. Arginine acts as a vasodilator that promotes increased blood flow resulting in enhanced delivery of oxygen and nutrients to the brain and other tissues. ASI alone has been shown to improve performance on cognitive tasks. Arginase, found in human serum, catalyzes the conversion of arginine to ornithine and urea, completing the last step in the urea cycle. Decreasing arginase levels maintains arginine and results in increased nitric oxide production. This study aimed to determine the most effective combination of ASI, L-arginine and inositol for minimizing arginase levels and therefore maximize ASI’s effect on cognition. Serum was taken from untreated healthy donors by separation from clotted factors. Arginase activity of serum in the presence or absence of test products was determined (QuantiChrom™, DARG-100, Bioassay Systems, Hayward CA). The remaining ultra-filtrated serum units were harvested and used as the source for the arginase enzyme. ASI alone or combined with varied levels of Inositol were tested as follows: ASI + inositol at 0.25 g, 0.5 g, 0.75 g, or 1.00 g. L-arginine was also tested as a positive control. All tests elicited changes in arginase activity demonstrating the efficacy of the method used. Adding L-arginine to serum from untreated subjects, with or without inositol only had a mild effect. Adding inositol at all levels reduced arginase activity. Adding 0.5 g to the standardized amount of ASI led to the lowest amount of arginase activity as compared to the 0.25g 0.75g or 1.00g doses of inositol or to L-arginine alone. The outcome of this study demonstrates an interaction of the pairing of inositol with ASI on the activity of the enzyme arginase. We found that neither the maximum nor minimum amount of inositol tested in this study led to maximal arginase inhibition. Since the inhibition of arginase activity is desirable for product formulations looking to maintain arginine levels, the most effective amount of inositol was deemed preferred. Subsequent studies suggest this moderate level of inositol in combination with ASI leads to cognitive improvements including reaction time, executive function, and concentration.Keywords: arginine, inositol, arginase, cognitive benefits
Procedia PDF Downloads 112
21299 Experimental Partial Discharge Localization for Internal Short Circuits of Transformers Windings
Authors: Jalal M. Abdallah
Abstract:
This paper presents experimental studies carried out on a three-phase transformer to investigate and develop transformer models that help in testing procedures and in describing and evaluating the dielectric condition of the transformer, including methods such as partial discharge (PD) localization in windings. The measurements are based on transfer function methods applied to transformer windings through frequency response analysis (FRA). A number of test conditions were applied to obtain the sensitivity of a transformer's frequency responses to different types of faults simulated in a particular phase. The frequency responses were analyzed for their sensitivity under the different test conditions in order to detect and identify incipient small faults, which are sources of PD. In more detail, the aim is to explain the applicability and sensitivity of advanced PD measurements for small short circuits and their localization. The experimental results presented in the paper will help in understanding the sensitivity of FRA measurements in detecting various types of internal winding short circuits in the transformer.
Keywords: frequency response analysis (FRA), measurements, transfer function, transformer
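As a rough sketch of the transfer-function idea behind FRA, the code below estimates a frequency response H(f) = Pxy/Pxx from an injected excitation and a measured output using Welch cross-spectral estimates; the signals here are synthetic placeholders (a simple resonant filter standing in for a winding), not measurement data from the paper.

```python
import numpy as np
from scipy.signal import csd, welch, lfilter

fs = 1.0e6                        # sampling rate in Hz (assumed)
t = np.arange(0, 0.05, 1 / fs)
rng = np.random.default_rng(0)
x = rng.standard_normal(t.size)   # injected broadband excitation (placeholder)
y = lfilter([0.05], [1.0, -1.9, 0.9549], x)   # stand-in "winding" response

f, Pxx = welch(x, fs=fs, nperseg=4096)
_, Pxy = csd(x, y, fs=fs, nperseg=4096)
H = Pxy / Pxx                     # estimated transfer function of the winding
magnitude_db = 20 * np.log10(np.abs(H))
print(f[:5], magnitude_db[:5])
# Comparing such curves recorded before and after a simulated fault is what reveals winding changes.
```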
Procedia PDF Downloads 281
21298 Optimization of Structures with Mixed Integer Non-linear Programming (MINLP)
Authors: Stojan Kravanja, Andrej Ivanič, Tomaž Žula
Abstract:
This contribution focuses on structural optimization in civil engineering using mixed integer non-linear programming (MINLP). MINLP is a versatile method that can handle continuous and discrete optimization variables simultaneously. Continuous variables are used to optimize parameters such as dimensions, stresses, masses, or costs, while discrete variables represent binary decisions that determine the presence or absence of structural elements within a structure and also select discrete materials and standard sections. The optimization process is divided into three main steps. First, a mechanical superstructure comprising a variety of topology, material, and dimensional alternatives is generated. Next, a MINLP model is formulated to encapsulate the optimization problem. Finally, an optimal solution is sought in the direction of the defined objective function while respecting the structural constraints. The economic or mass objective function of the material and labor costs of a structure is subject to constraints known from structural analysis. These constraints include equations for the calculation of internal forces and deflections, as well as equations for the dimensioning of structural components (in accordance with the Eurocode standards). Given the complex, non-convex, and highly non-linear nature of optimization problems in civil engineering, the Modified Outer-Approximation/Equality-Relaxation (OA/ER) algorithm is applied. This algorithm alternately solves non-linear programming (NLP) subproblems and mixed-integer linear programming (MILP) master problems, gradually refining the solution space towards the optimal solution. The NLP corresponds to the continuous optimization of parameters (with fixed topology, discrete materials, and standard dimensions, all determined in the previous MILP), while the MILP involves a global approximation to the superstructure of alternatives, in which a new topology, materials, and standard dimensions are determined. The optimization of a convex problem is stopped when the MILP solution becomes better than the best NLP solution; otherwise, it is terminated when the NLP solution can no longer be improved. While the OA/ER algorithm, like all other algorithms, does not guarantee global optimality in the presence of non-convex functions, various modifications, including convexity tests, are implemented in OA/ER to mitigate these difficulties. The effectiveness of the proposed MINLP approach is demonstrated by its application to various structural optimization tasks, such as mass optimization of steel buildings, cost optimization of timber halls, composite floor systems, etc. Special optimization models have been developed for the optimization of these structures. The MINLP optimizations, facilitated by the user-friendly software package MIPSYN, provide insight into mass- or cost-optimal solutions, optimal structural topologies, and optimal material and standard cross-section choices, confirming MINLP as a valuable method for the optimization of structures in civil engineering.
Keywords: MINLP, mixed-integer non-linear programming, optimization, structures
Procedia PDF Downloads 46
21297 Improvement of the Melon (Cucumis melo L.) through Genetic Gain and Discriminant Function
Authors: M. R. Naroui Rad, H. Fanaei, A. Ghalandarzehi
Abstract:
Morphological traits are vital for predicting melon yield. This research was performed with the objective of assessing the impact of nine morphological traits on the production of 20 melon landraces in the Sistan climatic region. Genetic variation was noted for all traits. Low genetic variance (9.66), together with a high genotype-by-environment interaction, led to low heritability (0.24) of yield. The broad-sense heritability of the traits included in the discriminant model was higher than that of yield. In this study, the five selected traits (number of fruits, fruit weight, fruit width, flesh diameter, and plant yield) can differentiate genotypes with high or low production, which demonstrates the significance of these five traits in plant breeding programs. The discriminant function of these five traits, particularly fruit weight, was employed as an overall index for identifying the landraces with the highest yield. 75% of the variation in yield can be explained by this index, and fruit weight also has a substantial correlation with total production (r = 0.72**). This index can be highly beneficial for selection in future breeding programs.
Keywords: melon, discriminant analysis, genetic components, yield, selection
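A minimal sketch of the discriminant-function idea, assuming hypothetical trait measurements: scikit-learn's linear discriminant analysis is fit on a few traits to separate high- and low-yield landraces. The numbers are invented, and the paper's actual selection index is not reproduced here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Columns: fruit number, fruit weight (kg), fruit width (cm) -- hypothetical landrace data
X = np.array([[3, 1.8, 14], [2, 2.4, 16], [4, 1.2, 12], [3, 2.1, 15],
              [5, 0.9, 11], [2, 2.6, 17], [4, 1.1, 12], [3, 2.0, 15]])
y = np.array([1, 1, 0, 1, 0, 1, 0, 1])   # 1 = high-yield group, 0 = low-yield group

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print(lda.coef_)                      # weights of the discriminant (selection) function
print(lda.predict([[3, 2.2, 15]]))    # classify a new landrace from its traits
```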
Procedia PDF Downloads 334
21296 On Hyperbolic Gompertz Growth Model (HGGM)
Authors: S. O. Oyamakin, A. U. Chukwu
Abstract:
We propose a Hyperbolic Gompertz Growth Model (HGGM), developed by introducing a stabilizing parameter called θ, via a hyperbolic sine function, into the classical Gompertz growth equation. The resulting integral solution, obtained deterministically, was reformulated as a statistical model and used to model the height and diameter of pines (Pinus caribaea). Its predictive ability was compared with that of the classical Gompertz growth model using goodness-of-fit tests and model selection criteria, an approach which mimics the natural variability of height/diameter increment with respect to age and therefore provides more realistic height/diameter predictions. The Kolmogorov-Smirnov and Shapiro-Wilk tests were used to check the normality assumption on the error term, while the independence of the error term was tested using the runs test. The mean function of top height/Dbh over age predicted the observed values of top height/Dbh more closely under the hyperbolic Gompertz growth model than under the source model (classical Gompertz growth model), and the results of R2, Adj. R2, MSE, and AIC confirmed the predictive power of the hyperbolic Gompertz growth model over its source model.
Keywords: height, Dbh, forest, Pinus caribaea, hyperbolic, gompertz
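The exact HGGM equation is not given in this abstract, so the sketch below assumes, purely for illustration, a variant in which the age axis of the classical Gompertz curve is warped by a θ·sinh term; the authors' actual formulation may differ, and the height data are synthetic. It shows how both models would be fitted and compared by AIC with SciPy.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    return a * np.exp(-b * np.exp(-c * t))

def hggm(t, a, b, c, theta):
    # Assumed form (illustration only): Gompertz with the age axis warped by theta*sinh(c*t).
    return a * np.exp(-b * np.exp(-c * (t + theta * np.sinh(c * t))))

rng = np.random.default_rng(1)
t = np.arange(1.0, 31.0)                                   # stand age in years (synthetic)
height = gompertz(t, 25.0, 3.0, 0.15) + rng.normal(0.0, 0.4, t.size)

def aic(model, popt):
    rss = np.sum((height - model(t, *popt)) ** 2)
    return t.size * np.log(rss / t.size) + 2 * len(popt)

p_g, _ = curve_fit(gompertz, t, height, p0=[20, 2, 0.1], maxfev=10000)
p_h, _ = curve_fit(hggm, t, height, p0=[20, 2, 0.1, 0.05],
                   bounds=(0, [100, 20, 1, 1]), maxfev=10000)
print("Gompertz AIC:", aic(gompertz, p_g), "  HGGM AIC:", aic(hggm, p_h))
```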
Procedia PDF Downloads 441
21295 Dynamic Reroute Modeling for Emergency Evacuation: Case Study of Brunswick City, Germany
Authors: Yun-Pang Flötteröd, Jakob Erdmann
Abstract:
Human behavior during evacuations is quite complex. One of the critical behaviors affecting the efficiency of an evacuation is route choice, so the corresponding simulation models need to function properly. In this paper, the current dynamic route modeling during evacuation in Simulation of Urban Mobility (SUMO), i.e. its rerouting functions, is examined with a real case study, and the consistency between the simulation results and reality is checked as well. Four influencing factors, (1) the time to receive information, (2) the probability of cancelling a trip, (3) the probability of using navigation equipment, and (4) the rerouting and information-updating period, are considered to analyze possible traffic impacts during the evacuation and to examine the rerouting functions in SUMO. Furthermore, some behavioral characteristics of the case study are analyzed using the corresponding detector data and applied in the simulation. The experimental results show that the dynamic route modeling in SUMO can handle the proposed scenarios properly. Some issues and functional needs related to route choice are discussed, and further improvements are suggested.
Keywords: evacuation, microscopic traffic simulation, rerouting, SUMO
Procedia PDF Downloads 194
21294 Next Generation Radiation Risk Assessment and Prediction Tools Generation Applying AI-Machine (Deep) Learning Algorithms
Authors: Selim M. Khan
Abstract:
Indoor air quality is strongly influenced by the presence of radioactive radon (222Rn) gas. Exposure to high 222Rn concentrations is unequivocally linked to DNA damage and lung cancer and is a worsening issue in North American and European built environments, having increased over time within newer housing stocks as a function of as yet unclear variables. Indoor radon concentration can be influenced by a wide range of environmental, structural, and behavioral factors. As some of these factors are quantitative while others are qualitative, no single statistical model can determine indoor radon levels precisely while simultaneously considering all these variables across a complex and highly diverse dataset. The ability of AI/machine (deep) learning to simultaneously analyze multiple quantitative and qualitative features makes it suitable for predicting radon with a high degree of precision. Using Canadian and Swedish long-term indoor radon exposure data, we use artificial deep neural network models with random weights and polynomial statistical models in MATLAB to assess and predict radon health risk to humans as a function of geospatial, human behavioral, and built-environment metrics. Our initial artificial neural network with random weights, run with sigmoid activation, tested different combinations of variables and showed the highest prediction accuracy (>96%) within a reasonable number of iterations. Here, we present details of these emerging methods and discuss their strengths and weaknesses compared to the traditional artificial neural network and statistical methods commonly used to predict indoor air quality in different countries. We propose an artificial deep neural network with random weights as a highly effective method for assessing and predicting indoor radon.
Keywords: radon, radiation protection, lung cancer, AI-machine deep learning, risk assessment, risk prediction, Europe, North America
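The paper's models are built in MATLAB; as a language-agnostic illustration of the same idea (mixed quantitative and categorical predictors feeding a sigmoid-activated neural network), the sketch below trains a small scikit-learn multilayer perceptron on synthetic features, which are invented placeholders rather than the Canadian/Swedish data.

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 500
X_num = rng.normal(size=(n, 3))            # e.g. house age, soil uranium, floor area (synthetic)
X_cat = rng.integers(0, 3, size=(n, 1))    # e.g. foundation type (synthetic categorical)
radon = 50 + 30 * X_num[:, 0] - 20 * X_num[:, 1] + 15 * (X_cat[:, 0] == 2) + rng.normal(0, 5, n)

X = np.hstack([X_num, X_cat])
prep = ColumnTransformer([("num", StandardScaler(), [0, 1, 2]),
                          ("cat", OneHotEncoder(), [3])])
model = make_pipeline(prep, MLPRegressor(hidden_layer_sizes=(32, 16), activation="logistic",
                                         max_iter=2000, random_state=0))
model.fit(X, radon)
print("R^2 on training data:", model.score(X, radon))
```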
Procedia PDF Downloads 96
21293 Realistic Testing Procedure of Power Swing Blocking Function in Distance Relay
Authors: Farzad Razavi, Behrooz Taheri, Mohammad Parpaei, Mehdi Mohammadi Ghalesefidi, Siamak Zarei
Abstract:
As one of the major problems in protecting large power systems, power swing and its effect on distance protection have caused a great deal of damage to energy transmission systems in many parts of the world. Power swing has therefore gained the attention of many researchers, which has led to the invention of different methods for power swing detection. The power swing detection algorithm is highly important in a distance relay, but protection relays must also meet general requirements such as correct fault detection, response speed, and minimization of disturbances in the power system. To ensure these requirements are met, protection relays need various tests during the development, setup, maintenance, configuration, and troubleshooting stages. This paper examines the power swing scheme of a modern numerical protection relay, the 7SA522, to address the effect of different fault types on the power swing blocking function. In this study, it was shown that different fault types occurring during a power swing lead to different unblocking times for the distance relay.
Keywords: power swing, distance relay, power system protection, relay test, transient in power system
Procedia PDF Downloads 386
21292 Gold Nanoparticle: Synthesis, Characterization, Clinico-Pathological, Pathological and Bio-Distribution Studies in Rabbits
Authors: M. M. Bashandy, A. R. Ahmed, M. El-Gaffary, Sahar S. Abd El-Rahman
Abstract:
This study evaluated the acute toxicity and tissue distribution of intravenously administered gold nanoparticles (AuNPs) in male rabbits. Rabbits were exposed to a single dose of AuNPs (300 µg/kg). Toxic effects were assessed via general behavior, hematological parameters, serum biochemical parameters, and histopathological examination of various organs. Tissue distribution of AuNPs was evaluated at a dose of 300 µg/kg in male rabbits. Inductively coupled plasma-mass spectrometry (ICP-MS) was used to determine gold concentrations in tissue samples collected at predetermined time intervals. After one week, AuNPs exerted no obvious acute toxicity in rabbits. However, inflammatory reactions in lung and liver cells were induced in rabbits treated at the 300 µg/kg dose level. The highest gold levels were found in the spleen, followed by the liver, lungs, and kidneys. These results indicate that AuNPs can be distributed extensively to various tissues in the body, but primarily to the spleen and liver.
Keywords: gold nanoparticles, toxicity, pathology, hematology, liver function, kidney function
Procedia PDF Downloads 335
21291 Adsorption of Chromium Ions from Aqueous Solution by Carbon Adsorbent
Authors: S. Heydari, H. Sharififard, M. Nabavinia, H. Kiani, M. Parvizi
Abstract:
Rapid industrialization has led to increased disposal of heavy metals into the environment. Activated carbon adsorption has proven to be an effective process for the removal of trace metal contaminants from aqueous media. This paper investigated the chromium adsorption efficiency of commercial activated carbon. Sorption was studied as a function of activated carbon particle size, activated carbon dose, and initial pH of the solution. Adsorption tests for the effects of these factors were designed with the Taguchi approach. Following the Taguchi parameter design methodology, an L9 orthogonal array was used. Analysis of the experimental results showed that the most influential factor was the initial pH of the solution. The optimum conditions for chromium adsorption by activated carbon were found to be an initial feed pH of 6, an adsorbent particle size of 0.412 mm, and an activated carbon dose of 6 g/l. Under these conditions, nearly 100% of the chromium ions were adsorbed by the activated carbon after 2 hours.
Keywords: chromium, adsorption, Taguchi method, activated carbon
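A minimal sketch of how an L9 Taguchi design is analyzed: for each factor level, the mean "larger-is-better" signal-to-noise ratio of the removal efficiency is computed, and the factor with the widest spread of level means is ranked most influential. The removal percentages below are invented placeholders, not the paper's measurements.

```python
import numpy as np

# L9 orthogonal array for three 3-level factors: pH, particle size, carbon dose
L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
               [1, 0, 1], [1, 1, 2], [1, 2, 0],
               [2, 0, 2], [2, 1, 0], [2, 2, 1]])
removal = np.array([62, 70, 75, 81, 88, 79, 95, 90, 97])   # % removal, hypothetical

sn = 10 * np.log10(removal.astype(float) ** 2)   # larger-is-better S/N for a single replicate

for f, name in enumerate(["pH", "particle size", "dose"]):
    level_means = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
    print(name, "level S/N means:", np.round(level_means, 2),
          "range:", round(max(level_means) - min(level_means), 2))
# The factor with the largest range of level means is the most influential one.
```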
Procedia PDF Downloads 400
21290 Electrical and Magnetoelectric Properties of (y)Li0.5Ni0.7Zn0.05Fe2O4 + (1-y)Ba0.5Sr0.5TiO3 Magnetoelectric Composites
Authors: S. U. Durgadsimi, S. Chougule, S. Bellad
Abstract:
(y) Li0.5Ni0.7Zn0.05Fe2O4 + (1-y) Ba0.5Sr0.5TiO3 magnetoelectric composites with y = 0.1, 0.3 and 0.5 were prepared by a conventional standard double-sintering ceramic technique. X-ray diffraction analysis confirmed the phase formation of the ferrite, the ferroelectric and their composites. Plots of log ρdc vs. 1/T reveal that the dc resistivity decreases with increasing temperature, exhibiting semiconductor behavior. The plots of log σac vs. log ω² are almost linear, indicating that the conductivity increases with increasing frequency, i.e., conduction in the composites is due to small polaron hopping. The dielectric constant (ε′) and dielectric loss (tan δ) were studied as a function of frequency in the range 100 Hz to 1 MHz, revealing normal dielectric behavior except for the composite with y = 0.1, and as a function of temperature at four fixed frequencies (100 Hz, 1 kHz, 10 kHz, 100 kHz). The ME voltage coefficient decreases with increasing ferrite content and reached a maximum of about 7.495 mV/cm·Oe for the (0.1) Li0.5Ni0.7Zn0.05Fe2O4 + (0.9) Ba0.5Sr0.5TiO3 composite.
Keywords: XRD, dielectric constant, dielectric loss, DC and AC conductivity, ME voltage coefficient
Procedia PDF Downloads 344
21289 An Agent-Based Modelling Simulation Approach to Calculate Processing Delay of GEO Satellite Payload
Authors: V. Vicente E. Mujica, Gustavo Gonzalez
Abstract:
The global coverage of broadband multimedia and internet-based services in terrestrial-satellite networks is of particular interest to satellite providers seeking to deliver services with low latencies and high signal quality to diverse users. In particular, the on-board processing delay is an inherent source of latency in satellite communication that is sometimes neglected in the end-to-end delay of the satellite link. The framework of this paper includes modelling an on-orbit satellite payload using an agent model that can reproduce the properties of processing delays. In essence, a comparison of different spatial interpolation methods is carried out to evaluate physical data obtained from a GEO satellite in order to define a discretization function for determining that delay. Furthermore, the performance of the proposed agent and the developed delay discretization function are validated together by simulating a hybrid satellite and terrestrial network. Simulation results show high accuracy with respect to the characteristics of the initial processing-delay data points for Ku band.
Keywords: terrestrial-satellite networks, latency, on-orbit satellite payload, simulation
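To illustrate the comparison of spatial interpolation methods mentioned above, the sketch below interpolates a scattered set of samples (e.g. operating-point coordinates mapped to processing delay) with SciPy's griddata using nearest, linear and cubic schemes; the sample values are synthetic stand-ins for the payload data.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(3)
points = rng.uniform(0, 1, size=(40, 2))                            # e.g. normalized load and temperature
delay = 2.0 + 3.0 * points[:, 0] ** 2 + np.sin(4 * points[:, 1])    # synthetic processing delay (ms)

grid_x, grid_y = np.mgrid[0:1:50j, 0:1:50j]
truth = 2.0 + 3.0 * grid_x ** 2 + np.sin(4 * grid_y)
for method in ("nearest", "linear", "cubic"):
    z = griddata(points, delay, (grid_x, grid_y), method=method)
    err = np.nanmean(np.abs(z - truth))
    print(method, "mean abs. error on grid:", round(float(err), 4))
# The best-performing scheme would then be discretized into a delay lookup function for the agent.
```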
Procedia PDF Downloads 271
21288 Quality Assurance for the Climate Data Store
Authors: Judith Klostermann, Miguel Segura, Wilma Jans, Dragana Bojovic, Isadora Christel Jimenez, Francisco Doblas-Reyes, Judit Snethlage
Abstract:
The Climate Data Store (CDS), developed by the Copernicus Climate Change Service (C3S) implemented by the European Centre for Medium-Range Weather Forecasts (ECMWF) on behalf of the European Union, is intended to become a key instrument for exploring climate data. The CDS contains both raw and processed data to provide information to the users about the past, present and future climate of the earth. It allows for easy and free access to climate data and indicators, presenting an important asset for scientists and stakeholders on the path for achieving a more sustainable future. The C3S Evaluation and Quality Control (EQC) is assessing the quality of the CDS by undertaking a comprehensive user requirement assessment to measure the users’ satisfaction. Recommendations will be developed for the improvement and expansion of the CDS datasets and products. User requirements will be identified on the fitness of the datasets, the toolbox, and the overall CDS service. The EQC function of the CDS will help C3S to make the service more robust: integrated by validated data that follows high-quality standards while being user-friendly. This function will be closely developed with the users of the service. Through their feedback, suggestions, and contributions, the CDS can become more accessible and meet the requirements for a diverse range of users. Stakeholders and their active engagement are thus an important aspect of CDS development. This will be achieved with direct interactions with users such as meetings, interviews or workshops as well as different feedback mechanisms like surveys or helpdesk services at the CDS. The results provided by the users will be categorized as a function of CDS products so that their specific interests will be monitored and linked to the right product. Through this procedure, we will identify the requirements and criteria for data and products in order to build the correspondent recommendations for the improvement and expansion of the CDS datasets and products.Keywords: climate data store, Copernicus, quality, user engagement
Procedia PDF Downloads 146
21287 Effects of Heart Rate Variability Biofeedback to Improve Autonomic Nerve Function, Inflammatory Response and Symptom Distress in Patients with Chronic Kidney Disease: A Randomized Control Trial
Authors: Chia-Pei Chen, Yu-Ju Chen, Yu-Juei Hsu
Abstract:
The prevalence and incidence of end-stage renal disease in Taiwan rank the highest in the world. According to a statistical survey of the Ministry of Health and Welfare in 2019, kidney disease is the ninth leading cause of death in Taiwan. It leads to autonomic dysfunction, inflammatory responses and symptom distress, further damages the structure and function of the kidneys, and increases the demand for renal replacement therapy and the risk of cardiovascular disease, with associated medical costs for society. Intervening in a feasible manner to effectively regulate the autonomic nerve function of CKD patients, reduce the inflammatory response and symptom distress, and thereby slow the progression of the disease is therefore a main goal of caring for these patients. This study aims to test the effect of heart rate variability biofeedback (HRVBF) on improving autonomic nerve function (heart rate variability, HRV), the inflammatory response (interleukin-6 [IL-6], C-reactive protein [CRP]) and symptom distress (Piper Fatigue Scale, Pittsburgh Sleep Quality Index [PSQI], and Beck Depression Inventory-II [BDI-II]) in patients with chronic kidney disease. This was an experimental study with convenience sampling. Participants were recruited from the nephrology clinic of a medical center in northern Taiwan. After signing informed consent, participants were randomly assigned to the HRVBF or control group using the Excel BINOMDIST function. The HRVBF group received four weekly hospital-based HRVBF training sessions plus 8 weeks of home-based self-practice with a StressEraser device. The control group received usual care. All participants were followed for 3 months, with repeated measurements of autonomic nerve function (HRV), inflammatory response (IL-6, CRP) and symptom distress (Piper Fatigue Scale, PSQI, and BDI-II) on the first day of study participation (baseline) and at 1 month and 3 months after the intervention to test the effects of HRVBF. The results were analyzed with SPSS version 23.0. Demographics, HRV, IL-6, CRP, Piper Fatigue Scale, PSQI, and BDI-II data were summarized with descriptive statistics, and differences between and within groups in all outcome variables were tested with paired-sample t-tests, independent-sample t-tests, the Wilcoxon signed-rank test and the Mann-Whitney U test. Results: Thirty-four patients with chronic kidney disease were enrolled, three of whom were lost to follow-up. The remaining 31 patients completed the study, including 15 in the HRVBF group and 16 in the control group. The characteristics of the two groups were not significantly different. The four-week hospital-based HRVBF training combined with eight weeks of home-based self-practice effectively enhanced parasympathetic performance in patients with chronic kidney disease, which may counteract the disease-related parasympathetic inhibition. For the inflammatory response, IL-6 and CRP in the HRVBF group did not improve significantly compared with the control group. Self-reported fatigue and depression decreased significantly within the HRVBF group but did not reach a significant difference between the two groups. HRVBF had no significant effect on improving sleep quality in CKD patients.
Keywords: heart rate variability biofeedback, autonomic nerve function, inflammatory response, symptom distress, chronic kidney disease
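As background on the HRV outcome measure, the sketch below computes two common time-domain indices (SDNN and RMSSD, the latter often used as a proxy for parasympathetic activity) from a short series of RR intervals; the interval values are synthetic and unrelated to the trial data.

```python
import numpy as np

# RR intervals in milliseconds (synthetic example, not trial data)
rr = np.array([812, 798, 824, 840, 815, 790, 805, 830, 845, 820, 800, 818])

sdnn = np.std(rr, ddof=1)                       # overall HRV (ms)
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))      # beat-to-beat HRV, parasympathetic proxy (ms)
mean_hr = 60000.0 / rr.mean()                   # mean heart rate (beats per minute)

print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms, mean HR = {mean_hr:.1f} bpm")
```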
Procedia PDF Downloads 180
21286 Implementation of a Method of Crater Detection Using Principal Component Analysis in FPGA
Authors: Izuru Nomura, Tatsuya Takino, Yuji Kageyama, Shin Nagata, Hiroyuki Kamata
Abstract:
We propose a method for detecting craters in images of the lunar surface captured by a small space probe, using principal component analysis (PCA). Nevertheless, considering the severe environment of space, it is impossible to use a generic computer in practice. Accordingly, we have to implement the method in an FPGA. This paper compares the FPGA and a generic computer in terms of the processing time of the PCA-based crater detection method.
Keywords: crater, PCA, eigenvector, strength value, FPGA, processing time
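One common way to use PCA for detection, sketched below under the assumption that crater-like patches lie close to a low-dimensional subspace learned from crater examples: a candidate patch is scored by its reconstruction error after projection onto the principal components. The patch data are random placeholders, and this scoring rule is an assumption rather than the authors' exact "strength value".

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
crater_patches = rng.normal(size=(200, 16 * 16))      # flattened 16x16 training patches (placeholder)
pca = PCA(n_components=8).fit(crater_patches)          # low-dimensional "crater" subspace

def score(patch):
    # Reconstruction error: a small error means the patch resembles the learned crater appearance.
    recon = pca.inverse_transform(pca.transform(patch.reshape(1, -1)))
    return float(np.linalg.norm(patch - recon.ravel()))

candidate = rng.normal(size=16 * 16)
print("reconstruction error:", round(score(candidate), 2))
```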
Procedia PDF Downloads 555
21285 Research and Design of Functional Mixed Community: A Model Based on the Construction of New Districts in China
Authors: Wu Chao
Abstract:
At the city-planning level, the urban design of new districts in China differs from that of existing cities such as Beijing, Shanghai and Guangzhou, whose urban problems are shared by many big cities around the world. The goal of new district construction is to enable people to live comfortably, to improve the well-being of residents, and to create a way of life different from that of other urban communities. To avoid the emergence of super communities, "decentralization" is taken as the overall planning idea, and the function and form of each community are set up with a homogeneous allocation of resources so that the community can grow naturally. Similar to the growth of vines in nature, each community group is independent and connected to the others through roads, with clear community boundaries that limit unlimited expansion. Taking a community of 20,000 people as a case, the community is a mixture of living, production, office, entertainment, and other functions. Building on the Internet, more space is created for public use, and data can be used to allocate resources in real time; this kind of shared space forms the main part of the activity space in the community. At the same time, the transformation of spatial functions can be determined from usage feedback on the existing spaces, so the use of a space can be changed in response to changing data. The residential unit is taken as the basic building block: the lower three to four floors of each building serve as the main flexible space, hosting functions such as entertainment, services and offices, while the upper floors are living space, with a small amount of indoor and outdoor activity space that is also used as shared space. The transformable spaces on the lower floors are evenly distributed and, combined with the walking routes connecting the community, form a service and entertainment network across the whole community that can be used in most of the community space. With the basic residential unit as a replicable module, the design of the other residential units follows the idea of decentralization and the concept of the vine community, and the various units are combined in a reasonable way. A small number of office buildings are added to meet special office needs. In future construction, this new functional mixed community can address many problems of the present city while maintaining its vitality through the adjustment capability of the Internet.
Keywords: decentralization, mixed functional community, shared space, spatial usage data
Procedia PDF Downloads 123
21284 Developing and Shake Table Testing of Semi-Active Hydraulic Damper as Active Interaction Control Device
Authors: Ming-Hsiang Shih, Wen-Pei Sung, Shih-Heng Tung
Abstract:
Semi-active control systems for structures under earthquake excitation are adaptable and require little energy. A Displacement Semi-Active Hydraulic Damper (DSHD) was previously developed by our research team, and shake table tests of this DSHD installed in a full-scale test structure demonstrated that the device brings its energy-dissipating performance into full play under earthquake excitation. The objective of this research is to develop a new Active Interaction Control device (AIC) and to examine its energy-dissipation capability in shake table tests. The proposed AIC converts an improved DSHD into an AIC through the addition of an accumulator. The main concept of this energy-dissipating AIC is to use the interaction between an affiliated structure (sub-structure) and the protected structure (main structure) to transfer the input seismic force into the sub-structure and thereby reduce the structural deformation of the main structure. This concept is tested using a full-scale multi-degree-of-freedom test structure equipped with the proposed AIC and subjected to external forces of various magnitudes, in order to examine the shock-absorption influence of predictive control, sub-structure stiffness, synchronous control, non-synchronous control and insufficient control position. The test results confirm that: (1) the developed device can diminish the structural displacement and acceleration responses effectively; (2) the shock absorption of the low-precision semi-active control method achieved about twice the seismic-proofing efficacy of the passive control method; (3) the active control method does not exert the negative influence of amplifying the acceleration response of the structure; (4) this AIC suffers from a time-delay problem, as ordinary active control methods do, and the proposed predictive control method can overcome this defect; (5) the condition switch is an important characteristic of the control type, and the test results show that synchronous control is very easy to implement and avoids exciting high-frequency response. These laboratory results confirm that the developed device can exploit the mutual interaction between the subordinate structure and the main structure to be protected, transferring the earthquake energy applied to the main structure into the subordinate structure, so that the objective of minimizing the deformation of the main structure can be achieved.
Keywords: DSHD (Displacement Semi-Active Hydraulic Damper), AIC (Active Interaction Control Device), shake table test, full scale structure test, sub-structure, main-structure
Procedia PDF Downloads 519
21283 Adequate Dietary Intake to Improve Outcome of Urine: Urea Nitrogen with Balance Nitrogen and Total Lymphocyte Count
Authors: Mardiana Madjid, Nurpudji Astuti Taslim, Suryani As'ad, Haerani Rasyid, Agussalim Bukhari
Abstract:
A high level of urine urea nitrogen (UUN) indicates that hypercatabolism is occurring in hospitalized patients, while a high total lymphocyte count (TLC) indicates a competent immune system, adequate wound healing, and fewer complications. Adequate dietary intake helps to reduce the hypercatabolic state in hospitalized patients. Nitrogen balance (NB) is simply the difference between nitrogen (N₂) intake and output; if intake exceeds output, the NB is positive and the patient is in an anabolic state. This study aims to evaluate the effect of dietary intake on nitrogen balance and total lymphocyte count. Method: A total of 43 patients admitted to Wahidin Sudirohusodo Hospital between 2018 and 2019 and treated for 10 days were included. The inclusion criteria were patients who were treated for 10 days and received food from the hospital orally, did not experience gastrointestinal disorders such as vomiting and diarrhea, did not have impaired kidney or liver function, and agreed to participate in this study. During hospitalization, food intake, UUN, serum albumin, nitrogen balance, and TLC were assessed twice, on day 1 and day 10. There was no clinical nutrition intervention by a physician to correct food intake. UUN was measured in 24-hour urine collected on the second day after admission and on the tenth day. Statistical analysis used SPSS 24 with an observational cohort design. Result: Forty-three participants completed the follow-up (27 men and 18 women); 22 were younger than 45 years, 16 were 45 to 60 years old, and 4 were over 60 years. On day 1, the numbers of patients with SGA scores A, B and C were 8, 32 and 3, respectively, and on day 10 they were 8, 31 and 4. According to 24-h dietary recalls, energy intake during the observation period rose from 522.5 ± 400.4 to 1011.9 ± 545.1 kcal/day (P < 0.05), protein intake from 20.07 ± 17.2 to 40.3 ± 27.3 g/day (P < 0.05), carbohydrate intake from 92.5 ± 71.6 to 184.8 ± 87.4 g/day, and fat intake from 5.5 ± 3.86 to 13.9 ± 13.9 g/day. UUN changed from 6.6 ± 7.3 to 5.5 ± 3.9 g/day, TLC decreased from 1622.9 ± 897.2 to 1319.9 ± 636.3/mm³ (target value 1800/mm³), serum albumin changed from 3.07 ± 0.76 to 2.9 ± 0.57 g/dl, and NB improved from -7.5 ± 7.2 to -3.1 ± 4.86. Conclusion: In patients with high UUN levels, dietary intake needs to be corrected to an adequate level to improve NB and TLC during hospitalization.
Keywords: adequate dietary intake, balance nitrogen, total lymphocyte count, urine urea nitrogen
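Nitrogen balance is typically estimated from protein intake and 24-hour UUN; the sketch below uses the common clinical approximation NB = (protein / 6.25) − (UUN + 4), where 6.25 converts protein to nitrogen and roughly 4 g/day accounts for non-urea and insensible losses. The constant 4 is a conventional assumption rather than a figure stated in the paper.

```python
def nitrogen_balance(protein_g_per_day: float, uun_g_per_day: float,
                     insensible_losses_g: float = 4.0) -> float:
    """Estimated nitrogen balance (g N/day) from protein intake and 24-h urine urea nitrogen."""
    nitrogen_in = protein_g_per_day / 6.25          # dietary protein is ~16% nitrogen
    nitrogen_out = uun_g_per_day + insensible_losses_g
    return nitrogen_in - nitrogen_out

# Example with the day-1 mean values reported in the abstract (protein 20.07 g, UUN 6.6 g):
print(round(nitrogen_balance(20.07, 6.6), 2))   # about -7.4, close to the reported mean NB of -7.5
```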
Procedia PDF Downloads 124
21282 MapReduce Logistic Regression Algorithms with RHadoop
Authors: Byung Ho Jung, Dong Hoon Lim
Abstract:
Logistic regression is a statistical method for analyzing a dataset in which one or more independent variables determine an outcome. It is used extensively in numerous disciplines, including the medical and social sciences. In this paper, we address the problem of estimating the parameters of a logistic regression based on the MapReduce framework with RHadoop, which integrates R and the Hadoop environment and is applicable to large-scale data. There are three learning algorithms for logistic regression, namely the gradient descent method, the cost minimization method and the Newton-Raphson method. The Newton-Raphson method does not require a learning rate, while the gradient descent and cost minimization methods need a manually chosen learning rate. The experimental results demonstrated that our learning algorithms using RHadoop can scale well and efficiently process large datasets on commodity hardware. We also compared the performance of our Newton-Raphson method with the gradient descent and cost minimization methods. The results showed that our Newton-Raphson method was the most robust across all the data tested.
Keywords: big data, logistic regression, MapReduce, RHadoop
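The paper's implementation runs on RHadoop; as a single-machine illustration of the Newton-Raphson update it describes (which needs no learning rate), the sketch below fits a logistic regression in NumPy on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 3
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, d))])   # design matrix with intercept
true_beta = np.array([0.5, 1.0, -2.0, 0.75])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

beta = np.zeros(d + 1)
for _ in range(25):                                  # Newton-Raphson iterations
    p = 1 / (1 + np.exp(-X @ beta))                  # predicted probabilities
    W = p * (1 - p)                                  # diagonal weights for the Hessian
    gradient = X.T @ (y - p)
    hessian = X.T @ (X * W[:, None])
    step = np.linalg.solve(hessian, gradient)
    beta += step
    if np.max(np.abs(step)) < 1e-8:
        break

print(np.round(beta, 3))   # should be close to true_beta
```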
Procedia PDF Downloads 285
21281 Impact Evaluation of Discriminant Analysis on Epidemic Protocol in Warships’s Scenarios
Authors: Davi Marinho de Araujo Falcão, Ronaldo Moreira Salles, Paulo Henrique Maranhão
Abstract:
Disruption Tolerant Networks (DTNs) are an evolution of Mobile Ad hoc Networks (MANETs) and work well in scenarios where nodes are sparsely distributed, density is low, connections are intermittent and an end-to-end infrastructure cannot be guaranteed. DTNs are therefore recommended for high-latency applications that can last from hours to days. The maritime scenario has mobility characteristics that favor a DTN approach, but concern with data security is also a relevant aspect in such scenarios. Continuing previous work, which evaluated the performance of some DTN protocols (Epidemic, Spray and Wait, and Direct Delivery) in three warship scenarios and proposed discriminant analysis as a classification technique for secure connections in the Epidemic protocol, the current article proposes a new analysis of the directional discriminant function with opening angles smaller than 90 degrees, demonstrating that increased directivity leads the directional discriminant Epidemic protocol to select a greater number of secure connections.
Keywords: DTN, discriminant function, epidemic protocol, security, tactical messages, warship scenario
Procedia PDF Downloads 191
21280 Stoa: Urban Community-Building Social Experiment through Mixed Reality Game Environment
Authors: Radek Richtr, Petr Pauš
Abstract:
Social media nowadays connects people more tightly and intensively than ever, but at the same time a sort of social distance, incomprehension and loss of social integrity appears. People can be strongly connected to a person on the other side of the world yet unaware of neighbours in the same district or street. Stoa is an application of the "serious games" genre: a research augmented-reality experiment masked as a gaming environment. In the Stoa environment, the player can plant and grow a virtual (organic) structure, a Pillar, that represents a whole suburb. Everybody has their own idea of what is an acceptable, admirable or harmful visual intervention in the area they live in; the purpose of this research experiment is to find and/or define the residents' shared subconscious spirit, the genius loci of the Pillar's vicinity where the residents live. The appearance and evolution of Stoa's Pillars reflect the real world as perceived not only by the creator but also by other residents/players, who refine the environment with their actions. Squares, parks, patios and streets get living avatar depictions; investors and urban planners obtain information on the occurrence and level of motivation for reshaping the public space. As the project is in the product conceptual design phase, its function is one of the most important factors. Function-based modelling makes the design problem modular and structured and thus decomposes it into sub-functions or function cells. The paper discusses the current conceptual model for the Stoa project, the use of different organic structure textures and models, the user interface design, a UX study, and the project's development towards its final state.
Keywords: augmented reality, urban computing, interaction design, mixed reality, social engineering
Procedia PDF Downloads 228
21279 The Role of Mobile Applications on Consumerism Case Study: Snappfood Application
Authors: Vajihe Fasihi
Abstract:
With the advancement of technology and the expansion of the Internet, a significant change in lifestyle and consumption can be seen in societies. The increasing number of mobile applications (such as SnappFood) has expanded the use of apps to give citizens wider access to services, meeting the needs of a large number of citizens in the shortest time and with reasonable quality. This article first seeks to understand the concept and function of Internet-based distribution networks in Iranian society, investigated in a small sample (students of the Faculty of Social Sciences of the University of Tehran) using the semi-structured interview method, and then explores the concept of consumerism. The main issue of this research is the effect of mobile apps, especially SnappFood, on increasing consumption and the difference between real needs and false needs among consumers. The findings show that the use of the mentioned application has been effective in increasing the false needs of the sample community and has led to the phenomenon of consumerism.
Keywords: consumerism economics, false needs, mobile applications, real needs
Procedia PDF Downloads 57
21278 Development of Probability Distribution Models for Degree of Bending (DoB) in Chord Member of Tubular X-Joints under Bending Loads
Authors: Hamid Ahmadi, Amirreza Ghaffari
Abstract:
Fatigue life of tubular joints in offshore structures is not only dependent on the value of hot-spot stress, but is also significantly influenced by the through-the-thickness stress distribution characterized by the degree of bending (DoB). The DoB exhibits considerable scatter calling for greater emphasis in accurate determination of its governing probability distribution which is a key input for the fatigue reliability analysis of a tubular joint. Although the tubular X-joints are commonly found in offshore jacket structures, as far as the authors are aware, no comprehensive research has been carried out on the probability distribution of the DoB in tubular X-joints. What has been used so far as the probability distribution of the DoB in reliability analyses is mainly based on assumptions and limited observations, especially in terms of distribution parameters. In the present paper, results of parametric equations available for the calculation of the DoB have been used to develop probability distribution models for the DoB in the chord member of tubular X-joints subjected to four types of bending loads. Based on a parametric study, a set of samples was prepared and density histograms were generated for these samples using Freedman-Diaconis method. Twelve different probability density functions (PDFs) were fitted to these histograms. The maximum likelihood method was utilized to determine the parameters of fitted distributions. In each case, Kolmogorov-Smirnov test was used to evaluate the goodness of fit. Finally, after substituting the values of estimated parameters for each distribution, a set of fully defined PDFs have been proposed for the DoB in tubular X-joints subjected to bending loads.Keywords: tubular X-joint, degree of bending (DoB), probability density function (PDF), Kolmogorov-Smirnov goodness-of-fit test
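To make the fitting procedure concrete, the sketch below fits several candidate distributions to a sample of DoB values by maximum likelihood and checks each with a Kolmogorov-Smirnov test using SciPy; the DoB sample here is randomly generated for illustration, not taken from the parametric-equation database used in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
dob = rng.beta(6.0, 3.0, size=500)          # synthetic stand-in for DoB samples, values in (0, 1)

candidates = {"beta": stats.beta, "lognorm": stats.lognorm, "weibull_min": stats.weibull_min}
for name, dist in candidates.items():
    params = dist.fit(dob)                  # maximum likelihood estimation of the parameters
    ks_stat, p_value = stats.kstest(dob, name, args=params)
    print(f"{name:12s} KS statistic = {ks_stat:.4f}, p-value = {p_value:.3f}")
# The distribution with the smallest KS statistic / acceptable p-value is retained as the PDF model.
```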
Procedia PDF Downloads 719
21277 An Optimized Method for 3D Magnetic Navigation of Nanoparticles inside Human Arteries
Authors: Evangelos G. Karvelas, Christos Liosis, Andreas Theodorakakos, Theodoros E. Karakasidis
Abstract:
In the present work, a numerical method is presented for estimating the appropriate gradient magnetic fields for optimally driving particles into a desired area inside the human body. The proposed method combines computational fluid dynamics (CFD), the discrete element method (DEM) and the covariance matrix adaptation (CMA) evolution strategy for the magnetic navigation of nanoparticles. It is based on an iterative procedure that aims to eliminate the deviation of the nanoparticles from a desired path: the gradient magnetic field is constantly adjusted so that the particles follow the desired trajectory as closely as possible. Using the proposed method, it becomes clear that the particle diameter is a crucial parameter for efficient navigation, and that increasing the particle diameter decreases the deviation from the desired path. Moreover, the navigation method can steer nanoparticles into the desired areas with an efficiency of approximately 99%.
Keywords: computational fluid dynamics, CFD, covariance matrix adaptation evolution strategy, discrete element method, DEM, magnetic navigation, spherical particles
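As a sketch of how a CMA evolution strategy can tune field parameters to reduce path deviation, the code below uses the pycma package to minimize a toy deviation function of three gradient-field coefficients; the objective function is a made-up stand-in for the coupled CFD/DEM simulation, and the "target" coefficients are arbitrary.

```python
import numpy as np
import cma  # pycma package: pip install cma

target = np.array([0.8, -0.3, 0.5])   # field coefficients that would keep particles on the path (toy)

def path_deviation(field_coeffs):
    # Stand-in for the CFD/DEM evaluation: mean deviation of particles from the desired trajectory.
    return float(np.sum((np.asarray(field_coeffs) - target) ** 2))

es = cma.CMAEvolutionStrategy([0.0, 0.0, 0.0], 0.5)   # initial guess and initial step size
while not es.stop():
    solutions = es.ask()                               # candidate gradient-field settings
    es.tell(solutions, [path_deviation(s) for s in solutions])
es.result_pretty()                                     # best coefficients approach the target, deviation ~ 0
```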
Procedia PDF Downloads 142
21276 Meta-Instruction Theory in Mathematics Education and Critique of Bloom’s Theory
Authors: Abdollah Aliesmaeili
Abstract:
The purpose of this research is to present a different perspective on basic mathematics teaching, called meta-instruction, which reverses the learning path. Meta-instruction is a method of teaching in which the teaching trajectory starts from brain education and moves into learning. The research focuses on the behavior of the mind during learning: in this method, students are not instructed in mathematics, they are educated. Another goal of the research is to criticize Bloom's classification in the cognitive domain and reverse it, because it cannot meet the educational and instructional needs of the new generation, and to substitute mathematics education for mathematics teaching. This is an indirect method of teaching. The research method is longitudinal, spanning four years, with statistical samples of students aged 6 to 11. The research focuses on improving the mental abilities of children to explore mathematical rules and operations through play alone, using eight measurements (two examinations per year). The results showed that there is a significant difference between groups in remembering, understanding, and applying, and that educating in mathematics is more effective than instructing in overall learning abilities.
Keywords: applying, Bloom's taxonomy, brain education, mathematics teaching method, meta-instruction, remembering, starmath method, understanding
Procedia PDF Downloads 23
21275 Effect of Type of Pile and Its Installation Method on Pile Bearing Capacity by Physical Modelling in Frustum Confining Vessel
Authors: Seyed Abolhasan Naeini, M. Mortezaee
Abstract:
Various factors, such as the installation method, the pile type, the pile material and the pile shape, can affect the final bearing capacity of a pile executed in soil; among them, the installation method is of special importance. Physical modeling is among the best options for the laboratory study of pile behavior. Therefore, the current paper first presents and reviews the frustum confining vessel (FCV) as a suitable tool for the physical modeling of deep foundations. Then, by describing loading tests on two steel piles, one open-ended and one closed-ended, each installed by two methods, "with displacement" and "without displacement", the effect of end condition and installation method on the final bearing capacity of the pile is investigated. The soil used in this study is Firoozkooh silty sand. The results of the experiments show that, in general, the without-displacement installation method gives a larger bearing capacity for both piles, and that for a given installation method the closed-ended pile shows a slightly higher bearing capacity.
Keywords: physical modeling, frustum confining vessel, pile, bearing capacity, installation method
Procedia PDF Downloads 153
21274 Seismic Fragility Functions of RC Moment Frames Using Incremental Dynamic Analyses
Authors: Seung-Won Lee, JongSoo Lee, Won-Jik Yang, Hyung-Joon Kim
Abstract:
The capacity spectrum method (CSM), one of the methodologies used to evaluate the seismic fragility of building structures, has long been recognized as the most convenient method, even though it has several limitations in predicting the seismic response of the structures of interest. This paper proposes a procedure for estimating seismic fragility curves using incremental dynamic analysis (IDA) rather than a CSM. To achieve the research purpose, this study compares the seismic fragility curves of a 5-story reinforced concrete (RC) moment frame obtained from both methods, the IDA method and the CSM. The two sets of fragility curves are similar in the slight and moderate damage states, whereas the fragility curves obtained from the IDA method present less variation (or uncertainty) in the extensive and complete damage states. This is because the IDA method captures the structural response beyond yielding better than the CSM and can directly account for higher-mode effects. From these observations, the CSM could overestimate the seismic vulnerability of the studied structure in the extensive or complete damage states.
Keywords: seismic fragility curve, incremental dynamic analysis, capacity spectrum method, reinforced concrete moment frame
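To illustrate how IDA results are turned into a fragility curve, the sketch below fits a lognormal fragility function by maximum likelihood to the spectral accelerations at which each (synthetic) IDA run first exceeds a damage state; the intensity values are invented for illustration and do not come from the studied frame.

```python
import numpy as np
from scipy import optimize, stats

# Spectral acceleration (g) at which each IDA run first exceeds the damage-state drift limit (synthetic)
im_capacity = np.array([0.42, 0.55, 0.61, 0.48, 0.70, 0.52, 0.66, 0.58, 0.45, 0.63,
                        0.50, 0.74, 0.57, 0.68, 0.47, 0.60, 0.53, 0.71, 0.49, 0.65])

def neg_log_likelihood(params):
    median, beta = params
    return -np.sum(stats.norm.logpdf(np.log(im_capacity), np.log(median), beta))

res = optimize.minimize(neg_log_likelihood, x0=[0.5, 0.3], bounds=[(0.05, 5.0), (0.05, 2.0)])
median, beta = res.x
print(f"median capacity = {median:.3f} g, dispersion beta = {beta:.3f}")

# Probability of exceeding the damage state at a given intensity level:
sa = 0.6
p_exceed = stats.norm.cdf(np.log(sa / median) / beta)
print(f"P(exceedance | Sa = {sa} g) = {p_exceed:.2f}")
```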
Procedia PDF Downloads 422