Search results for: optimization procedure
1632 Knowledge Management Efficiency of Personnel in Rajamangala University of Technology Srivijaya Songkhla, Thailand
Authors: Nongyao Intasaso, Atchara Rattanama, Navarat Pewnual
Abstract:
This survey research aimed to study the factors affecting the knowledge management efficiency of personnel at Rajamangala University of Technology Srivijaya and to study the knowledge management problems affecting the knowledge development of personnel at the university. The instrument used in this study was a structured questionnaire with a standardized five-level rating scale. The sample was selected by purposive sampling; there were 137 participants, equal to 25% of the population. The results showed the following factors affecting knowledge management efficiency at the university: (1) for the organizational factor, the university providing projects and activities in accordance with the strategy and mission of knowledge management affected knowledge management efficiency at the highest level (x̅ = 4.30); (2) for the personnel factor, personnel being eager for knowledge and active in learning to develop themselves and their work (personal mastery) affected knowledge management efficiency at a high level (x̅ = 3.75); (3) for the technological factor, the organization using multimedia learning aids to facilitate the learning process affected knowledge management efficiency at a high level (x̅ = 3.70); and (4) for the learning factor, personnel communicating and sharing knowledge and opinions on the basis of mutual acceptance affected knowledge management efficiency at a high level (x̅ = 3.78). The knowledge management problems at the university included personnel not changing their work behavior, insufficient collaboration, a lack of mutual acceptance of knowledge and experience, and a limited budget. 
To solve these problems, the university should provide a sufficient budget, follow up on and evaluate organizational development based on knowledge use, provide activities emphasizing personnel development, and assign a committee to administer and report on the knowledge management procedure.
Keywords: knowledge management, efficiency, personnel, learning process
Procedia PDF Downloads 298
1631 Visual Improvement Outcome of Pars Plana Vitrectomy Combined with Endofragmentation and Secondary IOL Implantation for Dropped Nucleus after Cataract Surgery: A Case Report
Authors: Saut Samuel Simamora
Abstract:
PURPOSE: Nucleus drop is one of the most feared and severe complications of modern cataract surgery. Lens material may drop through iatrogenic breaks in the posterior capsule. The incidence of nucleus drop as a complication of phacoemulsification increases along with the growing frequency of phacoemulsification. Pars plana vitrectomy (PPV) followed by endofragmentation and secondary intraocular lens (IOL) implantation is the management procedure of choice. This case report aims to present the outcome of PPV for the treatment of a dropped nucleus after cataract surgery. METHODS: A 65-year-old female patient came to the vitreoretina department with the chief complaint of blurry vision in her left eye after phacoemulsification one month earlier. Ophthalmological examination revealed that visual acuity of the right eye (VA RE) was 6/15 and of the left eye (VA LE) was hand movement. The intraocular pressure (IOP) was 18 mmHg in the right eye and 59 mmHg in the left eye. The left eye was aphakic, with a dropped lens nucleus and secondary glaucoma. RESULTS: The patient received antiglaucoma agents until her IOP decreased. She underwent pars plana vitrectomy to remove the dropped nucleus, with implantation of an iris-fixated IOL. Evaluation one week postoperatively revealed that VA LE was 6/7.5 and the iris-fixated IOL was in the proper position. CONCLUSIONS: Nucleus drop generally occurs with phacoemulsification cataract surgery techniques. A retained lens nucleus or fragments in the vitreous may cause severe intraocular inflammation leading to secondary glaucoma. Proper management of retained lens fragments after nucleus drop gives patients an excellent outcome.
Keywords: secondary glaucoma, complication of phacoemulsification, nucleus drop, pars plana vitrectomy
Procedia PDF Downloads 78
1630 Speeding Up Lenia: A Comparative Study Between Existing Implementations and CUDA C++ with OpenGL Interop
Authors: L. Diogo, A. Legrand, J. Nguyen-Cao, J. Rogeau, S. Bornhofen
Abstract:
Lenia is a system of cellular automata with continuous states, space and time, which surprises not only with the emergence of interesting life-like structures but also with its beauty. This paper reports ongoing research on a GPU implementation of Lenia using CUDA C++ and OpenGL Interoperability. We demonstrate how CUDA as a low-level GPU programming paradigm allows optimizing performance and memory usage of the Lenia algorithm. A comparative analysis through experimental runs with existing implementations shows that the CUDA implementation outperforms the others by one order of magnitude or more. Cellular automata hold significant interest due to their ability to model complex phenomena in systems with simple rules and structures. They allow exploring emergent behavior such as self-organization and adaptation, and find applications in various fields, including computer science, physics, biology, and sociology. Unlike classic cellular automata which rely on discrete cells and values, Lenia generalizes the concept of cellular automata to continuous space, time and states, thus providing additional fluidity and richness in emerging phenomena. In the current literature, there are many implementations of Lenia utilizing various programming languages and visualization libraries. However, each implementation also presents certain drawbacks, which serve as motivation for further research and development. In particular, speed is a critical factor when studying Lenia, for several reasons. Rapid simulation allows researchers to observe the emergence of patterns and behaviors in more configurations, on bigger grids and over longer periods without annoying waiting times. Thereby, they enable the exploration and discovery of new species within the Lenia ecosystem more efficiently. Moreover, faster simulations are beneficial when we include additional time-consuming algorithms such as computer vision or machine learning to evolve and optimize specific Lenia configurations. 
We developed a Lenia implementation for GPU using the C++ and CUDA programming languages, with CUDA/OpenGL Interoperability for immediate rendering. The goal of our experiment is to benchmark this implementation against existing ones in terms of speed, memory usage, configurability and scalability. In our comparison, we focus on the most important Lenia implementations, selected for their prominence, accessibility and widespread use in the scientific community. The implementations include MATLAB, JavaScript, ShaderToy GLSL, Jupyter, Rust and R. The list is not exhaustive but provides a broad view of the principal current approaches and their respective strengths and weaknesses. Our comparison primarily considers computational performance and memory efficiency, as these factors are critical for large-scale simulations, but we also investigate ease of use and configurability. The experimental runs conducted so far demonstrate that the CUDA C++ implementation outperforms the other implementations by one order of magnitude or more. The benefits of using the GPU become apparent especially with larger grids and convolution kernels. However, our research is still ongoing. We are currently exploring the impact of several software design choices and optimization techniques, such as convolution with Fast Fourier Transforms (FFT), various GPU memory management scenarios, and the trade-off between speed and accuracy using single versus double precision floating point arithmetic. The results will give valuable insights into the practice of parallel programming of the Lenia algorithm, and all conclusions will be thoroughly presented in the conference paper. The final version of our CUDA C++ implementation will be published on GitHub and made freely accessible to the Alife community for further development.
Keywords: artificial life, cellular automaton, GPU optimization, Lenia, comparative analysis
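As an illustration of the core computation being accelerated, the following is a minimal NumPy sketch of a single Lenia update step: an FFT-based convolution with a ring-shaped kernel, a Gaussian growth mapping, and clipping of the state to [0, 1]. The kernel shape, growth parameters, and grid size here are illustrative assumptions, not the configuration benchmarked in the paper.

```python
import numpy as np

def ring_kernel(radius, grid_size):
    # Ring-shaped kernel typical of Lenia, normalized to sum to 1
    y, x = np.ogrid[-grid_size // 2:grid_size // 2, -grid_size // 2:grid_size // 2]
    r = np.sqrt(x * x + y * y) / radius
    k = np.exp(-((r - 0.5) ** 2) / (2 * 0.15 ** 2)) * (r < 1)
    return k / k.sum()

def lenia_step(world, kernel_fft, dt=0.1, mu=0.15, sigma=0.017):
    # Convolution via FFT -- the dominant cost a GPU version accelerates
    u = np.real(np.fft.ifft2(np.fft.fft2(world) * kernel_fft))
    growth = 2.0 * np.exp(-((u - mu) ** 2) / (2 * sigma ** 2)) - 1.0
    return np.clip(world + dt * growth, 0.0, 1.0)

N = 64
rng = np.random.default_rng(0)
world = rng.random((N, N))
kernel_fft = np.fft.fft2(np.fft.ifftshift(ring_kernel(radius=13, grid_size=N)))
world = lenia_step(world, kernel_fft)
```

A CUDA port would replace the two FFTs with cuFFT calls and fuse the growth mapping and clipping into a single elementwise kernel, which is where the reported order-of-magnitude speedups come from.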
Procedia PDF Downloads 40
1629 A Cloud-Based Spectrum Database Approach for Licensed Shared Spectrum Access
Authors: Hazem Abd El Megeed, Mohamed El-Refaay, Norhan Magdi Osman
Abstract:
Spectrum scarcity is a challenging obstacle in wireless communications systems. It hinders the introduction of innovative wireless services and technologies that require larger bandwidth compared with legacy technologies. In addition, the current worldwide allocation of radio spectrum bands is already congested and cannot be squeezed or optimized further to accommodate new wireless technologies. This challenge results from the cumulative contributions of different factors that will be discussed later in this paper. One of these factors is the radio spectrum allocation policy governed by national regulatory authorities today. The framework for this policy allocates a specified portion of radio spectrum to a particular wireless service provider on an exclusive utilization basis. This allocation is executed according to technical specifications determined by the standards bodies of each Radio Access Technology (RAT). Dynamic spectrum access is a framework for flexible utilization of radio spectrum resources. In this framework, there is no exclusive allocation of radio spectrum, and even public safety agencies can share their spectrum bands according to a governing policy and service level agreements. In this paper, we explore different methods for accessing the spectrum dynamically and their associated implementation challenges.
Keywords: licensed shared access, cognitive radio, spectrum sharing, spectrum congestion, dynamic spectrum access, spectrum database, spectrum trading, reconfigurable radio systems, opportunistic spectrum allocation (OSA)
Procedia PDF Downloads 427
1628 A Condition-Based Maintenance Policy for Multi-Unit Systems Subject to Deterioration
Authors: Nooshin Salari, Viliam Makis
Abstract:
In this paper, we propose a condition-based maintenance policy for multi-unit systems, considering the existence of economic dependency among units. We consider a system composed of N identical units, where each unit deteriorates independently. The deterioration process of each unit is modeled as a three-state continuous-time homogeneous Markov chain with two working states and a failure state. The average production rate of units varies across the working states, and the demand rate of the system is constant. Units are inspected at equidistant time epochs, and the decision to perform maintenance is determined by the number of units in the failure state. If the total number of units in the failure state exceeds a critical level, maintenance is initiated: failed units are replaced correctively and units in the deteriorated state are maintained preventively. Our objective is to determine the optimal number of failed units at which to initiate maintenance so as to minimize the long-run expected average cost per unit time. The problem is formulated and solved in the semi-Markov decision process (SMDP) framework. A numerical example is developed to demonstrate the proposed policy, and a comparison with the corrective maintenance policy is presented.
Keywords: reliability, maintenance optimization, semi-Markov decision process, production
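To illustrate the kind of threshold policy described above, here is a minimal Monte Carlo sketch, not the authors' SMDP formulation: each unit follows an assumed three-state discrete-time deterioration chain between inspections, and maintenance is triggered once the number of failed units reaches a threshold k. The transition probabilities and cost figures are invented for illustration only.

```python
import random

# States: 0 = good, 1 = deteriorated, 2 = failed (stays failed until maintained)
P = {0: [0.90, 0.08, 0.02],   # assumed per-inspection transition probabilities
     1: [0.00, 0.85, 0.15],
     2: [0.00, 0.00, 1.00]}

def average_cost(n_units=10, k=3, c_corr=100.0, c_prev=30.0, c_setup=50.0,
                 epochs=20000, seed=1):
    rng = random.Random(seed)
    units = [0] * n_units
    total = 0.0
    for _ in range(epochs):
        units = [rng.choices([0, 1, 2], weights=P[s])[0] for s in units]
        failed = sum(s == 2 for s in units)
        if failed >= k:  # threshold policy: maintain once k units have failed
            total += c_setup + failed * c_corr          # corrective replacements
            total += sum(s == 1 for s in units) * c_prev  # preventive maintenance
            units = [0] * n_units                        # system restored as good as new
    return total / epochs

# Compare a few thresholds to see the cost trade-off the paper optimizes
costs = {k: average_cost(k=k) for k in (1, 2, 3, 4)}
```

In the paper the optimal k is found exactly via the SMDP value equations; a simulation like this only estimates the long-run average cost of each candidate threshold.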
Procedia PDF Downloads 163
1627 Analysis of Influence of Geometrical Set of Nozzles on Aerodynamic Drag Level of a Hero’s Based Steam Turbine
Authors: Mateusz Paszko, Miroslaw Wendeker, Adam Majczak
Abstract:
High-temperature waste energy offers a number of management options. The most common energy recuperation systems currently used to utilize energy from high-temperature sources are steam turbines working in high-pressure, high-temperature closed cycles. Due to the high production costs of energy recuperation systems, especially of rotary turbine discs equipped with blades, currently used solutions are of limited use with waste energy sources at temperatures below 100 °C. This study presents the results of simulating the flow of water vapor in various configurations of flow ducts in a reaction steam turbine based on Hero’s steam turbine. The simulation was performed using a numerical model and the ANSYS Fluent software. The computations used water vapor as the internal agent powering the turbine, which is fully safe for the environment in case of a device failure. The conclusions drawn from the numerical computations should allow optimization of the flow duct geometries in order to achieve the greatest possible turbine efficiency. The results are expected to be useful for further work on the development of the final version of a low-drag steam turbine dedicated to low-cost energy recuperation systems.
Keywords: energy recuperation, CFD analysis, waste energy, steam turbine
Procedia PDF Downloads 205
1626 Leveraging Digital Transformation Initiatives and Artificial Intelligence to Optimize Readiness and Simulate Mission Performance across the Fleet
Authors: Justin Woulfe
Abstract:
Siloed logistics and supply chain management systems throughout the Department of Defense (DoD) have led to disparate approaches to modeling and simulation (M&S), a lack of understanding of how one system impacts the whole, and issues with “optimal” solutions that are good for one organization but have dramatic negative impacts on another. Many different systems have evolved to try to understand and account for uncertainty and to reduce the consequences of the unknown. As the DoD undertakes expansive digital transformation initiatives, there is an opportunity to fuse and leverage traditionally disparate data into a centrally hosted source of truth. With a streamlined process incorporating machine learning (ML) and artificial intelligence (AI), advanced M&S will enable informed decisions guiding program success via optimized operational readiness and improved mission success. One of the current challenges is to leverage the terabytes of data generated by monitored systems to provide actionable information for all levels of users. The implementation of a cloud-based application that analyzes data transactions, learns and predicts future states from current and past states in real time, and communicates those anticipated states is an appropriate solution for the purposes of reduced latency and improved confidence in decisions. Decisions made from an ML and AI application combined with advanced optimization algorithms will improve the mission success and performance of systems, which will improve the overall cost and effectiveness of any program. The Systecon team constructs and employs model-based simulations, cutting across traditional silos of data, aggregating maintenance and supply data, incorporating sensor information, and applying optimization and simulation methods to an as-maintained digital twin with the ability to aggregate results across a system’s lifecycle and across logical and operational groupings of systems. 
This coupling of data throughout the enterprise enables tactical, operational, and strategic decision support, detachable and deployable logistics services, and configuration-based automated distribution of digital technical and product data to enhance supply and logistics operations. As a complete solution, this approach significantly reduces program risk by allowing flexible configuration of data, data relationships, business process workflows, and early test and evaluation, especially budget trade-off analyses. A true capability to tie resources (dollars) to weapon system readiness, in alignment with the real-world scenarios a warfighter may experience, has to date remained an unrealized objective. By developing and solidifying an organic capability to directly relate dollars to readiness and to inform the digital twin, the decision-maker is empowered through valuable insight and traceability. This kind of educated decision-making provides an advantage over adversaries who struggle to maintain system readiness at an affordable cost. The M&S capability developed allows program managers to independently evaluate system design and support decisions by quantifying their impact on operational availability and on operations and support cost, resulting in the ability to simultaneously optimize readiness and cost. This will allow stakeholders to make data-driven decisions when trading cost against readiness throughout the life of the program. Finally, sponsors are able to validate product deliverables efficiently and with much higher accuracy than in previous years.
Keywords: artificial intelligence, digital transformation, machine learning, predictive analytics
Procedia PDF Downloads 158
1625 Technical and Economic Evaluation of Harmonic Mitigation from Offshore Wind Power Plants by Transmission Owners
Authors: A. Prajapati, K. L. Koo, F. Ghassemi, M. Mulimakwenda
Abstract:
In the UK, as the volume of non-linear loads connected to the transmission grid continues to rise steeply, harmonic distortion levels on the transmission network are becoming a serious concern for network owners and system operators. This paper outlines the findings of a study conducted to verify the proposal that harmonic mitigation could be optimized and managed economically and effectively at the transmission network level by the Transmission Owner (TO) instead of by each individual polluter connected to the grid. Harmonic mitigation studies were conducted on selected regions of the transmission network in England for recently connected offshore wind power plants in order to strategize and optimize selected harmonic filter options. The results – filter volume and capacity – were then compared against the mitigation measures adopted by the individual connections. Estimation ratios were developed based on the filters actually installed and the optimal proposed filters, and these ratios were then used to derive harmonic filter requirements for future contracted connections. The study concluded that a saving of 37% in filter volume/capacity could be achieved if the TO centrally manages the harmonic mitigation instead of individual polluters installing their own mitigation solutions.
Keywords: C-type filter, harmonics, optimization, offshore wind farms, interconnectors, HVDC, renewable energy, transmission owner
Procedia PDF Downloads 155
1624 Sleep Disturbance in Indonesian School-Aged Children and Its Relationship to Nutritional Aspect
Authors: William Cheng, Rini Sekartini
Abstract:
Background: Sleep is essential for children because it enhances the neural system activity that produces the physiologic effects needed to support growth and development. One of the modifiable factors related to sleep is nutrition, which includes nutritional status, iron intake, and magnesium intake. Nutritional status represents the balance between nutritional intake and expenditure, while iron and magnesium are micronutrients related to sleep regulation. The aim of this study was to identify the prevalence of sleep disturbance among Indonesian children and to evaluate its relation to nutrition. Methods: A cross-sectional study involving children aged 5 to 7 years old at an urban primary health care center between 2012 and 2013 was carried out. Related data included anthropometric status, iron intake, and magnesium intake. Iron and magnesium intake was obtained by a 24-hour food recall procedure. The Sleep Disturbance Scale for Children (SDSC) was used as the diagnostic tool for sleep disturbance, with a score under 39 indicating the presence of a problem. Results: Of the 128 school-aged children included in this study, 28 (23.1%) were found to have sleep disturbance. The majority of children had good nutritional status: only 15.7% were severely underweight or underweight, and 12.4% were identified as stunted. In contrast, 99 children (81.8%) were identified as having inadequate magnesium intake and 56 children (46.3%) as having inadequate iron intake. Our analysis showed no significant relation between any of the nutritional status indicators and sleep disturbance (p > 0.05). Moreover, inadequate iron and magnesium intake also failed to show a significant relation to sleep disturbance in this population. Conclusion: Almost a fourth of the school-aged children in Indonesia were found to have sleep disturbance, and further studies are needed to overcome this problem. According to our findings, there is no correlation between nutritional status, iron intake, magnesium intake, and sleep disturbance.
Keywords: iron intake, magnesium intake, nutritional status, school-aged children, sleep disturbance
Procedia PDF Downloads 465
1623 An Integrated Label Propagation Network for Structural Condition Assessment
Authors: Qingsong Xiong, Cheng Yuan, Qingzhao Kong, Haibei Xiong
Abstract:
Deep-learning-driven approaches based on vibration responses have attracted increasing attention in rapid structural condition assessment, while obtaining sufficient measured training data with corresponding labels is costly and sometimes inaccessible in practical engineering. This study proposes an integrated label propagation network for structural condition assessment, which is able to diffuse labels from the continuously generated measurements of the intact structure to damage scenarios whose labels are missing. The integrated network combines damage-sensitive feature extraction by a deep autoencoder with pseudo-label propagation by optimized fuzzy clustering; its architecture and mechanism are elaborated. With a sophisticated network design and specific strategies for improving performance, the present network extends the strengths of self-supervised representation learning, unsupervised fuzzy clustering, and supervised classification into a single integrated approach for assessing damage conditions. Both numerical simulations and full-scale laboratory shaking table tests of a two-story building structure were conducted to validate its capability of detecting post-earthquake damage. The identification accuracy of the present network was 0.95 in the numerical validations and 0.86 on average in the laboratory case studies. It should be noted that the whole training procedure of all models involved in the network strictly does not rely on any labeled data from damage scenarios, but only on several samples of the intact structure, which indicates a significant superiority in model adaptability and feasible applicability in practice.
Keywords: autoencoder, condition assessment, fuzzy clustering, label propagation
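As a toy illustration of the pseudo-labeling stage only (not the authors' network), the sketch below runs standard fuzzy c-means on assumed two-dimensional "damage-sensitive features" and takes the dominant membership of each sample as its pseudo-label. In the actual method, the features would come from the deep autoencoder and the clustering itself would be optimized.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=50, seed=0):
    # Standard fuzzy c-means; returns the membership matrix U (n x c)
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # Distances of every sample to every center (small epsilon avoids /0)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return U

# Toy features: two well-separated clusters standing in for intact vs.
# damaged responses; pseudo-labels come from the dominant memberships.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
pseudo_labels = fuzzy_cmeans(X).argmax(axis=1)
```

The soft memberships (rather than the hard `argmax` shown here) are what make it possible to propagate labels with an associated confidence, which a downstream classifier can exploit.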
Procedia PDF Downloads 93
1622 Work System Design in Productivity for Small and Medium Enterprises: A Systematic Literature Review
Authors: Silipa Halofaki, Devi R. Seenivasagam, Prashant Bijay, Kritin Singh, Rajeshkannan Ananthanarayanan
Abstract:
This comprehensive literature review delves into the effects and applications of work system design on the performance of Small and Medium-sized Enterprises (SMEs). The review process involved three independent reviewers who screened 514 articles through a four-step procedure: removing duplicates, assessing keyword relevance, evaluating abstract content, and thoroughly reviewing full-text articles. Publications were excluded based on criteria such as relevance to the research topic, publication type, study type, language, publication date, and methodological quality. The articles that met the predefined inclusion criteria were included in this systematic literature review. These selected publications underwent data extraction and analysis to compile insights regarding the influence of work system design on SME performance. Additionally, the quality of the included studies was assessed, and the level of confidence in the body of evidence was established. The findings of this review shed light on how work system design impacts SME performance, emphasizing important implications and applications. Furthermore, the review offers suggestions for further research in this critical area and summarizes the current state of knowledge in the field. Understanding the intricate connections between work system design and SME success can enhance operational efficiency, employee engagement, and overall competitiveness for SMEs. This comprehensive examination of the literature contributes significantly to both academic research and practical decision-making for SMEs.
Keywords: literature review, productivity, small and medium-sized enterprises (SMEs), work system design
Procedia PDF Downloads 92
1621 Finite Volume Method Simulations of GaN Growth Process in MOVPE Reactor
Authors: J. Skibinski, P. Caban, T. Wejrzanowski, K. J. Kurzydlowski
Abstract:
In the present study, numerical simulations of heat and mass transfer during the gallium nitride growth process in the Metal Organic Vapor Phase Epitaxy reactor AIX-200/4RF-S are addressed. Existing knowledge about the phenomena occurring in the MOVPE process makes it possible to produce high-quality nitride-based semiconductors. However, the process parameters of MOVPE reactors can vary within certain ranges. The main goal of this study is optimization of the process and improvement of the quality of the obtained crystal. In order to investigate this subject, a series of computer simulations has been performed. Numerical simulations of heat and mass transfer in the GaN epitaxial growth process have been performed to determine the growth rate for various mass flow rates and pressures of the reagents. Given that it is impossible to determine experimentally the exact distribution of heat and mass transfer inside the reactor during the process, modeling is the only way to understand the process precisely. The main heat transfer mechanisms during the MOVPE process are convection and radiation. Correlating the modeling results with experiment makes it possible to determine the optimal process parameters for obtaining crystals of the highest quality.
Keywords: finite volume method, semiconductors, epitaxial growth, metalorganic vapor phase epitaxy, gallium nitride
Procedia PDF Downloads 395
1620 Optimization of Bio-Diesel Production from Rubber Seed Oils
Authors: Pawit Tangviroon, Apichit Svang-Ariyaskul
Abstract:
Rubber seed oil is an attractive alternative feedstock for biodiesel production because it is not related to food-chain crops. Rubber seed oil contains a large amount of free fatty acids, which causes problems in biodiesel production because free fatty acids can react with the alkaline catalyst. Acid esterification is used as a pre-treatment to convert this unwanted compound into the desired biodiesel. Phase separation of oil and methanol occurs at low methanol-to-oil ratios and causes a low reaction rate and conversion. Acid esterification therefore requires a large excess of methanol to increase the miscibility of methanol in the oil, and accordingly entails a more expensive separation process. In this work, the kinetics of the esterification of rubber seed oil with methanol is developed from available experimental results. A reactive distillation process was designed using the Aspen Plus program. The effects of operating parameters such as feed ratio, molar reflux ratio, feed temperature, and feed stage are investigated in order to find the optimum conditions. The results show that the reactive distillation process is better than the conventional process: it consumes less feed methanol and less energy while yielding higher product purity. This work can be used as a guideline for the further development of industrial-scale biodiesel production using reactive distillation.
Keywords: biodiesel, reactive distillation, rubber seed oil, transesterification
Procedia PDF Downloads 350
1619 An Investigation of the Use of Visible Spectrophotometric Analysis of Lead in an Herbal Tea Supplement
Authors: Salve Alessandria Alcantara, John Armand E. Aquino, Ma. Veronica Aranda, Nikki Francine Balde, Angeli Therese F. Cruz, Elise Danielle Garcia, Antonie Kyna Lim, Divina Gracia Lucero, Nikolai Thadeus Mappatao, Maylan N. Ocat, Jamille Dyanne L. Pajarillo, Jane Mierial A. Pesigan, Grace Kristin Viva, Jasmine Arielle C. Yap, Kathleen Michelle T. Yu, Joanna J. Orejola, Joanna V. Toralba
Abstract:
Lead is a neurotoxic metallic element that slowly accumulates in bones and tissues, especially when present in products taken on a regular basis such as herbal tea supplements. Although sensitive analytical instruments are already available, the USP limit test for lead is still widely used. Because of its serious shortcomings, however, Lang Lang and his colleagues developed a spectrophotometric method for the determination of lead in all types of samples, and this method was the one adapted in this study. The procedure performed was divided into three parts: digestion, extraction, and analysis. For digestion, HNO3 and CH3COOH were used. Afterwards, masking agents and 0.003% and 0.001% dithizone in CHCl3 were added and used for the extraction. For the analysis, the standard addition method and colorimetry were performed. This was done in triplicate under two conditions. The first condition, using 25 µg/mL of standard, resulted in very low absorbances with an r2 of 0.551. This led to the use of a higher concentration, 1 mg/mL, for the second condition. Precipitation of lead cyanide was observed, and the absorbance readings were relatively higher but remained between 0.15 and 0.25, resulting in a very low r2 of 0.429. LOQ and LOD were not computed due to the limitations of the Milton Roy spectrophotometer. The method performed has a shorter digestion time and uses fewer but more accessible reagents. However, the optimum ratio of the dithizone-lead complex must be observed in order to obtain reliable results while exploring other concentrations of standards.
Keywords: herbal tea supplement, lead-dithizone complex, standard addition, visible spectroscopy
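The standard addition analysis described above reduces to a simple linear extrapolation: absorbance is regressed on the added analyte concentration, and the unknown concentration is read off the magnitude of the x-intercept. The sketch below uses hypothetical readings, not the study's data:

```python
import numpy as np

# Hypothetical standard-addition series: added lead (µg/mL) vs. absorbance.
# Each aliquot contains the same amount of sample plus an increasing spike.
added = np.array([0.0, 10.0, 20.0, 30.0])
absorbance = np.array([0.12, 0.19, 0.26, 0.33])

# Fit A = m * C_added + b; the unknown concentration is the x-intercept
# magnitude, C_unknown = b / m (valid while the response stays linear).
m, b = np.polyfit(added, absorbance, 1)
c_unknown = b / m  # in the same units as `added`
```

Standard addition is used precisely because it corrects for matrix effects: the slope is measured in the sample's own matrix, so interferences that scale the response cancel out of the intercept ratio.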
Procedia PDF Downloads 384
1618 Maximizing Bidirectional Green Waves for Major Road Axes
Authors: Christian Liebchen
Abstract:
Both from an environmental perspective and with respect to road traffic flow quality, planning so-called green waves along major road axes is a well-established goal for traffic engineers. For one-way road axes (e.g., the avenues in Manhattan), this is a trivial downstream task. For bidirectional arterials, the well-known necessary condition for establishing a green wave in both directions is that the driving times between two subsequent crossings must be an integer multiple of half the cycle time of the signal programs at the nodes. In this paper, we propose an integer linear optimization model to establish fixed-time green waves in both directions that are as long and as wide as possible, even when the driving time condition is not fulfilled. In particular, we consider an arterial along whose nodes separate left-turn signal groups are realized. In our computational results, we show that scheduling left-turn phases before or after the straight phases can reduce waiting times along the arterial. Moreover, we show that there is always a solution with green waves in both directions that are as long and as wide as possible in which absolute priority is put on just one direction. Compared to optimizing both directions together, establishing an ideal green wave in one direction can only provide suboptimal quality when considering prioritized parts of a green band (e.g., the first few seconds).
Keywords: traffic light coordination, synchronization, phase sequencing, green waves, integer programming
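The necessary condition quoted above is easy to check numerically: for a cycle time C, each link's driving time should be an integer multiple of C/2. The following minimal sketch, with made-up driving times, computes how far each link deviates from the nearest such multiple (the misfit that the paper's integer linear program then has to absorb by narrowing or shortening the green bands):

```python
def half_cycle_fit(drive_times, cycle, tol=1.0):
    # Necessary condition for a bidirectional green wave: each link's
    # driving time is (close to) an integer multiple of half the cycle.
    # Returns (condition holds within tol?, residue in seconds per link).
    half = cycle / 2.0
    residues = [abs(t / half - round(t / half)) * half for t in drive_times]
    return all(r <= tol for r in residues), residues

# 90 s cycle: links of 45 s and 90 s fit exactly; a 60 s link is 15 s off
ok, res = half_cycle_fit([45.0, 90.0, 60.0], cycle=90.0)
```

When `ok` is false, a perfect bidirectional wave is impossible, and the optimization model trades off band length and width between the two directions instead.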
1617 Effects of Various Wavelet Transforms in Dynamic Analysis of Structures
Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar
Abstract:
Time history dynamic analysis of structures is considered an exact method but is computationally intensive. Filtering earthquake strong ground motions by applying wavelet transforms is an approach towards reducing computational effort, particularly in the optimization of structures against seismic effects. Wavelet transforms are categorized into continuous and discrete transforms. Since an earthquake strong ground motion record is a discrete function, the discrete wavelet transform is applied in the present paper. The wavelet transform reduces analysis time by filtering out the non-effective frequencies of the strong ground motion. The filtering process may be repeated several times, although the approximation then introduces more error. In this paper, the strong ground motion has been filtered once with each wavelet. The strong ground motion of the Northridge earthquake is filtered using various wavelets, and dynamic analysis of sample shear and moment frames is implemented. The error associated with each wavelet is computed by comparing the dynamic response of the sample structures with the exact responses. Exact responses are computed by dynamic analysis of the structures using the non-filtered strong ground motion.
Keywords: wavelet transform, computational error, computational duration, strong ground motion data
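The filtering idea above can be illustrated with the simplest discrete wavelet, the Haar wavelet. This is our own toy sketch, not the paper's implementation, and the sample record is invented: one decomposition level splits the signal into approximation and detail coefficients, and zeroing the detail part discards the high-frequency content before reconstruction.

```python
# Illustrative sketch (our own, with invented data): one level of a Haar
# discrete wavelet transform used to low-pass filter a discrete record.
import math

def haar_dwt(signal):
    """One level: return (approximation, detail) coefficient lists."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse transform; pass zeroed detail to drop high frequencies."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

record = [0.0, 0.2, 0.1, -0.3, 0.4, 0.0, -0.2, 0.1]  # toy accelerogram
approx, detail = haar_dwt(record)
filtered = haar_idwt(approx, [0.0] * len(detail))  # discard detail band
print(len(filtered) == len(record))                # prints True
```

Dynamic analysis of the filtered record then runs on half as many effective coefficients per level, which is the source of the computational saving the paper measures against the exact, non-filtered response.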
1616 Comparison of Efficacy between Low-Residue Diet and Clear-Liquid Diet in Colonoscopic Bowel Preparation at a Surgical Clinic: A Randomized Controlled Trial
Authors: Sopana Wongtawee
Abstract:
Purpose: Adequate bowel cleansing is essential for a high-quality, effective, and safe colonoscopy. The aims of this study were to compare the efficacy of bowel preparation based on a low-residue diet before 8:00 followed by a clear-liquid diet versus a low-residue diet until 16:00 one day before colonoscopy, both using sodium phosphate solution (Xubil®), as well as the side effects of the two protocols and patient satisfaction with them. Method: This was an endoscopist-blinded, prospective, randomized, controlled trial. A total of 224 patients (112 in each group) scheduled for outpatient colonoscopy met the criteria. They were randomized to either a low-residue diet consisting of white rice porridge with either fish, chicken, or eggs before 8:00 followed by a clear-liquid diet (Group 1) or a low-residue diet consisting of the same food and drink until 16:00 the day before colonoscopy (Group 2). All of them received 45 ml of sodium phosphate solution (Xubil®) and three glasses of water (300 ml/glass) the evening before and the morning of the procedure. The cleansing efficacy of the bowel preparation was rated according to the modified Rajawithi hospital bowel preparation score scale, patient satisfaction with bowel preparation was rated using a Likert scale, and the side effects of the two protocols were assessed using a patient questionnaire. Results: The cleansing efficacy of the two groups was significantly different (p = 0.02). Satisfaction with bowel preparation and side effects were not different, except for the feeling of hunger in the first group (p = 0.001). Conclusion: The low-residue diet consisting of white rice porridge with fish, chicken, or eggs until 16:00 one day before colonoscopy achieved better bowel-cleansing efficacy than the protocol consisting of clear liquid all day and rice porridge only before 8:00 one day before colonoscopy.
Keywords: bowel preparation, colonoscopy, sodium phosphate solution, nursing management
1615 Optimization of Poly-β-Hydroxybutyrate Recovery from Bacillus Subtilis Using Solvent Extraction Process by Response Surface Methodology
Authors: Jayprakash Yadav, Nivedita Patra
Abstract:
Polyhydroxybutyrate (PHB) is an interesting material in the fields of medical science, the pharmaceutical industry, and tissue engineering because of properties such as biodegradability, biocompatibility, hydrophobicity, and elasticity. PHB is naturally accumulated by several microbes in their cytoplasm during metabolism as an energy reserve material. PHB can be extracted from cell biomass using halogenated hydrocarbons, chemicals, and enzymes. In this study, a cheaper and non-toxic solvent, acetone, was used for the extraction process. Parameters such as acetone percentage, solvent pH, process temperature, and incubation period were optimized using Response Surface Methodology (RSM). The determination coefficient (R²) was found to be 0.8833 from the quadratic regression model, with no significant lack of fit. The designed RSM model indicated that the fitness of the response variable was significant (p-value < 0.0006) and satisfactory to denote the relationship between the responses, in terms of PHB recovery and purity, and the values of the independent variables. The optimum conditions for maximum PHB recovery and purity were found to be solvent pH 7, extraction temperature 43 °C, incubation time 70 minutes, and acetone percentage 30%. Under these optimized conditions, the maximum predicted PHB recovery was 0.845 g/g biomass dry cell weight and the purity was 97.23%.
Keywords: acetone, PHB, RSM, halogenated hydrocarbons, extraction, Bacillus subtilis
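The RSM workflow described above ends with locating the optimum on a fitted quadratic response surface. The one-factor sketch below is purely illustrative: the coefficients are invented (chosen only so the peak sits at the reported 43 °C and 0.845 g/g), not the paper's fitted model.

```python
# Hypothetical sketch of the final RSM step: grid-searching a fitted
# quadratic response surface for the optimum. The coefficients below are
# our own invented numbers, not the regression model from the study.

def recovery(temp_c):
    """Toy quadratic response surface in extraction temperature (deg C)."""
    return 0.845 - 0.0004 * (temp_c - 43.0) ** 2

# Search the experimental temperature range for the best predicted recovery.
best_temp = max(range(30, 61), key=recovery)
print(best_temp)             # prints 43
print(round(recovery(best_temp), 3))  # prints 0.845
```

In a real RSM study the surface is a multi-factor quadratic fitted by least squares over a designed experiment, and the optimum is found analytically or numerically over all factors at once; the grid search here just shows the shape of the final step.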
1614 A Generic Middleware to Instantly Sync Intensive Writes of Heterogeneous Massive Data via Internet
Authors: Haitao Yang, Zhenjiang Ruan, Fei Xu, Lanting Xia
Abstract:
Industry data centers often need to sync data changes reliably and instantly from a large set of heterogeneous, autonomous relational databases accessed via the not-so-reliable Internet, for which a practical universal sync middleware of low maintenance and operation costs is much wanted; however, developing such a product and adapting it for various scenarios is a very sophisticated and continuous practice. The authors have been devising, applying, and optimizing a generic sync middleware system named GSMS since 2006, holding to the principles, or advantages, that the middleware must be SyncML-compliant and transparent to the data application layer logic, need not refer to implementation details of the databases synced, must not rely on the host operating systems deployed, and must be lightweight in construction and hence of low cost. A series of stress experiments on GSMS sync performance was conducted, as a persuasive example, on a source relational database that underwent a broad range of write loads, from one thousand to one million intensive writes within a few minutes. The tests proved that GSMS achieves an instant sync level of well below a fraction of a millisecond per record synced, and GSMS's smooth performance under extreme write loads also showed that it is feasible and competent.
Keywords: heterogeneous massive data, instantly sync intensive writes, Internet generic middleware design, optimization
1613 The Effect of Ingredients Mixing Sequence in Rubber Compounding on the Formation of Bound Rubber and Cross-Link Density of Natural Rubber
Authors: Abu Hasan, Rochmadi, Hary Sulistyo, Suharto Honggokusumo
Abstract:
The purpose of this research is to study the effect of the ingredient mixing sequence in rubber compounding on the formation of bound rubber and the cross-link density of natural rubber, as well as the relationship between bound rubber and cross-link density. Analyses of bound rubber formation in the rubber compound and of cross-link density in the rubber vulcanizates were carried out on a natural rubber formula that underwent mastication and mixing, followed by curing. There were four methods of mixing, and each mixing process was followed by four sequences of mixing carbon black into the rubber. In the first method, rubber was masticated for 5 min and then rubber chemicals and carbon black N 330 were added simultaneously. In the second, rubber was masticated for 1 min, followed by the simultaneous addition of rubber chemicals and carbon black N 330 using a different mixing method than the first. In the third, carbon black N 660 was used in the same mixing procedure as the second, and in the last, rubber was masticated for 3 min, and carbon black N 330 and rubber chemicals were added subsequently. The addition of rubber chemicals and carbon black into the masticated rubber was distinguished by the sequence and the time allocated to each mixing step. Carbon black was added in two stages: in the first method, 10 phr was added first and the remaining 40 phr was added later along with oil; in the second to fourth methods, carbon black was added in the first and second stages in phr ratios of 20:30, 30:20, and 40:10, respectively. The results showed that the ingredient mixing process influenced bound rubber formation and cross-link density. In the first three methods, bound rubber formation was proportional to cross-link density; in contrast, in the fourth, bound rubber formation and cross-link density were inversely related. Regardless of the mixing method, bound rubber had a nonlinear relationship with cross-link density: high cross-link density formed when bound rubber formation was low, and the cross-link density became constant at high bound rubber content.
Keywords: bound rubber, cross-link density, natural rubber, rubber mixing process
1612 Formex Algebra Adaptation into Parametric Design Tools: Dome Structures
Authors: Réka Sárközi, Péter Iványi, Attila B. Széll
Abstract:
The aim of this paper is to present the adaptation of the dome construction tool of formex algebra to the parametric design software Grasshopper. Formex algebra is a mathematical system, primarily used for planning structural systems such as truss-grid domes and vaults, together with the programming language Formian. The goal of the research is to allow architects to plan truss-grid structures easily with parametric design tools, based on the versatile formex algebra mathematical system. To produce regular structures, coordinate system transformations are used, and the dome structures are defined in a spherical coordinate system. Owing to the abilities of the parametric design software, it is possible to apply further modifications to the structures and obtain special forms. The paper covers the basic dome types, as well as additional dome-based structures using special coordinate-system solutions based on spherical coordinate systems. It also presents additional structural possibilities, such as making double-layer grids in all geometric forms. The adaptation of formex algebra and the parametric workflow of Grasshopper together enable quick and easy design and optimization of special truss-grid domes.
Keywords: parametric design, structural morphology, space structures, spherical coordinate system
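The spherical-coordinate definition of dome nodes mentioned above reduces to a standard transformation. The sketch below is our own illustration (not the Grasshopper adaptation itself): it converts nodes given as (radius, polar angle, azimuth) into Cartesian points and generates one regular ring of a unit dome.

```python
# Minimal sketch, our own assumptions: spherical-to-Cartesian conversion
# for dome nodes, with theta measured from the zenith (dome apex).
import math

def spherical_to_cartesian(r, theta, phi):
    """Convert (r, polar angle theta, azimuth phi) to (x, y, z)."""
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return x, y, z

# A regular ring of 6 nodes, 30 degrees down from the apex of a unit dome.
ring = [spherical_to_cartesian(1.0, math.radians(30), math.radians(60 * k))
        for k in range(6)]

# All nodes of the ring share the same height, as expected for a regular dome.
print(all(abs(z - math.cos(math.radians(30))) < 1e-9 for _, _, z in ring))
```

Stacking such rings at increasing polar angles, and connecting them into a grid, is the kind of regular structure generation the formex-algebra tool automates; a double-layer grid would add a second set of rings at a slightly smaller radius.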
1611 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation Using Physics-Informed Neural Network
Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy
Abstract:
The physics-informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast calculation speed and high precision of modern computing systems. We construct the PINN based on the strong universal approximation theorem, apply the initial-boundary value data and residual collocation points to weakly impose the initial and boundary conditions on the neural network, and choose the optimization algorithms adaptive moment estimation (ADAM) and limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) to optimize the learnable parameters of the neural network. Next, we improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme of adjustable weight coefficients in the PINN has a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving the partial differential equations appearing in nonlinear optics will be useful in studying various optical phenomena.
Keywords: deep learning, optical soliton, physics-informed neural network, partial differential equation
1610 Artificial Neural Network-Based Prediction of Effluent Quality of Wastewater Treatment Plant Employing Data Preprocessing Approaches
Authors: Vahid Nourani, Atefeh Ashrafi
Abstract:
Prediction of treated wastewater quality is a matter of growing importance in water treatment. Artificial neural networks (ANN), as a robust data-driven approach, have been widely used for forecasting the effluent quality of wastewater treatment. However, developing an ANN model based on appropriate input variables is a major concern due to the numerous parameters collected from the treatment process, whose number is increasing with the development of electronic sensors. Various studies have been conducted, using different clustering methods, to classify the most related and effective input variables, but this issue has often been overlooked when selecting dominant input variables among wastewater treatment parameters, even though it could effectively lead to more accurate prediction of water quality. In the presented study, two ANN models were developed with the aim of forecasting the effluent quality of the Tabriz city wastewater treatment plant. Biochemical oxygen demand (BOD) was utilized as the target water quality parameter. Model A used Principal Component Analysis (PCA), a linear variance-based clustering method, for input selection. Model B used the variables identified by the mutual information (MI) measure. The optimal ANN structure of model B showed an increment of up to 15% in the Determination Coefficient (DC) compared with model A. Thus, this study highlights the advantage of the MI measure in selecting dominant input variables for ANN modeling of wastewater treatment plant performance.
Keywords: artificial neural networks, biochemical oxygen demand, principal component analysis, mutual information, Tabriz wastewater treatment plant, wastewater treatment plant
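The mutual information measure used by model B can be computed directly for discrete data. The sketch below is a toy illustration with invented series, not the Tabriz plant records: a candidate input that tracks the target carries more mutual information than one that does not.

```python
# Illustrative sketch (toy data, our own): discrete mutual information
# I(X;Y) in nats, the measure used to rank candidate inputs against a
# target such as BOD. Higher MI means a more informative input variable.
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) = sum p(x,y) * log(p(x,y) / (p(x) p(y))) over observed pairs."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

x_relevant = [0, 0, 1, 1, 0, 1, 0, 1]   # tracks the target exactly
x_noise    = [0, 1, 0, 1, 0, 1, 0, 1]   # unrelated oscillation
target     = [0, 0, 1, 1, 0, 1, 0, 1]

print(mutual_information(x_relevant, target)
      > mutual_information(x_noise, target))   # prints True
```

Real plant variables are continuous, so in practice MI is estimated after discretization or with density-based estimators; the ranking principle, keep the inputs with the highest MI against BOD, is the same.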
1609 Qualitative Detection of HCV and GBV-C Co-infection in Cirrhotic Patients Using a SYBR Green Multiplex Real Time RT-PCR Technique
Authors: Shahzamani Kiana, Esmaeil Lashgarian Hamed, Merat Shahin
Abstract:
HCV and GBV-C belong to the Flaviviridae family of viruses, and GBV-C is genetically the closest virus to HCV. Research is in progress all over the world to clarify the clinical aspects of GBV-C. The possibility of interaction between HCV and GBV-C, as well as its consequences in combination with other liver diseases, is among the most important clinical aspects that encourage researchers to develop a technique for the simultaneous detection of these viruses. In this study, a SYBR Green multiplex real-time RT-PCR technique was optimized as a new, economical, and sensitive method for the simultaneous detection of HCV/GBV-C in HCV-positive plasma samples. After the design and selection of two pairs of specific primers for HCV and GBV-C, the SYBR Green real-time RT-PCR technique was optimized separately for each virus. Establishment of the multiplex PCR was the next step. Finally, our technique was performed on positive and negative plasma samples. 89 cirrhotic HCV-positive plasma samples (29 of genotype 3a and 27 of genotype 1a) were collected from patients before receiving treatment. 14% of genotype 3a and 17.1% of genotype 1a samples showed HCV/GBV-C co-infection. As a result, 13.48% of the 89 samples had HCV/GBV-C co-infection, which is compatible with results from all over the world. The data showed no apparent influence of HGV co-infection on either the clinical or the virological aspects of HCV infection. Furthermore, with the application of the multiplex real-time RT-PCR technique, more time and cost could be saved in clinical research settings.
Keywords: HCV, GBV-C, cirrhotic patients, multiplex real-time RT-PCR
1608 Recommended Practice for Experimental Evaluation of the Seepage Sensitivity Damage of Coalbed Methane Reservoirs
Authors: Hao Liu, Lihui Zheng, Chinedu J. Okere, Chao Wang, Xiangchun Wang, Peng Zhang
Abstract:
The coalbed methane (CBM) extraction industry (an unconventional energy source) has yet to promulgate an established standard code of practice for the experimental evaluation of sensitivity damage of coal samples. The experimental processes of previous research have mainly followed the industry standard for conventional oil and gas reservoirs (CIS). However, the existing evaluation method ignores certain critical differences between CBM reservoirs and conventional reservoirs, which could inevitably result in an inaccurate evaluation of sensitivity damage and, eventually, poor decisions regarding the formulation of formation damage prevention measures. In this study, we propose improved experimental guidelines for evaluating the seepage sensitivity damage of CBM reservoirs by addressing the shortcomings of the existing methods. The proposed method was established via a theoretical analysis of the main drawbacks of the existing methods and validated through comparative experiments. The results show that the proposed evaluation technique provides reliable experimental results that better reflect actual reservoir conditions and can correctly guide the future development of CBM reservoirs. This study pioneers research on the optimization of experimental parameters for the efficient exploration and development of CBM reservoirs.
Keywords: coalbed methane, formation damage, permeability, unconventional energy source
1607 Land Use Planning Tool to Achieve Land Degradation Neutrality: Tunisia Case Study
Authors: Rafla Attia, Claudio Zucca, Bao Quang Le, Sana Dridi, Thouraya Sahli, Taoufik Hermassi
Abstract:
In Tunisia, landscape change and land degradation are critical issues for landscape conservation, management, and planning. Landscapes are undergoing crucial environmental problems made evident by soil degradation and desertification. Improper human use of land resources (e.g., unsuitable land uses, unsustainable crop intensification, and poor rangeland management) and climate change are the main factors leading to the landscape transformation and desertification affecting a high proportion of Tunisian land. Land use planning (LUP) to achieve Land Degradation Neutrality (LDN) must be supported by methodologies and technologies that help identify the best solutions and practices and design context-specific sustainable land management (SLM) strategies. Such strategies must include restoration or rehabilitation efforts in areas with high land degradation, as well as prevention of degradation that could be caused by improper land use (LU) and land management (LM). The geoinformatics Land Use Planning for LDN (LUP4LDN) tool has been designed for this purpose. Its aim is to support national and sub-national planners in i) mapping geographic patterns of current land degradation; ii) anticipating further future land degradation expected in areas that are unsustainably managed; and iii) providing an interactive procedure for developing participatory LU-LM transition scenarios over selected regions of interest and timeframes, visualizing the expected levels of impact on ecosystem services via maps and graphs. The tool has been co-developed and piloted with national stakeholders in Tunisia. The pilot implementation assessed how well the LUP4LDN tool fits with existing LUP processes and the benefits achieved by using the tool to support land use planning for LDN.
Keywords: land use system, land cover, sustainable land management, land use planning for land degradation neutrality
1606 Optimization of Wire EDM Parameters for Fabrication of Micro Channels
Authors: Gurinder Singh Brar, Sarbjeet Singh, Harry Garg
Abstract:
Wire Electric Discharge Machining (WEDM) is a thermal machining process capable of machining very hard, electrically conductive materials irrespective of their hardness. WEDM is widely used to machine micro-scale parts with high dimensional accuracy and surface finish. The objective of this paper is to optimize the process parameters of wire EDM for fabricating microchannels and to calculate the surface finish and material removal rate of microchannels fabricated using wire EDM. The material used is aluminum 6061 alloy. The experiments were performed on a CNC wire-cut electric discharge machine. The effects of various WEDM parameters, namely pulse on time (TON) with levels (100, 150, 200), pulse off time (TOFF) with levels (25, 35, 45), and current (IP) with levels (105, 110, 115), were investigated to study their influence on the output parameters, i.e., surface roughness and material removal rate (MRR). Each experiment was conducted under different conditions of pulse on time, pulse off time, and peak current. For material removal rate, TON and IP were the most significant process parameters: MRR increases with an increase in TON and IP and decreases with an increase in TOFF. For surface roughness, TON and IP have the greatest effect, while TOFF was found to be less influential.
Keywords: microchannels, Wire Electric Discharge Machining (WEDM), Metal Removal Rate (MRR), surface finish
1605 Cognitive Approach at the Epicenter of Creative Accounting in Cameroonian Companies: The Relevance of the Psycho-Sociological Approach and the Theory of Cognitive Dissonance
Authors: Romuald Temomo Wamba, Robert Wanda
Abstract:
The issue of creative accounting in its psychological and sociological framework has been a debated subject for over 60 years. The objective of this article is, on the one hand, to confirm the existence of creative accounting in Cameroonian entities and, on the other hand, to understand the strategies used by audit agents to detect errors, omissions, irregularities, or inadequacies in the financial statements, and the optimization techniques used by account preparers to strategically bypass regulatory texts. To achieve this, we conducted an exploratory study using a cognitive approach, and the data analysis was performed with the software Decision Explorer. The results obtained challenge the authors' cognition (manifest, latent, and deceptive behavior). The tax inspectors stress that entities in Cameroon do not derogate from the rules for steering the financial statements. Likewise, they report changes in current income and net income through depreciation, provisions, inventories, and the spreading of charges over long periods, which suggests the suspicion, or intention, of manipulating the financial statements. As for the techniques, the account preparers manage the accruals at the end of the year as the basis of the practice of creative accounting. Likewise, management accounts are more favorable to results management.
Keywords: creative accounting, sociocognitive approach, psychological and sociological approach, cognitive dissonance theory, cognitive mapping
1604 Trinary Affinity—Mathematic Verification and Application (1): Construction of Formulas for the Composite and Prime Numbers
Authors: Liang Ming Zhong, Yu Zhong, Wen Zhong, Fei Fei Yin
Abstract:
Trinary affinity is a description of existence: every object exists, as it is known and spoken of, in a system of 2 differences (denoted dif₁, dif₂) and 1 similarity (Sim), equivalently expressed as dif₁ / Sim / dif₂ and kn / 0 / tkn (kn = the known, tkn = the 'to be known', 0 = the zero point of knowing). They are mathematically verified and illustrated in this paper by the arrangement of all integers into 3 columns, where each number exists as a difference in relation to another number as another difference, with the 2 difs arbitrated by a third number as the Sim, resulting in a trinary affinity, or trinity, of 3 numbers, of which one is the known, another the 'to be known', and the third the zero (0) from which both the kn and tkn are measured and specified. Consequently, any number is horizontally specified either as 3n, '3n – 1', or '3n + 1', and vertically as 'Cn + c', so that any number occurs at the intersection of its X and Y axes and is represented by its X and Y coordinates, as any point on Earth's surface is by its latitude and longitude. Technically, i) primes are viewed and treated as progenitors, and composites as descending from them, forming families of composites, each capable of being measured and specified from its own zero, called in this paper the realistic zero (denoted 0r, as contrasted with the mathematic zero, 0m), which corresponds to the constant c and whose nature separates the composite and prime numbers; and ii) any number is considered as having a magnitude as well as a position, so that a number is verified as a prime first by referring to its descriptive formula and then by making sure that no composite number can possibly occur at its position, by dividing it by the factors provided by the composite number formulas. The paper consists of 3 parts: 1) a brief explanation of the trinary affinity of things, 2) the 8 formulas that represent ALL the primes, and 3) families of composite numbers, each represented by a formula.
A composite number family is described as 3n + f₁‧f₂. Since there are infinitely many composite number families, to verify the primality of a large probable prime, we have to divide it by several or many an f₁ from a range of composite number formulas, a procedure that is as laborious as it is the surest way of verifying a large number's primality. (So, it is possible to substitute planned division for trial division.)
Keywords: trinary affinity, difference, similarity, realistic zero
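The three-column arrangement and the divisibility check described above can be sketched in a few lines. This is our own implementation under our reading of the abstract, not the authors' formulas: it classifies integers by their residue mod 3 and verifies primality by plain trial division, the laborious but sure route the paper contrasts with its planned division.

```python
# Hypothetical sketch of the abstract's setup (our own code): every integer
# falls in one of three columns, 3n - 1, 3n, or 3n + 1, and a candidate is
# confirmed prime by ruling out any composite at its position, here via
# ordinary trial division rather than the paper's composite-family formulas.

def column(n):
    """Horizontal specification of n as 3n, 3n + 1, or 3n - 1."""
    return {0: "3n", 1: "3n + 1", 2: "3n - 1"}[n % 3]

def is_prime(n):
    """Trial division: the surest, if laborious, primality check."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(column(7), "|", column(9), "|", column(11))
# Every prime above 3 avoids the 3n column, since 3n is composite for n > 1:
print([p for p in range(5, 30) if is_prime(p) and column(p) == "3n"])  # []
```

This makes concrete why the paper's prime formulas live only in the 3n - 1 and 3n + 1 columns, and why verifying a large probable prime reduces to dividing it by candidate factors drawn from the composite families.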
1603 Laser Writing on Vitroceramic Disks for Petabyte Data Storage
Authors: C. Busuioc, S. I. Jinga, E. Pavel
Abstract:
The continuous need for more non-volatile memories with higher storage capacity, smaller dimensions and weight, as well as lower costs, has led to the exploration of optical lithography on active media, as well as patterned magnetic composites. In this context, optical lithography is a technique that can provide a significant decrease of the information bit size to the nanometric scale. However, some restrictions arise from the need to break the optical diffraction limit. Major achievements have been obtained by employing a vitroceramic material as the active medium and a laser beam operated at low power for the direct writing procedure. Thus, optical discs with ultra-high density were fabricated by a conventional melt-quenching method starting from analytical-purity reagents. They were subsequently used for 3D recording based on their photosensitive features. Naturally, the next step consists in the elucidation of the composition and structure of the active centers, in correlation with the use of silver and rare-earth compounds in the synthesis of the optical supports. This has been accomplished by modern characterization methods, namely transmission electron microscopy coupled with selected area electron diffraction, scanning transmission electron microscopy, and electron energy loss spectroscopy. The influence of the laser diode parameters, silver concentration, and fluorescent compound formation on the writing process and final material properties was investigated. The results indicate a storage capacity two orders of magnitude higher than other reported information storage systems. Moreover, the fluorescent photosensitive vitroceramics may be integrated in other applications that appeal to nanofabrication as the driving force in the electronics and photonics fields.
Keywords: data storage, fluorescent compounds, laser writing, vitroceramics