Search results for: optimal rate of convergence
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10978

5098 Optimization of Process Parameters for Copper Extraction from Wastewater Treatment Sludge by Sulfuric Acid

Authors: Usarat Thawornchaisit, Kamalasiri Juthaisong, Kasama Parsongjeen, Phonsiri Phoengchan

Abstract:

In this study, sludge samples collected from the wastewater treatment plant of a printed circuit board manufacturer in Thailand were subjected to acid extraction using sulfuric acid as the chemical extracting agent. The effects of sulfuric acid concentration (A), the ratio of the volume of acid to the quantity of sludge (B) and the extraction time (C) on the efficiency of copper extraction were investigated with the aim of finding the optimal conditions for maximum removal of copper from the wastewater treatment sludge. A factorial experimental design was employed to model the copper extraction process, and the results were analyzed statistically using analysis of variance to identify the process variables that significantly affected the copper extraction efficiency. All linear terms, as well as the interaction between the acid-to-sludge ratio and extraction time (BC), had a statistically significant influence on the extraction efficiency under the tested conditions. The most significant effect was ascribed to the acid-to-sludge ratio (B), followed by sulfuric acid concentration (A), extraction time (C) and the BC interaction, respectively. The remaining two-way interactions (AB, AC) and the three-way interaction (ABC) were not statistically significant at the 0.05 significance level. A model equation was derived for the copper extraction process, and the process was optimized using a multiple-response method called the desirability (D) function, targeting maximum removal. The optimum conditions, achieving 99% copper extraction, were a sulfuric acid concentration of 0.9 M and an acid volume (mL) to sludge quantity (g) ratio of 100:1, with an extraction time of 80 min. Experiments under the optimized conditions were carried out to validate the accuracy of the model.
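The desirability step described above can be sketched in a few lines; the response model, bounds and factor grid below are illustrative placeholders, not the paper's fitted equation or actual factor levels:

```python
def desirability_max(y, low, target, weight=1.0):
    """Derringer-Suich desirability for a larger-is-better response:
    0 at or below `low`, 1 at or above `target`, power-scaled between."""
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - low) / (target - low)) ** weight

def removal(acid_m, ratio, time_min):
    """Toy first-order response with a B x C interaction term
    (coefficients are invented for illustration)."""
    return (40.0 + 20.0 * acid_m + 0.35 * ratio
            + 0.1 * time_min + 0.002 * ratio * time_min)

# Pick the factor setting with the highest desirability on a small grid.
grid = [(a, r, t)
        for a in (0.5, 0.7, 0.9)      # acid concentration, M
        for r in (50, 75, 100)        # acid-to-sludge ratio, mL/g
        for t in (40, 60, 80)]        # extraction time, min
best = max(grid, key=lambda p: desirability_max(removal(*p), 70.0, 99.0))
```

In practice the response model would be the ANOVA-fitted regression, and several responses would each get a desirability that is combined into a geometric mean before maximizing.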

Keywords: acid treatment, chemical extraction, sludge, waste management

Procedia PDF Downloads 184
5097 [Keynote Talk]: Determination of the Quality of the Machined Surface Using Fuzzy Logic

Authors: Dejan Tanikić, Jelena Đoković, Saša Kalinović, Miodrag Manić, Saša Ranđelović

Abstract:

This paper deals with measuring and modelling the quality of the machined surface in a metal machining process. The average surface roughness (Ra), which represents the quality of the machined part, was measured during dry turning of AISI 4140 steel. A large number of factors with unknown relations among them influence this parameter, which makes mathematical modelling extremely complicated. Different values of cutting speed, feed rate, depth of cut (the cutting regime) and workpiece hardness produce different surface roughness values. Modelling with soft computing techniques can be very useful in such cases. This paper presents the usage of a fuzzy logic-based system for determining metal machining process parameters in order to find the proper values of the cutting regime.
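A minimal Sugeno-style sketch of such a fuzzy system, reduced to one input (feed rate); the membership functions and rule consequents are invented for illustration, since the paper's actual rule base is not given in the abstract:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def predict_ra(feed):
    """Predict average roughness Ra (um) from feed rate (mm/rev):
    fire three rules, then take the firing-strength-weighted average."""
    w_low  = tri(feed, 0.0, 0.1, 0.2)   # feed is LOW  -> Ra about 1.0 um
    w_med  = tri(feed, 0.1, 0.2, 0.3)   # feed is MED  -> Ra about 2.5 um
    w_high = tri(feed, 0.2, 0.3, 0.4)   # feed is HIGH -> Ra about 5.0 um
    weights = [w_low, w_med, w_high]
    ra_out  = [1.0, 2.5, 5.0]
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, ra_out)) / (total or 1.0)
```

A full system of the kind described would take cutting speed, depth of cut and hardness as additional inputs and use a rule base tuned against measured Ra values.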

Keywords: fuzzy logic, metal machining, process modeling, surface roughness

Procedia PDF Downloads 149
5096 A Research and Application of Feature Selection Based on IWO and Tabu Search

Authors: Laicheng Cao, Xiangqian Su, Youxiao Wu

Abstract:

Feature selection is one of the important problems in network security, pattern recognition, data mining and other fields. In order to remove redundant features and effectively improve the detection speed of intrusion detection systems, this paper proposes a new feature selection method based on the invasive weed optimization (IWO) algorithm and the tabu search (TS) algorithm. IWO is used for global search and tabu search for local search, improving on the results of the IWO algorithm alone. The experimental results show that the method effectively removes redundant features from network data, reduces selection time and improves the detection speed of the system while maintaining an accurate detection rate.
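A toy sketch of the IWO-plus-tabu idea on a synthetic feature-selection objective; the fitness function, parameters and "relevant" features are invented for illustration, whereas a real intrusion detection system would score feature subsets by detection accuracy:

```python
import random

random.seed(0)

N_FEATURES = 10
RELEVANT = {0, 3, 7}          # toy ground truth: only these features matter

def fitness(mask):
    """Stand-in for detection accuracy: reward selecting relevant
    features, lightly penalize redundant ones."""
    hits = sum(1 for i in RELEVANT if mask[i])
    extras = sum(mask) - hits
    return hits - 0.2 * extras

def flip(mask, i):
    child = list(mask)
    child[i] ^= 1
    return tuple(child)

def iwo(pop_size=8, iters=30):
    """Invasive weed optimization: fitter 'weeds' scatter more seeds."""
    pop = [tuple(random.randint(0, 1) for _ in range(N_FEATURES))
           for _ in range(pop_size)]
    for _ in range(iters):
        fits = [fitness(w) for w in pop]
        lo, hi = min(fits), max(fits)
        seeds = []
        for w, f in zip(pop, fits):
            n_seeds = 1 + int(3 * (f - lo) / (hi - lo + 1e-9))
            seeds += [flip(w, random.randrange(N_FEATURES))
                      for _ in range(n_seeds)]
        # competitive exclusion: keep only the fittest individuals
        pop = sorted(set(pop + seeds), key=fitness, reverse=True)[:pop_size]
    return pop[0]

def tabu_refine(mask, iters=40, tenure=4):
    """Tabu search: greedy single-bit flips, with recent moves forbidden."""
    cur = best = mask
    tabu = []
    for _ in range(iters):
        moves = [i for i in range(N_FEATURES) if i not in tabu]
        i = max(moves, key=lambda j: fitness(flip(cur, j)))
        cur = flip(cur, i)
        tabu = (tabu + [i])[-tenure:]
        if fitness(cur) > fitness(best):
            best = cur
    return best

selected = tabu_refine(iwo())
```

The division of labor mirrors the paper's design: IWO explores the subset space globally, and tabu search polishes the best weed while its memory list prevents immediately undoing a move.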

Keywords: intrusion detection, feature selection, IWO, tabu search

Procedia PDF Downloads 513
5095 Comparison of the Chest X-Ray and Computerized Tomography Scans Requested from the Emergency Department

Authors: Sahabettin Mete, Abdullah C. Hocagil, Hilal Hocagil, Volkan Ulker, Hasan C. Taskin

Abstract:

Objectives and Goals: An emergency department is a place where people can come for a multitude of reasons 24 hours a day, and it is easily accessible thanks to the dedicated people who work there. However, the workload and overcrowding of emergency departments are increasing day by day. Under these circumstances, it is important to choose a quick, easily accessible and effective test for diagnosis; as a result, laboratory and imaging tests account for more than 40% of all emergency department costs. Despite all of the technological advances in imaging and the availability of computerized tomography (CT), the chest X-ray, the older imaging method, has not lost its appeal and effectiveness for nearly all emergency physicians. Advanced imaging is very convenient, but physicians should weigh radiation dose, cost and effectiveness, and select imaging methods carefully. The aim of the study was to investigate the effectiveness of the chest X-ray for immediate diagnosis, against the advancing technology, by comparing the chest X-ray and chest CT results of patients in the emergency department. Methods: Patients who presented to the emergency department of Bulent Ecevit University Faculty of Medicine between 1 September 2014 and 28 February 2015 were investigated retrospectively. Data were obtained via MIAMED (Clear Canvas Image Server v6.2, Toronto, Canada), the information management system in which patients' files are saved electronically in the clinic, and were scanned retrospectively. The study included 199 patients who were 18 or older and had both chest X-ray and chest CT imaging. Chest X-ray images were evaluated by the emergency medicine senior assistant in the emergency department, and the findings were saved to the study form. CT findings were obtained from data already reported by the radiology department. Each chest X-ray was evaluated with seven questions in terms of technique and dose adequacy.
Patients' age, gender, presenting complaints, comorbid diseases, vital signs, physical examination findings, diagnosis, chest X-ray findings and chest CT findings were evaluated. Statistical analyses were performed using SPSS 19.0 for Windows, and p < 0.05 was accepted as statistically significant. Results: 199 patients were included in the study. Pneumonia was the most common diagnosis, found in 38.2% (n=76) of all patients. The chest X-ray imaging technique was appropriate in 31% (n=62) of all patients. There was no statistically significant difference (p > 0.05) between the two imaging methods (chest X-ray and chest CT) in the rates of detecting displacement of the trachea, pneumothorax, parenchymal consolidation, increased cardiothoracic ratio, lymphadenopathy, diaphragmatic hernia, free air in the abdomen (in sections included in the image), pleural thickening, parenchymal cyst, parenchymal mass, parenchymal cavity, parenchymal atelectasis and bone fractures. Conclusions: For findings that required rapid diagnosis, chest X-ray and chest CT findings matched at a high rate in patients imaged with an appropriate technique. However, chest X-rays evaluated in the emergency department were frequently taken with an inappropriate technique.

Keywords: chest x-ray, chest computerized tomography, chest imaging, emergency department

Procedia PDF Downloads 174
5094 Viability of Eggshells Ash Affecting the Setting Time of Cement

Authors: Fazeera Ujin, Kamran Shavarebi Ali, Zarina Yasmin Hanur Harith

Abstract:

This research paper reports on the feasibility and viability of eggshells ash and its effects on the water content and setting time of cement. An experiment was carried out to determine the quantity of water required to produce a standard cement paste of normal consistency in accordance with MS EN 196-3:2007. Eggshells ash passing a 90 µm sieve was used in the investigation. Eggshells ash at percentages of 0%, 0.1%, 0.5%, 1.0%, 1.5% and 2.0% was used to replace cement, and the chemical properties of the eggshells ash and the cement were compared. From the results obtained, eggshells ash and cement share the same primary chemical composition, namely calcium compounds. The setting time results show that adding eggshells ash to cement decreases its setting time: the higher the amount of eggshells ash, the faster the rate of setting, for every percentage of eggshells ash used in this investigation. Both initial and final setting times fulfill the setting time requirements of the Malaysian Standard. Hence, it is suggested that eggshells ash can be used as an admixture in concrete mixes.

Keywords: construction materials, eggshells ash, solid waste, setting time

Procedia PDF Downloads 377
5093 Implied Adjusted Volatility by Leland Option Pricing Models: Evidence from Australian Index Options

Authors: Mimi Hafizah Abdullah, Hanani Farhah Harun, Nik Ruzni Nik Idris

Abstract:

Implied volatility is an important factor in financial decision-making, in particular in option pricing, and the pricing biases of Leland option pricing models are related to the implied volatility structure of the options. This study therefore examines the implied adjusted volatility smile patterns and term structures in S&P/ASX 200 index options using different Leland option pricing models. The examination of the implied adjusted volatility smiles and term structures in the Australian index options market covers the global financial crisis that began in mid-2007. The implied adjusted volatility was found to escalate to approximately triple its pre-crisis rate.
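The Leland adjustment itself is standard (Leland, 1985): the hedging volatility of a written option is inflated by a transaction-cost term that grows as rebalancing becomes more frequent. A sketch with illustrative parameter values (not the paper's data):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def leland_vol(sigma, k, dt):
    """Leland-adjusted volatility for a short (written) option rebalanced
    every dt years with proportional round-trip transaction cost k."""
    return sigma * math.sqrt(
        1.0 + math.sqrt(2.0 / math.pi) * k / (sigma * math.sqrt(dt)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Illustrative numbers: 20% base volatility, 1% round-trip cost,
# weekly rebalancing, at-the-money half-year call.
sigma_adj = leland_vol(0.20, 0.01, 1.0 / 52.0)
price = bs_call(100.0, 100.0, 0.5, 0.03, sigma_adj)
```

The "implied adjusted volatility" of the study is then the value of the adjusted volatility that makes the model price match the observed option price, typically recovered by root-finding.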

Keywords: implied adjusted volatility, financial crisis, Leland option pricing models, Australian index options

Procedia PDF Downloads 361
5092 Digital Environment as a Factor of the City's Competitiveness in Attracting Tourists: The Case of Yekaterinburg

Authors: Alexander S. Burnasov, Anatoly V. Stepanov, Maria Y. Ilyushkina

Abstract:

In the transition to the digital economy, the digital environment of a city becomes one of the key factors of its tourism attractiveness. A modern digital environment makes travelling more accessible and improves both the quality of travel services and the attractiveness of many tourist destinations. Digitalization of the industry allows resources to be used more efficiently, simplifies business processes, minimizes risks and improves travel safety. Promoting the city as a tourist destination in foreign markets becomes decisive in the digital environment. Information technologies are extremely important for the functioning not only of any tourist enterprise but of the city as a whole. In addition to solving traditional problems, it is also possible to implement innovations from the tourism industry, such as the availability of city services in international ticket and hotel booking systems, early booking of theater and museum tickets, non-cash payment by cards of international payment systems, and Internet access in the urban environment for travelers. The availability of a city's digital services makes it possible to reduce ordering costs, contributes to the optimal selection of tourist products that meet the tourist's requirements, and increases the transparency of transactions. Users can compare the prices, features, services and reviews of travel services, and the ability to share impressions with friends thousands of miles away directly affects the image of the city. The city's image can be promoted in the digital environment not only through world-scale events (such as the 2018 FIFA World Cup, international summits, etc.) but also through the creation and management of digital services that support tourism, which will help improve the positioning of the city in the global tourism market.

Keywords: competitiveness, digital environment, travelling, Yekaterinburg

Procedia PDF Downloads 119
5091 A Variational Reformulation for the Thermomechanically Coupled Behavior of Shape Memory Alloys

Authors: Elisa Boatti, Ulisse Stefanelli, Alessandro Reali, Ferdinando Auricchio

Abstract:

Thanks to their unusual properties, shape memory alloys (SMAs) are good candidates for advanced applications in a wide range of engineering fields, such as automotive, robotics, civil, biomedical and aerospace. In recent decades, the ever-growing interest in such materials has motivated several research studies aimed at modeling their complex nonlinear behavior in an effective and robust way. Since the constitutive response of SMAs is strongly thermomechanically coupled, the non-isothermal evolution of the material must be taken into consideration. The present study considers an existing three-dimensional phenomenological model for SMAs, able to reproduce the main SMA properties while maintaining a simple, user-friendly structure, and proposes a variational reformulation of the full non-isothermal version of the model. While the considered model has been thoroughly assessed in an isothermal setting, the proposed formulation makes it possible to address the full non-isothermal problem. In particular, the reformulation is inspired by the GENERIC (General Equations for Non-Equilibrium Reversible-Irreversible Coupling) formalism and is based on a generalized gradient flow of the total entropy with respect to the thermal and mechanical variables. Such a phrasing of the model is new and allows the model to be discussed from both a theoretical and a numerical point of view; moreover, it directly implies the dissipativity of the flow. A semi-implicit time-discrete scheme is also presented for the fully coupled thermomechanical system and is proven unconditionally stable and convergent. The corresponding algorithm is then implemented, under a space-homogeneous temperature field assumption, and tested under different conditions. The core of the algorithm is composed of a mechanical subproblem and a thermal subproblem, and the iterative scheme is solved by a generalized Newton method.
Numerous uniaxial and biaxial tests are reported to assess the performance of the model and algorithm, with varying imposed strain, strain rate, heat exchange properties and external temperature. In particular, the heat exchange with the environment is the only source of rate-dependency in the model. The reported curves clearly display the interdependence between phase transformation strain and material temperature. The full thermomechanical coupling makes it possible to reproduce the exothermic and endothermic effects during forward and backward phase transformation, respectively. The numerical tests have thus demonstrated that the model can appropriately reproduce the coupled SMA behavior under different loading conditions and rates; moreover, the algorithm has proved effective and robust. Further developments are being considered, such as the extension of the formulation to the finite-strain setting and the study of the boundary value problem.
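Schematically, a generalized gradient flow of the total entropy $S$ for a state vector $z$ collecting the thermal and mechanical variables takes the following generic form; this is the general template of such GENERIC-type formulations, not the paper's specific system:

```latex
\[
  \dot z \in K(z)\,\mathrm{D}S(z), \qquad K(z) \succeq 0 ,
\]
so that the total entropy is nondecreasing along the flow,
\[
  \frac{\mathrm{d}}{\mathrm{d}t}\, S\bigl(z(t)\bigr)
  = \mathrm{D}S(z)\cdot K(z)\,\mathrm{D}S(z) \;\ge\; 0 ,
\]
which is the dissipativity property mentioned above. A semi-implicit scheme
with time step $\tau$ treats the operator explicitly and the driving force
implicitly:
\[
  \frac{z^{n+1}-z^n}{\tau} = K(z^{n})\,\mathrm{D}S(z^{n+1}).
\]
```

The positive semidefiniteness of $K$ is what makes the entropy-production inequality automatic, independently of the constitutive details.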

Keywords: generalized gradient flow, GENERIC formalism, shape memory alloys, thermomechanical coupling

Procedia PDF Downloads 208
5090 Analysis of Real Time Seismic Signal Dataset Using Machine Learning

Authors: Sujata Kulkarni, Udhav Bhosle, Vijaykumar T.

Abstract:

Because seismic and non-seismic signals can be very similar, it is difficult to detect earthquakes reliably using conventional methods. In order to distinguish between seismic and non-seismic events based on their amplitude, our study processes data coming from seismic sensors. The authors propose a robust noise suppression technique that makes use of a bandpass filter, an IIR Wiener filter, a recursive short-term average/long-term average (STA/LTA) trigger, and a Carl STA/LTA trigger for event identification, and determine the trigger ratio used to differentiate between seismic and non-seismic activity. The proposed work focuses on extracting significant features for machine learning-based seismic event detection, which motivated compiling a dataset of all features for the identification and forecasting of seismic signals. Due to the temporal complexity, we place a focus on feature vector dimension reduction techniques. The proposed features were experimentally tested using a machine learning model, with good results on unseen data. Finally, an evaluation on a hybrid dataset (captured by different sensors) demonstrates that the model may also be employed in a real-time setting while lowering false alarm rates. The study is based on the examination of seismic signals obtained both from individual sensors and from sensor networks (SN). The experimental dataset consists of wideband seismic signals from the BSVK and CUKG station sensors, located near Basavakalyan, Karnataka, and at the Central University of Karnataka, respectively.
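A recursive STA/LTA trigger of the kind mentioned above can be sketched as follows; the window lengths, threshold and synthetic signal are illustrative only:

```python
def sta_lta(signal, n_sta=5, n_lta=50):
    """Recursive short-term / long-term average ratio of the signal energy.
    Both averages are exponential moving averages; the short one reacts
    quickly to an arriving event, the long one tracks background noise."""
    e0 = signal[0] * signal[0] + 1e-12   # warm start at the first sample's energy
    sta = lta = e0
    a_s, a_l = 1.0 / n_sta, 1.0 / n_lta
    ratios = []
    for x in signal:
        e = x * x
        sta += a_s * (e - sta)
        lta += a_l * (e - lta)
        ratios.append(sta / lta)
    return ratios

# Synthetic trace: low-amplitude background, then a high-amplitude arrival.
quiet = [0.1, -0.1] * 100
event = [2.0, -2.0] * 20
r = sta_lta(quiet + event)
triggered = max(r[200:]) > 3.0   # illustrative trigger threshold
```

A Carl STA/LTA variant additionally tracks the averages of the rectified deviation from the running mean; the trigger-ratio threshold is the quantity the study tunes to separate seismic from non-seismic activity.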

Keywords: Carl STA/LTA, features extraction, real time, dataset, machine learning, seismic detection

Procedia PDF Downloads 102
5089 Characterization of N+C, Ti+N and Ti+C Ion Implantation into Ti6Al4V Alloy

Authors: Xingguo Feng, Hui Zhou, Kaifeng Zhang, Zhao Jiang, Hanjun Hu, Jun Zheng, Hong Hao

Abstract:

TiN and TiC films were prepared on Ti6Al4V alloy substrates by plasma-based ion implantation. The effect on mechanical properties of N+C and Ti+N hybrid ion implantation at a 50 kV extraction voltage, and of Ti+C hybrid ion implantation at 20 kV, 35 kV and 50 kV extraction voltages, was studied at a dose of 2×10¹⁷ ions/cm². The chemical states and microstructures of the implanted samples were investigated using X-ray photoelectron spectroscopy (XPS) and X-ray diffraction (XRD), and the mechanical and tribological properties of the samples were characterized using nano-indentation and a ball-on-disk tribometer. It was found that the layer modified by Ti+C implantation at 50 kV was composed mainly of TiC and Ti-O bonds, while the layer produced by Ti+N implantation at 50 kV consisted of TiN and Ti-O bonds. Hardness tests showed that the hardness values of the N+C, Ti+N and Ti+C hybrid ion implanted samples were much higher than those of the un-implanted ones. The wear tests showed that both the Ti+C and Ti+N implanted samples had much better wear resistance than the un-implanted sample. The wear rate of the sample implanted with Ti+C at 50 kV was 6.7×10⁻⁵ mm³/(N·m), more than one order of magnitude lower than that of the un-implanted samples.
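The reported wear rate is a specific (Archard-style) wear rate, i.e. worn volume per unit load and sliding distance; the load and sliding distance below are hypothetical values chosen only to reproduce the reported order of magnitude, since the abstract does not state the test parameters:

```python
def specific_wear_rate(volume_mm3, load_n, distance_m):
    """Specific wear rate in mm^3 / (N*m): worn volume divided by
    normal load times total sliding distance."""
    return volume_mm3 / (load_n * distance_m)

# Hypothetical ball-on-disk test: 0.0335 mm^3 lost under 5 N over 100 m.
rate = specific_wear_rate(0.0335, 5.0, 100.0)   # mm^3 / (N*m)
```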

Keywords: plasma ion implantation, X-ray photoelectron spectroscopy (XPS), hardness, wear

Procedia PDF Downloads 394
5088 Assessing the Ways of Improving the Power Saving Modes in the Ore-Grinding Technological Process

Authors: Baghdasaryan Marinka

Abstract:

Monitoring the distribution of electric power consumption in the technological process of ore grinding is conducted. As a result, the impacts of the mill filling rate, the productivity of the ore supply, the volumetric density of the grinding balls, the specific density of the ground ore, and the relative speed of the mill rotation on the specific consumption of electric power have been studied. The power and technological factors affecting the reactive power generated by the synchronous motors, operating within the technological scheme are studied. A block diagram for evaluating the power consumption modes of the technological process is presented, which includes the analysis of the technological scheme, the determination of the place and volumetric density of the ore-grinding mill, the evaluation of the technological and power factors affecting the energy saving process, as well as the assessment of the electric power standards.

Keywords: electric power standard, factor, ore grinding, power consumption, reactive power, technological

Procedia PDF Downloads 540
5087 Midterm Clinical and Functional Outcomes After Treatment with Ponseti Method for Idiopathic Clubfeet: A Prospective Cohort Study

Authors: Neeraj Vij, Amber Brennan, Jenni Winters, Hadi Salehi, Hamy Temkit, Emily Andrisevic, Mohan V. Belthur

Abstract:

Idiopathic clubfoot is a common lower extremity deformity with an incidence of 1:500. The Ponseti method is well known as the gold standard of treatment; however, there are limited functional data demonstrating correction of the clubfoot after treatment with the Ponseti method. The purpose of this study was to assess the clinical and functional outcomes of the Ponseti method using the Clubfoot Disease-Specific Instrument (CDS) and pedobarography. This IRB-approved prospective study included patients aged 3-18 who were treated for idiopathic clubfoot with the Ponseti method between January 2008 and December 2018. Age-matched controls were identified among siblings of clubfoot patients and other community members. Treatment details were collected through a chart review of the included patients. Laboratory assessment included a physical exam, gait analysis, and pedobarography. The Pediatric Outcomes Data Collection Instrument (PODCI) and the Clubfoot Disease-Specific Instrument were also obtained for the clubfoot (CF) patients. The Wilcoxon rank-sum test was used to study differences between the CF patients and the typically developing (TD) patients, with statistical significance set at p < 0.05. A total of 37 patients were enrolled in the study: 21 had previously been treated for CF and 16 were TD, and 94% of the CF patients had bilateral involvement. The age at the start of treatment was 29 days, the average total number of casts was seven to eight, and the average number of casts after Achilles tenotomy was one. The recurrence rate was 25%, tenotomy was required in 94% of patients, and ≥1 tenotomy was required in 25% of patients. There were no significant differences in step length, step width, stride length, force-time integral, maximum peak pressure, foot progression angles, stance phase time, single-limb support time, double-limb support time, or gait cycle time between children treated with the Ponseti method and typically developing children.
The average post-treatment Pirani and Dimeglio scores were 5.50±0.58 and 15.29±1.58, respectively. The average post-treatment PODCI subscores were: Upper Extremity: 90.28, Transfers: 94.6, Sports: 86.81, Pain: 86.20, Happiness: 89.52, Global: 88.6. The average post-treatment Clubfoot Disease-Specific Instrument subscores were: Satisfaction: 73.93, Function: 80.32, Overall: 78.41. The Ponseti method has a very high success rate and remains the gold standard in the treatment of idiopathic clubfoot. Timely management leads to good outcomes and a low need for repeated Achilles tenotomy. Children treated with the Ponseti method demonstrate good functional outcomes as measured by pedobarography, which may therefore have clinical utility in studying congenital foot deformities. Objective measures of hours of brace wear could represent a further improvement in clubfoot care.
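The Wilcoxon rank-sum comparison used above can be sketched with a normal approximation; the step-length samples at the end are hypothetical, chosen only to show the call:

```python
import math

def rank_sum_test(a, b):
    """Wilcoxon rank-sum (Mann-Whitney) test with a normal approximation.
    Ties receive average ranks; returns (z, two-sided p)."""
    pooled = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    vals = [v for v, _ in pooled]
    n = len(vals)
    ranks = [0.0] * n
    i = 0
    while i < n:                          # assign average ranks to tied runs
        j = i
        while j + 1 < n and vals[j + 1] == vals[i]:
            j += 1
        for k in range(i, j + 1):
            ranks[k] = (i + j) / 2 + 1
        i = j + 1
    r_a = sum(r for r, (_, g) in zip(ranks, pooled) if g == 0)
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2           # mean rank sum under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (r_a - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical step-length samples (cm) for CF vs. TD children.
z, p = rank_sum_test([47, 50, 52, 49, 51], [48, 50, 53, 52, 49])
```

In practice one would use a library implementation (e.g. `scipy.stats.ranksums`), which also handles exact small-sample p-values; the sketch shows the statistic the study relied on.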

Keywords: functional outcomes, pediatric deformity, patient-reported outcomes, talipes equinovarus

Procedia PDF Downloads 63
5086 Determinants of Consultation Time at a Family Medicine Center

Authors: Ali Alshahrani, Adel Almaai, Saad Garni

Abstract:

Aim of the study: To explore the duration and determinants of consultation time at a family medicine center. Methodology: This study was conducted at the Family Medicine Center in Ahad Rafidah City, in the southwestern part of Saudi Arabia, on the working days of March 2013. Trained nurses helped in filling in a checklist designed for this study, which covered the patient's age, sex, diagnosis, type of visit, referral and its type, psychological problems and additional work-up, as well as the number of daily bookings, the physician's experience and the consultation time. A total of 459 patients were included. Results: More than half of the patients (58.39%) had a consultation of less than 10 minutes (mean±SD: 12.73±9.22 minutes). Patients treated by the physicians with the shortest experience (≤5 years) had the longest consultation time, while those treated by the physicians with the longest experience (>10 years) had the shortest (13.94±10.99 versus 10.79±7.28 minutes, p=0.011). Regarding diagnosis, patients with chronic diseases had the longest consultation time (p<0.001). Patients who did not need referral had a significantly shorter consultation time than those who had routine or urgent referral (11.91±8.42, 14.60±9.03 and 22.42±14.81 minutes, respectively, p<0.001). Patients with associated psychological problems needed a significantly longer consultation time than those without (20.06±13.32 versus 12.45±8.93 minutes, p<0.001). Conclusions: The average length of consultation time at Ahad Rafidah Family Medicine Center is approximately 13 minutes. Less experienced physicians tend to spend longer consultation times with patients, and referred patients, patients with psychological problems and patients with chronic diseases tend to have longer consultation times. Recommendations: Family physicians should be encouraged to keep to their optimal consultation time.
Booking an adequate number of patients per shift would allow the family physician to provide enough consultation time for each patient.

Keywords: consultation, quality, medicine, clinics

Procedia PDF Downloads 272
5085 Predicting Shot Making in Basketball Learnt from Adversarial Multiagent Trajectories

Authors: Mark Harmon, Abdolghani Ebrahimi, Patrick Lucey, Diego Klabjan

Abstract:

In this paper, we predict the likelihood of a player making a shot in basketball from multiagent trajectories. Previous approaches to similar problems center on hand-crafting features to capture domain-specific knowledge. Although intuitive, this approach, as recent work in deep learning has shown, is prone to missing important predictive features. To circumvent this issue, we present a convolutional neural network (CNN) approach in which we represent the multiagent behavior as an image. To encode the adversarial nature of basketball, we use a multichannel image, which we then feed into a CNN. Additionally, to capture the temporal aspect of the trajectories, we use “fading.” We find that this approach is superior to a traditional feed-forward network (FFN) model. By using gradient ascent, we were able to discover what the CNN filters look for during training. Last, we find that a combined FFN+CNN is the best-performing network, with an error rate of 39%.
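The "fading" idea can be sketched as follows: rasterize each trajectory into an image and scale older positions down, so the network can infer direction of motion from intensity. This single-channel toy omits the paper's multichannel encoding of the two teams; grid size and decay are illustrative:

```python
import numpy as np

def trajectory_image(tracks, size=32, decay=0.9):
    """Rasterize (x, y) trajectories (coordinates in [0, 1)) into one
    image channel, fading older positions by an exponential decay so
    the most recent position has intensity 1.0."""
    img = np.zeros((size, size), dtype=np.float32)
    for track in tracks:
        T = len(track)
        for t, (x, y) in enumerate(track):
            i, j = int(y * size), int(x * size)
            img[i, j] = max(img[i, j], decay ** (T - 1 - t))
    return img

# One toy player track moving left to right across mid-court.
img = trajectory_image([[(0.1, 0.5), (0.2, 0.5), (0.3, 0.5)]])
```

A multichannel version would stack one such image per team (and one for the ball) along a channel axis before feeding the tensor to the CNN.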

Keywords: basketball, computer vision, image processing, convolutional neural network

Procedia PDF Downloads 136
5084 A Knowledge-Based Development of Risk Management Approaches for Construction Projects

Authors: Masoud Ghahvechi Pour

Abstract:

Risk management is a systematic and regular process of identifying, analyzing and responding to risks throughout the project's life cycle in order to achieve the optimal level of elimination, reduction or control of risk. The purpose of project risk management is to increase the probability and effect of positive events and to reduce the probability and effect of unpleasant events on the project. Risk management is one of the most fundamental parts of project management, to the extent that unmanaged or untransferred risks can be a primary factor of failure in a project. Effective risk management does not mean simply avoiding risk, which appears to be the cheapest option. The main problem with that option is economic: what is potentially profitable is by definition risky, while what poses no risk is rarely economically interesting and brings no tangible benefits. Effective risk management for a project therefore means finding a "middle ground": on the one hand, protection against risk through accurate identification and classification, leading to a comprehensive analysis; on the other hand, management that uses all available mathematical and analytical tools to verify the maximum benefit of each decision. A detailed analysis that takes all aspects of the company into account, including stakeholder analysis, allows effective risk management to be turned into tangible benefits for the project. Identifying project risk means determining which types of risk may affect the project, together with their specific parameters and the estimated probability of their occurrence.
The decision conditions can be divided into three groups: certainty, uncertainty and risk, which in turn correspond to different investor attitudes: risk preference, risk neutrality and risk aversion, together with their measurement. The result of risk identification and project analysis is a list of events indicating the cause and probability of each event and a final assessment of its impact on the environment.

Keywords: risk, management, knowledge, risk management

Procedia PDF Downloads 48
5083 The Effect of Visual Fluency and Cognitive Fluency on Access Rates of Web Pages

Authors: Xiaoying Guo, Xiangyun Wang

Abstract:

Access rate is a key indicator of the popularity of web pages. High access rates are very important for web pages, especially for news pages, online shopping sites and search engines. In this paper, we analyze the influence of visual fluency and cognitive fluency on the access rates of Chinese web pages. First, we conducted an experiment in which the web pages were scored: twenty-five subjects were invited to view the top 50 web pages in China and were asked to score each on a 5-point Likert scale across four aspects: complexity, comfortability, familiarity and usability. Second, the results were analyzed by correlation analysis and factor analysis in R. Through the factor analysis, we analyzed the contributions of visual fluency and cognitive fluency to the access rates. The results showed that both visual fluency and cognitive fluency affect the access rate of web pages; compared to cognitive fluency, visual fluency plays a more important role in users' access to web pages.
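The correlation step (done in R in the study) can be sketched in Python; the complexity-score and access-rate vectors below are hypothetical:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical mean Likert complexity scores vs. relative access rates
# for five pages: higher perceived complexity, lower access.
complexity = [2.1, 3.4, 4.0, 2.8, 4.6]
access     = [0.9, 0.6, 0.4, 0.8, 0.3]
r = pearson(complexity, access)
```

The factor-analysis step would then group the four rated aspects into latent visual-fluency and cognitive-fluency factors before correlating factor scores with access rates.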

Keywords: visual fluency, cognitive fluency, visual complexity, usability

Procedia PDF Downloads 360
5082 Dewatering of Brewery Sludge through the Use of Biopolymers

Authors: Audrey Smith, M. Saifur Rahaman

Abstract:

The waste crisis has become a global issue, forcing many industries to reconsider their disposal methods and environmental practices. Sludge is a form of waste created in many fields, including water and wastewater treatment, pulp and paper, and brewing. The composition of this sludge differs between sources, and its disposal methods or future applications can therefore vary. The brewery industry produces a significant amount of sludge with a high water content, and to avoid landfilling, this waste can be processed further into a valuable material. Specifically, the sludge must undergo dewatering, a process which typically involves the addition of coagulants like aluminum sulfate or ferric chloride. These chemicals, however, limit the potential uses of the sludge, since it will contain traces of metals. In this case, the desired outcome for the brewery sludge is to produce animal feed, and these conventional coagulants would add a toxic component to it. Biopolymers like chitosan, which acts as a coagulant, can instead be used to dewater brewery sludge while keeping it safe for animal consumption. Chitosan is also a by-product of the shellfish processing industry, which further reduces the environmental footprint, since the waste from one industry is used to treat the waste from another. To prove the effectiveness of this biopolymer, jar-test experiments will be used to determine the optimal dosages and conditions, while variations in contaminants such as ammonium will also be observed. The efficiency of chitosan can also be compared to that of other polysaccharides to determine which is best suited to this waste. Overall, a significant separation was achieved between the solid and liquid content of the waste during the coagulation-flocculation process when applying chitosan.
This biopolymer can therefore be used to dewater brewery sludge so that it can be repurposed as animal feed. The use of biopolymers can also be applied to sludge from other industries, reducing the amount of waste produced and allowing more diverse options for reuse.

Keywords: animal feed, biopolymer, brewery sludge, chitosan

Procedia PDF Downloads 139
5082 Optimizing Bridge Deck Construction: A Deep Neural Network Approach for Limiting Exterior Girder Rotation

Authors: Li Hui, Riyadh Hindi

Abstract:

In the United States, bridge construction often employs overhang brackets to support the deck overhang, the weight of fresh concrete, and loads from construction equipment. This approach, however, can impose significant torsional moments on the exterior girders, potentially causing excessive girder rotation. Such rotations can result in various safety and maintenance issues, including thinning of the deck, reduced concrete cover, and cracking during service. Traditionally, these issues are addressed by installing temporary lateral bracing systems and conducting comprehensive torsional analysis through detailed finite element modeling of the bridge deck overhang construction. This process, however, is often intricate and time-intensive, and the spacing between temporary lateral bracing systems usually relies on the field engineers’ expertise. In this study, a deep neural network model is introduced to limit exterior girder rotation during bridge deck construction by predicting the optimal spacing between temporary bracing systems. To train this model, over 10,000 finite element models were generated in SAP2000, incorporating varying parameters such as girder dimensions, span length, and the type and spacing of lateral bracing systems. The findings demonstrate that the deep neural network provides an effective and efficient alternative for limiting exterior girder rotation during bridge deck construction. By reducing dependence on extensive finite element analyses, this approach stands out as a significant advancement in improving safety and maintenance effectiveness in the construction of bridge decks.
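
The prediction pipeline described above can be sketched in miniature. The network below is a hypothetical one-hidden-layer model trained by gradient descent on synthetic girder data; the feature names, value ranges, and target rule are illustrative assumptions, not the paper's SAP2000 dataset or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features: [girder depth (m), span length (m), overhang load (kN/m)]
X = rng.uniform([1.0, 20.0, 5.0], [3.0, 60.0, 15.0], size=(500, 3))
# Synthetic target: bracing spacing (m), assumed to grow with girder depth
# and shrink with span and load (illustrative only).
y = 2.0 * X[:, 0] - 0.05 * X[:, 1] - 0.1 * X[:, 2] + 6.0

# Standardize inputs for stable training.
Xs = (X - X.mean(0)) / X.std(0)

# One hidden layer with tanh activation, trained with full-batch MSE descent.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    H = np.tanh(Xs @ W1 + b1)            # hidden activations
    pred = (H @ W2 + b2).ravel()         # predicted spacing
    err = pred - y
    # Backpropagate mean-squared-error gradients.
    gW2 = H.T @ err[:, None] / len(y); gb2 = err.mean(keepdims=True)
    dH = (err[:, None] @ W2.T) * (1 - H ** 2)
    gW1 = Xs.T @ dH / len(y); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = np.mean((pred - y) ** 2)           # fit quality on the training set
```

Once trained, such a model replaces a fresh finite element run with a single forward pass per candidate girder configuration.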

Keywords: bridge deck construction, exterior girder rotation, deep learning, finite element analysis

Procedia PDF Downloads 46
5080 A Thorough Analysis of the Literature on the Airport Service Quality and Patron Satisfaction

Authors: Mohammed Saad Alanazi

Abstract:

Travelers’ satisfaction with the services provided in airports is a sign of an airport’s competitiveness and corporate image. This study conducted a systematic literature review of studies published after 2017 on the factors that positively influence travelers’ satisfaction and encourage them to post positive reviews online. The reviewed studies varied in research methodology, dataset, and target airport, yet they commonly categorized airport services into seven categories that deserve particular attention because their quality was found to increase review rates and positivity. Studies targeting travelers’ satisfaction and intention to revisit tended to use primary data sources (surveys), while studies concerned with the positivity or negativity of comments about airport services often used online reviews posted by travelers.

Keywords: business intelligence, airport service quality, passenger satisfaction, thorough analysis

Procedia PDF Downloads 61
5079 The Complex Relationship Between IQ and Attention Deficit Hyperactivity Disorder Symptoms: Insights From Behaviors, Cognition, and Brain in 5,138 Children With Attention Deficit Hyperactivity Disorder

Authors: Ningning Liu, Gaoding Jia, Yinshan Wang, Haimei Li, Xinian Zuo, Yufeng Wang, Lu Liu, Qiujin Qian

Abstract:

Background: There has been speculation that a high IQ may not necessarily protect against attention deficit hyperactivity disorder (ADHD), and that there may be a U-shaped correlation between IQ and ADHD symptoms. However, this speculation has not yet been validated in an ADHD population. Method: We studied 5,138 children professionally diagnosed with ADHD, spanning a wide range of IQ levels. General linear models were used to determine the optimal model relating IQ to ADHD core symptoms, with sex and age as covariates. The ADHD symptoms examined were the total score (TO), inattention (IA), and hyperactivity/impulsivity (HI). The Wechsler Intelligence Scales were used to assess IQ [Full-Scale IQ (FSIQ), Verbal IQ (VIQ), and Performance IQ (PIQ)]. Furthermore, we examined the correlations between IQ and executive function [Behavior Rating Inventory of Executive Function (BRIEF)], as well as between IQ and brain surface area, to determine whether the associations between IQ and ADHD symptoms are reflected in executive function and brain structure. Results: Consistent with previous research, FSIQ and VIQ both showed a linear negative correlation with the TO and IA scores of ADHD. However, PIQ showed an inverted U-shaped relationship with the TO and HI scores of ADHD, peaking at a PIQ of 103. These findings were also partially reflected in the relationships between IQ and executive function, as well as between IQ and brain surface area. Conclusion: In sum, the relationship between IQ and ADHD symptoms is not straightforward. Our study confirms long-standing academic hypotheses and finds that PIQ exhibits an inverted U-shaped relationship with ADHD symptoms. This study enhances our understanding of the symptoms and behaviors of ADHD across varying IQ characteristics and provides some evidence for targeted clinical intervention.
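
The inverted-U finding lends itself to a compact numerical illustration. The sketch below fits a quadratic model (here a plain least-squares polynomial, omitting the sex and age covariates of the actual general linear models) to synthetic data whose assumed true peak is at PIQ = 103, then recovers the peak from the fitted coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)
piq = rng.uniform(70, 140, 1000)
# Assumed inverted-U ground truth peaking at PIQ = 103, plus noise
# (illustrative, not the study's data).
hi_score = -0.01 * (piq - 103) ** 2 + 20 + rng.normal(0, 1, 1000)

# Fit hi = a*piq^2 + b*piq + c; an inverted U corresponds to a < 0.
a, b, c = np.polyfit(piq, hi_score, deg=2)
peak = -b / (2 * a)   # vertex of the parabola: the symptom-maximizing PIQ
```

With real data, the same vertex formula applied to the fitted quadratic coefficients locates the reported peak near 103.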

Keywords: ADHD, IQ, execution function, brain imaging

Procedia PDF Downloads 48
5078 Breastfeeding in Childhood Asthma: A Boon or a Bane

Authors: Harish Peri, Amit Devgan

Abstract:

The aim of this study was to evaluate the impact of exclusive breastfeeding on asthma and lung function in childhood. A case-control study comprising 80 cases (children with asthma) and 80 controls (children without asthma) in the age group 6-12 years was conducted. The diagnosis was made by the treating pediatrician. A parental questionnaire collected data on the name, age, and sex of the child, the duration of asthma, whether the child was breastfed, the duration and exclusiveness of breastfeeding, and the maternal asthmatic status. Peak expiratory flow rate was measured for every child using a peak expiratory flow meter. Results showed that exclusively breastfed children were better protected against asthma and had improved lung function compared to non-exclusively breastfed children, irrespective of the mother’s asthmatic status. This study demonstrated that exclusive breastfeeding has a protective effect against childhood asthma.

Keywords: asthmatic mothers, childhood asthma, exclusive breastfeeding, non-asthmatic mothers

Procedia PDF Downloads 276
5077 Adaptive Design of Large Prefabricated Concrete Panels Collective Housing

Authors: Daniel M. Muntean, Viorel Ungureanu

Abstract:

More than half of the urban population in Romania today lives in residential buildings made of large prefabricated reinforced concrete panels. Since their initial design dates from the 1960s, these housing units are now technically and morally outdated, consuming large amounts of energy for heating, cooling, ventilation, and lighting while failing to meet the needs of the contemporary lifestyle. Due to their widespread use, a system that improves their energy efficiency would have a real impact, not only on the energy consumption of the residential sector but also on the quality of life it offers. Furthermore, with the transition of today’s power grid to a “smart grid”, buildings could become an active element of future electricity networks by contributing to micro-generation and energy storage. One of the most pressing issues today is to find locally adapted strategies that meet the 20-20-20 EU policy criteria and offer sustainable and innovative solutions for the cost-optimal energy performance of buildings, adapted to the existing local market. This paper presents a possible adaptive design scenario for the sustainable retrofitting of these housing units. The apartments are transformed to meet current living requirements, and additional extensions are placed on top of the building, replacing the unused roof space and acting not only as housing units but as active solar energy collection systems. An adaptive building envelope ensures overall air-tightness, and an elevator system is introduced to facilitate access to the upper levels.

Keywords: adaptive building, energy efficiency, retrofitting, residential buildings, smart grid

Procedia PDF Downloads 284
5076 The Nexus of Federalism and Economic Development: A Politico-Economic Analysis of Balochistan, Pakistan

Authors: Rameesha Javaid

Abstract:

Balochistan, Pakistan’s largest federating unit by landmass, is named after and dominated by its 55% Baloch population, which, like the Kurds of the Middle East, has had a difficult anti-center history and reluctantly acceded to Pakistan in 1947. The region, which attained the status of a province two decades after accession, has lagged behind the other three federating units in social development and economic growth. Under geostrategic and security considerations, the province has seen the least financial autonomy and administrative decentralization in both autocratic and democratic dispensations. Significant corrections have recently been made in the policy framework: the formula for the intra-provincial National Finance Award has been changed, the number of subjects under federal control curtailed, and the Council of Common Interests reactivated. Yet policymaking remains overwhelmingly bureaucratic, under weak parliamentary oversight. The provincial coalition governments are unwieldy and directionless. The government machinery has much less than the optimal capability, character, integrity, will, and opportunity to perform. Decentralization further loses its semblance in the absence of local governments for long intervals and under the hold of hereditary tribal chiefs. Increased allocations have failed to make an impact in an environment of the highest per capita costs, due to long distances and scattered settlements. Decentralization, the basic ingredient of federalism, has remained mortgaged to geostrategic factors, internal security perceptions, autocratic and individualistic styles of government, bureaucratic policymaking structures, bad governance, non-existent local governments, and feudalistic tribal lords. This suboptimal federalism accounts for the present underdevelopment of Balochistan and will earmark the milestones of its future.

Keywords: Balochistan, economic development, federalism, political economy

Procedia PDF Downloads 299
5075 Epileptic Seizure Prediction Focusing on Relative Change in Consecutive Segments of EEG Signal

Authors: Mohammad Zavid Parvez, Manoranjan Paul

Abstract:

Epilepsy is a common neurological disorder characterized by sudden recurrent seizures. Electroencephalogram (EEG) signals are widely used to diagnose possible epileptic seizures, and many research works have been devoted to predicting seizures by analyzing them. Seizure prediction from EEG signals is a challenging task due to variations in the brain signals of different patients. In this paper, we propose a new approach to feature extraction based on phase correlation in EEG signals. In phase correlation, we calculate the relative change between two consecutive segments of an EEG signal and then combine the changes with neighboring signals to extract features. These features are then used to classify preictal/ictal and interictal EEG signals for seizure prediction. Experimental results show that the proposed method achieves a good prediction rate with greater consistency across different brain locations on the benchmark data set, compared to existing state-of-the-art methods.
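
A minimal sketch of the segment-to-segment idea follows, under assumed details: the segment length and the choice of the phase-correlation peak as the relative-change feature are illustrative, not the authors’ implementation.

```python
import numpy as np

def phase_corr_features(signal, seg_len=256):
    """One hypothetical feature per pair of consecutive EEG segments:
    the peak of the phase-correlation curve between them."""
    segs = [signal[i:i + seg_len]
            for i in range(0, len(signal) - seg_len + 1, seg_len)]
    feats = []
    for s1, s2 in zip(segs, segs[1:]):
        F1, F2 = np.fft.fft(s1), np.fft.fft(s2)
        cross = F1 * np.conj(F2)
        cross /= np.abs(cross) + 1e-12   # keep phase information only
        pc = np.fft.ifft(cross).real     # phase-correlation curve
        feats.append(pc.max())           # peak height as a change measure
    return np.array(feats)

# Illustrative input: 2048 samples of synthetic "EEG" noise.
eeg = rng_out = np.random.default_rng(2).normal(size=2048)
f = phase_corr_features(eeg)             # 8 segments -> 7 pairwise features
```

A vector like `f` (one entry per adjacent-segment pair) would then feed a classifier separating preictal/ictal from interictal windows.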

Keywords: EEG, epilepsy, phase correlation, seizure

Procedia PDF Downloads 297
5074 Recovery of Zn from Different Çinkur Leach Residues by Acidic Leaching

Authors: Mehmet Ali Topçu, Aydın Ruşen

Abstract:

Çinkur was the only plant in Turkey producing zinc from primary zinc carbonate ore, from its establishment until 1997. After that year, zinc concentrate from Iran was used in the plant. Therefore, there are two different leach residues in the Çinkur stockpiles, namely the Turkish leach residue (TLR) and the Iranian leach residue (ILR). This paper describes zinc recovery by sulphuric acid (H2SO4) treatment of each leach residue and includes a comparison with a blend of TLR and ILR. Before the leaching experiments, chemical, mineralogical, and thermal analyses of the three different leach residues were carried out using atomic absorption spectrometry (AAS), X-ray diffraction (XRD), and differential thermal analysis (DTA), respectively. Leaching experiments were conducted at optimum conditions: 100 °C, 150 g/L H2SO4, and 2 hours. In the experiments, the stirring rate was kept constant at 600 r/min, which ensures complete mixing in the leaching solution. The results show that zinc recovery for the Iranian LR was higher than for the Turkish LR, owing to their different chemical compositions.

Keywords: hydrometallurgy, leaching, metal extraction, metal recovery

Procedia PDF Downloads 339
5073 Secondary Radiation in Laser-Accelerated Proton Beamline (LAP)

Authors: Seyed Ali Mahdipour, Maryam Shafeei Sarvestani

Abstract:

Radiation pressure acceleration (RPA) and target normal sheath acceleration (TNSA) are the most important mechanisms in laser-accelerated proton (LAP) beam planning systems. LAP has inspired novel applications that can benefit from proton bunch properties different from those of conventionally accelerated proton beams. The secondary neutrons and photons produced in collisions of protons with beamline components are an important concern in proton therapy. Various published Monte Carlo studies have evaluated beamline and shielding considerations for the TNSA method, but no study directly addresses secondary neutron and photon production from the RPA method in LAP. The purpose of this study is to calculate the flux distribution of secondary neutron and photon radiation in the first area of the LAP beamline and to determine the optimal thickness and radius of the energy selector in an RPA-based LAP planning system. We also present Monte Carlo calculations to determine an appropriate beam pipe for shielding a LAP planning system. The GEANT4 Monte Carlo toolkit has been used to simulate secondary radiation production in LAP. A section of a new multifunctional LAP beamline, based on the pulsed power solenoid scheme, has been modeled in GEANT4. The results show that the energy selector is the most important source of secondary neutrons and photons in the LAP beamline. According to the calculations, a pure tungsten energy selector may not be the proper choice; using tungsten+polyethylene or tungsten+graphite composite selectors reduces the neutron and photon intensities by approximately ~10% and ~25%, respectively. The optimal radii of the energy selector were found to be ~4 cm and ~6 cm for proton deviation angles of 3 degrees and 5 degrees, respectively.

Keywords: neutron, photon, flux distribution, energy selector, GEANT4 toolkit

Procedia PDF Downloads 88
5072 Performance Improvement of Cooperative Scheme in Wireless OFDM Systems

Authors: Ki-Ro Kim, Seung-Jun Yu, Hyoung-Kyu Song

Abstract:

Recently, wireless communication systems have been required to provide high quality and high-bit-rate data services. Researchers have studied various multiple-antenna schemes to meet this demand. In practical applications, it is difficult to deploy multiple antennas because of limits on size and cost. Cooperative diversity techniques have been proposed to overcome these limitations, and cooperative communications have been widely investigated to improve the performance of wireless communication. Among diversity schemes, space-time block coding has been widely studied for cooperative communication systems. In this paper, we propose a new cooperative scheme using pre-coding and space-time block coding. The proposed cooperative scheme provides better error performance than a conventional cooperative scheme using only space-time block coding.
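
The space-time block code underlying such schemes is classically the Alamouti code. Below is a minimal noiseless sketch of Alamouti encoding and linear combining; the proposed pre-coding stage is not modeled, and the symbols and channel coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
# Two QPSK symbols to transmit (illustrative).
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)
# Flat-fading channel gains from the two antennas (or source and relay).
h1, h2 = rng.normal(size=2) + 1j * rng.normal(size=2)

# Alamouti encoding over two time slots:
#   slot 1: antennas send (s1, s2); slot 2: (-conj(s2), conj(s1)).
r1 = h1 * s1 + h2 * s2
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)

# Linear (maximum-ratio style) combining at the receiver.
g = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
```

In the noiseless case the combiner returns the transmitted symbols exactly; with noise, the `g` factor shows the full two-branch diversity gain the cooperative scheme exploits.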

Keywords: cooperative communication, space-time block coding, pre-coding

Procedia PDF Downloads 348
5071 Analysis of the Aquifer Vulnerability of a Mio-Pliocene Arid Area Using DRASTIC and SI Models

Authors: H. Majour, L. Djabri

Abstract:

Many methods for assessing groundwater vulnerability have been developed around the world (e.g., PRAST, DRIST, APRON/ARAA, PRASTCHIM, GOD). In this study, we chose two recent complementary point count system models, which map index categories with weighting criteria: the standard DRASTIC method and the Susceptibility Index (SI). At present, these two methods are the most widely used for mapping the intrinsic vulnerability of groundwater. Two classes of groundwater vulnerability in the Biskra sandy aquifer were identified by the DRASTIC method (average and high) and two by the SI method (very high and high). Integrated analysis revealed that the high class predominates under the DRASTIC method, whereas under SI the very high class predominates. Furthermore, we note that the SI method better estimates vulnerability to nitrate pollution, with an 85% agreement between groundwater nitrate concentrations and the established vulnerability classes, against 75% for the DRASTIC method. By including the land use parameter, the SI method produced more realistic results.
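
DRASTIC itself is a weighted point-count index. A minimal sketch with the standard DRASTIC parameter weights is shown below; the per-cell ratings are hypothetical values on the usual 1-10 scale, not data from the Biskra aquifer study.

```python
# Standard DRASTIC weights for the seven parameters (capital letters
# spell the acronym).
weights = {
    "Depth to water": 5, "net Recharge": 4, "Aquifer media": 3,
    "Soil media": 2, "Topography": 1, "Impact of vadose zone": 5,
    "hydraulic Conductivity": 3,
}
# Hypothetical ratings (1-10) for a single grid cell.
ratings = {
    "Depth to water": 7, "net Recharge": 6, "Aquifer media": 8,
    "Soil media": 9, "Topography": 10, "Impact of vadose zone": 8,
    "hydraulic Conductivity": 6,
}
# Vulnerability index = sum of rating * weight over all parameters.
index = sum(weights[p] * ratings[p] for p in weights)
```

Computed per cell in a GIS, this index is then binned into the vulnerability classes (e.g., average, high, very high) compared in the study.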

Keywords: DRASTIC, SI, GIS, Biskra sandy aquifer, Algeria

Procedia PDF Downloads 471
5070 Mixed Treatment (Physical-Chemical and Biological) of Ouled Fayet Landfill Leachates

Authors: O. Balamane-Zizi, L. M. Rouidi, A. Boukhrissa, N. Daas, H. Ait-amar

Abstract:

The objective of this study was to test the possibility of a mixed (physical-chemical and biological) treatment of the Ouled Fayet leachates, which are about 10 years old and have a large fraction of hard COD that can be reduced by coagulation-flocculation. Previous batch tests showed that the physical-chemical and biological treatments could be applied separately, but the removal efficiencies obtained in that case were not satisfactory. We therefore propose to test a combined treatment in order to improve the quality of the leachates. The treatment’s effectiveness was estimated by analyzing pollution parameters such as COD, suspended solids, and heavy metals (particularly iron and nickel). The main results obtained after combining the treatments show reduction rates of about 63% for COD, 73% for suspended solids, and 80% for iron and nickel. We also noted an improvement in the turbidity of the treated leachates.
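
The reported reduction rates are simple percent removals between raw and treated leachate. A minimal sketch follows; the concentration values are hypothetical, chosen only to reproduce the ~63% COD figure from the abstract.

```python
def removal_efficiency(c_in, c_out):
    """Percent reduction of a pollution parameter between the raw
    influent concentration c_in and the treated value c_out."""
    return 100 * (c_in - c_out) / c_in

# Hypothetical COD concentrations in mg O2/L (illustrative only).
cod_removal = removal_efficiency(8000.0, 2960.0)
```

The same formula applies to suspended solids and heavy metals to obtain the 73% and 80% figures.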

Keywords: landfill leachates, COD, physical-chemical treatment, biological treatment

Procedia PDF Downloads 459
5069 Identification of Key Parameters for Benchmarking of Combined Cycle Power Plants Retrofit

Authors: S. Sabzchi Asl, N. Tahouni, M. H. Panjeshahi

Abstract:

Benchmarking a process with respect to energy consumption, without accomplishing a full retrofit study, can save both engineering time and money. To achieve this goal, the first step is to develop a conceptual-mathematical model that can easily be applied to a group of similar processes. In this research, we aimed to identify a set of key parameters for a model intended for the benchmarking of combined cycle power plants. For this purpose, three similar combined cycle power plants were studied. The results showed that ambient temperature, pressure, and relative humidity, the number of HRSG evaporator pressure levels, and the relative power in part-load operation are the main key parameters. The relationships between these parameters and the power produced (by the gas/steam turbines), the gas turbine and plant efficiencies, and the temperature and mass flow rate of the stack flue gas were also investigated.

Keywords: combined cycle power plant, energy benchmarking, modelling, retrofit

Procedia PDF Downloads 290