Search results for: modeling technique
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10195

8125 Modeling of the Biodegradation Performance of a Membrane Bioreactor to Enhance Water Reuse in Agri-food Industry - Poultry Slaughterhouse as an Example

Authors: masmoudi Jabri Khaoula, Zitouni Hana, Bousselmi Latifa, Akrout Hanen

Abstract:

Mathematical modeling has become an essential tool for sustainable wastewater management, particularly for the simulation and optimization of the complex processes involved in activated sludge systems. In this context, the activated sludge model ASM3h was used for the simulation of a membrane bioreactor (MBR), which integrates biological wastewater treatment with physical separation by membrane filtration. In this study, an MBR with a useful volume of 12.5 L was fed continuously with poultry slaughterhouse wastewater (PSWW) for 50 days at a feed rate of 2 L/h and a hydraulic retention time (HRT) of 6.25 h. Throughout its operation, high removal efficiency was observed for organic pollutants, with a COD removal of 84%. Moreover, the MBR generated a treated effluent that complies with the Tunisian limits for discharge into the public sewer set in March 2018. For the nitrogenous compounds, the average concentrations of nitrate and nitrite in the permeate reached 0.26±0.3 mg/L and 2.2±2.53 mg/L, respectively. The simulation of the MBR process was performed using SIMBA software v5.0. The state variables employed in the steady-state calibration of ASM3h were determined using physical and respirometric methods. The model calibration was performed using experimental data obtained during the first 20 days of MBR operation. Afterwards, the kinetic parameters of the model were adjusted, and the simulated values of COD, N-NH4+ and N-NOx were compared with those obtained from the experiment. A good prediction was observed for the COD, N-NH4+ and N-NOx concentrations, with simulated values of 467 g COD/m³, 110.2 g N/m³, and 3.2 g N/m³ against experimental values of 436.4 g COD/m³, 114.7 g N/m³, and 3 g N/m³, respectively. For the validation of the model under dynamic simulation, the experimental results obtained during the second treatment phase of 30 days were used. The model reproduced the conditions accurately, yielding a similar pattern for the variation of the COD concentration. On the other hand, an underestimation of the N-NH4+ concentration was observed during the simulation compared with the experimental results, and the measured N-NO3 concentrations were lower than the predicted ones. This difference could be explained by the fact that the ASM models were mainly designed for the simulation of biological processes in activated sludge systems. In addition, more treatment time could be required by the autotrophic bacteria to achieve complete and stable nitrification. Overall, this study demonstrated the effectiveness of mathematical modeling in predicting the performance of MBR systems with respect to organic pollution; the model can be further improved for the simulation of nutrient removal over a longer treatment period.
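
As a quick, illustrative check of the steady-state comparison quoted above (not the authors' code), the percent deviation between the simulated and measured effluent values can be computed directly:

```python
# Illustrative check: percent deviation between the ASM3h steady-state
# simulation and the measured MBR effluent values quoted in the abstract.
simulated = {"COD": 467.0, "N-NH4+": 110.2, "N-NOx": 3.2}   # g/m^3
measured  = {"COD": 436.4, "N-NH4+": 114.7, "N-NOx": 3.0}   # g/m^3

for name in simulated:
    deviation = 100.0 * (simulated[name] - measured[name]) / measured[name]
    print(f"{name}: simulated {simulated[name]:.1f}, measured {measured[name]:.1f}, "
          f"deviation {deviation:+.1f}%")
```

For the values quoted above, the deviations come out at roughly +7% for COD, -4% for N-NH4+, and +7% for N-NOx, consistent with the reported good agreement.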

Keywords: activated sludge model (ASM3h), membrane bioreactor (MBR), poultry slaughter wastewater (PSWW), reuse

Procedia PDF Downloads 64
8124 Modeling Depth Averaged Velocity and Boundary Shear Stress Distributions

Authors: Ebissa Gadissa Kedir, C. S. P. Ojha, K. S. Hari Prasad

Abstract:

In the present study, the depth-averaged velocity and boundary shear stress in non-prismatic compound channels with three different converging floodplain angles, ranging from 1.43° to 7.59°, have been studied. The analytical solutions were derived by considering the forces acting on the channel bed and walls. Five key parameters, i.e., the non-dimensional coefficient, the secondary flow term, the secondary flow coefficient, the friction factor, and the dimensionless eddy viscosity, were considered and discussed. An expression for the non-dimensional coefficient and the integration constants was derived based on the boundary conditions. The model was applied to data sets from the present experiments and from experiments reported by other sources to examine and analyse the influence of floodplain converging angles on the depth-averaged velocity and boundary shear stress distributions. The results show that the non-dimensional parameter plays an important role in portraying the variation of the depth-averaged velocity and boundary shear stress distributions with different floodplain converging angles. Thus, the variation of the non-dimensional coefficient needs attention, since it affects the secondary flow term and the secondary flow coefficient in both the main channel and the floodplains. The analysis shows that the depth-averaged velocities are sensitive to the shear-stress-dependent non-dimensional coefficient, and the analytical solutions agree well with the experimental data when all five parameters are included. It is inferred that the developed model may be of interest to others engaged in complex flow modeling.
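
The abstract does not reproduce the governing equation, but the parameters listed (friction factor, dimensionless eddy viscosity, secondary flow term) are those of the classical Shiono-Knight depth-averaged momentum balance, sketched below as an assumed reference form; the paper's own derivation for converging floodplains may differ in detail.

```latex
% Classical Shiono-Knight depth-averaged momentum balance (reference form;
% the paper's derivation for converging floodplains may differ in detail):
\rho g H S_0
  - \frac{f}{8}\,\rho\,U_d^{2}\sqrt{1 + \frac{1}{s^{2}}}
  + \frac{\partial}{\partial y}\!\left[\rho \lambda H^{2}
      \left(\frac{f}{8}\right)^{1/2} U_d \frac{\partial U_d}{\partial y}\right]
  = \frac{\partial}{\partial y}\bigl[H\,(\rho \overline{UV})_d\bigr]
```

Here U_d is the depth-averaged velocity, H the local depth, S_0 the bed slope, s the bank side slope, f the friction factor, λ the dimensionless eddy viscosity, and the right-hand side is the depth-averaged secondary flow term; the local boundary shear stress then follows as τ_b = (f/8) ρ U_d².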

Keywords: depth-average velocity, converging floodplain angles, non-dimensional coefficient, non-prismatic compound channels

Procedia PDF Downloads 79
8123 Privacy Preserving in Association Rule Mining on Horizontally Partitioned Database

Authors: Manvar Sagar, Nikul Virpariya

Abstract:

The advancement of data mining techniques plays an important role in many applications. In the context of privacy and security issues, the problems caused by the association rule mining technique have been investigated by many researchers, and it has been shown that misuse of this technique may reveal the database owner's sensitive and private information to others. Many researchers have put effort into preserving privacy in association rule mining. Of the two basic approaches to privacy-preserving data mining, namely randomization-based and cryptography-based, the latter provides a high level of privacy but incurs higher computational as well as communication overhead. Hence, it is necessary to explore alternative techniques that reduce these overheads. In this work, we propose an efficient, collusion-resistant, cryptography-based approach for distributed association rule mining using Shamir's secret sharing scheme. As we show through theoretical and practical analysis, our approach is provably secure and requires a trusted third party only once. We use secret sharing to share information privately and a code-based identification scheme to add protection against malicious adversaries.
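
A minimal sketch of the Shamir (t, n) secret sharing primitive named above is given below; it illustrates the building block only, not the paper's full collusion-resistant mining protocol, and the field prime and parameters are arbitrary.

```python
# Minimal sketch of Shamir's (t, n) secret sharing over a prime field.
import random

PRIME = 2_147_483_647  # a Mersenne prime, large enough for small demo secrets

def make_shares(secret, threshold, n_parties):
    # Random polynomial of degree threshold-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_parties + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the secret.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(secret=1234, threshold=3, n_parties=5)
print(reconstruct(shares[:3]))  # -> 1234; any 3 of the 5 shares suffice
```

In a horizontally partitioned setting, each site could share its local support counts in this way, so that global frequent itemsets are computed without any site revealing its raw counts.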

Keywords: privacy, privacy preservation in data mining (PPDM), horizontally partitioned database, EMHS, MFI, Shamir secret sharing

Procedia PDF Downloads 413
8122 Nonlinear Finite Element Modeling of Deep Beam Resting on Linear and Nonlinear Random Soil

Authors: M. Seguini, D. Nedjar

Abstract:

An accurate nonlinear analysis of a deep beam resting on elastic-perfectly-plastic soil is carried out in this study. A nonlinear finite element model for large deflection and moderate rotation of an Euler-Bernoulli beam resting on linear and nonlinear random soil is investigated. The geometric nonlinear analysis of the beam is based on von Kármán theory, and the Newton-Raphson incremental-iterative method is implemented in a Matlab code to solve the nonlinear equations of the soil-beam interaction system. Two analyses (deterministic and probabilistic) are carried out to verify the accuracy and efficiency of the proposed model, where local average theory based on the Monte Carlo approach is used to analyze the effect of the spatial variability of the soil properties on the nonlinear beam response. The effects of six main parameters are investigated: the external load, the length of the beam, the coefficient of subgrade reaction of the soil, the Young's modulus of the beam, and the coefficient of variation and correlation length of the soil's coefficient of subgrade reaction. A comparison between the beam resting on linear and nonlinear soil models is presented for different beam lengths and external loads. Numerical results have been obtained for the combination of the geometric nonlinearity of the beam and the material nonlinearity of the random soil. This comparison highlighted the need to include the material nonlinearity and spatial variability of the soil in the geometric nonlinear analysis when the beam undergoes large deflections.
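
The two numerical ingredients named above, a Newton-Raphson solve of a nonlinear soil reaction and Monte Carlo sampling of the subgrade reaction coefficient, are sketched below for a single nonlinear spring standing in for the full beam model; the tanh reaction law and all parameter values are assumptions for illustration, not the paper's formulation.

```python
# Sketch: Newton-Raphson solve of a nonlinear soil spring plus Monte Carlo
# sampling of the subgrade reaction coefficient (single spring, assumed law).
import numpy as np

def settlement(P, k, p_ult, tol=1e-10, max_iter=50):
    """Solve P = p_ult*tanh(k*w/p_ult) for settlement w by Newton-Raphson."""
    w = P / k  # elastic first guess
    for _ in range(max_iter):
        residual = p_ult * np.tanh(k * w / p_ult) - P
        stiffness = k / np.cosh(k * w / p_ult) ** 2   # tangent dp/dw
        w -= residual / stiffness
        if abs(residual) < tol:
            break
    return w

rng = np.random.default_rng(0)
P, p_ult = 80.0, 120.0                 # applied load and ultimate reaction (kN)
k_mean, cov = 5000.0, 0.4              # subgrade coefficient mean (kN/m) and CoV
sigma_ln = np.sqrt(np.log(1.0 + cov**2))
k_samples = rng.lognormal(np.log(k_mean) - 0.5 * sigma_ln**2, sigma_ln, 2000)

w = np.array([settlement(P, k, p_ult) for k in k_samples])
print(f"mean settlement {w.mean():.4f} m, CoV {w.std() / w.mean():.2f}")
```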

Keywords: finite element method, geometric nonlinearity, material nonlinearity, soil-structure interaction, spatial variability

Procedia PDF Downloads 418
8121 Epileptic Seizure Onset Detection via Energy and Neural Synchronization Decision Fusion

Authors: Marwa Qaraqe, Muhammad Ismail, Erchin Serpedin

Abstract:

This paper presents a novel architecture for a patient-specific epileptic seizure onset detector using scalp electroencephalography (EEG). The proposed architecture is based on the fusion of decisions computed from energy and neural-synchronization related features. Specifically, one level of the detector calculates the condition number (CN) of an EEG matrix to evaluate the amount of neural synchronization present within the EEG channels. On a parallel level, the detector evaluates the energy contained in four EEG frequency subbands. The information is then fed into two independent (parallel) classification units based on support vector machines to determine the onset of a seizure event. The decisions from the two classifiers are then combined according to two fusion techniques to produce a global decision. Experimental results demonstrate that the detector based on the AND fusion technique outperforms existing detectors, with a sensitivity of 100%, a detection latency of 3 seconds, and a false alarm rate of 2.76 per hour. The OR fusion technique achieves a sensitivity of 100% and significantly improves the detection latency (0.17 seconds), yet it yields 12 false alarms per hour.
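
The sketch below illustrates the two fusion rules described above with placeholder features and thresholds (synthetic data, and simple threshold decisions standing in for the two SVM outputs); it is not the authors' detector.

```python
# Sketch of AND / OR decision fusion from two per-epoch features:
# an EEG-matrix condition number and a subband-energy proxy.
import numpy as np

rng = np.random.default_rng(1)
eeg_epoch = rng.standard_normal((18, 256))        # channels x samples (synthetic)

# Feature 1: neural synchronization proxy via the condition number of the epoch.
cn = np.linalg.cond(eeg_epoch)

# Feature 2: mean signal energy (the raw epoch stands in for a filtered subband).
energy = np.sum(eeg_epoch ** 2) / eeg_epoch.size

# Stand-ins for the two SVM outputs: simple threshold decisions.
decision_cn = cn > 50.0
decision_energy = energy > 1.2

onset_and = decision_cn and decision_energy   # fewer false alarms, higher latency
onset_or = decision_cn or decision_energy     # lower latency, more false alarms
print(onset_and, onset_or)
```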

Keywords: epilepsy, EEG, seizure onset, electroencephalography, neuron, detection

Procedia PDF Downloads 482
8120 A Phenomenological Approach to Computational Modeling of Analogy

Authors: José Eduardo García-Mendiola

Abstract:

In this work, a phenomenological approach to the computational modeling of analogy processing is carried out. The paper considers the structure of analogy, grounding the genesis of its elements in Husserl's genetic theory of association. Among the particular processes that take place in order to obtain analogical inferences, one is crucial for enabling the efficient retrieval of base cases from long-term memory, namely analogical transference grounded on familiarity. In general, it has been argued that analogical reasoning is a way by which a conscious agent tries to determine or define a certain scope of objects and the relationships between them, using previous knowledge of another, familiar domain of objects and relations. However, if a complete description of the analogy process is sought, a deeper consideration of a phenomenological nature is required insofar as its simulation by computational programs is the aim. One would also gain an idea of how complex it would be to give a fully computational account of the elements of analogy. In fact, familiarity is not the result of a mere chain of repetitions of objects or events; it is generated insofar as the object, attribute, or event in question can be integrated into a certain context that takes shape as functionalities and functional approaches or perspectives on the object are being defined. Familiarity is not generated by the identification of parts or objective determinations as if they were isolated from those functionalities and approaches. Rather, at the core of such familiarity between entities of different kinds lies the way they are functionally encoded. Hoping to make deeper inroads into these topics, this essay suggests that cognitive-computational perspectives can, from the phenomenological projection of the analogy process, both review achievements already obtained and explore new theoretical-experimental configurations for implementing analogy models in special-purpose as well as general-purpose machines.

Keywords: analogy, association, encoding, retrieval

Procedia PDF Downloads 127
8119 A Single Feature Probability-Object Based Image Analysis for Assessing Urban Landcover Change: A Case Study of Muscat Governorate in Oman

Authors: Salim H. Al Salmani, Kevin Tansey, Mohammed S. Ozigis

Abstract:

The study of the growth of built-up areas and settlement expansion is a major exercise that city managers undertake to establish past and current development trends. This is to ensure that settlement expansion needs are matched by appropriate levels of services and infrastructure. This research demonstrates the potential of satellite image processing, harnessing the single feature probability-object based image analysis (SFP-OBIA) technique, in assessing the urban growth dynamics of the Muscat Governorate in Oman for the years 1990, 2002, and 2013. This need is fueled by the continuous expansion of the Muscat Governorate beyond predicted levels of infrastructural provision. Landsat images of the years 1990, 2002, and 2013 were downloaded and preprocessed to ensure appropriate radiometric and geometric standards. A novel approach of probability filtering of the target feature segment was implemented to derive the spatial extent of the final built-up area of the Muscat Governorate for the three years. This proved to be a useful technique, with accuracy assessment results of 55%, 70%, and 71% recorded for the urban land cover of 1990, 2002, and 2013, respectively. Furthermore, the Normalized Difference Built-Up Index (NDBI) for the various images was derived and used to consolidate the results of the SFP-OBIA through a linear regression model and visual comparison. The results showed various hotspots where urbanization has taken place sporadically. Specifically, settlement in the districts (Wilayat) of AL-Amarat, Muscat, and Qurayyat experienced tremendous change between 1990 and 2002, while the districts (Wilayat) of AL-Seeb, Bawshar, and Muttrah experienced more sporadic changes between 2002 and 2013.
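
The NDBI used above as a cross-check follows the standard normalized-difference form; the sketch below assumes the usual Landsat 5/7 band roles (NIR and SWIR) and uses placeholder reflectance arrays rather than the study's imagery.

```python
# Sketch of the Normalized Difference Built-Up Index (NDBI).
import numpy as np

def ndbi(swir, nir):
    """NDBI = (SWIR - NIR) / (SWIR + NIR); built-up pixels tend toward positive values."""
    swir = swir.astype(float)
    nir = nir.astype(float)
    return (swir - nir) / (swir + nir + 1e-12)   # small term avoids division by zero

swir_band = np.array([[0.30, 0.12], [0.28, 0.10]])   # synthetic reflectances
nir_band = np.array([[0.22, 0.35], [0.20, 0.33]])
built_up_mask = ndbi(swir_band, nir_band) > 0.0
print(built_up_mask)
```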

Keywords: urban growth, single feature probability, object based image analysis, landcover change

Procedia PDF Downloads 277
8118 Optic Nerve Sheath Measurement in Children with Head Trauma

Authors: Sabiha Sahin, Kursad Bora Carman, Coskun Yarar

Abstract:

Introduction: Measuring the diameter of the optic nerve sheath is a noninvasive and easy-to-use imaging technique to predict intracranial pressure in children and adults. The aim was to measure the diameter of the optic nerve sheath in pediatric head trauma. Methods: The study group consisted of 40 healthy children and 40 patients with head trauma. Transorbital sonographic measurement of the optic nerve sheath diameter was performed. Results: The mean diameters of the optic nerve sheath of the right and left eyes were 0.408 ± 0.064 mm and 0.417 ± 0.065 mm, respectively, in the trauma group. These values were higher in patients than in the control group. There was a negative correlation between optic nerve sheath diameters and Glasgow Coma Scale scores in patients with head trauma (p < 0.05). There was a positive correlation between optic nerve sheath diameters and positive CT findings and systolic blood pressure in patients with head trauma. The clinical status of the patients at admission, blood pH, and lactate level were related to the optic nerve sheath diameter. Conclusion: Measuring the diameter of the optic nerve sheath is a noninvasive technique and can easily be used to predict increased intracranial pressure and to prevent secondary brain injury.

Keywords: head trauma, intracranial pressure, optic nerve, sonography

Procedia PDF Downloads 163
8117 Fuzzy Time Series- Markov Chain Method for Corn and Soybean Price Forecasting in North Carolina Markets

Authors: Selin Guney, Andres Riquelme

Abstract:

One of the main purposes of optimal and efficient forecasts of agricultural commodity prices is to guide firms in their economic decision making, such as planning business operations and marketing decisions. Governments are also beneficiaries and suppliers of agricultural price forecasts; they use this information to establish proper agricultural policy, and hence the forecasts affect social welfare, and systematic errors in forecasts could lead to a misallocation of scarce resources. Various empirical approaches with different methodologies have been applied to forecast commodity prices. The most commonly used approaches depend on classical time series models that assume the values of the response variables are precise, which is quite often not true in reality. Recently, this literature has evolved toward fuzzy time series models, which provide more flexibility with respect to classical time series assumptions such as stationarity and large sample size requirements. Moreover, the fuzzy modeling approach allows decision making with estimated values under incomplete information or uncertainty. A number of fuzzy time series models have been developed and implemented over the last decades; however, most of them are not appropriate for forecasting repeated and nonconsecutive transitions in the data. The modeling scheme used in this paper eliminates this problem by introducing a Markov modeling approach that takes into account both repeated and nonconsecutive transitions. In addition, the determination of the length of interval is crucial for the accuracy of forecasts. The problem of determining the length of interval arbitrarily is overcome by proposing a methodology that determines the proper length of interval based on the distribution or mean of the first differences of the series, in order to improve forecast accuracy. The specific purpose of this paper is to propose and investigate the potential of a new forecasting model that integrates this methodology for determining the proper length of interval with the fuzzy time series-Markov chain model. Moreover, the forecasting accuracy of the proposed integrated model is compared with different univariate time series models, and the superiority of the proposed method over competing methods in terms of modelling and forecasting is demonstrated on the basis of forecast evaluation criteria. The application is to daily corn and soybean prices observed at three commercially important North Carolina markets: Candor, Cofield, and Roaring River for corn, and Fayetteville, Cofield, and Greenville City for soybeans. One main conclusion of this paper is that using fuzzy logic improves forecast performance and accuracy; the effectiveness and potential benefits of the proposed model are confirmed by small values of selection criteria such as MAPE. The paper concludes with a discussion of the implications of integrating fuzzy logic and a non-arbitrary determination of the length of interval for the reliability and accuracy of price forecasts. The empirical results represent a significant contribution to our understanding of the applicability of fuzzy modeling to commodity price forecasts.
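
A compact sketch of the two ingredients highlighted above, an interval length tied to the mean of the absolute first differences and a Markov transition step over the fuzzified intervals, is given below as a generic illustration of the fuzzy time series-Markov chain idea (synthetic prices; not the paper's exact model).

```python
# Sketch: interval length from mean absolute first differences, fuzzification
# into intervals, Markov transition matrix, and a one-step forecast.
import numpy as np

prices = np.array([3.52, 3.55, 3.61, 3.58, 3.66, 3.70, 3.65, 3.72, 3.80, 3.77])

# 1) Base the interval length on the mean of the absolute first differences.
length = np.abs(np.diff(prices)).mean()
edges = np.arange(prices.min(), prices.max() + length, length)
n_states = len(edges) - 1

# 2) Fuzzify: assign each observation to the interval (fuzzy set) it falls in.
states = np.clip(np.digitize(prices, edges) - 1, 0, n_states - 1)

# 3) Estimate the Markov transition matrix from consecutive states.
P = np.zeros((n_states, n_states))
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1
row_sums = P.sum(axis=1, keepdims=True)
P = np.divide(P, row_sums, out=np.zeros_like(P), where=row_sums > 0)

# 4) One-step forecast: transition-weighted average of interval midpoints
#    (fall back to the current interval midpoint if the row is empty).
midpoints = (edges[:-1] + edges[1:]) / 2
row = P[states[-1]]
forecast = row @ midpoints if row.sum() > 0 else float(midpoints[states[-1]])
print(f"interval length {length:.3f}, one-step forecast {forecast:.3f}")
```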

Keywords: commodity, forecast, fuzzy, Markov

Procedia PDF Downloads 220
8116 Assets and Health: Examining the Asset-Building Theoretical Framework and Psychological Distress

Authors: Einav Srulovici, Michal Grinstein-Weiss, George Knafl, Linda Beeber, Shawn Kneipp, Barbara Mark

Abstract:

Background: The asset-building theoretical framework (ABTF) is acknowledged as the most complete framework thus far for depicting the relationships between asset accumulation (the stock of a household's saved resources available for future investment) and health outcomes. Although the ABTF takes into consideration the reciprocal relationship between asset accumulation and health, no ABTF-based study has yet examined this relationship. Therefore, the purpose of this study was to test the ABTF with respect to psychological distress, focusing on the reciprocal relationship between asset accumulation and psychological distress. Methods: The study employed longitudinal data on 6,295 families from the 2001 and 2007 Panel Study of Income Dynamics data sets. Structural equation modeling (SEM) was used to test the reciprocal relationship between asset accumulation and psychological distress. Results: In general, the data displayed a good fit to the model. The longitudinal SEM found that asset accumulation significantly increased with a decrease in psychological distress over time, while psychological distress significantly increased with an increase in asset accumulation over time, confirming the existence of the hypothesized reciprocal relationship. Conclusions: Individuals who are less psychologically distressed might have more energy to engage in activities, such as furthering their education or obtaining better jobs, that are in turn associated with greater asset accumulation, while those who have greater assets may invest those assets in riskier investments, resulting in increased psychological distress. The confirmation of this reciprocal relationship highlights the importance of conducting longitudinal studies and testing the reciprocal relationship between asset accumulation and other health outcomes.

Keywords: asset-building theoretical framework, psychological distress, structural equation modeling, reciprocal relationship

Procedia PDF Downloads 398
8115 Business Model Innovation and Firm Performance: Exploring Moderation Effects

Authors: Mohammad-Ali Latifi, Harry Bouwman

Abstract:

Changes in the business environment have accelerated dramatically over the last decades as a result of changes in technology, regulation, markets, and competitors' behavior. Firms need to change the way they do business in order to survive or maintain their growth. Innovating the business model (BM) can create competitive advantages and enhance firm performance. However, many companies fail to achieve the expected outcomes in practice, mostly due to irreversible fundamental changes in key components of the company's BM, which leads to more ambiguity, uncertainty, and risks associated with business performance. Moreover, the relationship among business model innovation (BMI), moderating factors, and the firm's overall performance is by and large ignored in the current literature. In this study, we identified twenty moderating factors from a comprehensive literature review. We categorized these factors based on two criteria: the extent to which the moderating factors can be controlled and managed by firms, and whether they represent generic or specific changes to the firms. This leads to four moderation groups. The first group, BM implementation, includes management support, employees' commitment, employees' skills, communication, and detailed planning. The second group, BM practices, consists of BM tooling, BM experimentation, the scope of change, the speed of change, and the degree of novelty. The third group, firm characteristics, includes firm size, age, and ownership. The last group, industry characteristics, considers the industry sector, competitive intensity, industry life cycle, environmental dynamism, and high-tech vs. low-tech industry. Through collecting data from 508 European small and medium-sized enterprises (SMEs) and using the structural equation modeling technique, the developed moderation model was examined. Results revealed that all factors in these four groups significantly moderate the relation between BMI and firm performance. In particular, factors related to BM implementation and BM practices are more manageable and could potentially improve a firm's overall performance. We believe that this result is especially important for researchers and practitioners, since the possibility of working on factors in the firm characteristics and industry characteristics groups is limited, and the firm can hardly control and manage them to improve the performance of BMI efforts.

Keywords: business model innovation, firm performance, implementation, moderation

Procedia PDF Downloads 123
8114 Development of a Tilt-Rotor Aircraft Model Using System Identification Technique

Authors: Ferdinando Montemari, Antonio Vitale, Nicola Genito, Giovanni Cuciniello

Abstract:

The introduction of tilt-rotor aircraft into the existing civilian air transportation system will provide beneficial effects due to the tilt-rotor's capability to combine the characteristics of a helicopter and a fixed-wing aircraft in one vehicle. The availability of reliable tilt-rotor simulation models supports the development of such a vehicle. Indeed, simulation models are required to design automatic control systems that increase safety, reduce the pilot's workload and stress, and ensure the optimal aircraft configuration with respect to flight envelope limits, especially during the most critical flight phases such as conversion from helicopter to aircraft mode and vice versa. This article presents a process to build a simplified tilt-rotor simulation model derived from the analysis of flight data. The model aims to reproduce the complex dynamics of the tilt-rotor during the in-flight conversion phase. It uses a set of scheduled linear transfer functions to relate the autopilot reference inputs to the most relevant rigid-body state variables. The model also computes information about the rotor flapping dynamics, which is useful to evaluate the aircraft control margin in terms of rotor collective and cyclic commands. The rotor flapping model is derived through a mixed theoretical-empirical approach, which includes physical analytical equations (applicable to the helicopter configuration) and parametric corrective functions. The latter are introduced to best fit the actual rotor behavior and balance the differences existing between helicopter and tilt-rotor during flight. Time-domain system identification from flight data is exploited to optimize the model structure and to estimate the model parameters. The presented model-building process was applied to simulated flight data of the ERICA tilt-rotor, generated using a high-fidelity simulation model implemented in the FlightLab environment. The validation of the obtained model was very satisfactory, confirming the validity of the proposed approach.
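
As a hedged illustration of the time-domain identification step described above, the sketch below fits a single first-order transfer function K/(τs + 1) to a step response by least squares; the "flight data" are synthetic stand-ins, and the scheduled multi-model structure of the paper is not attempted here.

```python
# Sketch: least-squares fit of a first-order transfer function to a step response.
import numpy as np
from scipy.optimize import curve_fit

def step_response(t, K, tau):
    """Analytical step response of K / (tau*s + 1)."""
    return K * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 10.0, 200)
true_K, true_tau = 1.8, 1.2
measured = step_response(t, true_K, true_tau) \
    + 0.02 * np.random.default_rng(2).standard_normal(t.size)   # noisy "record"

(K_hat, tau_hat), _ = curve_fit(step_response, t, measured, p0=[1.0, 1.0])
print(f"identified gain {K_hat:.3f}, time constant {tau_hat:.3f} s")
```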

Keywords: flapping dynamics, flight dynamics, system identification, tilt-rotor modeling and simulation

Procedia PDF Downloads 203
8113 Efficiency Improvement for Conventional Rectangular Horn Antenna by Using EBG Technique

Authors: S. Kampeephat, P. Krachodnok, R. Wongsan

Abstract:

The conventional rectangular horn has been used as a microwave antenna for a long time. Its gain can be increased by enlarging the horn so that it flares exponentially. This paper presents a study of shaped woodpile electromagnetic band gap (EBG) structures to improve the gain of a conventional horn without enlarging its construction. A gain enhancement synthesis method, in which the electromagnetic fields from the aperture of the horn antenna are transferred through the woodpile EBG, is presented using a variety of shaped woodpile EBGs such as planar, triangular, quadratic, circular, Gaussian, cosine, and squared-cosine structures. The proposed technique has the advantages of low profile, low fabrication cost, and light weight. The antenna characteristics, such as the reflection coefficient (S11), radiation patterns, and gain, are simulated using Computer Simulation Technology (CST) software. Based on the proposed concept, an antenna prototype was fabricated and tested. The S11 and radiation patterns obtained from measurements show good impedance matching and a gain enhancement for the proposed antenna. The gain at the dominant frequency of 10 GHz, suitable for X- and Ku-band radar applications, is 25.6 dB, which is around 8 dB higher than the gain of the basic rectangular horn antenna, achieved by adding only one appropriately shaped EBG structure.

Keywords: conventional rectangular horn antenna, electromagnetic band gap, gain enhancement, X- and Ku-band radar

Procedia PDF Downloads 285
8112 Impact of Applying Bag House Filter Technology in Cement Industry on Ambient Air Quality - Case Study: Alexandria Cement Company

Authors: Haggag H. Mohamed, Ghatass F. Zekry, Shalaby A. Elsayed

Abstract:

Most sources of air pollution in Egypt are of anthropogenic origin. Alexandria Governorate is located in the north of Egypt. The main sectors contributing to air pollution in Alexandria are industry, transportation, and area sources arising from human activities. Alexandria hosts more than 40% of the industrial activities in Egypt, and cement manufacture contributes a significant amount to the particulate pollution load. The area surrounding the Alexandria Portland Cement Company (APCC) was selected as the study area. Continuous Total Suspended Particulate (TSP) monitoring data from the APCC main kiln stack were collected for the assessment of dust emission control technology. An electrostatic precipitator (ESP) had been fitted on the cement kiln since 2002. The TSP data collected for the first quarter of 2012 were compared with those for the first quarter of 2013, after the installation of a new baghouse filter. In the present study, based on these monitoring data and meteorological data, a detailed air dispersion modeling investigation was carried out using the Industrial Source Complex Short Term model (ISC3-ST) to determine the impact of applying the new baghouse filter control technology on the ambient air quality of the neighborhood. The model results show a drastic reduction of the ambient hourly average TSP concentration from 44.94 µg/m³ to 5.78 µg/m³, which confirms the strong positive impact on ambient air quality of applying baghouse filter technology to the APCC cement kiln.
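
ISC3-ST is built on the steady-state Gaussian plume formulation; the reference ground-reflection form is recalled below (the regulatory model adds further terms, e.g., for building downwash and plume depletion, which are not shown here).

```latex
% Steady-state Gaussian plume form underlying ISC3-ST (reference formula only):
C(x, y, z) = \frac{Q}{2\pi\, u\, \sigma_y \sigma_z}
  \exp\!\left(-\frac{y^{2}}{2\sigma_y^{2}}\right)
  \left[\exp\!\left(-\frac{(z-H)^{2}}{2\sigma_z^{2}}\right)
      + \exp\!\left(-\frac{(z+H)^{2}}{2\sigma_z^{2}}\right)\right]
```

Here Q is the emission rate, u the wind speed at stack height, H the effective stack height, and σ_y(x), σ_z(x) the stability-dependent lateral and vertical dispersion coefficients.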

Keywords: air pollution modeling, ambient air quality, baghouse filter, cement industry

Procedia PDF Downloads 272
8111 Random Variation of Treated Volumes in Fractionated 2D Image Based HDR Brachytherapy for Cervical Cancer

Authors: R. Tudugala, B. M. A. I. Balasooriya, W. M. Ediri Arachchi, R. W. M. W. K. Rathnayake, T. D. Premaratna

Abstract:

Brachytherapy involves placing a source of radiation near the cancer site and offers a promising prognosis in cervical cancer treatment. The purpose of this study was to evaluate the effect of the random variation of treated volumes between fractions in 2D image-based fractionated high dose rate brachytherapy for cervical cancer at the National Cancer Institute Maharagama, Sri Lanka. Dose plans were analyzed for 150 cervical cancer patients treated with orthogonal radiograph (2D) based brachytherapy. ICRU treated volumes were modeled by translating the applicators with the help of the Multisource HDR plus software. The difference of treated volumes with respect to the applicator geometry was analyzed using SPSS 18 software to derive patient-population-based estimates of delivered treated volumes relative to ideally treated volumes. Packing was evaluated according to bladder dose, rectum dose, and the geometry of the dose distribution by three consultant radiation oncologists. The difference of treated volumes depends on the type of applicator used in fractionated brachytherapy. The mean of the difference of treated volume (DTV) was -0.48 cm³ for the evenly activated tandem length (ET) group and 11.85 cm³ for the unevenly activated tandem length (UET) group. The range of the DTV was 35.80 cm³ for the ET group, whereas it was 104.80 cm³ for the UET group. A one-sample t-test was performed to compare the DTV with the ideal treated volume difference (0.00 cm³); the p-value was 0.732 for the ET group and 0.00 for the UET group. Moreover, an independent two-sample t-test was performed to compare the ET and UET groups, and the calculated p-value was 0.005. Packing was evaluated under three categories: 59.38% of treatments used a convenient packing technique, 33.33% a fairly convenient one, and 7.29% a not convenient one. The random variation of treated volume in the ET group is much lower than in the UET group, and there is a significant difference (p < 0.05) between the ET and UET groups, which affects the dose distribution of the treatment. Furthermore, it can be concluded that nearly 92.71% of patients' packing used an acceptable packing technique at NCIM, Sri Lanka.
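
The two hypothesis tests reported above can be reproduced with standard routines; the sketch below uses synthetic per-patient DTV samples (the individual values are not given in the abstract) purely to show the form of the one-sample and independent two-sample t-tests.

```python
# Sketch of the reported tests: one-sample t-test of DTV against the ideal 0 cm^3
# and an independent two-sample t-test between the ET and UET groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
dtv_et = rng.normal(-0.48, 5.0, 60)     # synthetic "evenly activated tandem" group
dtv_uet = rng.normal(11.85, 20.0, 60)   # synthetic "unevenly activated tandem" group

t_one, p_one = stats.ttest_1samp(dtv_et, popmean=0.0)
t_two, p_two = stats.ttest_ind(dtv_et, dtv_uet, equal_var=False)
print(f"one-sample vs 0 cm^3: p = {p_one:.3f}; ET vs UET: p = {p_two:.4g}")
```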

Keywords: brachytherapy, cervical cancer, high dose rate, tandem, treated volumes

Procedia PDF Downloads 204
8110 Recurrence of Pterygium after Surgery and the Effect of Surgical Technique on the Recurrence of Pterygium in Patients with Pterygium

Authors: Luksanaporn Krungkraipetch

Abstract:

A pterygium is an ocular surface lesion that begins in the limbal conjunctiva and progresses onto the cornea. The lesion is more common at the nasal limbus than the temporal, and it has a distinctive wing-like appearance. Indications for surgery, in decreasing order of significance, are growth over the corneal center, decreased vision due to corneal deformation, documented growth, sensations of discomfort, and aesthetic concerns. Recurrent pterygium results in lost time, the expense of therapy, and the potential for vision impairment. The objective of this study is to determine how often pterygium recurs after surgery, what effect the surgical technique has, and what factors cause recurrence in patients with pterygium. Materials and Methods: A retrospective observational case-control design was used, involving the analysis of 164 patient records. Descriptive statistics were used to summarize basic data on pterygium surgery and the risk of recurrent pterygium. For factor analysis, the inferential statistics odds ratio (OR) with 95% confidence interval (CI) and ANOVA were utilized. A p-value below 0.05 was considered statistically significant. Results: The majority of patients were female (60.4%). Twenty-four of the 164 patients (14.6%) who underwent surgery exhibited recurrent pterygium. The average age was 55.33 years. Postoperative recurrence was reported in 19 cases (79.3%) treated with the bare sclera technique and five cases (20.8%) treated with the conjunctival autograft technique. The recurrence interval was 10.25 months, with 12 months being the most common (54.17%). In 91.67 percent of cases, all follow-ups were completed. The most common recurrence level was 1 (25%). The observed surgical complication was subconjunctival hemorrhage (33.33 percent). A comparison of the surgeries performed on patients with recurrent pterygium did not show a significant difference (F = 1.13, p = 0.339). Age significantly affected the recurrence of pterygium (OR = 20.78, 95% CI 6.79-63.56, p < 0.001). Conclusion: This study found a 14.6% rate of pterygium recurrence after pterygium surgery. Across all surgeries and patients, the rate of recurrence was four times higher with the bare sclera method than with the conjunctival autograft. The researchers advise selecting a more conventional surgical technique to avoid recurrence.

Keywords: pterygium, recurrence pterygium, pterygium surgery, excision pterygium

Procedia PDF Downloads 93
8109 Pattern the Location and Area of Earth-Dumping Stations from Vehicle GPS Data in Taiwan

Authors: Chun-Yuan Chen, Ming-Chang Li, Xiu-Hui Wen, Yi-Ching Tu

Abstract:

This study explores the application of GPS (Global Positioning System) data for tracing construction vehicles, such as trucks or cranes, to help locate the earth-dumping stations of traffic construction projects in Taiwan. Traffic construction in this research is defined as the engineering of high-speed railways, expressways, and similar long-distance projects. Auditing the locations of earth-dumping stations and checking their compliance with regulations is one of the important tasks of the Taiwan EPA. Earth-dumping stations are known to be a source of particulate matter air pollution during the construction process. Because GPS data can be analyzed quickly and used conveniently, this study sought to identify dumping stations by modeling vehicle tracks from GPS data over the construction work cycle. GPS data were collected from 13 vehicles involved in an expressway construction project in central Taiwan. The GPS footprints were converted to Keyhole Markup Language (KML) files so that the truck tracks could be patterned by computer applications; the data were collected over about eight months, from February to October 2017. The GPS footprints identified the dumping stations and outlined the earthwork areas, and the results were passed to the Taiwan EPA for on-site inspection. The Taiwan EPA issued advisory comments to the agency in charge of the construction in order to prevent air pollution. Compared with common environmental inspection methods based on manual data collection, the GPS-and-KML patterning and modeling method of this study consumes less time. In addition, monitoring the GPS data from construction vehicles could be useful to the administration for the development and implementation of environmental management strategies.
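
A minimal sketch of the GPS-to-KML conversion step (not the project's actual pipeline) is shown below; the coordinates are invented placeholders near central Taiwan, and the resulting file can be opened in common mapping software to visualize a vehicle track.

```python
# Sketch: write a list of GPS fixes as a KML LineString track.
track = [  # (longitude, latitude) fixes from one vehicle (placeholders)
    (120.6736, 24.1478),
    (120.6821, 24.1503),
    (120.6914, 24.1551),
]

coord_str = " ".join(f"{lon},{lat},0" for lon, lat in track)
kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>truck_01</name>
      <LineString><coordinates>{coord_str}</coordinates></LineString>
    </Placemark>
  </Document>
</kml>"""

with open("truck_01_track.kml", "w", encoding="utf-8") as f:
    f.write(kml)
```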

Keywords: automatic management, earth-dumping station, environmental management, Global Positioning System (GPS), particulate matter, traffic construction

Procedia PDF Downloads 167
8108 A Kierkegaardian Reading of Iqbal's Poetry as a Communicative Act

Authors: Sevcan Ozturk

Abstract:

The overall aim of this paper is to present a Kierkegaardian approach to Iqbal's use of literature as a form of communication. Despite belonging to different historical, cultural, and religious backgrounds, the philosophical approaches of Soren Kierkegaard, 'the father of existentialism,' and Muhammad Iqbal, 'the spiritual father of Pakistan,' present certain parallels. Both Kierkegaard and Iqbal take human existence as the starting point for their reflections, emphasise the subject of becoming a genuine religious personality, and develop a notion of the self. While doing so, they both adopt parallel methods, employ literary techniques and poetical forms, and use their literary works as a form of communication. The problem is that Iqbal does not provide a clear account of his method, as Kierkegaard does in his works. As a result, Iqbal's literary approach appears to be a collection of contradictions. This is mainly because, although he writes most of his works in poetical form, he condemns all kinds of art, including poetry. Moreover, while attacking Islamic mysticism, he at the same time uses classical literary forms and a number of traditional mystical, poetic symbols. This paper will argue that the contradictions found in Iqbal's approach are actually a significant part of Iqbal's way of communicating with his reader. It is the contention of this paper that, with the help of the parallels between the literary and philosophical theories of Kierkegaard and Iqbal, the application of Kierkegaard's method to Iqbal's use of poetry as a communicative act will make it possible to dispel the seeming ambiguities in Iqbal's literary approach. The application of Kierkegaard's theory to Iqbal's literary method will include, first, an analysis of the main principles of Kierkegaard's own literary technique of 'indirect communication,' which is a crucial term in his existentialist philosophy. Second, the clash between what Iqbal says about art and poetry and what he does will be highlighted in the light of the Kierkegaardian theory of indirect communication. It will be argued that Iqbal's literary technique can be considered a form of 'indirect communication,' and that reading his technique in this way helps dispel the contradictions in his approach. It is hoped that this paper will cultivate a dialogue between those who work in the fields of comparative philosophy, Kierkegaard studies, existentialism, contemporary Islamic thought, Iqbal studies, and literary criticism.

Keywords: comparative philosophy, existentialism, indirect communication, intercultural philosophy, literary communication, Muhammad Iqbal, Soren Kierkegaard

Procedia PDF Downloads 343
8107 Empirical Green’s Function Technique for Accelerogram Synthesis: The Problem of the Use for Marine Seismic Hazard Assessment

Authors: Artem A. Krylov

Abstract:

Instrumental seismological research in offshore areas is complicated and expensive, which leads to a lack of strong motion records in most offshore regions. At the same time, the number of offshore industrial infrastructure objects, such as oil rigs and subsea pipelines, is constantly increasing. The empirical Green's function technique has proved to be very effective for accelerogram synthesis when the seismic wave propagation medium is poorly described. However, the selection of a suitable small-earthquake record to serve as an empirical Green's function is a problem in offshore regions, because seafloor instrumental seismological investigations are usually short and yield mostly weak micro-earthquake recordings. An approach based on moving-average smoothing in the frequency domain is presented for the preliminary processing of weak micro-earthquake records before they are used as empirical Green's functions. The method results in a significant waveform correction for the modeled event. The case study of the 2009 L'Aquila earthquake is used to demonstrate the suitability of the method. This work was supported by the Russian Foundation for Basic Research (project № 18-35-00474 mol_a).
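
The preprocessing idea described above can be sketched as follows: smooth the amplitude spectrum of a weak record with a moving average while retaining its phase, then return to the time domain. The record and window length below are synthetic placeholders, not the paper's data or tuning.

```python
# Sketch: moving-average smoothing of a record's amplitude spectrum.
import numpy as np

rng = np.random.default_rng(4)
dt = 0.01                                   # sampling interval, s
record = rng.standard_normal(2048)          # placeholder micro-earthquake trace

spectrum = np.fft.rfft(record)
amplitude, phase = np.abs(spectrum), np.angle(spectrum)

window = 9                                  # moving-average width (spectral samples)
kernel = np.ones(window) / window
amplitude_smooth = np.convolve(amplitude, kernel, mode="same")

# Recombine smoothed amplitude with the original phase and invert.
smoothed_record = np.fft.irfft(amplitude_smooth * np.exp(1j * phase), n=record.size)
print(smoothed_record.shape)
```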

Keywords: accelerogram synthesis, empirical Green's function, marine seismology, microearthquakes

Procedia PDF Downloads 329
8106 Clustering of Association Rules of ISIS & Al-Qaeda Based on Similarity Measures

Authors: Tamanna Goyal, Divya Bansal, Sanjeev Sofat

Abstract:

For world-threatening terrorist attacks, where early detection, distinction, and prediction are effective diagnostic techniques, many data mining and statistical approaches are available to ensure a functionally accurate and precise analysis of terrorism data. The computational extraction of derived patterns is a non-trivial task that comprises domain-specific discovery by means of sophisticated algorithm design and analysis. This paper proposes an approach for similarity extraction that obtains the useful attributes from the available datasets of terrorist attacks, applies a feature selection technique based on statistical impurity measures, and then applies clustering techniques on the basis of similarity measures. Based on the degree of participation of attributes in the rules, the associative dependencies between the attacks are analyzed. To compute the similarity among the discovered rules, a weighted similarity measure is applied. Finally, the rules are grouped using hierarchical clustering. We applied the technique to an open-source dataset to determine its usability and efficiency, and a literature search was also carried out to support the efficiency and accuracy of our results.
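
The final grouping step can be illustrated with a small sketch: a weighted, Jaccard-style similarity between rules based on the attributes they share, converted to a distance matrix and passed to hierarchical clustering. The rules, attributes, and weights below are toy placeholders, not the actual rule base or the paper's exact measure.

```python
# Sketch: weighted similarity between association rules + hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rules = [{"weapon", "region", "group"},
         {"weapon", "region"},
         {"target", "casualties"},
         {"target", "casualties", "group"}]
weights = {"weapon": 1.0, "region": 0.8, "group": 0.5, "target": 1.0, "casualties": 0.7}

def weighted_similarity(a, b):
    """Weighted Jaccard-style similarity on the attributes appearing in two rules."""
    inter = sum(weights[x] for x in a & b)
    union = sum(weights[x] for x in a | b)
    return inter / union if union else 0.0

n = len(rules)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = 1.0 - weighted_similarity(rules[i], rules[j])

labels = fcluster(linkage(squareform(dist), method="average"), t=2, criterion="maxclust")
print(labels)   # e.g. rules 1-2 in one cluster, rules 3-4 in the other
```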

Keywords: association rules, clustering, similarity measure, statistical approaches

Procedia PDF Downloads 324
8105 Thermomechanical Effects and Nanoscale Ripples in Graphene

Authors: Roderick Melnik, Sanjay Prabhakar

Abstract:

The relaxed state of graphene nanostructures under externally applied tensile stress along both the armchair and zigzag directions is analyzed in detail. The results, obtained with the Finite Element Method (FEM), demonstrate that the amplitude of the ripple waves in such nanostructures increases with temperature. Details of the multi-scale, multi-physics computational procedure developed for this analysis are also provided.

Keywords: nanostructures, modeling, coupled processes, computer-aided design, nanotechnological applications

Procedia PDF Downloads 318
8104 Heat Transfer Characteristics on Blade Tip with Unsteady Wake

Authors: Minho Bang, Seok Min Choi, Jun Su Park, Hokyu Moon, Hyung Hee Cho

Abstract:

The present study investigates the effect of unsteady wakes on heat transfer at the blade tip. Heat/mass transfer was measured in the blade tip region for a range of Strouhal numbers using the naphthalene sublimation technique, which determines heat transfer through the heat/mass transfer analogy. Experiments are performed in a linear cascade composed of five turbine blades and rotating rods. The Strouhal number of the inlet flow is varied from 0 to 0.22. The Reynolds number is 100,000, based on the outlet flow velocity of 11.4 m/s and the axial chord length. Three different squealer tip geometries, a base squealer tip, a vertical rib squealer tip, and a camber line squealer tip, are used to study how unsteady wakes affect heat transfer on the blade tip. Depending on the squealer tip geometry, different flow patterns occur on the blade tip. Unsteady wakes also reduce the tip leakage flow and promote turbulent flow. As a result, as the Strouhal number increases, the heat/mass transfer coefficients decrease due to the reduced leakage flow; in the vertical rib squealer tip, however, the heat/mass transfer coefficients on the blade tip increase with increasing Strouhal number.
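
For reference, the two similarity parameters quoted above take their standard forms; the characteristic length used for the Strouhal number in the experiment is set by the rod-passing geometry and is written generically as L here.

```latex
% Standard definitions (generic characteristic length L assumed):
St = \frac{f\,L}{U}, \qquad Re = \frac{U_{\mathrm{exit}}\,C_{x}}{\nu}
```

Here f is the wake (rod) passing frequency, U the reference inlet velocity, U_exit = 11.4 m/s the outlet velocity, C_x the axial chord, and ν the kinematic viscosity.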

Keywords: gas turbine, blade tip, heat transfer, unsteady wakes

Procedia PDF Downloads 382
8103 Role of Molecular Changes and Immunohistochemical in Early Detection of Liver Cancer

Authors: Fatimah A. Alhomaid

Abstract:

The present study was planned to investigate the role of molecular changes and immunohistochemical markers in the early detection of liver cancer in Saudi patients. The work was carried out on 54 liver cancer patients, with data obtained from the laboratory of King Khalid University Hospital. Specimens were taken from 54 patients with liver cancer (34 male and 14 female) and 2 controls. The age of the patients varied from 37 to 85 years. The tumor was diagnosed as grade 1 in two patients (a male and a female) and grade 2 in 45 patients (28 male and 17 female), while grade 3 was found in 4 patients (all males). The specimens were processed for haematoxylin and eosin staining, immunohistochemical techniques, and flow cytometry analysis. Most patients had adenocarcinoma, characterized by the presence of signet-ring cells, which were very clear in patients with advanced adenocarcinoma. Sections of grade 2, stage 3 adenocarcinoma showed an increase in signet-ring cells, an increase in glandular acini, and an increase in the number of lymphocytes, which spread to the muscular layer. With advancing disease, there was haemorrhage, an increase in lymphocytes, and an increase in the number of nuclei in the tubular glands. Immunohistochemical diagnosis (CK20, PCNA, p53) and analysis of DNA content by flow cytometry were carried out on 48 patients. The study indicated the presence of a correlation between the immunohistochemical analysis for p53 and tumor grade; the p53 reaction appeared strong in the nucleus in grade and stage 3 and appeared in other sections as a dark brown pigment. By contrast, there was no correlation between the immunohistochemical analysis for PCNA and tumor grade; sections showed a strong reaction in more than 80% of nuclei in grade 1 and stage 2. The study also indicated a correlation between the immunohistochemical analysis for CK20 and tumor grade, with a positive cytoplasmic reaction varying from weak to moderate in grade 3 and stage 4. Concerning the flow cytometry technique, the results indicated a correlation between DNA content and the different stages of liver cancer.

Keywords: cancer, CK20, DNA, cytometry analysis, liver, immunohistochemical, molecular changes, PCNA, p53

Procedia PDF Downloads 271
8102 Implementation of Free-Field Boundary Condition for 2D Site Response Analysis in OpenSees

Authors: M. Eskandarighadi, C. R. McGann

Abstract:

It has been observed from past earthquakes that local site conditions can significantly affect the strong ground motion characteristics experienced at a site. One-dimensional seismic site response analysis is the most common approach for investigating site response. This approach assumes that the soil is homogeneous and extends infinitely in the horizontal direction; therefore, tying the side boundaries together is one way to model this behavior, as the wave passage is assumed to be only vertical. However, 1D analysis cannot capture the 2D nature of wave propagation, soil heterogeneity, and 2D soil profiles with features such as inclined layer boundaries. In contrast, 2D seismic site response modeling can consider all of these factors to better understand local site effects on strong ground motions. Two-dimensional wave propagation, and the fact that the soil profiles on the two sides of the model may not be identical, make it important to have a boundary condition on each side that can minimize unwanted reflections from the edges of the model and input appropriate loading conditions. Ideally, the model should be sufficiently large to minimize wave reflections; however, due to computational limitations, increasing the model size is impractical in some cases. Another approach is to employ free-field boundary conditions that take into account the free-field motion that would exist far from the model domain and apply it to the sides of the model. This research focuses on implementing free-field boundary conditions in OpenSees for 2D site response analysis. Comparisons are made between 1D models and 2D models with various boundary conditions, and the details and limitations of the developed free-field boundary modeling approach are discussed.
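
For context, the simpler boundary treatment mentioned above, tying the two lateral boundaries together so that opposite edge nodes move identically, can be sketched in OpenSeesPy as follows; this is the baseline that free-field columns improve upon, and the mesh dimensions, fixed base, and omitted soil elements are placeholders, not the paper's model.

```python
# Sketch: periodic (tied) lateral boundaries for a small 2D soil mesh in OpenSeesPy.
import openseespy.opensees as ops

ops.wipe()
ops.model('basic', '-ndm', 2, '-ndf', 2)

nx, ny, dx, dy = 2, 10, 1.0, 1.0
for j in range(ny + 1):
    for i in range(nx + 1):
        tag = j * (nx + 1) + i + 1
        ops.node(tag, i * dx, j * dy)
        if j == 0:
            ops.fix(tag, 1, 1)          # fixed base (rigid bedrock assumption)

# Tie left and right edge nodes at each elevation in both DOFs.
for j in range(1, ny + 1):
    left = j * (nx + 1) + 1
    right = j * (nx + 1) + (nx + 1)
    ops.equalDOF(left, right, 1, 2)

# ... soil elements, materials, and base excitation would be defined here.
```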

Keywords: boundary condition, free-field, OpenSees, site response analysis, wave propagation

Procedia PDF Downloads 166
8101 A Technique for Image Segmentation Using K-Means Clustering Classification

Authors: Sadia Basar, Naila Habib, Awais Adnan

Abstract:

The paper presents a technique for image segmentation using k-means clustering classification. Previously presented algorithms were application-specific; moreover, they missed neighboring information and required high-speed computing machines to run the segmentation. Clustering is the process of partitioning a group of data points into a small number of clusters. The proposed method is a content-aware feature extraction method that is able to run on low-end machines; it is a simple algorithm, requires only low-quality streaming, is efficient, and can be used for security purposes. It has the capability to highlight the boundary and the object. First, the user enters the data as the input representation. In the next step, the digital image is converted into groups of clusters, and the clusters are divided into many regions. Clusters with the same features are assembled within a group, and different clusters are placed in other groups. Finally, the clusters are combined with respect to similar features and then represented in the form of segments. The clustered image gives a clear representation of the digital image, highlighting its regions and boundaries. The final image is presented in the form of segments, with all colors of the image separated into clusters.
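
A hedged sketch of the core step, clustering pixel colors with k-means and rebuilding the image from the cluster centers so that regions and boundaries stand out, is given below using scikit-learn; the random array stands in for a user-supplied image.

```python
# Sketch: k-means segmentation of pixel colors.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)   # placeholder RGB image

pixels = image.reshape(-1, 3).astype(float)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)

# Replace every pixel with its cluster center -> a segmented image.
segmented = kmeans.cluster_centers_[kmeans.labels_].reshape(image.shape).astype(np.uint8)
labels_2d = kmeans.labels_.reshape(image.shape[:2])               # region map
print(segmented.shape, np.unique(labels_2d))
```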

Keywords: clustering, image segmentation, K-means function, local and global minimum, region

Procedia PDF Downloads 380
8100 A Propose of Personnel Assessment Method Including a Two-Way Assessment for Evaluating Evaluators and Employees

Authors: Shunsuke Saito, Kazuho Yoshimoto, Shunichi Ohmori, Sirawadee Arunyanart

Abstract:

In this paper, we suggest an assessment mechanism that both raters and ratees (employees) can find convincing. Many problems exist in personnel assessment; in particular, we focus on three: (1) raters do not sufficiently recognize the assessment points; (2) ratees are not convinced by the assessment mechanism; (3) raters (or evaluators) and ratees have empathy. To solve these problems, we suggest (1) the setting of an "understanding of the assessment points," (2) the setting of a "relative assessment ability," and (3) a two-way assessment mechanism. As a prerequisite, it is assumed that there are multiple raters, because of the growing importance of multi-faceted assessment. In this model, the weight of each assessment point for each evaluator is determined by the degree of understanding and the assessment ability of the raters and ratees. We use the Analytic Network Process (ANP), a theory that extends the decision-making technique AHP (Analytic Hierarchy Process). ANP can address problems formulated as a network, and a two-way assessment is possible. Applying this technique to personnel assessment, the weights of each rater for each point can be reasonably determined. We suggest an absolute assessment for the two-way assessment by ANP. We have verified that acceptance of the proposed approach by the two parties is higher than with the conventional mechanism, and we also received comments from a human resources consultant about applying the approach in practice.
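
As an illustration of the pairwise-comparison machinery underlying AHP/ANP (the full ANP supermatrix of the proposal is not reproduced here), the sketch below derives priority weights for three assessment points from the principal eigenvector of a reciprocal comparison matrix; the matrix entries are made up.

```python
# Sketch: AHP-style priority weights from a pairwise comparison matrix.
import numpy as np

# A[i, j] = how much more important point i is than point j (Saaty's 1-9 scale).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1); RI = 0.58 for a 3x3 matrix.
ci = (eigvals.real[principal] - len(A)) / (len(A) - 1)
print("weights:", np.round(weights, 3), "CR:", round(ci / 0.58, 3))
```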

Keywords: personnel evaluation, pairwise comparison, analytic network process (ANP), two-ways

Procedia PDF Downloads 386
8099 A Basic Modeling Approach for the 3D Protein Structure of Insulin

Authors: Daniel Zarzo Montes, Manuel Zarzo Castelló

Abstract:

Proteins play a fundamental role in biology, but their structure is complex, and it is a challenge for teachers to conceptually explain the differences between their primary, secondary, tertiary, and quaternary structures. On the other hand, there are currently many computer programs to visualize the 3D structure of proteins, but they require advanced training and knowledge. Moreover, it becomes difficult to visualize the sequence of amino acids in these models and how the protein conformation is reached. Given this drawback, a simple and instructive procedure is proposed in order to teach the protein structure to undergraduate and graduate students. For this purpose, insulin has been chosen because it is a protein that consists of 51 amino acids, a relatively small number. The methodology consists of the use of plastic atom models, which are frequently used in organic chemistry and biochemistry to explain the chirality of biomolecules. For didactic purposes, when the aim is to teach the biochemical foundations of proteins, a manipulative system seems convenient, starting from the chemical structure of amino acids. It has the advantage that the bonds between amino acids can be conveniently rotated, following the pattern marked by the 3D models. First, the 51 amino acids were modeled, and then they were linked according to the sequence of this protein. Next, the three disulfide bonds that characterize the stability of insulin were established, and then the alpha-helix structure was formed. In order to reach the tertiary 3D conformation of this protein, different interactive models available on the Internet were visualized. In conclusion, the proposed methodology seems very suitable for biology and biochemistry students because they can learn the fundamentals of protein modeling by means of a manipulative procedure as a basis for understanding the functionality of proteins. This methodology would be useful for biology or biochemistry laboratory practice at either the pre-graduate or university level.

Keywords: protein structure, 3D model, insulin, biomolecule

Procedia PDF Downloads 62
8098 Application of Voltammetry as a Non-Destructive Tool to Quantify Cathodic Protection of Steel in Simulated Soil Solution

Authors: Mandlenkosi G. R. Mahlobo, Peter A. Olubambi

Abstract:

Cathodic protection (CP) has been widely considered a suitable technique for mitigating corrosion of steel structures buried in soil. Many efforts have been made to develop techniques, in particular non-destructive techniques, for monitoring and quantifying the effectiveness of CP to ensure the sustainability and performance of buried steel structures. This study aimed to use a specifically modified voltammetry approach as a non-destructive tool to monitor and quantify the effectiveness of CP of steel in simulated soil. Carbon steel was subjected to electrochemical tests in NS4 solution, used to simulate soil conditions, for four days before applying CP for a further 11 days. A specifically modified voltammetry technique was applied at various time intervals of the experiment to monitor the corrosion behaviour and thereby reflect the CP effectiveness. The voltammetry results revealed that the application of CP reduced the corrosion rate from a maximum value of 410 µm/yr to 8 µm/yr between days 5 and 14 of the experiment. Microstructural analysis of the steel surface performed using X-ray diffraction identified calcareous deposits as the dominant phase protecting the surface from corrosion. It was deduced that the formation of calcareous deposits was linked with the effectiveness of the CP of steel.

Keywords: carbon steel, cathodic protection, NS4 solution, voltammetry, XRD

Procedia PDF Downloads 73
8097 Video Materials as a Persuasive Strategy in Tourism Discourse

Authors: Ganna Zakharova

Abstract:

The persuasive influence of tourism promotional materials is widely experienced nowadays. In order to attract the attention of viewers, marketers employ various techniques in their digital texts. Video is an essential element of attraction and seduction; it is a trigger element for tourists. This web marketing solution engages and convinces potential tourists to book a tourism product. Embedding video materials into a website provides useful information, creates different feelings in viewers, and helps them finalize their decisions. The present article discusses video solutions used on health tourism websites to allure potential tourists. The paper reviews the influential elements of persuasive tourism marketing videos and highlights how these components, as persuasive strategies in tourism promotional materials, can influence the decisions of tourism website users. The results section provides real examples of the deployment of this technique on the website of the 'Karpaty' resort (Ukraine). This technique deserves attention, as it plays an important role in the promotion of tourism services. The data collected in this study provide updated information on the rhetoric of tourism.

Keywords: tourism discourse, persuasive video, influential videos in marketing, persuasive discourse, tourism promotion

Procedia PDF Downloads 121
8096 Ground Improvement Using Deep Vibro Techniques at Madhepura E-Loco Project

Authors: A. Sekhar, N. Ramakrishna Raju

Abstract:

This paper presents the results of ground improvement using deep vibro techniques with a combination of sand and stone columns, performed on a highly liquefaction-susceptible site (70 to 80% sand strata, the balance silt) with low bearing capacity due to high settlements, located in earthquake zone V (as per the IS code) at Madhepura, Bihar state, in the northern part of India. Initially, bored cast-in-situ/precast piles and stone/sand columns were envisaged. However, after detailed analysis aimed at addressing liquefaction and improving bearing capacity simultaneously, deep vibro techniques with a combination of sand and stone columns were found to be an excellent solution for the given site conditions, possibly for the first time in India. First, after a detailed soil investigation, pre-treatment eCPT tests were conducted to evaluate the potential depth of liquefaction and the densification of the silty sandy soils needed to improve the factor of safety against liquefaction. Trial tests were then carried out at the site using the deep vibro compaction technique with combinations of sand and stone columns at different column spacings in a triangular pattern and with different timings during each lift of the vibro probe up to ground level. Different spacings and timings were tried in order to obtain the most effective combination for achieving maximum and uniform densification of the saturated loose silty sandy soils over the complete treated area. Post-treatment eCPT and plate load tests were then conducted at all trial locations with different spacings and timings of sand and stone columns to identify the configuration giving the required factor of safety against liquefaction and the desired bearing capacities with reduced settlements for the construction of industrial structures. A review of these results showed that the ground layers were densified more than expected, with an improved factor of safety against liquefaction and good bearing capacities for the given settlements as per IS code provisions. A cost-effective option for lightly loaded single-storey structures was also worked out by using the deep vibro technique with sand columns alone, avoiding stone. The results were found satisfactory for supporting lightly loaded foundations. In this technique, the most important aim is to mitigate liquefaction while simultaneously improving bearing capacities and reducing settlements to acceptable limits as per IS: 1904-1986, up to a depth of 19 m. To the best of our knowledge, this was executed for the first time in India.

Keywords: ground improvement, deep vibro techniques, liquefaction, bearing capacity, settlement

Procedia PDF Downloads 199