Search results for: test and simulation
1799 Impact of Locally Synthesized Carbon Nanotubes against Some Local Clinical Bacterial Isolates
Authors: Abdul Matin, Muazzama Akhtar, Shahid Nisar, Saddaf Mazzar, Umer Rashid
Abstract:
Antibiotic resistance is an increasing concern worldwide nowadays. Neisseria gonorrhoeae and Staphylococcus aureus are known to cause major human sexually transmitted and respiratory diseases, respectively. Nanotechnology is an emerging discipline, and its application in various fields, especially in the medical sciences, is gigantic. In the present study, we synthesized multi-walled carbon nanotubes (MWNTs) using the acid oxidation method; the solubilized MWNTs had lengths predominantly >500 nm and diameters ranging from 40 to 50 nm. The locally synthesized MWNTs were used against Gram-positive and Gram-negative bacteria to determine their impact on bacterial growth. Clinical isolates of Neisseria gonorrhoeae (isolate: 4C-11) and Staphylococcus aureus (isolate: 38541) were obtained from a local hospital and cultured in LB broth at 37°C. Both clinical strains can be obtained on request from the University of Gujarat. A spectrophotometric assay was performed to determine the impact of MWNTs on bacterial growth in vitro. To determine the effect of MWNTs on the test organisms, various concentrations of MWNTs were used, and observations were recorded at various time intervals to understand the growth inhibition pattern. Our results demonstrated that MWNTs exhibited toxic effects on Staphylococcus aureus while showing only limited growth inhibition of Neisseria gonorrhoeae, which suggests the resistance potential of Neisseria against nanoparticles. Our results clearly demonstrate a gradual decrease in bacterial numbers with the passage of time when compared with the control. Maximum bacterial inhibition was observed at the maximum concentration (50 µg/ml). Our future work will include further characterization and the mode of action of our locally synthesized MWNTs.
In conclusion, we investigated and reported for the first time the inhibitory potential of locally synthesized MWNTs on local clinical isolates of Staphylococcus aureus and Neisseria gonorrhoeae.
Keywords: antibacterial activity, multi-walled carbon nanotubes, Neisseria gonorrhoeae, spectrophotometric assay, Staphylococcus aureus
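The spectrophotometric assay above compares treated cultures against an untreated control. A minimal sketch of the inhibition calculation, assuming illustrative OD600 readings rather than the study's actual data:

```python
def percent_inhibition(od_control: float, od_treated: float) -> float:
    """Growth inhibition (%) of a treated culture relative to the control."""
    if od_control <= 0:
        raise ValueError("control OD must be positive")
    return 100.0 * (od_control - od_treated) / od_control

# Illustrative OD600 readings at different MWNT concentrations (ug/ml)
readings = {0: 0.92, 10: 0.80, 25: 0.61, 50: 0.37}
control_od = readings[0]
inhibition = {conc: percent_inhibition(control_od, od)
              for conc, od in readings.items() if conc > 0}
```

Consistent with the abstract, inhibition in this sketch is greatest at the highest concentration.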
Procedia PDF Downloads 314
1798 SiO2-Ag+Chlorex vs SilverSulfaDiazine: An 'in vitro' and 'in vivo' Silver Challenge
Authors: Roberto Cassino, Valeria Dissette, Carlo Alberto Bignozzi, Daniele Pazzi
Abstract:
Background and Aims: The aim of this work was to investigate, both 'in vitro' and 'in vivo', whether the new SCX technology (SiO2-Ag+Chlorex) can readily defeat infections and whether it is really more effective than SSD (SilverSulfaDiazine). 'In vitro' methods: we tested 'in vitro' the effectiveness of both silver materials using a pool of 5 strains: Pseudomonas aeruginosa, Staphylococcus aureus, Escherichia coli, Enterococcus hirae, and Candida albicans. 100 µl of this pool were seeded on Petri dishes and incubated for 24 hours at 37°C. 'In vivo' methods: we enrolled patients with multiple infected chronic wounds (according to the Cutting & Harding criteria for infection); after a qualitative evaluation of the wounds' bacterial population, taking a sample by plug, we included in the study 6 patients with a total of 10 wounds, infected by one or more of the microorganisms used for the 'in vitro' test. The protocol consisted of treatment with a spray powder of SSD every 48 hours for 14 days; in case of worsening, we were to start a new treatment with a spray powder containing silicon dioxide, ionic silver, and chlorhexidine (SiO2-Ag+Chlorex) every 48 hours for 14 days. We evaluated the number of clinical signs of infection and the disappearance or not of the wound edge erythema. 'In vitro' results: SSD demonstrated a wide zone of inhibition within 24 hours, but after 5 days there were no more signs of inhibition; on the contrary, SCX had a good inhibition ring that lasted more than 5 days. 'In vivo' results: all wounds treated with SSD got worse; the signs of infection increased, and the wound edge erythema did not disappear. In accordance with the protocol, we then treated all wounds with SCX, and they all improved within the period of observation, with complete disappearance of clinical signs of infection and no more wound edge erythema. Conclusions: the study demonstrated the effectiveness of SiO2-Ag+Chlorex, especially in terms of long-lasting antimicrobial action.
The 'in vivo' outcomes mirrored those obtained 'in vitro', so there was a close correspondence between the laboratory results and the clinical ones.
Keywords: chronic wounds, infections, ionic silver, SSD
Procedia PDF Downloads 334
1797 Solid Lipid Nanoparticles of Levamisole Hydrochloride
Authors: Surendra Agrawal, Pravina Gurjar, Supriya Bhide, Ram Gaud
Abstract:
Levamisole hydrochloride is a prominent anticancer drug in the treatment of colon cancer, but it has shown toxic effects due to poor bioavailability and poor cellular uptake by tumor cells. Levamisole is an unstable drug. Incorporation of this molecule in solid lipids may minimize its exposure to the aqueous environment and partly immobilize the drug molecules within the lipid matrix, both of which may protect the encapsulated drug against degradation. The objectives of the study were to enhance bioavailability by sustaining drug release and to reduce the toxicities associated with the therapy. The solubility of the drug was determined in different lipids to select the components of the Solid Lipid Nanoparticles (SLN). Pseudoternary phase diagrams were created using the aqueous titration method. Formulations were subjected to particle size and stability evaluation to select the final test formulations, which were characterized for average particle size, zeta potential, in-vitro drug release, and percentage transmittance to optimize the final formulation. SLN of levamisole hydrochloride were prepared by the nanoprecipitation method. Glyceryl behenate (Compritol 888 ATO) was used as the lipid core, with Tween 80 as surfactant and lecithin as co-surfactant in a 1:1 ratio. Entrapment efficiency (EE) was found to be 45.89%. Particle size was found in the range of 100-600 nm. The zeta potential of the formulation was -17.0 mV, indicating the stability of the product. The in-vitro release study showed that 66% of the drug was released in 24 hours at pH 7.2, indicating that the formulation can provide controlled action in the intestinal environment. At pH 5.0 it showed 64% release, indicating that it can also release the drug in the acidic environment of tumor cells.
In conclusion, the results revealed SLN to be a promising approach to sustain drug release so as to increase the bioavailability and cellular uptake of the drug, with a reduction in toxic effects as the dose is reduced under controlled delivery.
Keywords: SLN, nanoparticulate delivery of levamisole, pharmacy, pharmaceutical sciences
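The entrapment efficiency reported above (45.89%) follows from the standard mass balance between total and unentrapped drug. A small sketch with hypothetical masses, not the study's measurements:

```python
def entrapment_efficiency(total_drug_mg: float, free_drug_mg: float) -> float:
    """EE (%) = (total drug - unentrapped drug in supernatant) / total drug."""
    if not 0 <= free_drug_mg <= total_drug_mg:
        raise ValueError("free drug must lie between 0 and the total drug")
    return 100.0 * (total_drug_mg - free_drug_mg) / total_drug_mg

# Hypothetical batch: 100 mg drug loaded, 54.11 mg recovered unentrapped
ee = entrapment_efficiency(100.0, 54.11)
```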
Procedia PDF Downloads 431
1796 Real-World Comparison of Adherence to and Persistence with Dulaglutide and Liraglutide in UAE e-Claims Database
Authors: Ibrahim Turfanda, Soniya Rai, Karan Vadher
Abstract:
Objectives— The study aims to compare real-world adherence to and persistence with dulaglutide and liraglutide in patients with type 2 diabetes (T2D) initiating treatment in the UAE. Methods— This was a retrospective, non-interventional study (observation period: 01 March 2017–31 August 2019) using the UAE Dubai e-Claims database. Included were adult patients initiating dulaglutide/liraglutide between 01 September 2017 and 31 August 2018 (index period) with: ≥1 claim for T2D in the 6 months before the index date (ID); ≥1 claim for dulaglutide/liraglutide during the index period; and continuous medical enrolment for ≥6 months before and ≥12 months after the ID. Key endpoints, assessed 3/6/12 months after the ID: adherence to treatment (proportion of days covered [PDC; PDC ≥80% considered 'adherent'], per-group mean±standard deviation [SD] PDC); and persistence (number of continuous therapy days from the ID until discontinuation [i.e., a gap of >45 days] or the end of the observation period). Patients initiating dulaglutide/liraglutide were propensity score matched (1:1) based on baseline characteristics. The between-group comparison of adherence was analysed using the McNemar test (α=0.025). Persistence was analysed using Kaplan–Meier estimates with log-rank tests (α=0.025) for between-group comparisons. This study presents 12-month outcomes. Results— Following propensity score matching, 263 patients were included in each group. The mean±SD PDC for all patients at 12 months was significantly higher in the dulaglutide group than in the liraglutide group (dulaglutide=0.48±0.30, liraglutide=0.39±0.28, p=0.0002). The proportion of adherent patients favored dulaglutide (dulaglutide=20.2%, liraglutide=12.9%, p=0.0302), as did the probability of being adherent to treatment (odds ratio [97.5% CI]: 1.70 [0.99, 2.91]; p=0.03). The proportion of persistent patients also favored dulaglutide (dulaglutide=15.2%, liraglutide=9.1%, p=0.0528), as did the probability of discontinuing treatment 12 months after the ID (p=0.027).
Conclusions— Based on the UAE Dubai e-Claims database, dulaglutide initiators exhibited significantly greater adherence in terms of mean PDC versus liraglutide initiators. The proportion of adherent patients and the probability of being adherent favored the dulaglutide group, as did treatment persistence.
Keywords: adherence, dulaglutide, effectiveness, liraglutide, persistence
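Proportion of days covered, the adherence measure used above, is the fraction of observation days on which the patient held a supply of the drug. A minimal sketch; the claim dates and supplies below are invented, while the 0.80 threshold is the one stated in the abstract:

```python
def proportion_of_days_covered(fills, period_days):
    """fills: (start_day, days_supply) pairs, as day offsets from the index
    date. Overlapping fills are counted once via a set of covered days."""
    covered = set()
    for start, supply in fills:
        covered.update(d for d in range(start, start + supply)
                       if 0 <= d < period_days)
    return len(covered) / period_days

def is_adherent(pdc, threshold=0.80):
    return pdc >= threshold

# Invented example: two 30-day fills with a 15-day gap, 90-day window
pdc = proportion_of_days_covered([(0, 30), (45, 30)], 90)
```

Note that a mean PDC of 0.48, as reported for the dulaglutide group, still falls below the 0.80 adherence threshold.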
Procedia PDF Downloads 126
1795 The Effect of Finding and Development Costs and Gas Price on Basins in the Barnett Shale
Authors: Michael Kenomore, Mohamed Hassan, Amjad Shah, Hom Dhakal
Abstract:
Shale gas reservoirs have been of greater importance than shale oil reservoirs since 2009, and with the current state of the oil market, understanding the technical and economic performance of shale gas reservoirs is important. Using the Barnett Shale as a case study, an economic model was developed to quantify the effect of finding and development costs and gas prices on the basins in the Barnett Shale, using net present value as an evaluation parameter. A rate of return of 20% and a payback period of 60 months or less were used as the investment hurdle in the model. The Barnett was split into four basins (Strawn Basin, Ouachita Folded Belt, Fort Worth Syncline, and Bend Arch Basin), with analysis conducted on each basin to provide a holistic outlook. The dataset consisted only of horizontal wells that started production from 2008 to at most 2015, with 1835 wells coming from the Strawn Basin, 137 wells from the Ouachita Folded Belt, 55 wells from the Bend Arch Basin, and 724 wells from the Fort Worth Syncline. The data were analyzed initially in Microsoft Excel to determine the estimated ultimate recovery (EUR). The EUR ranges from each basin were loaded into the Palisade Risk software, and a lognormal distribution, typical of Barnett Shale wells, was fitted to the dataset. Monte Carlo simulation was then carried out over 1,000 iterations to obtain a cumulative distribution plot showing the probabilistic distribution of EUR for each basin. From the cumulative distribution plot, the P10, P50, and P90 EUR values for each basin were used in the economic model. Gas production from an individual well with an EUR similar to the calculated EUR was chosen and rescaled to fit the calculated EUR values for each basin at the respective percentiles, i.e., P10, P50, and P90.
The rescaled production was entered into the economic model to determine the effect of the finding and development cost and gas price on the net present value (10% discount rate per year), as well as to determine the scenarios that satisfied the proposed investment hurdle. The finding and development costs used in this paper (assumed to consist only of the drilling and completion costs) were £1 million, £2 million, and £4 million, while the gas price was varied from $2/MCF to $13/MCF based on Henry Hub spot prices from 2008-2015. The major findings of this study were that wells in the Bend Arch Basin were the least economic, that higher gas prices are needed in basins containing non-core counties, and that 90% of the Barnett Shale wells were not economic at any finding and development cost, irrespective of the gas price, in all the basins. This study helps to determine the percentage of wells that are economic over different ranges of costs and gas prices, the basins that are most economic, and the wells that satisfy the investment hurdle.
Keywords: shale gas, Barnett shale, unconventional gas, estimated ultimate recovery
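The workflow above (sample a lognormal EUR distribution, read off P10/P50/P90, discount cash flows at 10% per year) can be sketched as follows; the lognormal parameters and cash flows are illustrative, not the Barnett data:

```python
import random

def eur_percentiles(mu, sigma, n=1000, seed=1):
    """Monte Carlo sample of a lognormal EUR distribution (n iterations),
    returning P90/P50/P10 by the exceedance convention (P90 is the low case)."""
    rng = random.Random(seed)
    vals = sorted(rng.lognormvariate(mu, sigma) for _ in range(n))
    def pct(p):  # value exceeded by p% of the sampled wells
        return vals[int(round((1 - p / 100) * (n - 1)))]
    return pct(90), pct(50), pct(10)

def npv(capex, yearly_cash_flows, rate=0.10):
    """Net present value at a 10%/year discount rate by default."""
    return -capex + sum(cf / (1 + rate) ** (t + 1)
                        for t, cf in enumerate(yearly_cash_flows))

p90, p50, p10 = eur_percentiles(mu=0.0, sigma=0.8)
```

A well scenario passes the economic screen only when the discounted cash flows recover the assumed finding and development cost.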
Procedia PDF Downloads 302
1794 Definition of Aerodynamic Coefficients for Microgravity Unmanned Aerial System
Authors: Gamaliel Salazar, Adriana Chazaro, Oscar Madrigal
Abstract:
The evolution of Unmanned Aerial Systems (UAS) has made it possible to develop new vehicles capable of performing microgravity experiments which, due to their cost and complexity, were previously beyond the reach of many institutions. In this study, the aerodynamic behavior of a UAS is studied through its deceleration stage after an initial free-fall phase (where the microgravity effect is generated) using Computational Fluid Dynamics (CFD). Because the payload is analyzed under a microgravity environment, and given the nature of the payload itself, the speed of the UAS must be reduced smoothly. Moreover, the terminal speed of the vehicle should be low enough to preserve the integrity of the payload and vehicle during the landing stage. The UAS model consists of a study pod, control surfaces with fixed and mobile sections, landing gear, and two semicircular wing sections. The speed of the vehicle is decreased by increasing the angle of attack (AoA) of each wing section from 2° (where the S1091 airfoil has its greatest aerodynamic efficiency) to 80°, creating a circular wing geometry. Drag coefficients (Cd) and drag forces (Fd) are obtained employing CFD analysis. A simplified 3D model of the vehicle is analyzed using Ansys Workbench 16. The distance between the object of study and the walls of the control volume is eight times the length of the vehicle. The domain is discretized using an unstructured mesh based on tetrahedral elements. The mesh is refined by defining an element size of 0.004 m on the wing and control surfaces in order to resolve the fluid behavior in the most important zones and to obtain accurate approximations of the Cd. The k-epsilon turbulence model is selected to solve the governing equations of the fluid, while monitors are placed on both the wing and the whole vehicle to visualize the variation of the coefficients along the simulation process.
Employing a response surface methodology (a statistical approximation), the case study is parametrized with the AoA of the wing as the input parameter and Cd and Fd as output parameters. Based on a Central Composite Design (CCD), the Design Points (DP) are generated so that the Cd and Fd for each DP can be estimated. Applying a 2nd-degree polynomial approximation, the drag coefficients for every AoA were determined. Using these values, the terminal speed at each position is calculated considering a specific Cd. Additionally, the distance required to reach the terminal velocity at each AoA is calculated, so that the minimum distance for the entire deceleration stage without compromising the payload can be determined. The maximum Cd of the vehicle is 1.18, so its maximum drag will be almost like the drag generated by a parachute. This guarantees that, aerodynamically, the vehicle can be braked, so it could be utilized for several missions, allowing repeatability of microgravity experiments.
Keywords: microgravity effect, response surface, terminal speed, unmanned system
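The terminal-speed step can be sketched from the drag balance m·g = ½·ρ·Cd·A·v², i.e. v_t = sqrt(2·m·g/(ρ·Cd·A)). The mass and reference area below are assumptions for illustration; the Cd of 1.18 is the maximum reported above:

```python
import math

def terminal_speed(mass_kg, cd, area_m2, rho=1.225, g=9.81):
    """Terminal speed where weight equals drag (sea-level air density)."""
    return math.sqrt(2.0 * mass_kg * g / (rho * cd * area_m2))

# Assumed vehicle: 5 kg, 0.5 m^2 reference area, Cd_max = 1.18
v_t = terminal_speed(5.0, 1.18, 0.5)
```

Increasing the AoA raises Cd and so lowers the terminal speed, which is the braking mechanism the abstract describes.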
Procedia PDF Downloads 173
1793 The Impacts of an Adapted Literature Circle Model on Reading Comprehension, Engagement, and Cooperation in an EFL Reading Course
Authors: Tiantian Feng
Abstract:
There is a dearth of research on the literature circle as a teaching strategy in English as a Foreign Language (EFL) classes in Chinese colleges and universities, and even fewer empirical studies on its impacts. In this one-quarter, design-based project, the researcher aims to increase students' engagement, cooperation, and, on top of that, reading comprehension performance by utilizing a researcher-developed, adapted reading circle model in an EFL reading course at a Chinese college. The model also integrated team-based learning and portfolio assessment, with an emphasis on the specialization of individual responsibilities, contributions, and outcomes in reading projects, with the goal of addressing current issues in EFL classes at Chinese colleges, such as passive learning, test orientation, ineffective and uncooperative teamwork, and lack of dynamics. In this quasi-experimental research, two groups of students enrolled in the course were invited to participate in four in-class team projects, with the intervention class following the adapted literature circle model and team members rotating as Leader, Coordinator, Brain Trust, and Reporter. The researcher/instructor used a sequential explanatory mixed-methods approach to quantitatively analyze the final grades for the pre- and post-tests, as well as individual scores for team projects, and will code students' artifacts in the next step, with the results to be reported in subsequent papers. Initial analysis showed that both groups saw an increase in final grades, but the intervention group enjoyed a more significant boost, suggesting that the adapted reading circle model is effective in improving students' reading comprehension performance.
This research not only helps close the empirical research gap on literature circles in college EFL classes in China but also adds to the pool of effective ways to optimize reading comprehension and class performance in college EFL classes.
Keywords: literature circle, EFL teaching, college English reading, reading comprehension
Procedia PDF Downloads 100
1792 Estimation of Physico-Mechanical Properties of Tuffs (Turkey) from Indirect Methods
Authors: Mustafa Gok, Sair Kahraman, Mustafa Fener
Abstract:
In rock engineering applications, determining uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), and basic index properties such as density, porosity, and water absorption is crucial for the design of both underground and surface structures. However, obtaining reliable samples for direct testing, especially from rocks that weather quickly and have low strength, is often challenging. In such cases, indirect methods provide a practical alternative to estimate the physical and mechanical properties of these rocks. In this study, tuff samples collected from the Cappadocia region (Nevşehir) in Turkey were subjected to indirect testing methods. Over 100 tests were conducted, using needle penetrometer index (NPI), point load strength index (PLI), and disc shear index (BPI) to estimate the uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), density, and water absorption index of the tuffs. The relationships between the results of these indirect tests and the target physical properties were evaluated using simple and multiple regression analyses. The findings of this research reveal strong correlations between the indirect methods and the mechanical properties of the tuffs. Both uniaxial compressive strength and Brazilian tensile strength could be accurately predicted using NPI, PLI, and BPI values. The regression models developed in this study allow for rapid, cost-effective assessments of tuff strength in cases where direct testing is impractical. These results are particularly valuable for geological engineering applications, where time and resource constraints exist. This study highlights the significance of using indirect methods as reliable predictors of the mechanical behavior of weak rocks like tuffs. Further research is recommended to explore the application of these methods to other rock types with similar characteristics. 
Further research is required to compare the results with those of established direct test methods.
Keywords: Brazilian tensile strength, disc shear strength, indirect methods, tuffs, uniaxial compressive strength
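The simple regression step (predicting UCS from an index such as NPI) can be sketched with ordinary least squares; the paired values below are synthetic, not the measured tuff data:

```python
def fit_linear(x, y):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

def predict(a, b, x):
    return a + b * x

# Synthetic NPI -> UCS (MPa) pairs following ucs = 1 + 2*npi exactly
npi = [2.0, 4.0, 6.0, 8.0]
ucs = [5.0, 9.0, 13.0, 17.0]
a, b = fit_linear(npi, ucs)
```

With real index data the fit is of course inexact, and the strength of the correlation (e.g. R²) decides whether the indirect method is a usable predictor.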
Procedia PDF Downloads 17
1791 Teaching Method for a Classroom of Students at Different Language Proficiency Levels: Content and Language Integrated Learning in a Japanese Culture Classroom
Authors: Yukiko Fujiwara
Abstract:
As a language learning methodology, Content and Language Integrated Learning (CLIL) has become increasingly prevalent in Japan. Most CLIL classroom practice and research are conducted in EFL fields. However, much less research has been done in the Japanese language learning setting. Therefore, there are still many issues to work out using CLIL in the Japanese language teaching (JLT) setting, and it is expected that more research will be conducted, both authentic and academic. Under such circumstances, this is one of the few classroom-based CLIL experiments in JLT, and it aims to find an effective course design for a class with students at different proficiency levels. The class was called 'Japanese Culture A' and was offered as one of the elective classes for international exchange students at a Japanese university. The Japanese proficiency level of the class was above Japanese Language Proficiency Test level N3. Since the CLIL approach places importance on 'authenticity', the class was designed with authentic materials and activities, such as books, magazines, a film, a TV show, and a field trip to Kyoto. On the field trip, students experienced making traditional Japanese desserts under the direct guidance of a Japanese artisan. Throughout the course, designated task sheets were used so the teacher could get feedback from each student and grasp the proficiency gap in the class. After reading an article on Japanese culture, students were asked to write down the words they did not understand and what they thought they needed to learn. This helped both students and teachers to set learning goals and work together toward them. Using questionnaires and interviews with students, this research examined whether the attempt was effective or not. Essays the students wrote in class were also analyzed. The results from the students were positive. They were motivated by learning authentic, natural Japanese, and they thrived when setting their own personal goals.
Some students were motivated to learn Japanese by studying the language, and others were motivated by studying the cultural context. Most of them said they learned better this way, by setting their own Japanese language and culture goals. These results will provide teachers with new insight into designing class materials and activities that support students in a multilevel CLIL class.
Keywords: authenticity, CLIL, Japanese language and culture, multilevel class
Procedia PDF Downloads 252
1790 Fault Tolerant and Testable Designs of Reversible Sequential Building Blocks
Authors: Vishal Pareek, Shubham Gupta, Sushil Chandra Jain
Abstract:
With the increasing demand for high-speed computation, power consumption, heat dissipation, and chip size are posing challenges for logic design with conventional technologies. Recovery from bit loss and bit errors is another issue, requiring reversibility and fault tolerance in computation. Reversible computing is emerging as an alternative to conventional technologies to overcome the above problems and is helpful in diverse areas such as low-power design, nanotechnology, and quantum computing. The bit-loss issue can be solved through unique input-output mapping, which requires reversibility, while the bit-error issue requires the capability of fault tolerance in the design. In order to incorporate reversibility, a number of combinational reversible-logic-based circuits have been developed. However, very few sequential reversible circuits have been reported in the literature. To make circuits fault tolerant, a number of fault models and test approaches have been proposed for reversible logic. In this paper, we have attempted to incorporate fault tolerance in sequential reversible building blocks such as the D flip-flop, T flip-flop, JK flip-flop, R-S flip-flop, master-slave D flip-flop, and double-edge-triggered D flip-flop by making them parity preserving. The importance of this proposed work lies in the fact that it provides designs of reversible sequential circuits that are completely testable for any stuck-at fault and single-bit fault. In our opinion, our designs of reversible building blocks are superior to existing designs in terms of quantum cost, hardware complexity, constant inputs, garbage outputs, and number of gates, and our design of an online-testable D flip-flop is proposed for the first time. We hope our work can be extended to building complex reversible sequential circuits.
Keywords: parity preserving gate, quantum computing, fault tolerance, flip-flop, sequential reversible logic
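A parity-preserving reversible gate maps each input vector to an output vector of the same parity while remaining a bijection, so both properties can be checked exhaustively. The Fredkin (controlled-swap) gate is a standard example; this sketch illustrates the property itself, not the paper's flip-flop designs:

```python
from itertools import product

def fredkin(a, b, c):
    """Controlled swap: exchanges b and c when a = 1. Reversible and,
    since it only permutes bits, parity preserving."""
    return (a, c, b) if a else (a, b, c)

def parity(bits):
    return sum(bits) % 2

def is_parity_preserving(gate, n):
    return all(parity(v) == parity(gate(*v)) for v in product((0, 1), repeat=n))

def is_reversible(gate, n):
    # A reversible gate is a bijection: all 2^n outputs must be distinct.
    outputs = {gate(*v) for v in product((0, 1), repeat=n)}
    return len(outputs) == 2 ** n
```

In a parity-preserving circuit, any single-bit fault flips the output parity and is therefore detectable by a parity check.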
Procedia PDF Downloads 545
1789 Monitoring of Water Quality Using Wireless Sensor Network: Case Study of Benue State of Nigeria
Authors: Desmond Okorie, Emmanuel Prince
Abstract:
The availability of potable water has been a global challenge, especially for developing regions such as Africa and, in particular, Nigeria. The World Health Organization (WHO) has produced the Guidelines for Drinking-water Quality (GDWQ), which aim at ensuring water safety from source to consumer. Potable water parameter tests include physical (colour, odour, temperature, turbidity), chemical (pH, dissolved solids), and biological (algae, phytoplankton) tests. This paper discusses the use of wireless sensor networks to monitor water quality using efficient and effective sensors that have the ability to sense, process, and transmit sensed data. The integration of a wireless sensor network with a portable sensing device offers the feasibility of distributed sensing, on-site data measurement, and remote sensing abilities. The water quality tests currently performed by government water quality institutions in Benue State, Nigeria, are carried out in problematic locations that require taking manual water samples to the institution's laboratory for examination. To automate the entire process based on a wireless sensor network, a system was designed. The system consists of a sensor node containing one pH sensor, one temperature sensor, a microcontroller, and a ZigBee radio, and a base station composed of a ZigBee radio and a PC. Due to the advancement of wireless sensor network technology, unexpected contamination events in water environments can be observed continuously. A local area network (LAN), a wireless local area network (WLAN), and the web-based Internet are also commonly used as gateway units for data communication via a local base computer using the standard Global System for Mobile Communications (GSM). The improvements made in this development demonstrate a working water quality monitoring system and the prospect of a more robust and reliable system in the future.
Keywords: local area network, pH measurement, wireless sensor network, ZigBee
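At the base station, each received sensor-node reading can be checked against guideline ranges. A minimal sketch; the pH range 6.5-8.5 is a commonly cited WHO operational guideline, while the temperature limit and the packet layout are assumptions for illustration:

```python
# Guideline ranges: pH per the commonly cited WHO operational range,
# temperature limit assumed for illustration
GUIDELINES = {"ph": (6.5, 8.5), "temperature_c": (0.0, 30.0)}

def within_guideline(param, value):
    lo, hi = GUIDELINES[param]
    return lo <= value <= hi

def classify_packet(packet):
    """packet: reading dict as assumed to arrive from a node over ZigBee."""
    return {p: within_guideline(p, packet[p]) for p in GUIDELINES}

# Assumed packet shape from a node
result = classify_packet({"node": 3, "ph": 9.2, "temperature_c": 26.0})
```

An out-of-range flag would then trigger an alert at the PC rather than waiting for a manual laboratory sample.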
Procedia PDF Downloads 172
1788 Using 3-Glycidoxypropyltrimethoxysilane Functionalized SiO2 Nanoparticles to Improve Flexural Properties of Glass Fibers/Epoxy Grid-Stiffened Composite Panels
Authors: Reza Eslami-Farsani, Hamed Khosravi, Saba Fayazzadeh
Abstract:
Lightweight and efficient structures aim to enhance the efficiency of components in various industries. Toward this end, composites are among the most widely used materials because of their durability, high strength and modulus, and low weight. One type of advanced composite is the grid-stiffened composite (GSC) structure, which has been extensively considered in the aerospace, automotive, and aircraft industries. These structures are among the top candidates for replacing some traditional components. Although there are a good number of published surveys on the design aspects and fabrication of GSC structures, to our knowledge little systematic work has been reported on their material modification to improve their properties. Matrix modification using nanoparticles is an effective method to enhance the flexural properties of fibrous composites. In the present study, a silane coupling agent (3-glycidoxypropyltrimethoxysilane/3-GPTS) was introduced onto the silica (SiO2) nanoparticle surface, and its effects on the three-point flexural response of isogrid E-glass/epoxy composites were assessed. Based on the Fourier transform infrared (FTIR) spectra, it was inferred that the 3-GPTS coupling agent was successfully grafted onto the surface of the SiO2 nanoparticles after modification. The flexural test revealed improvements of 16%, 14%, and 36% in stiffness, maximum load, and energy absorption, respectively, for the isogrid specimen filled with 3 wt.% 3-GPTS/SiO2 compared to the neat one. It is worth mentioning that in these structures, considerable energy absorption was observed after the primary failure associated with the load peak. In addition, 3-GPTS functionalization had a positive effect on the flexural behavior of the multiscale isogrid composites.
In conclusion, this study suggests that the addition of modified silica nanoparticles is a promising method to improve the flexural properties of grid-stiffened fibrous composite structures.
Keywords: isogrid-stiffened composite panels, silica nanoparticles, surface modification, flexural properties
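In a three-point flexural test, the outer-fiber stress of a rectangular specimen is σ = 3FL/(2bd²), and the reported gains are relative improvements over the neat specimen. A sketch with hypothetical specimen dimensions; the 36% figure below matches the reported energy-absorption gain:

```python
def flexural_stress_mpa(force_n, span_mm, width_mm, thickness_mm):
    """Outer-fiber stress in three-point bending (N and mm give MPa)."""
    return 3.0 * force_n * span_mm / (2.0 * width_mm * thickness_mm ** 2)

def percent_improvement(neat, filled):
    """Relative gain of the nanoparticle-filled specimen over the neat one."""
    return 100.0 * (filled - neat) / neat

# Hypothetical specimen: 100 N at failure, 80 mm span, 10 x 4 mm section
stress = flexural_stress_mpa(100.0, 80.0, 10.0, 4.0)
```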
Procedia PDF Downloads 243
1787 Energy Conversion for Sewage Sludge by Microwave Heating Pyrolysis and Gasification
Authors: Young Nam Chun, Soo Hyuk Yun, Byeo Ri Jeong
Abstract:
The recent gradual increase in energy demand is mostly met by fossil fuels, but research on and development of new alternative energy sources is drawing much attention due to the limited fossil fuel supply and the greenhouse gas problem. Biomass is an eco-friendly renewable energy source that can achieve carbon neutrality. The conversion of the biomass sludge wastes discharged from a wastewater treatment plant into clean energy in an eco-friendly way is an important green energy technology. In this NRF study, a new type of microwave thermal treatment was developed to apply biomass-CCS technology to sludge wastes. For this, the microwave dielectric heating characteristics were examined to investigate the energy conversion mechanism for the combined drying-pyrolysis/gasification of the dewatered wet sludge. Carbon dioxide gasification was tested using the CO2 captured from the pre-combustion capture process. In addition, the results of the pyrolysis and gasification tests with the wet sludge were analyzed to compare the microwave energy conversion results with those of the conventional heating method. Gas was the largest component of the product of both pyrolysis and gasification, followed by sludge char and tar. In pyrolysis, the main components of the producer gas were hydrogen and carbon monoxide, with some methane and hydrocarbons. In gasification, however, the amount of carbon monoxide was greater than that of hydrogen. In microwave gasification, a large amount of heavy tar was produced. Among the light tar, benzene was produced in the largest amount in both pyrolysis and gasification. NH3 and HCN, which are precursors of NOx, were generated as well. In microwave heating, the sludge char had a smooth, glass-like surface, whereas with the conventional heating method, using an electric furnace, deep cracks were observed in the sludge char.
This indicates that the gas obtained from the microwave pyrolysis and gasification of wet sewage sludge can be used as fuel, but the heavy tar and NOx precursors in the gas must be treated. Sludge char can be used as a solid fuel or, if necessary, as a tar-reduction adsorbent in the process. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2015R1R1A2A2A03003044).
Keywords: microwave heating, pyrolysis gasification, pre-combustion CCS, sewage sludge, biomass energy
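The fuel value of the producer gas depends on its combustible fractions. A sketch of a volume-weighted lower-heating-value estimate; the per-component LHVs (MJ/Nm³) are textbook approximations, and the composition is invented, not the measured gas:

```python
# Approximate lower heating values, MJ/Nm^3 (textbook round figures)
LHV = {"H2": 10.8, "CO": 12.6, "CH4": 35.8}

def mixture_lhv(vol_fractions):
    """LHV of a gas mixture from the volume fractions of its combustible
    species; inerts such as N2 and CO2 simply contribute nothing."""
    return sum(vol_fractions.get(s, 0.0) * LHV[s] for s in LHV)

# Invented pyrolysis-gas composition: 40% H2, 35% CO, 10% CH4, rest inert
gas_lhv = mixture_lhv({"H2": 0.40, "CO": 0.35, "CH4": 0.10})
```

This is why the CO-rich gasification gas and the H2-rich pyrolysis gas have comparable fuel value despite their different compositions.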
Procedia PDF Downloads 323
1786 Seasonal Variability of M₂ Internal Tides Energetics in the Western Bay of Bengal
Authors: A. D. Rao, Sachiko Mohanty
Abstract:
Internal waves (IWs) are generated by the flow of the barotropic tide over rapidly varying and steep topographic features such as the continental shelf slope, subsurface ridges, and seamounts. IWs of tidal frequency are generally known as internal tides. These waves have a significant influence on the vertical density structure and hence cause mixing in the region. Such waves are also important in submarine acoustics, underwater navigation, offshore structures, ocean mixing, and biogeochemical processes over the shelf-slope region. The seasonal variability of internal tides in the Bay of Bengal, with special emphasis on their energetics, is examined using the three-dimensional MITgcm model. The numerical simulations are performed for different periods covering August-September 2013, November-December 2013, and March-April 2014, representing the monsoon, post-monsoon, and pre-monsoon seasons respectively, during which high-temporal-resolution in-situ data sets are available. The model is first validated through spectral estimates of density and the baroclinic velocities. From these estimates, it is inferred that internal tides of semi-diurnal frequency are dominant in both observations and model simulations for November-December and March-April. In August, however, the spectral maximum is found near the inertial frequency at all available depths. The observed vertical structure of the baroclinic velocities and their magnitudes are well captured by the model. EOF analysis is performed to decompose the zonal and meridional baroclinic tidal currents into vertical modes. The analysis suggests that about 70-80% of the total variance comes from the mode-1 semi-diurnal internal tide in both observations and model simulations. The first three modes are sufficient to describe most of the variability of the semidiurnal internal tides, as they represent 90-95% of the total variance in all seasons.
The phase speed, group speed, and wavelength are found to be maximum for the post-monsoon season compared to the other two seasons. The model simulation suggests that the internal tide is generated all along the shelf-slope regions and propagates away from the generation sites in all the months. The simulated energy dissipation rate indicates that its maximum occurs at the generation sites, and hence local mixing due to the internal tide is largest at these sites. The spatial distribution of available potential energy is found to be maximum in November (20 kg/m²) in the northern BoB and minimum in August (14 kg/m²). Detailed energy budget calculations are made for all the seasons and the results analysed. Keywords: available potential energy, baroclinic energy flux, internal tides, Bay of Bengal
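The EOF step described in the abstract, decomposing a depth-time velocity record into vertical modes and attributing a variance fraction to each, can be sketched via a singular value decomposition. This is a generic illustration on a synthetic field, not the authors' code or data; the grid sizes, amplitudes, and the semidiurnal period of 12.42 h used below are assumptions for the example.

```python
import numpy as np

# Synthetic baroclinic velocity record u(t, z): a dominant mode-1-like vertical
# structure plus a weaker mode-2 component at the semidiurnal period, plus noise.
rng = np.random.default_rng(0)
t = np.arange(0.0, 200.0, 0.5)          # time in hours (synthetic)
z = np.linspace(0.0, 1.0, 30)           # normalized depth
mode1 = np.cos(np.pi * z)               # idealized mode-1 vertical structure
mode2 = np.cos(2 * np.pi * z)           # idealized mode-2 vertical structure
u = (np.outer(np.sin(2 * np.pi * t / 12.42), mode1) * 1.0
     + np.outer(np.sin(2 * np.pi * t / 12.42 + 1.0), mode2) * 0.3
     + rng.normal(0.0, 0.05, (t.size, z.size)))

u = u - u.mean(axis=0)                  # remove the time mean at each depth
_, s, _ = np.linalg.svd(u, full_matrices=False)
var_frac = s**2 / np.sum(s**2)          # variance explained by each EOF mode
print(f"mode 1: {var_frac[0]:.0%}, modes 1-3: {var_frac[:3].sum():.0%}")
```

With a mode-1-dominated field like this one, the leading EOF carries most of the variance and the first three modes together capture nearly all of it, mirroring the 70-80% and 90-95% figures reported above.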
Procedia PDF Downloads 170
1785 High Throughput Virtual Screening against ns3 Helicase of Japanese Encephalitis Virus (JEV)
Authors: Soma Banerjee, Aamen Talukdar, Argha Mandal, Dipankar Chaudhuri
Abstract:
Japanese Encephalitis is a major infectious disease, with nearly half the world’s population living in areas where it is prevalent. Current management involves only supportive care and symptom relief, with prevention through vaccination. Due to the lack of antiviral drugs against Japanese Encephalitis Virus (JEV), the quest for such agents remains a priority. For these reasons, simulation studies of drug targets against JEV are important. Towards this purpose, docking experiments with kinase inhibitors were performed against the chosen target, NS3 helicase, as it is a nucleoside-binding protein. Previous efforts at computational drug design against JEV revealed some lead molecules by virtual screening using public-domain software. To find leads more specifically and accurately, this study used the proprietary software Schrödinger GLIDE. Druggability of the pockets in the NS3 helicase crystal structure was first calculated with SiteMap. The sites were then screened for compatibility with ATP, and the site most compatible with ATP was selected as the target. Virtual screening was performed with GLIDE on ligands acquired from three databases: KinaseSARfari, KinaseKnowledgebase, and a published inhibitor set. The 25 ligands with the best docking scores from each database were re-docked in XP mode. Protein structure alignment of NS3 was performed using VAST against MMDB, and similar human proteins were docked against the best-scoring ligands. Ligands scoring poorly against the human proteins were retained for further study, while those scoring well against them were screened out. Seventy-three ligands were shortlisted as the best scoring after HTVS. Protein structure alignment of NS3 revealed 3 human proteins with RMSD values less than 2 Å. Docking against these three proteins identified the inhibitors likely to interfere with and inhibit human proteins; those inhibitors were screened out.
Among the remaining ligands, those with docking scores worse than a threshold value were also removed to obtain the final hits. Analysis of the docked complexes through 2D interaction diagrams revealed the amino acid residues essential for ligand binding within the active site. Interaction analysis will help identify a strongly interacting scaffold among the hits. This experiment yielded 21 hits with the best docking scores, which could be investigated further for their drug-like properties. Aside from suitable leads, specific NS3 helicase-inhibitor interactions were identified. Selection of target-modification strategies to complement the docking methodology, which can result in better lead compounds, is in progress. These enhanced leads can then proceed to in vitro testing. Keywords: antivirals, docking, glide, high-throughput virtual screening, Japanese encephalitis, ns3 helicase
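The screening funnel described in the abstract, strong binding to NS3 combined with weak binding to structurally similar human proteins, can be sketched as a simple filter. The ligand names, scores, cutoff, and selectivity margin below are all invented for illustration; GLIDE itself is proprietary and is not called here. In GLIDE conventions, a more negative docking score indicates a better fit.

```python
# Hypothetical NS3 docking scores (kcal/mol-like, more negative = better).
ns3_scores = {"lig_a": -9.2, "lig_b": -8.7, "lig_c": -6.1, "lig_d": -8.9}
# Best score of each ligand against the similar human proteins (assumed values).
human_offtarget = {"lig_a": -5.0, "lig_b": -8.5, "lig_c": -4.8, "lig_d": -5.5}

SCORE_CUTOFF = -8.0        # discard weak NS3 binders (assumed threshold)
SELECTIVITY_MARGIN = 2.0   # require NS3 score to beat the off-target by this much

hits = [
    lig for lig, s in sorted(ns3_scores.items(), key=lambda kv: kv[1])
    if s <= SCORE_CUTOFF                                   # strong NS3 binding
    and s <= human_offtarget[lig] - SELECTIVITY_MARGIN     # selective vs. human
]
print(hits)  # → ['lig_a', 'lig_b'?] -- here lig_b fails selectivity, so ['lig_a', 'lig_d']
```

Here `lig_b` binds NS3 well but also binds a human protein almost as strongly, so it is screened out, exactly the failure mode the cross-docking step above is designed to catch.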
Procedia PDF Downloads 230
1784 A Comprehensive Finite Element Model for Incremental Launching of Bridges: Optimizing Construction and Design
Authors: Mohammad Bagher Anvari, Arman Shojaei
Abstract:
Incremental launching, a widely adopted bridge erection technique, offers numerous advantages for bridge designers. However, accurately simulating and modeling the dynamic behavior of the bridge during each step of the launching process proves to be tedious and time-consuming. The perpetual variation of internal forces within the deck during construction stages adds complexity, exacerbated further by considerations of other load cases, such as support settlements and temperature effects. As a result, there is an urgent need for a reliable, simple, economical, and fast algorithmic solution to model bridge construction stages effectively. This paper presents a novel Finite Element (FE) model that focuses on studying the static behavior of bridges during the launching process. Additionally, a simple method is introduced to normalize all quantities in the problem. The new FE model overcomes the limitations of previous models, enabling the simulation of all stages of launching, which conventional models fail to achieve due to underlying assumptions. By leveraging the results obtained from the new FE model, this study proposes solutions to improve the accuracy of conventional models, particularly for the initial stages of bridge construction that have been neglected in previous research. The research highlights the critical role played by the first span of the bridge during the initial stages, a factor often overlooked in existing studies. Furthermore, a new and simplified model, termed the "semi-infinite beam" model, is developed to address this oversight. By utilizing this model alongside a simple optimization approach, optimal values for launching nose specifications are derived. The practical applications of this study extend to optimizing the nose-deck system of incrementally launched bridges, providing valuable insights for practical usage.
In conclusion, this paper introduces a comprehensive Finite Element model for studying the static behavior of bridges during incremental launching. The proposed model addresses limitations found in previous approaches and offers practical solutions to enhance accuracy. The study emphasizes the importance of considering the initial stages and introduces the "semi-infinite beam" model. Through the developed model and optimization approach, optimal specifications for launching nose configurations are determined. This research holds significant practical implications and contributes to the optimization of incrementally launched bridges, benefiting both the construction industry and bridge designers.Keywords: incremental launching, bridge construction, finite element model, optimization
Procedia PDF Downloads 103
1783 Assessing the Danger Factors Correlated With Dental Fear: An Observational Study
Authors: Mimoza Canga, Irene Malagnino, Giulia Malagnino, Alketa Qafmolla, Ruzhdie Qafmolla, Vito Antonio Malagnino
Abstract:
The goal of the present study was to analyze the risk factors for dental fear. This observational study was conducted between February 2020 and April 2022 in Albania. The sample was composed of 200 participants, of whom 40% were males and 60% were females. The participants' ages ranged from 35 to 75 years, divided into four age groups: 35-45, 46-55, 56-65, and 66-75 years old. Statistical analysis was performed using IBM SPSS Statistics 23.0. Data were scrutinized with the post hoc LSD test in analysis of variance (ANOVA). P ≤ 0.05 values were considered significant. Data analysis included 95% confidence intervals (CI). The prevailing age range in the sample was 56 to 65 years old, comprising 35.6% of the patients. In all, 50% of the patients had extreme fear that the dentist might be infected with Covid-19, 12.2% of them had low dental fear, and 37.8% had extreme dental fear. A large proportion of patients (49.5%) had high dental fear regarding the dentist not respecting quarantine due to COVID-19, in comparison with 37.2% who had low dental fear and 13.3% who had extreme dental fear. The present study confirmed that 22.2% of the participants had extreme fear of the poor hygiene practices of the dentist that have been associated with the transmission of COVID-19 infection, 57.8% had high dental fear, and 20% had low dental fear. Another factor causing extreme fear, reported by 50% of the patients, was pain felt after interventions in the oral cavity. Strong associations were observed between dental fear and pain (95% CI: 0.24-0.52, P ˂ .0001). The results also confirmed strong associations between dental fear and the fear that the dentist may be infected with Covid-19 (95% CI: 0.46-0.70, P ˂ .0001).
Similarly, the analysis demonstrated a statistically significant correlation between dental fear and the poor hygiene practices of the dentist (95% CI: 0.82-1.02, P ˂ .0001). On the basis of our statistical analysis, the concern that the dentist did not respect quarantine due to COVID-19 also had a significant impact on dental fear (P ˂ .0001). This study identifies important risk factors that significantly increase dental fear. Keywords: Covid-19, dental fear, pain, past dreadful experiences
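The abstract reports 95% confidence intervals alongside its ANOVA results. As a minimal illustration of the interval arithmetic only (not the authors' computation, whose exact statistic behind each CI is not stated), a Wald 95% CI for a proportion such as "50% of 200 patients reported extreme fear of pain" looks like this:

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """Wald 95% confidence interval for a sample proportion p_hat out of n."""
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - half, p_hat + half)

lo, hi = wald_ci(0.50, 200)   # 50% of 200 participants (figures from the abstract)
print(f"95% CI: {lo:.3f} to {hi:.3f}")
```

For small samples or extreme proportions, a Wilson or exact interval would be preferable; the Wald form is shown only because it makes the z·√(p(1-p)/n) structure explicit.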
Procedia PDF Downloads 141
1782 God, The Master Programmer: The Relationship Between God and Computers
Authors: Mohammad Sabbagh
Abstract:
Anyone who reads the Torah or the Quran learns that GOD created everything that is around us, seen and unseen, in six days. Within HIS plan of creation, HE placed for us a key proof of HIS existence, which is essentially computers and the ability to program them. Digital computer programming began with binary instructions, which eventually evolved into what are known as high-level programming languages. Any programmer in our modern time can attest that you are essentially giving the computer commands in words, and when the program is compiled, whatever is processed as output is limited to what the computer was given as an ability and, furthermore, as an instruction. So one can deduce that GOD created everything around us with HIS words, programming everything in six days, just as we can program a virtual world on the computer. GOD did mention in the Quran that one day, where GOD’s throne is, is 1000 years of what we count; therefore, one might understand that GOD spoke non-stop for 6000 years of what we count, and gave everything its function, attributes, class, methods, and interactions, similar to what we do in object-oriented programming. Of course, GOD has the higher example, and what HE created is much more than OOP. So when GOD said that everything is already predetermined, it is because for any input, whether physical, spiritual, or by thought, outputted by any of HIS creatures, the answer has already been programmed. Any path, any thought, any idea has already been laid out with a reaction to any decision an inputter makes. Exalted is GOD! GOD refers to HIMSELF as The Fastest Accountant in The Quran; the Arabic word that was used is close to processor or calculator.
If you create a 3D simulation of a supernova explosion to understand how GOD produces certain elements and fuses protons together to spread more of HIS blessings around HIS skies, then in 2022 you would require one of the strongest, fastest, most capable supercomputers in the world, with a theoretical speed of 50 petaFLOPS, to accomplish that: a petaFLOP being one quadrillion (10¹⁵) floating-point operations per second, a number a human cannot even fathom. To put it in more perspective, while the computer is going through those calculations each second, GOD is also calculating all the physics of every atom, and of what is smaller than that, in the actual explosion, and it is all in truth. When GOD said HE created the world in truth, one of the meanings a person can understand is that when certain things occur around you, whether how a car crashes or how a tree grows, there is a science and a way to understand it, and whatever programming or science you deduce from any event you observe can relate to other similar events. That is why GOD might have said in The Quran that it is the people of knowledge, scholars, or scientists who fear GOD the most! One thing that is essential for us to keep up with what the computer is doing, and to track our progress along with any errors, is to incorporate logging mechanisms and backups. GOD in The Quran said that ‘WE used to copy what you used to do’. Essentially, as the world is running, think of it as an interactive movie being played out in front of you, in a fully immersive, non-virtual reality setting. GOD is recording it, from every angle to every thought to every action. This brings up how scary the Day of Judgment will be, when one might realize that it is going to be a fully immersive video when we will be receiving and reading our book. Keywords: programming, the Quran, object orientation, computers and humans, GOD
Procedia PDF Downloads 107
1781 Prediction of the Dark Matter Distribution and Fraction in Individual Galaxies Based Solely on Their Rotation Curves
Authors: Ramzi Suleiman
Abstract:
Recently, the author proposed an observationally based relativity theory termed information relativity theory (IRT). The theory is simple and rests only on basic principles, with no prior axioms and no free parameters. For the case of a body of mass in uniform rectilinear motion relative to an observer, the theory's transformations uncovered a matter-dark matter duality, which prescribes that the sum of the densities of the body's baryonic matter and dark matter, as measured by the observer, is equal to the body's matter density at rest. The transformations were previously shown to successfully predict several important phenomena in particle physics, quantum physics, and cosmology. This paper extends the theory's transformations to rotating disks and spheres. The resulting transformations for a rotating disk are used to derive predictions of the radial distributions of matter and dark matter densities in rotationally supported galaxies based solely on their observed rotation curves. It is also shown that for galaxies with flattening curves, good approximations of the radial distributions of matter and dark matter, and of the dark matter fraction, can be obtained from one measurable scale radius. A test of the model on five galaxies, chosen randomly from the SPARC database, yielded impressive predictions: the rotation curves of all the investigated galaxies emerged as accurate traces of the predicted radial density distributions of their dark matter. This striking result suggests an intriguing physical picture of gravity in galaxies, in which the stars and gas are dragged along by the galaxy's rotating dark matter web. We conclude by alluding briefly to the application of the proposed model to stellar systems and black holes.
This study also hints at the potential of the uncovered matter-dark matter duality for fixing the standard model of elementary particles in a natural manner, without the need to hypothesize supersymmetric particles. Keywords: dark matter, galaxies rotation curves, SPARC, rotating disk
Procedia PDF Downloads 78
1780 To Evaluate the Function of Cardiac Viability After Administration of I131
Authors: Baburao Ganpat Apte, Gajodhar
Abstract:
Introduction: Idiopathic Parkinson’s disease (PD) is the most common neurodegenerative disorder. Early PD may present a diagnostic challenge, with broad differential diagnoses that are not associated with striatal dopamine deficiency. This test was performed using a special type of radioactive precursor which was made available through our logistics. L-6-[131I]iodo-3,4-trihydroxyphenylalanine (131I-TOPA) is a positron emission tomography (PET) agent that measures the uptake of dopamine precursors for assessment of presynaptic dopaminergic integrity and has been shown to accurately reflect presynaptic dopaminergic function in patients with monoaminergic disturbances in PD. Both qualitative and quantitative analyses of the scans were performed. The early clinical diagnosis alone may therefore not be accurate, which reinforces the importance of functional imaging targeting the pathology of the disease process. The patients’ medical records were assessed for length of follow-up, response to levodopa, clinical course, and symptoms at the time of 131I-TOPA PET. A retrospective analysis was carried out for all patients who underwent a 131I-TOPA PET brain scan for motor symptoms suspicious for PD between 2000 and 2006. The eventual diagnosis by the referring neurologist, movement therapist, or physiotherapist was used as the reference standard for further analysis. In this study, our goal was to illustrate our local experience and determine the accuracy of 131I-TOPA PET for the diagnosis of PD. We studied a total of 48 patients. Of the 48 scans, one was a false negative, 40 were true positives, and 7 were true negatives. The resultant values were sensitivity 90.4% (95% CI: 100%-71.3%), specificity 100% (95% CI: 100%-58.0%), PPV 100% (95% CI: 100%-75.7%), and NPV 80.5% (95% CI: 92.5%-48.5%).
Result: Twenty-three patients were found in the initial query, and 4 were excluded (2 with an uncertain diagnosis, 2 with inadequate follow-up). Twenty-eight patients (28 scans) remained, with 15 males (62%) and 8 females (30%). All the patients had a clinical follow-up of at least 3 years; the median length of follow-up was 5.5 years (range: 2-8 years). The median age at scan time was 51.2 years (range: 35-75). Keywords: 18F-TOPA, PET/CT, Parkinson’s disease, cardiac
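The diagnostic-accuracy figures above follow the standard confusion-matrix formulas. As a small illustration (not a re-analysis), the helper below applies them to the abstract's tally of 48 scans: 40 true positives, 7 true negatives, 1 false negative, and hence 0 false positives. Note that some of the abstract's own percentages differ slightly from what these counts imply, so treat this strictly as a demonstration of the formulas.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives among all diseased
        "specificity": tn / (tn + fp),   # true negatives among all non-diseased
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

m = diagnostic_metrics(tp=40, fp=0, tn=7, fn=1)
print({k: round(v, 3) for k, v in m.items()})
```

With zero false positives, specificity and PPV come out at exactly 100%, matching the abstract; the counts give a sensitivity of 40/41 ≈ 97.6% and an NPV of 7/8 = 87.5%.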
Procedia PDF Downloads 28
1779 Quality Assurance Comparison of Map Check 2, Epid, and Gafchromic® EBT3 Film for IMRT Treatment Planning
Authors: Khalid Iqbal, Saima Altaf, M. Akram, Muhammad Abdur Rafaye, Saeed Ahmad Buzdar
Abstract:
Objective: Verification of patient-specific intensity-modulated radiation therapy (IMRT) plans using different 2-D detectors has become increasingly popular due to their ease of use and immediate readout of results. The purpose of this study was to test and compare various 2-D detectors for dosimetric quality assurance (QA) of IMRT, with a view to finding alternative QA methods. Material and Methods: Twenty IMRT patients (12 brain and 8 prostate) were planned on the Eclipse treatment planning system for a Varian Clinac DHX at both energies, 6 MV and 15 MV. Verification plans for all patients were also made and delivered to MapCHECK 2, an EPID (electronic portal imaging device), and Gafchromic EBT3 film. Gamma index analyses were performed using different criteria to evaluate and compare the dosimetric results. Results: With EBT3 film, statistical analysis shows passing rates of 99.55%, 97.23%, and 92.9% for 6 MV and 99.53%, 98.3%, and 94.85% for 15 MV using criteria of 5%/3 mm, 3%/3 mm, and 3%/2 mm respectively for the brain, whereas with the 5%/3 mm and 3%/3 mm gamma evaluation criteria, the passing rates for the prostate are 94.55% and 90.45% for 6 MV and 95.25% and 95% for 15 MV. MapCHECK 2 results show passing rates of 98.17%, 97.68%, and 86.78% for 6 MV and 94.87%, 97.46%, and 88.31% for 15 MV for the brain using the 5%/3 mm, 3%/3 mm, and 3%/2 mm criteria, whereas the 5%/3 mm and 3%/3 mm criteria give passing rates of 97.7% and 96.4% for 6 MV and 98.75% and 98.05% for 15 MV for the prostate. For the EPID, gamma analysis shows passing rates of 99.56%, 98.63%, and 98.4% for the brain and 100% and 99.9% for the prostate, using the same criteria as for MapCHECK 2 and EBT3 film.
Conclusion: The results demonstrate that excellent passing rates were obtained for all dosimeters when compared with the planar dose distributions for 6 MV as well as 15 MV IMRT fields. EPID results are better than those of EBT3 film and MapCHECK 2; part of this difference is likely real, and part is due to film handling and differences in treatment setup verification, which contribute to dose distribution differences. Overall, all three dosimeters exhibit results within limits according to AAPM Report 120. Keywords: gafchromic EBT3, radiochromic film dosimetry, IMRT verification, EPID
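The gamma index underlying the passing rates above combines a dose-difference tolerance with a distance-to-agreement (DTA): a criterion like "3%/3 mm" means 3% of the maximum dose and a 3 mm DTA, and a point passes when its gamma value is at most 1. The sketch below is a minimal 1-D global-gamma illustration on synthetic profiles, not the vendor analysis software used in the study (clinical tools implement the full 2-D/3-D search with interpolation).

```python
import numpy as np

def gamma_pass_rate(measured, calculated, positions_mm, dose_pct=3.0, dta_mm=3.0):
    """Percentage of measured points with 1-D global gamma <= 1."""
    d_tol = dose_pct / 100.0 * np.max(calculated)   # global dose tolerance
    gammas = []
    for xm, dm in zip(positions_mm, measured):
        # Gamma at a measured point: minimum combined dose/distance metric
        # over all calculated points.
        g2 = ((positions_mm - xm) / dta_mm) ** 2 + ((calculated - dm) / d_tol) ** 2
        gammas.append(np.sqrt(g2.min()))
    return 100.0 * np.mean(np.asarray(gammas) <= 1.0)

x = np.linspace(0, 100, 201)              # detector positions in mm (synthetic)
calc = np.exp(-((x - 50) / 20) ** 2)      # synthetic calculated dose profile
meas = calc * 1.01                        # measurement with a 1% scaling error
print(f"pass rate: {gamma_pass_rate(meas, calc, x):.1f}%")
```

A uniform 1% dose error stays well inside a 3%/3 mm tolerance, so this example passes at 100%; shrinking the criteria (e.g. to 3%/2 mm) is what drives the lower passing rates reported above.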
Procedia PDF Downloads 421
1778 Comparative Public Administration: A Case Study of ASEAN Member States
Authors: Nattapol Pourprasert
Abstract:
This qualitative research had two objectives: 1. to compare government administration among the ASEAN Member States, and 2. to study trends in private enterprise administration across the ASEAN Member States. The results for the first objective are: (1) Thailand focuses on its personnel resource administrative system; (2) Indonesia focuses on an official system governed by good administrative principles; (3) Malaysia focuses on technology development to serve the people; (4) the Philippines focuses on operating-system development; (5) Singapore focuses on public service development; (6) Brunei Darussalam focuses on equality in government services for the people; (7) Vietnam focuses on building a government labor base and developing the testing and administration of operational examinations; (8) Myanmar focuses on human resources development; (9) Laos focuses on forms of local administration; (10) Cambodia focuses on reform of personnel resources policy. The results for the second objective are: (1) Thailand empowered government personnel under a qualitative official structure; (2) Indonesia has a Bureaucracy Reform Roadmap and a medium-term National Development Plan; (3) Malaysia maintains a database for public services; (4) the Philippines follows up on the operation of units under government policy; (5) Singapore created reliability and public participation in setting government policy according to the people's demands; (6) Brunei Darussalam provides social welfare to the people; (7) Vietnam reformed its testing system and administration, including effective construction of the government workforce; (8) Myanmar develops high-ranking administrators to develop the country; (9) Laos distributes power to localities; and (10) Cambodia reformed its personnel resources policy. Keywords: public administration development, ASEAN member states, private sector, government
Procedia PDF Downloads 253
1777 Calibration of Residential Buildings Energy Simulations Using Real Data from an Extensive in situ Sensor Network – A Study of Energy Performance Gap
Authors: Mathieu Bourdeau, Philippe Basset, Julien Waeytens, Elyes Nefzaoui
Abstract:
As residential buildings account for a third of the overall energy consumption and greenhouse gas emissions in Europe, building energy modeling is an essential tool to reach energy efficiency goals. In the energy modeling process, calibration is a mandatory step to obtain accurate and reliable energy simulations. Nevertheless, the comparison between simulation results and the actual building energy behavior often highlights a significant performance gap. The literature discusses different origins of energy performance gaps, from building design to building operation. The description of building operation in energy models, especially energy usages and users' behavior, plays an important role in the reliability of simulations but is also the most accessible target for post-occupancy energy management and optimization. Therefore, the present study discusses results on the calibration of residential building energy models using real operation data. Data are collected through a network of more than 180 sensors and advanced energy meters deployed in three collective residential buildings undergoing major retrofit actions. The sensor network is implemented at building scale and in an eight-apartment sample. Data are collected for over one and a half years and cover building energy behavior (thermal and electricity), indoor environment, inhabitants' comfort, occupancy, occupant behavior and energy uses, and local weather. Building energy simulations are performed using physics-based building energy modeling software (Pleiades), where the buildings' features are implemented according to the buildings' thermal regulation code compliance study and the retrofit project technical files. Sensitivity analyses are performed to highlight the most energy-driving building features for each end-use. These features are then compared with the collected post-occupancy data.
Energy-driving features are progressively replaced with field data for a step-by-step calibration of the energy model. The results of this study provide an analysis of the energy performance gap in an existing residential case study under deep retrofit actions. They highlight the impact of the different building features on energy behavior and on the performance gap in this context, such as temperature setpoints, indoor occupancy, and the building envelope properties, but also domestic hot water usage and heat gains from electric appliances. The benefits of inputting field data from an extensive instrumentation campaign instead of standardized scenarios are also described. Finally, the exhaustive instrumentation solution provides useful insights on the needs, advantages, and shortcomings of the implemented sensor network for its replicability on a larger scale and for different use cases. Keywords: calibration, building energy modeling, performance gap, sensor network
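A common way to quantify the performance gap during this kind of step-by-step calibration is to score each model revision against the measured series with CV(RMSE) and NMBE, the criteria defined in ASHRAE Guideline 14 (thresholds of 30% and ±10% are often cited for hourly data). The abstract does not state which metric the authors used, so this is a generic illustration; the two series below are invented stand-ins for sensor and model outputs.

```python
import numpy as np

def cv_rmse(measured, simulated):
    """Coefficient of variation of the RMSE, in percent of the measured mean."""
    m = np.asarray(measured, float)
    s = np.asarray(simulated, float)
    return 100.0 * np.sqrt(np.mean((m - s) ** 2)) / m.mean()

def nmbe(measured, simulated):
    """Normalized mean bias error, in percent (sign shows over/under-prediction)."""
    m = np.asarray(measured, float)
    s = np.asarray(simulated, float)
    return 100.0 * np.sum(m - s) / (m.size * m.mean())

measured  = [12.0, 15.0, 30.0, 28.0, 14.0, 11.0]   # e.g. daily heating kWh (invented)
simulated = [13.0, 14.5, 27.0, 29.5, 15.0, 10.0]   # model output after one calibration step
print(f"CV(RMSE) = {cv_rmse(measured, simulated):.1f}%, "
      f"NMBE = {nmbe(measured, simulated):.1f}%")
```

Recomputing both metrics after each substitution of a standardized scenario by field data (setpoints, occupancy, DHW, appliance gains) makes the contribution of each feature to the gap explicit.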
Procedia PDF Downloads 160
1776 Utilization of Nipa Palm Fibers (Nypa fruticans) and Asian Green Mussels Shells (Perna viridis) as an Additive Material in Making a Fiber-Reinforced Concrete
Authors: Billy Angel B. Bayot, Hubert Clyde Z. Guillermo, Daniela Eve Margaret S. Olano, Lian Angeli Kaye E. Suarez
Abstract:
The utilization of Nipa palm fibers (Nypa fruticans) and Asian green mussel shells (Perna viridis) as additive materials in making fiber-reinforced concrete was carried out. The researchers collected Asian green mussel shells and Nipa palm fibers as additive materials for the production of fiber-reinforced concrete. Three setups were prepared containing 20 g, 15 g, and 10 g of Nipa palm fiber combined with 10 g, 20 g, and 30 g of Asian green mussel shell powder respectively, alongside a traditional concrete control, with curing periods of 7, 14, and 28 days. The concrete blocks were delivered to the UP Institute of Building Materials and Structures Laboratory (CoMSLab) after each curing period to evaluate their compressive strength. The researchers employed a two-way analysis of variance (ANOVA) and determined that curing days, concrete mixture, and their interaction have an effect on the compressive strength of concrete. ANOVA results indicating significant differences were subjected to post hoc analysis using Tukey's HSD. These results yielded comparisons of each curing time and of the different concrete mixtures against traditional concrete, leading to the conclusion that a longer curing period produces higher compressive strength and that Setup 3 (30 g Asian green mussel shell with 10 g Nipa palm fiber) had the largest mean compressive strength, making it the best proportion among the fiber-reinforced mixtures and the only one with a significant effect relative to the traditional concrete. As a result, the study concludes that curing time and the mix proportions of Asian green mussel shell and Nipa palm fiber are critical determinants of concrete compressive strength. Keywords: Asian green mussel shells (Perna viridis), Nipa palm fibers (Nypa fruticans), additives, fiber-reinforced concrete
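The mixture-effect test described above can be sketched in a few lines. The strength values below are synthetic (the abstract does not publish raw data), and for brevity this minimal version runs only a one-way ANOVA on the mixture factor at a fixed curing age, whereas the study used a two-way ANOVA with curing days and Tukey's HSD post hoc comparisons.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical 28-day compressive strengths (MPa) per mixture, synthetic data.
rng = np.random.default_rng(7)
mpa = {
    "traditional": rng.normal(20.0, 1.0, 10),
    "setup_1":     rng.normal(20.5, 1.0, 10),
    "setup_2":     rng.normal(21.0, 1.0, 10),
    "setup_3":     rng.normal(24.0, 1.0, 10),  # strongest mix, as the study reports
}
f_stat, p_value = f_oneway(*mpa.values())
print(f"F = {f_stat:.1f}, p = {p_value:.1e}")  # p <= 0.05 -> mixture matters
```

A significant omnibus F like this one is exactly the trigger for the Tukey HSD step in the study, which then identifies *which* mixtures (here, setup 3) differ from the traditional control.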
Procedia PDF Downloads 63
1775 Revealing the Risks of Obstructive Sleep Apnea
Authors: Oyuntsetseg Sandag, Lkhagvadorj Khosbayar, Naidansuren Tsendeekhuu, Densenbal Dansran, Bandi Solongo
Abstract:
Introduction: Obstructive sleep apnea (OSA) is a common disorder affecting at least 2% to 4% of the adult population. It is estimated that nearly 80% of men and 93% of women with moderate to severe sleep apnea are undiagnosed. A number of screening questionnaires and clinical screening models have been developed to help identify patients with OSA, and they are needed in clinical practice. Purpose of study: To determine the association between OSA severity risk and risk factors. Material and Methods: A cross-sectional study included 114 patients presenting from the Central State 3rd Hospital and the Central State 1st Hospital. Patients who had obstructive sleep apnea (OSA) were selected for this study. The standard STOP-Bang questionnaire was obtained from all patients. According to their responses to the STOP-Bang questionnaire, patients were divided into low risk, intermediate risk, and high risk. Descriptive statistics are presented as mean ± standard deviation (SD). Groups were compared on the likelihood ratios for positive and negative results by regression. Statistical analyses were performed using SPSS 16. Results: 114 patients were included (mean age 48 ± 16, 57 male), divided into low risk 54 (47.4%), intermediate risk 33 (28.9%), and high risk 27 (23.7%). With increasing risk category, mean age (38 ± 13 vs. 54 ± 14 vs. 59 ± 10, p<0.05), blood pressure (115 ± 18 vs. 133 ± 19 vs. 142 ± 21, p<0.05), BMI (24, IQR 22-26 vs. 24, IQR 22-29 vs. 28, IQR 25-34, p<0.001), and neck circumference (35 ± 3.4 vs. 38 ± 4.7 vs. 41 ± 4.4, p<0.05) increased significantly. Multiple logistic regression showed that age is a significant independent factor for OSA (odds ratio 1.07, 95% CI 1.02-1.23, p<0.01). The predictive value of age was the highest among the factors for OSA (AUC=0.833, 95% CI 0.758-0.909, p<0.001). Our study shows that the risk of OSA begins at 47 years of age (sensitivity 78.3%, specificity 74.1%).
Conclusions: Most patients' responses indicated intermediate or high risk. Age, blood pressure, neck circumference, and BMI increased as the OSA risk category increased. Age in particular is an independent factor with the highest significance for OSA: each additional year of age increases the likelihood of OSA approximately 1.1 times.
Keywords: obstructive sleep apnea, Stop-Bang, BMI (Body Mass Index), blood pressure
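The STOP-Bang stratification described above can be sketched in a few lines. This is a minimal Python sketch assuming the conventional cut-offs (0-2 low, 3-4 intermediate, 5-8 high) and the standard eight yes/no items; the item names and dictionary interface are illustrative, not the authors' implementation.

```python
# Sketch of STOP-Bang risk stratification (assumption: conventional cut-offs
# of 0-2 low, 3-4 intermediate, 5-8 high; item names are illustrative).
STOP_BANG_ITEMS = [
    "snoring", "tiredness", "observed_apnea", "pressure",    # STOP
    "bmi_over_35", "age_over_50", "neck_over_40cm", "male",  # Bang
]

def stop_bang_risk(answers: dict) -> str:
    """Count 'yes' answers and map the score to a risk category."""
    score = sum(bool(answers.get(item, False)) for item in STOP_BANG_ITEMS)
    if score <= 2:
        return "low risk"
    if score <= 4:
        return "intermediate risk"
    return "high risk"

risk = stop_bang_risk({"snoring": True, "pressure": True,
                       "age_over_50": True, "male": True})  # score 4
```

A score computed this way would place each of the study's 114 patients into one of the three groups reported in the results.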
Procedia PDF Downloads 310
1774 Influence of Chelators, Zn Sulphate and Silicic Acid on Productivity and Meat Quality of Fattening Pigs
Authors: A. Raceviciute-Stupeliene, V. Sasyte, V. Viliene, V. Slausgalvis, J. Al-Saifi, R. Gruzauskas
Abstract:
The objective of this study was to investigate the influence of special additives, namely chelators, zinc sulphate and silicic acid, on the productivity parameters, carcass characteristics and meat quality of fattening pigs. The test started with 40-day-old fattening pigs (mongrel (mother) and Yorkshire (father)) and lasted up to 156 days of age. During the fattening period, 32 pigs were divided into 2 groups (control and experimental) with 4 replicates (a total of 8 pens). The pigs were fed ad libitum for 16 weeks with a standard wheat-barley-soybean meal compound (Control group) supplemented with chelators, zinc sulphate and silicic acid (dosage 2 kg/t of feed, Experimental group). Meat traits in live pigs were measured with the ultrasonic equipment Piglog 105. The results obtained throughout the experimental period suggest that supplementation with chelators, zinc sulphate and silicic acid tended to positively affect the average daily gain and feed conversion ratio of the fattening pigs (p < 0.05). Evaluation with the Piglog 105 showed that fat thickness at the first and second measurement points was 4% and 3% higher, respectively, than in the control group (p < 0.05). Carcass weight, yield and length showed no significant differences among the groups. The water-holding capacity of the meat in the Experimental group was lower by 5.28%, and tenderness lower by 12%, compared with that of the pigs in the Control group (p < 0.05). No statistically significant difference in the chemical composition of the meat was found between the experimental and control groups. Cholesterol concentration in the muscles of pigs fed diets supplemented with chelators, zinc sulphate and silicic acid was lower by 7.93 mg/100 g of muscle than in the control group.
These results suggest that supplementation with chelators, zinc sulphate and silicic acid in the feed of fattening pigs had a significant effect on the pigs' growth performance and meat quality.
Keywords: silicic acid, chelators, meat quality, pigs, zinc sulphate
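The two productivity metrics reported above, average daily gain and feed conversion ratio, have simple standard definitions; the following Python sketch uses hypothetical example weights and feed intake, not data from the study.

```python
# Illustrative definitions of the productivity metrics named in the abstract
# (ADG and FCR); all numeric values below are hypothetical examples.
def average_daily_gain(start_kg: float, end_kg: float, days: int) -> float:
    """Average daily gain (ADG): total weight gain divided by days on feed."""
    return (end_kg - start_kg) / days

def feed_conversion_ratio(feed_kg: float, gain_kg: float) -> float:
    """Feed conversion ratio (FCR): feed consumed per unit of weight gain."""
    return feed_kg / gain_kg

# fattening from day 40 to day 156 of age = 116 days on trial
adg = average_daily_gain(start_kg=12.0, end_kg=110.0, days=116)
fcr = feed_conversion_ratio(feed_kg=270.0, gain_kg=110.0 - 12.0)
```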
Procedia PDF Downloads 180
1773 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language
Authors: Wenjun Hou, Marek Perkowski
Abstract:
The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of the least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measure based on the number of oracle iterations, but to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover's algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating the Grover algorithm with an oracle that finds a successively lower cost each time transforms the decision problem into an optimization problem, finding the minimum cost of a Hamiltonian cycle. N log₂ K qubits are put into an equiprobabilistic superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, a node index calculator, a uniqueness checker, and a comparator, all created using only quantum Toffoli gates, including its special forms, the Feynman (CNOT) and Pauli X gates.
The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is then modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover's algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language
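The register sizing and the optimal number of oracle repetitions described above can be estimated classically. This is a minimal Python sketch, assuming a single marked solution in the search space; the function name and defaults are illustrative and not taken from the paper's Q# code.

```python
import math

def grover_iterations(num_nodes: int, degree_bound: int,
                      num_solutions: int = 1) -> tuple[int, int]:
    """Estimate the register size and optimal Grover iteration count for the
    N*log2(K) edge encoding described in the abstract (assumption: each node
    stores log2(K) qubits selecting one of its at most K incident edges)."""
    qubits = num_nodes * math.ceil(math.log2(degree_bound))
    search_space = 2 ** qubits
    # Optimal oracle repetitions: ~ (pi/4) * sqrt(search_space / num_solutions)
    iterations = math.floor((math.pi / 4) * math.sqrt(search_space / num_solutions))
    return qubits, iterations

q, it = grover_iterations(num_nodes=8, degree_bound=4)
# 8 nodes with degree bound 4 -> 16 qubits, 201 iterations
```

Repeating this count of oracle calls per Grover run, with the cost threshold lowered after each run, matches the decision-to-optimization loop sketched in the abstract.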
Procedia PDF Downloads 190
1772 Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice
Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer
Abstract:
The ice accretion of salt water on cold substrates creates brine-spongy ice. This type of ice is a mixture of pure ice and liquid brine. A real case of the creation of this type of ice is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between brine pockets and pure ice. Salt rejection during transient heat conduction increases the salinity of the brine pockets until a local equilibrium state is reached. In this process, heat passing through the medium does not merely change the sensible heat of the ice and brine pockets; latent heat plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. This model considers heat transfer together with partial solidification and melting. The properties of brine-spongy ice are obtained from the properties of liquid brine and pure ice. A numerical solution using the Method of Lines discretizes the medium into a set of ordinary differential equations. Boundary conditions are chosen from one of the applicable cases of this type of ice: one side is considered a thermally insulated surface, and the other side is assumed to be suddenly exposed to a constant-temperature boundary. All cases are evaluated at temperatures between -20 °C and the freezing point of brine-spongy ice. Solutions are conducted for salinities from 5 to 60 ppt. Time steps and space intervals are chosen to maintain the most stable and fast solution. The variation of temperature, the volume fraction of brine, and brine salinity versus time are the most important outputs of this study. The results show that transient heat conduction through brine-spongy ice can create a wide range of brine pocket salinities, from the initial salinity up to 180 ppt.
The rate of variation of temperature is found to be slower for high-salinity cases. The maximum rate of heat transfer occurs at the start of the simulation and decreases as time passes. Brine pockets are smaller in portions closer to the colder side than to the warmer side. At the start of the solution, the numerical scheme tends toward instability because of the sharp variation of temperature at the start of the process; refining the intervals resolves this. The analytical model combined with the numerical scheme is capable of predicting the thermal behavior of brine-spongy ice. This model and its numerical solutions are important for modeling the freezing of salt water and ice accretion on cold structures.
Keywords: method of lines, brine-spongy ice, heat conduction, salt water
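The boundary setup described above (one insulated side, the other suddenly held at a constant temperature) can be sketched with an explicit Method of Lines discretization. This minimal Python sketch assumes a constant effective diffusivity and omits the paper's latent-heat and brine-salinity coupling, so it illustrates only the conduction skeleton of the model; all numeric values are illustrative.

```python
import numpy as np

# Method of Lines sketch for 1D transient conduction (assumption: constant
# effective diffusivity alpha; phase-change terms from the paper are omitted).
# Left boundary insulated (zero flux); right boundary suddenly set to T_b.
def solve_mol(n=50, length=0.1, alpha=1e-6, T0=-2.0, T_b=-20.0, t_end=600.0):
    dx = length / (n - 1)
    dt = 0.4 * dx**2 / alpha          # below the explicit-Euler stability limit
    T = np.full(n, T0)
    for _ in range(int(t_end / dt)):
        dTdt = np.zeros(n)
        dTdt[1:-1] = alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
        dTdt[0] = alpha * 2 * (T[1] - T[0]) / dx**2  # insulated side (ghost node)
        T = T + dt * dTdt
        T[-1] = T_b                                  # constant-temperature side
    return T

T = solve_mol()
```

The abstract's note that instabilities appear at the start of the process corresponds here to the sharp initial jump at the fixed-temperature node; refining dx and dt smooths it out.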
Procedia PDF Downloads 217
1771 Electromyography Analysis during Walking and Seated Stepping in the Elderly
Authors: P. Y. Chiang, Y. H. Chen, Y. J. Lin, C. C. Chang, W. C. Hsu
Abstract:
The proportion of the elderly in the world population is increasing, and so is the rate of falls among this growing number of older people. Decreasing muscle strength and an increasing risk of falling are associated with the ageing process. Because the effects of seated stepping training on walking performance in the elderly remain unclear, the main purpose of the proposed study is to perform an electromyography analysis during walking and seated stepping in the elderly. Four surface EMG electrodes were attached over the lower-limb muscles, the vastus lateralis (VL) and gastrocnemius (GT) of both sides. Before the test, the maximal voluntary contraction (MVC) of each muscle was obtained using manual muscle testing. The analog raw EMG signals were digitized with a sampling frequency of 2000 Hz. The signals were fully rectified and the linear envelope was calculated. The stepping motion cycle was separated into two phases by the stepping timing (ST) and the pedal return timing (PRT). ST refers to the time when the pedal marker reached its highest point, indicating that the contralateral leg was about to release the pedal. PRT refers to the time when the pedal marker reached its lowest point, indicating that the contralateral leg was about to step on the pedal. We assumed that ST plays the same role as initial contact during walking, and PRT as toe-off. The period from ST to the next PRT, called the pushing phase (PP), during which the leg starts to step against resistance, was compared with the stance phase in level walking. The period from PRT to the next ST, called the returning phase (RP), during which the leg encounters no resistance, was compared with the swing phase in level walking. VL and GT muscular activation had similar patterns on both sides. This ability may transfer to that needed during the loading response, mid-stance and terminal swing phases.
Users needed to make more effort in stepping than in walking with similar timing; thus, strengthening the VL and GT may help improve walking endurance and efficiency in the elderly.
Keywords: elderly, electromyography, seated stepping, walking
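The rectification and linear-envelope steps described above can be sketched as follows. A moving-average window stands in for whatever low-pass filter the authors used; the 50 ms window, the MVC normalization helper, and the synthetic test burst are illustrative assumptions, not details from the study.

```python
import numpy as np

FS = 2000  # sampling frequency in Hz, as stated in the abstract

def linear_envelope(raw_emg: np.ndarray, window_ms: float = 50.0) -> np.ndarray:
    """Full-wave rectify the raw EMG, then smooth with a moving average
    (a simple stand-in for the low-pass 'linear envelope' step)."""
    rectified = np.abs(raw_emg - np.mean(raw_emg))  # remove DC offset, rectify
    win = max(1, int(FS * window_ms / 1000))
    return np.convolve(rectified, np.ones(win) / win, mode="same")

def percent_mvc(envelope: np.ndarray, mvc_value: float) -> np.ndarray:
    """Normalize the envelope to the maximal voluntary contraction (MVC)."""
    return 100.0 * envelope / mvc_value

# synthetic example: a 0.5 s burst of 150 Hz activity within a 2 s record
t = np.arange(0, 2.0, 1 / FS)
emg = (np.abs(t - 1.0) < 0.25).astype(float) * np.sin(2 * np.pi * 150 * t)
env = linear_envelope(emg)
```

Comparing such envelopes, segmented by ST and PRT events, against MVC-normalized walking envelopes would reproduce the kind of phase-by-phase comparison the abstract describes.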
Procedia PDF Downloads 221
1770 Experimental Verification of Similarity Criteria for Sound Absorption of Perforated Panels
Authors: Aleksandra Majchrzak, Katarzyna Baruch, Monika Sobolewska, Bartlomiej Chojnacki, Adam Pilch
Abstract:
Scaled modeling is very common in areas of science such as aerodynamics or fluid mechanics, since defining characteristic numbers makes it possible to determine relations between objects under test and their models. In acoustics, scaled modeling is aimed mainly at the investigation of room acoustics, sound insulation and sound absorption phenomena. Despite such a range of applications, no method has been developed that would enable acoustical perforated panels to be scaled freely while maintaining their sound absorption coefficient in a desired frequency range. Theoretical and numerical analyses have proven that it is not physically possible to obtain a given sound absorption coefficient in a desired frequency range by directly scaling all of the physical dimensions of a perforated panel according to a defined characteristic number. This paper is a continuation of the research mentioned above and presents a practical evaluation of those theoretical and numerical analyses. The sound absorption coefficient of perforated panels was measured in order to verify the previous analyses and, as a result, to find the relations between full-scale perforated panels and their models that will enable them to be scaled properly. The measurements were conducted in a one-to-eight model of a reverberation chamber at the Technical Acoustics Laboratory, AGH. The results obtained verify the theses proposed after the theoretical and numerical analyses. Finding the relations between full-scale and modeled perforated panels will allow measurement samples equivalent to the original ones to be produced. As a consequence, it will make the process of designing acoustical perforated panels easier and will also lower the cost of prototype production.
With this knowledge, it will be possible to emulate panels used, or to be used, in a full-scale room more precisely in a constructed model, and as a result to imitate or predict the acoustics of the modeled space more accurately.
Keywords: characteristic numbers, dimensional analysis, model study, scaled modeling, sound absorption coefficient
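The standard relation underlying a one-to-eight acoustic scale model is that shrinking every dimension by a factor n shifts the frequencies of interest up by the same factor. This Python sketch states that general relation with the 1:8 factor taken from the abstract; it does not capture the panel-specific similarity criteria the paper investigates, which is precisely where direct scaling was shown to fail.

```python
# Basic acoustic scale-modeling relation: dimensions divide by the scale
# factor, measurement frequencies multiply by it. The 1:8 factor matches the
# reverberation chamber model mentioned in the abstract.
SCALE = 8

def model_frequency(full_scale_hz: float, scale: int = SCALE) -> float:
    """Frequency at which the scale model must be measured to represent
    the given full-scale frequency."""
    return full_scale_hz * scale

def model_dimension(full_scale_m: float, scale: int = SCALE) -> float:
    """Model dimension corresponding to a full-scale dimension."""
    return full_scale_m / scale

# the usual full-scale octave bands map well into the ultrasonic range
bands = [125, 250, 500, 1000, 2000, 4000]
model_bands = [model_frequency(f) for f in bands]
```

The ultrasonic model-scale bands this produces are one reason perforated-panel samples cannot simply be shrunk geometrically: viscous and thermal losses in the perforations do not scale with the same factor.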
Procedia PDF Downloads 196