Search results for: push time
14733 Frame to Frameless: Stereotactic Operation Progress in Robot Time
Authors: Zengmin Tian, Bin Lv, Rui Hui, Yupeng Liu, Chuan Wang, Qing Liu, Hongyu Li, Yan Qi, Li Song
Abstract:
Objective: In recent years, robots have begun to replace the stereotactic frame. This paper investigates the safety and effectiveness of frameless stereotactic surgery in the treatment of children with cerebral palsy. Methods: Clinical data of 425 children with spastic cerebral palsy were retrospectively analyzed. The patients were treated with robot-assisted frameless stereotactic surgery of nuclear mass destruction. Motor function was evaluated by the gross motor function measure-88 (GMFM-88) before the operation and at 1 week and 3 months after the operation, and statistical analysis was performed. Results: Postoperative CT showed that the destruction area covered the predetermined target in all patients. Minimal bleeding of the puncture channel occurred in 2 patients, and mild fever in 3 cases; otherwise, no severe surgical complications occurred. The GMFM-88 scores were 49.1±22.5 before the operation, and 52.8±24.2 and 64.2±21.4 at 1 week and 3 months after the operation, respectively. The difference between pre- and postoperative scores was statistically significant (P<0.01). After 3 months, the total effective rate was 98.1%, and the average improvement in motor function was 24.3%. Conclusion: Having replaced the traditional frame, robot-assisted frameless stereotactic surgery is safe and reliable for children with spastic cerebral palsy and has positive significance in improving patients' motor function.
Keywords: cerebral palsy, robotics, stereotactic techniques, frameless operation
Procedia PDF Downloads 92
14732 A Geosynchronous Orbit Synthetic Aperture Radar Simulator for Moving Ship Targets
Authors: Linjie Zhang, Baifen Ren, Xi Zhang, Genwang Liu
Abstract:
Ship detection is of great significance for both military and civilian applications. Synthetic aperture radar (SAR), with its all-day, all-weather, ultra-long-range characteristics, has been widely used. In view of the low time resolution of low-orbit SAR and the need for high-time-resolution SAR data, geosynchronous orbit (GEO) SAR is receiving increasing attention. Since GEO SAR has a short revisit period and a large coverage area, it is expected to be well suited to monitoring marine ship targets. However, the height of the orbit increases the integration time by almost two orders of magnitude, so for moving marine vessels the utility and efficacy of GEO SAR remain uncertain. This paper examines the feasibility of GEO SAR through a GEO SAR simulator for moving ships. The simulator is a geometry-based radar imaging simulator that focuses on geometric fidelity rather than radiometric accuracy. Its inputs are a 3D ship model (.obj format, produced by most 3D design software, such as 3D Max), the ship's velocity, and the parameters of the satellite orbit and SAR platform. Its outputs are simulated GEO SAR raw signal data and a SAR image. The simulation proceeds in four steps. (1) Read the 3D model, including the ship rotation (pitch, yaw, and roll) and velocity (speed and direction) parameters, and extract the primitives (triangles) visible from the SAR platform. (2) Compute the radar scattering from the ship with the physical optics (PO) method. In this step, the vessel is sliced into many small rectangular primitives along the azimuth, and the radiometric calculation for each primitive is carried out separately. Since the simulator focuses on the complex structure of ships, only single-bounce and double-bounce reflections are considered. (3) Generate the raw data with GEO SAR signal modeling. Since the usual 'stop-and-go' model is not valid for GEO SAR, the range model must be reconsidered. (4) Finally, generate the GEO SAR image with an improved range-Doppler method. Numerical simulations of a fishing boat and a cargo ship are given, and GEO SAR images for different postures, velocities, satellite orbits, and SAR platforms are simulated. By analyzing these simulated results, the effectiveness of GEO SAR for detecting moving marine vessels is evaluated.
Keywords: GEO SAR, radar, simulation, ship
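The effect of target motion over a long integration time can be sketched with a single moving point scatterer: the exact range history is evaluated at every slow-time sample, which is precisely what a 'stop-and-go' approximation would freeze within each pulse. This is a minimal illustration, not the paper's simulator; all numeric values (wavelength, geometry, ship speed) are assumptions.

```python
import numpy as np

# Illustrative sketch: azimuth phase history of one moving point scatterer
# over a long GEO SAR integration. Parameter values below are assumptions.
c = 3.0e8                    # speed of light, m/s
wavelength = 0.24            # assumed L-band wavelength, m
t = np.linspace(0.0, 100.0, 2001)   # slow time: a long GEO integration, s

# Simplified antenna phase-centre track (slow apparent motion of a GEO SAR):
sat = np.stack([1.0e4 * t,
                np.zeros_like(t),
                np.full_like(t, 3.6e7)], axis=1)

# Moving ship: constant velocity on the sea surface (8 m/s ~ 15.5 knots)
ship = np.array([5.0e5, 0.0, 0.0]) + np.array([0.0, 8.0, 0.0]) * t[:, None]

# Exact range history and demodulated azimuth signal; a 'stop-and-go'
# approximation would instead hold the geometry fixed within each pulse.
R = np.linalg.norm(sat - ship, axis=1)
signal = np.exp(-1j * 4.0 * np.pi * R / wavelength)
```

Comparing `R` against a geometry frozen per pulse shows the residual range error that forces the range model to be reconsidered for GEO SAR.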
Procedia PDF Downloads 179
14731 Recommendations Using Online Water Quality Sensors for Chlorinated Drinking Water Monitoring at Drinking Water Distribution Systems Exposed to Glyphosate
Authors: Angela Maria Fasnacht
Abstract:
Detection of anomalies due to contaminants’ presence, also known as early detection systems in water treatment plants, has become a critical point that deserves an in-depth study for their improvement and adaptation to current requirements. The design of these systems requires a detailed analysis and processing of the data in real-time, so it is necessary to apply various statistical methods appropriate to the data generated, such as Spearman’s Correlation, Factor Analysis, Cross-Correlation, and k-fold Cross-validation. Statistical analysis and methods allow the evaluation of large data sets to model the behavior of variables; in this sense, statistical treatment or analysis could be considered a vital step to be able to develop advanced models focused on machine learning that allows optimized data management in real-time, applied to early detection systems in water treatment processes. These techniques facilitate the development of new technologies used in advanced sensors. In this work, these methods were applied to identify the possible correlations between the measured parameters and the presence of the glyphosate contaminant in the single-pass system. The interaction between the initial concentration of glyphosate and the location of the sensors on the reading of the reported parameters was studied.Keywords: glyphosate, emergent contaminants, machine learning, probes, sensors, predictive
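Two of the methods named above, Spearman's correlation and k-fold cross-validation, can be sketched on synthetic sensor data. This is a minimal stand-in, not the study's dataset: the glyphosate levels and the sensor reading tied to them are invented for illustration, and both routines are implemented from scratch with NumPy only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one sensor reading loosely tied to glyphosate level
glyphosate = rng.uniform(0.0, 10.0, 200)              # mg/L (illustrative)
conductivity = 2.0 * glyphosate + rng.normal(0.0, 1.0, 200)

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

def kfold_indices(n, k, rng):
    """Yield (train, test) index arrays for k-fold cross-validation."""
    idx = rng.permutation(n)
    for fold in np.array_split(idx, k):
        yield np.setdiff1d(idx, fold), fold

rho = spearman(glyphosate, conductivity)   # strong monotone association

# Cross-validate a one-parameter linear fit fold by fold
errors = []
for train, test in kfold_indices(200, 5, rng):
    slope, intercept = np.polyfit(glyphosate[train], conductivity[train], 1)
    pred = slope * glyphosate[test] + intercept
    errors.append(np.mean((pred - conductivity[test]) ** 2))
```

A high `rho` flags the parameter as a candidate predictor, and the per-fold errors estimate how well that relationship generalizes.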
Procedia PDF Downloads 126
14730 Cooking Attributes of Rice Stored under Varying Temperature and Moisture Regimes
Authors: Lakshmi E. Jayachandran, Manepally Rajkumar, Pavuluri Srinivasa Rao
Abstract:
The objective of this research was to study the changes in the eating quality of rice during storage under varying temperature and moisture regimes. Paddy (IR-36) with high amylose content (27%) was stored at temperatures between 10 and 40°C and moisture contents from 9 to 18% (d.b.) for 6 months. Drastic changes in color and in the parameters representing cooking quality, cooked rice texture, and surface morphology occurred after 4 months of storage, especially at elevated temperatures. Head rice yield was stable throughout storage except at extreme combinations of temperature and moisture content. Yellowing of rice was prominent at combinations of high temperature and moisture content, which had a synergistic effect on the b* values of rice. The cooking time, length expansion ratio, and volume expansion ratio of all rice samples increased with prolonged storage. The texture parameters, primarily the hardness, cohesiveness, and adhesiveness of the cooked rice samples, were higher following storage at elevated temperature. Surface morphology was also significantly affected in stored rice compared to fresh rice. Storage of rice at 10°C with a grain moisture content of 10% for 2 months gave cooked rice samples with good palatability and minimal cooking time. Temperature was found to be the most influential storage parameter for rough rice, followed by moisture content and storage duration.
Keywords: rice, cooking quality, storage, surface morphology
Procedia PDF Downloads 183
14729 Assessing the Effect of Waste-based Geopolymer on Asphalt Binders
Authors: Amani A. Saleh, Maram M. Saudy, Mohamed N. AbouZeid
Abstract:
Asphalt cement concrete is a very commonly used material in road construction. It has many advantages, such as ease of use and high user satisfaction in terms of comfort and safety on the road. However, asphalt cement concrete also has drawbacks, such as its high carbon footprint, which makes it environmentally unfriendly; in addition, pavements require frequent maintenance, which can be costly and uneconomic. The aim of this research is to study the effect of mixing waste-based geopolymers with asphalt binders. Geopolymer mixes were prepared by combining alumino-silicate sources such as fly ash, silica fume, and metakaolin with alkali activators. The purpose of mixing geopolymers with the asphalt binder is to enhance the rheological and microstructural properties of the asphalt. This was done in two phases: the first phase developed an optimum mix design for the geopolymer additive itself, and the following phase tested the geopolymer-modified asphalt binder after the optimum geopolymer mix was added to it. The modified binder is tested according to the Superpave testing procedures, which include the dynamic shear rheometer to measure parameters such as rutting and fatigue cracking, and the rotational viscometer to measure workability. In addition, the microstructural properties of the modified binder are studied using environmental scanning electron microscopy (ESEM). The aim of the testing phase is to observe whether adding different geopolymer percentages to the asphalt binder enhances the properties of the binder and yields desirable results. Furthermore, the tests on the geopolymer-modified binder were carried out at fixed time intervals; therefore, curing time was the main parameter tested in this research.
It was observed that the addition of geopolymers improved the performance of the asphalt binder over time. It is worth mentioning that carbon emissions are expected to be reduced, since geopolymers are environmentally friendly materials that minimize carbon emissions and lead to a more sustainable environment. Additionally, the use of industrial by-products such as fly ash and silica fume is beneficial in that they are recycled into geopolymer production instead of accumulating in landfills.
Keywords: geopolymer, rutting, superpave, fatigue cracking, sustainability, waste
Procedia PDF Downloads 131
14728 Jewish Law in the State of Israel: Law, Religion and State
Authors: Yuval Sinai
Abstract:
As part of the historical, religious, and cultural heritage of the Jewish people, Jewish law is part of the legal system in Israel, which is a Jewish and democratic state. The proper degree of use of Jewish law in judicial decisions is an issue that crops up in Israeli law from time to time. This was a burning question in the 1980s in the wake of the enactment of the Foundations of Law Act 1980, which declared Jewish heritage a supplementary legal source for Israeli law. The enactment of the Basic Law: Human Dignity and Liberty 1992, which decreed that basic Israeli legal principles must be interpreted in light of the values of a Jewish and democratic state, marks a significant change in the impact of Judaism on the law created and applied by the courts. Both of these legislative developments revived the initiative to grant a central status to Jewish law within state law. How should Jewish law be applied in Israel's secular courts? This is not a simple question. It is not merely a question of identifying the relevant rule of Jewish law or tracing its development from the Talmud to modern times, nor is it the same as asking how a rabbinic court would handle the issue. It is a matter of delicate judgment to distill, out of the often conflicting Jewish law sources, a rule that will fit into the existing framework of Israeli law so as to advance a policy that best promotes the interests of Israel's society. We point out the occasional tensions between Jewish religious law and secular law, and introduce opinions as to how reconciliation of the two can best be achieved in light of Jewish legal tradition and the reality of the modern State of Israel.
Keywords: law and religion, Israel, Jewish law, law and society
Procedia PDF Downloads 74
14727 Photocatalytic Degradation of Phenolic Compounds in Wastewater Using Magnetically Recoverable Catalyst
Authors: Ahmed K. Sharaby, Ahmed S. El-Gendy
Abstract:
Phenolic compounds (PCs) exist in the wastewater effluents of industries such as oil refining, pharmaceuticals, and cosmetics. Phenolic compounds are extremely hazardous pollutants that can cause severe problems for aquatic life and human beings if disposed of without treatment. One of the most efficient treatment methods for PCs is photocatalytic degradation. The current work studies the performance of a composite nanomaterial of titanium dioxide with magnetite as a photocatalyst in the degradation of PCs. It aims to optimize the synthesized photocatalyst dosage and contact time, as operational parameters, at different initial PC concentrations and pH values in the wastewater. The study was performed in a lab-scale batch reactor under fixed light intensity and aeration rate. The initial PC concentrations and pH values were in the ranges of 10-200 mg/l and 3-9, respectively. The results indicate that the catalyst dosage and contact time required for total mineralization are proportional to the initial PC concentration, while the optimum pH for highly efficient degradation is pH 3. Exceeding certain catalyst concentration levels decreases the degradation efficiency due to the dissipation of light. The performance of the catalyst was also compared with that of pure TiO2 Degussa (P-25): the dosage of the synthesized catalyst required for photocatalytic degradation was approximately 1.5 times that of the pure titania.
Keywords: industrial, optimization, phenolic compounds, photocatalysis, wastewater
Procedia PDF Downloads 320
14726 Totally Robotic Gastric Bypass Using Modified Lonroth Technique
Authors: Arun Prasad
Abstract:
Background: Robotic bariatric surgery is a good option for the super obese, where laparoscopy demands challenging technical skills. Gastric bypass can be difficult due to the inability of the robot to work in two quadrants at the same time. The Lonroth technique of gastric bypass involves a totally supracolic procedure in which all anastomoses are performed in one quadrant only. Methods: We have performed 78 robotic gastric bypass surgeries using the modified Lonroth technique. The robot is docked above the head of the patient in the midline. The camera port is placed supraumbilically. Two ports are placed on the left side of the patient and one port on the right side. An assistant port is placed between the camera port and the right-sided robotic port for use of the stapler. The gastric pouch is made first, followed by the gastrojejunostomy, which is a four-layered sutured anastomosis. A jejunojejunostomy is then performed, followed by a leak test, and then the jejunum is divided. A 150 cm biliopancreatic limb and a 75 cm alimentary limb are finally obtained. The mesenteric and Petersen's defects are then closed. Results: All patients had a successful robotic procedure. The mean operative time in the first 5 cases was 130 minutes; this fell to a mean of 95 minutes in the last five cases. There were no intraoperative or postoperative complications. Conclusions: While a hybrid technique of partly laparoscopic and partly robotic gastric bypass has been used at many centres, we feel that with the modified Lonroth technique, a totally robotic gastric bypass fully utilizes the potential of robotic bariatric surgery.
Keywords: robot, bariatric, totally robotic, gastric bypass
Procedia PDF Downloads 259
14725 Wideband Performance Analysis of C-FDTD Based Algorithms in the Discretization Impoverishment of a Curved Surface
Authors: Lucas L. L. Fortes, Sandro T. M. Gonçalves
Abstract:
This work analyzes how the wideband performance of the Conformal Finite-Difference Time-Domain (C-FDTD) approaches developed by Raj Mittra, Supriyo Dey, and Wenhua Yu degrades as the mesh discretization is impoverished. These approaches are a simple and efficient way to optimize the scattering simulation of curved surfaces for dielectric and perfect electric conducting (PEC) structures in the FDTD method, since curved surfaces otherwise require dense meshes to reduce the error introduced by surface staircasing. Referred to in this work as D-FDTD-Diel and D-FDTD-PEC, these approaches are well known in the literature, but the improvement they deliver has not been broadly quantified over wide frequency bands and poorly discretized meshes. Both approaches improve accuracy without requiring dense meshes, making it possible to exploit poorly discretized meshes that reduce simulation time and computational expense while retaining a desired accuracy. However, their application has limitations regarding mesh impoverishment and the desired frequency range. The goal of this work is therefore to examine both the wideband and the mesh-impoverishment performance of the approaches, to provide wider insight into these aspects of FDTD applications. The D-FDTD-Diel approach modifies the electric field update in the cells intersected by the dielectric surface, taking into account the amount of dielectric material within the mesh cell edges. By accounting for the intersections, D-FDTD-Diel improves accuracy at the cost of computational preprocessing, which is a fair trade-off, since the update modification is quite simple.
Likewise, the D-FDTD-PEC approach modifies the magnetic field update, taking into account the PEC curved-surface intersections within the mesh cells and, for a PEC structure in vacuum, the air portion that fills the intersected cells when updating the magnetic field values. As with D-FDTD-Diel, D-FDTD-PEC provides better accuracy at the cost of computational preprocessing, although with the drawback of having to meet stability criterion requirements. The algorithms are formulated and applied to PEC and dielectric spherical scattering surfaces with meshes at different levels of discretization, with polytetrafluoroethylene (PTFE) as the dielectric, a very common material in coaxial cables and connectors for radiofrequency (RF) and wideband applications. The accuracy of the algorithms is quantified, showing how the approaches' wideband performance drops along with the mesh impoverishment. The benefits in computational efficiency, simulation time, and accuracy are also shown and discussed according to the desired frequency range, showing that poorly discretized FDTD meshes can be exploited more efficiently while retaining the desired accuracy. The results provide broader insight into the limitations of the C-FDTD approaches in poorly discretized and wide-frequency-band simulations of dielectric and PEC curved surfaces, which are not clearly defined or detailed in the literature and are therefore a novelty. These approaches are also expected to be applied to the modeling of curved RF components for wideband and high-speed communication devices in future work.
Keywords: accuracy, computational efficiency, finite difference time-domain, mesh impoverishment
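The idea of weighting the field update by how much dielectric fills a cut cell can be sketched in one dimension. This is a simplified analogue for illustration, not the authors' conformal scheme: a 1D Yee grid where the single cell crossed by the dielectric interface receives a volume-averaged effective permittivity instead of a staircased all-or-nothing value. The fill fraction, grid sizes, and PTFE-like permittivity of 2.1 are assumptions.

```python
import numpy as np

# Simplified 1D analogue of the conformal-dielectric idea: the cut cell gets
# an effective permittivity weighted by its dielectric fill fraction.
nz, nt = 200, 400
eps0, mu0 = 8.854e-12, 4.0e-7 * np.pi
dz = 1.0e-3
dt = 0.5 * dz / 3.0e8          # Courant factor 0.5 for stability

eps_r = np.ones(nz)
eps_r[121:] = 2.1              # PTFE-like dielectric fills cells beyond 121
fill = 0.65                    # fraction of cut cell 120 occupied by dielectric
eps_r[120] = fill * 2.1 + (1.0 - fill) * 1.0   # volume-weighted average

ex = np.zeros(nz)
hy = np.zeros(nz)
for n in range(nt):
    # standard leapfrog Yee updates on a 1D grid
    hy[:-1] += dt / (mu0 * dz) * (ex[1:] - ex[:-1])
    ex[1:] += dt / (eps0 * eps_r[1:] * dz) * (hy[1:] - hy[:-1])
    ex[50] += np.exp(-((n - 60) / 15.0) ** 2)   # soft Gaussian source
```

A staircased grid would instead snap `eps_r[120]` to 1.0 or 2.1; the weighted value is what reduces the interface error without refining the mesh.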
Procedia PDF Downloads 136
14724 Numerical Modelling of Dust Propagation in the Atmosphere of Tbilisi City in Case of Western Background Light Air
Authors: N. Gigauri, V. Kukhalashvili, A. Surmava, L. Intskirveli, L. Gverdtsiteli
Abstract:
Tbilisi, a large city of the South Caucasus, is a junction point connecting Asia and Europe, Russia and the republics of Asia Minor. Over the last years, its atmosphere has experienced an increasing anthropogenic load. Numerical modeling is used to study the pollution of Tbilisi's atmospheric air. By means of a 3D non-linear, non-steady numerical model, the peculiarities of the city's atmospheric pollution are investigated during background western light air. Spatial and temporal changes in dust concentration are determined. Zones of high, average, and low pollution, dust accumulation areas, transfer directions, etc. are identified. The numerical modeling shows that air pollution by dust proceeds in four stages, which depend on the intensity of motor traffic, the micro-relief of the city, and the location of the city's main roads. In the interval 06:00-09:00 there is intensive growth, during 09:00-15:00 constancy or a weak decrease, during 18:00-21:00 an increase, and from 21:00 to 06:00 a reduction of the dust concentrations. The highly polluted areas are located in the vicinity of the city center and at some peripheral territories of the city, where the maximum dust concentration at 9 PM is equal to twice the maximum allowable concentration. Similar investigations conducted for various meteorological situations will enable us to compile a map of background urban pollution and to elaborate practical measures for ambient air protection.
Keywords: air pollution, dust, numerical modeling, urban
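The transport process such a model resolves can be sketched in one dimension: dust emitted by a time-varying traffic source is carried by a weak background wind and spread by turbulent diffusion. This is a minimal advection-diffusion sketch, not the paper's 3D model; the wind speed, diffusivity, and source shape are illustrative assumptions.

```python
import numpy as np

# Minimal 1D advection-diffusion sketch of dust transport under light air.
nx, nt = 200, 500
dx, dt = 100.0, 10.0            # grid spacing (m) and time step (s)
u, K = 1.0, 50.0                # wind speed (m/s), eddy diffusivity (m^2/s)

c = np.zeros(nx)                # dust concentration (arbitrary units)
for n in range(nt):
    src = np.zeros(nx)
    # traffic emission at one cell, modulated over a diurnal cycle
    src[100] = 1.0 + 0.5 * np.sin(2.0 * np.pi * n * dt / 86400.0)
    adv = -u * (c - np.roll(c, 1)) / dx                            # upwind advection
    dif = K * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2   # diffusion
    c = c + dt * (adv + dif + src)
```

After a few hundred steps the plume is visibly skewed downwind of the source, the 1D counterpart of the accumulation zones the 3D model maps over the city.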
Procedia PDF Downloads 190
14723 Insect Inducible Methanol Production in Plants for Insect Resistance
Authors: Gourav Jain, Sameer Dixit, Surjeet Kumar Arya, Praveen C. Verma
Abstract:
The plant cell wall plays a major role in the defence mechanism against biotic and abiotic stress, as it constitutes the physical barrier between the microenvironment and the internal components of the cell. It is a complex structure composed mostly of carbohydrates, among which cellulose and hemicelluloses are the most abundant, embedded in a matrix of pectins and proteins. Multiple enzymes have been reported to play a vital role in cell wall modification; one of them is pectin methylesterase (PME), which catalyses the demethylesterification of the homogalacturonan component of pectin, releasing acidic pectin and methanol. As the emitted methanol is toxic to insect pests, we used the PME gene for enhanced methanol production. In the current study, we show that overexpression of the PME gene isolated from Withania somnifera under an insect-inducible promoter enhances methanol production when insects feed on the plant, which provides better insect resistance. We found that the transgenic tobacco caused 85-90% mortality in both chewing (Spodoptera litura larvae and Helicoverpa armigera) and sap-sucking (aphid, mealybug, and whitefly) pests. Methanol content and emission levels were also enhanced by 10-15 fold at different time points after induction (15 min, 30 min, 45 min, 60 min), as analysed by the Purpald/alcohol oxidase method.
Keywords: methanol, pectin methylesterase, inducible promoters, Purpald/alcohol oxidase
Procedia PDF Downloads 246
14722 Statistical Correlation between Logging-While-Drilling Measurements and Wireline Caliper Logs
Authors: Rima T. Alfaraj, Murtadha J. Al Tammar, Khaqan Khan, Khalid M. Alruwaili
Abstract:
Objective/Scope: Caliper logging data provides critical information about wellbore shape and deformations, such as stress-induced borehole breakouts or washouts. Multiarm mechanical caliper logs are often run on wireline, which can be time-consuming, costly, and/or challenging in certain formations. To minimize rig time and improve operational safety, it is valuable to develop analytical solutions that can estimate caliper logs from available logging-while-drilling (LWD) data without the need to run wireline caliper logs. As a first step, the objective of this paper is to perform a statistical analysis on an extensive dataset to identify the physical parameters that should be considered in developing such analytical solutions. Methods, Procedures, Process: Caliper logs and LWD data from eleven wells, totaling more than 80,000 data points, were obtained and imported into data analytics software. Several parameters were selected to test their relationship with the measured maximum and minimum caliper logs, including gamma ray, porosity, shear and compressional sonic velocities, bulk density, and azimuthal density. The data from the eleven wells were first visualized and cleaned. Several analyses were then performed, including the computation of Pearson's correlation coefficients between the selected parameters and the caliper logs. Results, Observations, Conclusions: The statistical analysis showed that some parameters correlate well with the caliper log data. For instance, the bulk density and azimuthal directional densities showed Pearson's correlation coefficients in the range of 0.39 to 0.57, relatively high compared to the correlation coefficients of the caliper data with other parameters.
Other parameters, such as porosity, exhibited extremely low correlation coefficients with the caliper data. Various crossplots and visualizations of the data are also presented to provide further insights from the field data. Novel/Additive Information: This study offers a unique look into the relative importance of, and correlation between, different LWD measurements and wireline caliper logs via an extensive dataset. The results pave the way for a more informed development of new analytical solutions for estimating the size and shape of the wellbore in real time while drilling, using LWD data.
Keywords: LWD measurements, caliper log, correlations, analysis
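The correlation-screening step can be sketched on synthetic stand-in data; the field dataset is not reproduced here, and the distributions, units, and the density-caliper relationship below are invented for illustration. Caliper readings are made to depend loosely on bulk density while porosity is left unrelated, mirroring the contrast the study reports.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000  # synthetic stand-in for the ~80,000 field data points

bulk_density = rng.normal(2.4, 0.15, n)      # g/cc, hypothetical LWD log
porosity = rng.uniform(0.05, 0.30, n)        # v/v, independent by construction
# Washouts enlarge the hole in lower-density (weaker) rock:
caliper_max = 8.5 - 1.2 * (bulk_density - 2.4) + rng.normal(0.0, 0.15, n)

def pearson(x, y):
    """Pearson's correlation coefficient between two samples."""
    return np.corrcoef(x, y)[0, 1]

r_density = pearson(bulk_density, caliper_max)   # clearly non-zero
r_porosity = pearson(porosity, caliper_max)      # near zero
```

Ranking parameters by the magnitude of their coefficient is exactly the screening that selects inputs for the later analytical models.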
Procedia PDF Downloads 125
14721 Conflation Methodology Applied to Flood Recovery
Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong
Abstract:
Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being from nuisance flooding, and its long-term effects on communities, are not typically included in risk assessments. An approach was developed that combines the probability of recovering from a severe flooding event with the probability of community performance during a nuisance event. A consolidated model, the conflation flooding recovery (CFR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time both from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The CFR model assesses the variation contributed by each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful when the input distributions have dissimilar variances. The conflation is defined as the single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The CFR model is more accurate than averaging individual observations before calculating the mean and variance, or than averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly the average of the input distributions' means, without the additional information provided by each individual distribution's variance.
When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. Combining severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources: severe flooding events and nuisance flooding events.
Keywords: community resilience, conflation, flood risk, nuisance flooding
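The conflation operation itself, the normalized product of the input probability density functions, can be sketched numerically. The two exponential recovery-time distributions below are illustrative assumptions, not the paper's fitted inputs.

```python
import numpy as np

# Numerical conflation: the conflated density is the normalized product of
# the input PDFs, evaluated on a common grid of recovery times.
t = np.linspace(0.0, 30.0, 3001)    # recovery time, days (illustrative)
dt = t[1] - t[0]

def expon_pdf(t, rate):
    return rate * np.exp(-rate * t)

severe = expon_pdf(t, 1.0 / 10.0)    # mean 10-day recovery (severe events)
nuisance = expon_pdf(t, 1.0 / 2.0)   # mean 2-day recovery (nuisance events)

conflated = severe * nuisance
conflated /= conflated.sum() * dt    # normalize the product to a proper PDF

# For exponentials the product is again exponential, with the rates summed:
mean_conflated = (t * conflated).sum() * dt   # close to 1 / (0.1 + 0.5)
```

The sharper input (the low-variance nuisance distribution) dominates the conflated result, which is the variance-weighting behavior described above.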
Procedia PDF Downloads 106
14720 Importance of an E-Learning Program in Stress Field for Postgraduate Courses of Doctors
Authors: Ramona-Niculina Jurcau, Ioana-Marieta Jurcau
Abstract:
Background: Preparation in the stress field (SF) is increasingly a concern for doctors of different specialties. Aims: The aim was to evaluate the importance of an e-learning program in SF for doctors' postgraduate courses. Methods: Doctors (n = 40 male, 40 female) of different specialties and ages (31-71 years), who attended postgraduate courses in SF, voluntarily responded to a questionnaire that covered the following themes: the importance of SF courses for the specialty practiced by each respondent (using a visual analogue scale, VAS); which SF themes would be suitable as e-learning (EL); the preferred form of assimilating SF information: classical lectures (CL), EL, or a combination of the two (CL+EL); which aspects of the SF course are facilitated by the EL model versus CL; and, in their view, the first four advantages and the first four disadvantages of EL compared to CL for SF. Results: For most respondents, the SF courses were important to their specialty (average VAS of 4). The SF themes suggested for EL were: stress mechanisms; stress factor models for different medical specialties; stress assessment methods; and primary stress management methods for different specialties. The preferred form of information assimilation was CL+EL. Aspects of the course facilitated by the EL model versus CL were: active reading of theoretical information, with fast access to keyword details; watching documentaries in one's preferred order; and practicing through tests with rapid checking of results. The first four EL advantages mentioned for SF were: autonomy in managing the time allocated to study; saving the time needed to travel to the venue; the ability to read information in various contexts of time and space; and communication with colleagues at times convenient for everyone.
The first three EL disadvantages mentioned for SF were: reduced capability for group discussion and mobilization for active participation; dependence of EL access on an electrical source and/or the Internet; and a possible slowdown in learning through the temptation to postpone implementation. Answers were partially influenced by the respondent's age and gender. Conclusions: 1) Postgraduate courses in SF are of interest to doctors of different specialties. 2) The majority of participating doctors preferred EL, but combined with CL (CL+EL). 3) Preference for EL was shown mainly by young or middle-aged male doctors. 4) It is important to find the right formula for EL so that it is efficient, interesting, useful, and agreeable.
Keywords: stress field, doctors' postgraduate courses, classical lectures, e-learning lecture
Procedia PDF Downloads 241
14719 Seismic Performance of Slopes Subjected to Earthquake Mainshock Aftershock Sequences
Authors: Alisha Khanal, Gokhan Saygili
Abstract:
It is commonly observed that aftershocks follow a mainshock. Aftershocks continue over a period of time with decreasing frequency, and typically there is not sufficient time for repair and retrofit between a mainshock-aftershock sequence. Usually, aftershocks are smaller in magnitude; however, aftershock ground motion characteristics such as intensity and duration can exceed those of the mainshock due to changes in the earthquake mechanism and location with respect to the site. The seismic performance of slopes is typically evaluated from the sliding displacement predicted to occur along a critical sliding surface. Various empirical models predict sliding displacement as a function of seismic loading, ground motion, and site parameters, but these models do not include aftershocks. The seismic risks associated with post-mainshock ('damaged') slopes subjected to aftershocks are significant. This paper extends the empirical sliding displacement models for flexible slopes to earthquake mainshock-aftershock sequences (a multi-hazard approach). A dataset was developed using 144 pairs of as-recorded mainshock-aftershock sequences from the Pacific Earthquake Engineering Research Center (PEER) database. The results reveal that the combination of mainshock and aftershock increases the seismic demand on slopes relative to the mainshock alone; thus, seismic risks are underestimated if aftershocks are neglected.
Keywords: seismic slope stability, mainshock, aftershock, landslide, earthquake, flexible slopes
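The mechanism behind the increased demand can be illustrated with a classical Newmark rigid-block integration on a synthetic record; this is an assumption for illustration, not the paper's empirical models or flexible-slope treatment, and the ground-motion envelope, frequency, and yield acceleration are invented. The block slides whenever the ground acceleration exceeds the yield acceleration, so displacement accumulated during the mainshock is simply added to by the aftershock.

```python
import numpy as np

# Illustrative Newmark rigid-block sliding over a mainshock-aftershock record.
g = 9.81
dt = 0.01
t = np.arange(0.0, 40.0, dt)

# Synthetic sequence: mainshock envelope followed by a smaller aftershock
env = np.where(t < 15.0,
               np.exp(-((t - 7.5) / 3.0) ** 2),
               0.6 * np.exp(-((t - 25.0) / 2.0) ** 2))
a_ground = 0.4 * g * env * np.sin(2.0 * np.pi * 1.5 * t)

k_y = 0.1 * g   # assumed yield acceleration of the slope

def newmark_displacement(a, ky, dt):
    """Downslope sliding displacement of a rigid block (one-way sliding)."""
    v, d = 0.0, 0.0
    for ai in a:
        if v > 0.0 or ai > ky:
            v = max(0.0, v + (ai - ky) * dt)   # slide until velocity returns to 0
            d += v * dt
    return d

d_mainshock = newmark_displacement(a_ground[t < 15.0], k_y, dt)
d_sequence = newmark_displacement(a_ground, k_y, dt)
# d_sequence exceeds d_mainshock: the aftershock adds sliding displacement
```

Because the aftershock peak still exceeds the yield acceleration, the sequence displacement is strictly larger than the mainshock-only value, which is why neglecting aftershocks underestimates risk.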
Procedia PDF Downloads 147
14718 Analgesic Efficacy of Opiorphin and Its Analogue
Authors: Preet Singh, Kavitha Kongara, Dave Harding, Neil Ward, Paul Chambers
Abstract:
The objective of this study was to compare the analgesic efficacy of opiorphin and its analogue with that of a mu-receptor agonist, morphine. Opiorphin (Gln-Arg-Phe-Ser-Arg) belongs to the family of endogenous enkephalinase inhibitors found in human saliva. It inhibits two zinc metallo-ectopeptidases (neutral endopeptidase, NEP, and aminopeptidase N, APN), which are responsible for the inactivation of the endogenous opioids endorphins and enkephalins. Morphine and butorphanol exert their analgesic effects by mimicking the actions of endorphins and enkephalins. The opiorphin analogue was synthesized based on the structure-activity relationship of the amino acid sequence of opiorphin: its pharmacological profile was tested after replacing the serine at position 4 with proline. The hot plate and tail flick tests were used to demonstrate analgesic efficacy. There was a significant increase in the time to the tail flick response after an injection of opiorphin, similar to the effect of morphine. There was no increase in time in the hot plate test after an injection of opiorphin. The results suggest that opiorphin acts at the spinal level only, rather than at both the spinal and supraspinal levels. Further work is required to confirm our results. We did not find analgesic activity for the opiorphin analogue; thus, the serine at position 4 is also important for its pharmacological action. Further work is required to illustrate the role of serine at position 4 in opiorphin.
Keywords: analgesic peptides, endogenous opioids, morphine, opiorphin
Procedia PDF Downloads 326
14717 Resource Sharing Issues of Distributed Systems Influences on Healthcare Sector Concurrent Environment
Authors: Soo Hong Da, Ng Zheng Yao, Burra Venkata Durga Kumar
Abstract:
The healthcare sector is a business that consists of providing medical services, manufacturing medical equipment and drugs, and providing medical insurance to the public. Most of the data stored in a healthcare database relates to patient information, which must be accurate when accessed by authorized stakeholders. In distributed systems, one important issue is concurrency, which ensures that shared resources are synchronized and remain consistent through multiple read and write operations by multiple clients. The concurrency problems in the healthcare sector are who gets access, and how shared data is synchronized and kept consistent when two or more stakeholders attempt to access it simultaneously. In this paper, a framework suited to a distributed, concurrent healthcare environment is proposed. The framework defines four levels of database nodes: national center, regional center, referral center, and local center. Moreover, the synchronization between levels is not symmetrical; two synchronization techniques, complete and partial synchronization, are explained. Furthermore, for multiple clients accessing data at the same time, synchronization types are discussed with cases at different levels and priorities to ensure data remains synchronized throughout the processes.
Keywords: resources, healthcare, concurrency, synchronization, stakeholders, database
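A minimal sketch of the basic guarantee such concurrency control builds on, mutual exclusion on a shared patient record, so that simultaneous writes by stakeholders cannot be lost (the record class and its counter are hypothetical illustrations, not part of the proposed framework):

```python
import threading

class PatientRecord:
    """Shared record guarded by a lock so concurrent updates stay consistent."""
    def __init__(self):
        self._lock = threading.Lock()
        self.visit_count = 0

    def add_visit(self):
        # read-modify-write must be atomic, or concurrent writers lose updates
        with self._lock:
            current = self.visit_count
            self.visit_count = current + 1

record = PatientRecord()
threads = [threading.Thread(target=lambda: [record.add_visit() for _ in range(1000)])
           for _ in range(4)]
for th in threads:
    th.start()
for th in threads:
    th.join()
print(record.visit_count)  # all 4 x 1000 updates are preserved
```

Without the lock, two stakeholders could read the same value and overwrite each other's update, the "lost update" problem the abstract's synchronization scheme is meant to prevent.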
Procedia PDF Downloads 151
14716 Jan’s Life-History: Changing Faces of Managerial Masculinities and Consequences for Health
Authors: Susanne Gustafsson
Abstract:
Life-history research is an extraordinarily fruitful method for social analysis, and for gendered health analysis in particular. Its potential is illustrated through a case study drawn from a Swedish project. It reveals an old type of masculinity that faces difficulties when carrying out two sets of demands simultaneously, as a worker/manager and as a father/husband. The paper illuminates the historical transformation of masculinity and its consequences for health. We draw on the idea of the “changing faces of masculinity” to explore the dynamism and complexity of gendered health. An empirical case is used for its illustrative value. Jan, a middle-level manager and father employed in the energy sector in urban Sweden, is the subject of this paper. Jan’s story is one of 32 semi-structured interviews included in an extended study focusing on well-being at work. The results reveal a face of masculinity conceived of in middle-level management as tacitly linked to the neoliberal doctrine. Over a couple of decades, the idea of “flexibility” was turned into a valuable characteristic that everyone was supposed to strive for. This resulted in increased workloads. Quite a few employees, and managers in particular, find themselves working both day and night. This may explain why not having enough time to spend with children and family members is a recurring theme in the data. Can this way of doing be linked to masculinity and health? The first author’s research has revealed that the use of gender in health science is not sufficiently or critically questioned. This lack of critical questioning is a serious problem, especially since ways of doing gender affect health. We suggest that gender reproduction and gender transformation are interconnected, regardless of how they affect health. They are recognized as two sides of the same phenomenon, and minor movements in one direction or the other become crucial for understanding its relation to health.
At more or less the same time as Jan’s masculinity was reproduced in response to workplace practices, Jan’s family position was transformed, not totally but by a degree or two, and these degrees became significant for the family’s health and well-being. By moving back and forth between varied events in Jan’s biographical history and his sociohistorical life span, it becomes possible to show that in a time of gender transformations, power relations can be renegotiated, with consequences for health.
Keywords: changing faces of masculinity, gendered health, life-history research method, subverter
Procedia PDF Downloads 115
14715 Taguchi-Based Optimization of Surface Roughness and Dimensional Accuracy in Wire EDM Process with S7 Heat Treated Steel
Authors: Joseph C. Chen, Joshua Cox
Abstract:
This research focuses on the use of the Taguchi method to reduce surface roughness and improve the dimensional accuracy of parts machined by Wire Electrical Discharge Machining (EDM) with S7 heat treated steel. Due to its high impact toughness, the material is a candidate for a wide variety of tooling applications that require high precision in dimension and the desired surface roughness. This paper demonstrates that the Taguchi Parameter Design methodology is able to optimize both dimensional accuracy and surface roughness successfully by investigating seven wire-EDM controllable parameters: pulse on time (ON), pulse off time (OFF), servo voltage (SV), voltage (V), servo feed (SF), wire tension (WT), and wire speed (WS). The temperature of the water in the wire EDM process is investigated as the noise factor in this research. Experimental design and analysis based on an L18 Taguchi orthogonal array are conducted. This paper demonstrates that the Taguchi-based system enables the wire EDM process to produce (1) high-precision parts with an average dimension of 0.6601 inches against a desired dimension of 0.6600 inches, and (2) a surface roughness of 1.7322 microns, significantly improved from 2.8160 microns.
Keywords: Taguchi Parameter Design, surface roughness, Wire EDM, dimensional accuracy
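The Taguchi analysis behind an L18 design ranks parameter levels by signal-to-noise (S/N) ratios; a minimal sketch of the two S/N formulas relevant here, smaller-the-better for roughness and nominal-the-best for the target dimension (the replicate measurements below are made up for illustration, not the study's data):

```python
import math

def sn_smaller_is_better(y):
    """Taguchi S/N ratio for a smaller-the-better response (e.g. roughness)."""
    return -10 * math.log10(sum(v * v for v in y) / len(y))

def sn_nominal_is_best(y):
    """Taguchi S/N ratio for a nominal-the-best response (e.g. a target dimension)."""
    mean = sum(y) / len(y)
    var = sum((v - mean) ** 2 for v in y) / (len(y) - 1)
    return 10 * math.log10(mean * mean / var)

# hypothetical replicate measurements for one L18 run
roughness = [1.74, 1.71, 1.75]          # microns: higher S/N = smoother
dimension = [0.6601, 0.6600, 0.6602]    # inches, target 0.6600: higher S/N = less scatter

print(round(sn_smaller_is_better(roughness), 2))
print(round(sn_nominal_is_best(dimension), 2))
```

In a full Taguchi study these ratios are averaged per factor level across the L18 runs, and the level with the highest mean S/N is chosen for each of the seven parameters.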
Procedia PDF Downloads 374
14714 Analyzing How Working From Home Can Lead to Higher Job Satisfaction for Employees Who Have Care Responsibilities Using Structural Equation Modeling
Authors: Christian Louis Kühner, Florian Pfeffel, Valentin Nickolai
Abstract:
Taking care of children, dependents, or pets can be a difficult and time-consuming task. Especially for part- and full-time employees, it can feel exhausting and overwhelming to meet these obligations alongside working a job. Thus, working mostly at home and not having to drive to the company can save valuable time and stress. This study aims to show the influence that the working model has on the job satisfaction of employees with care responsibilities in comparison to employees who do not have such obligations. Using structural equation modeling (SEM), the three work models, “work from home”, “working remotely”, and a hybrid model, have been analyzed based on 13 constructs influencing job satisfaction. These 13 factors have been further summarized into three groups: “classic influencing factors”, “influencing factors changed by remote working”, and “new remote working influencing factors”. Based on the influencing factors on job satisfaction, an online survey was conducted with n = 684 employees from the service sector. Here, Cronbach’s alpha of the individual constructs was shown to be suitable. Furthermore, the construct validity of the constructs was confirmed by face validity, content validity, convergent validity (AVE > 0.5; CR > 0.7), and discriminant validity. In addition, confirmatory factor analysis (CFA) confirmed the model fit for the investigated sample (CMIN/DF: 2.567; CFI: 0.927; RMSEA: 0.048). The SEM analysis showed that the most significant influencing factor on job satisfaction is “identification with the work” with β = 0.540, followed by “Appreciation” (β = 0.151), “Compensation” (β = 0.124), “Work-Life-Balance” (β = 0.116), and “Communication and Exchange of Information” (β = 0.105).
While the significance of each factor can vary depending on the work model, the SEM analysis shows that identification with the work is the most significant factor in all three work models and, in the case of the traditional office work model, the only significant influencing factor. The study shows that among employees with care responsibilities, the higher the proportion of working from home compared to working from the office, the more satisfied the employees are with their job. Since work models that meet the requirements of comprehensive care led to higher job satisfaction among employees with such obligations, adapting as a company to these private obligations can be crucial to sustained success. Conversely, satisfaction with the office-based working model is higher for workers without caregiving responsibilities.
Keywords: care responsibilities, home office, job satisfaction, structural equation modeling
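The convergent-validity thresholds cited above (AVE > 0.5; CR > 0.7) can be computed directly from standardized factor loadings; a minimal sketch with hypothetical loadings for a single construct (the values are illustrative, not taken from the study's data):

```python
def ave_and_cr(loadings):
    """Average variance extracted (AVE) and composite reliability (CR)
    from standardized factor loadings of one construct."""
    sum_sq = sum(l * l for l in loadings)
    ave = sum_sq / len(loadings)                 # mean of squared loadings
    sum_l = sum(loadings)
    error_var = sum(1 - l * l for l in loadings) # residual variance per indicator
    cr = sum_l ** 2 / (sum_l ** 2 + error_var)
    return ave, cr

# hypothetical standardized loadings for one construct's four indicators
loadings = [0.82, 0.78, 0.75, 0.80]
ave, cr = ave_and_cr(loadings)
print(ave > 0.5 and cr > 0.7)  # meets the convergent-validity thresholds cited above
```

In a CFA workflow these two statistics are reported per construct, alongside discriminant-validity checks such as comparing each construct's AVE with its squared inter-construct correlations.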
Procedia PDF Downloads 86
14713 Understanding How to Increase Restorativeness of Interiors: A Qualitative Exploratory Study on Attention Restoration Theory in Relation to Interior Design
Authors: Hande Burcu Deniz
Abstract:
People in the U.S. spend a considerable portion of their time indoors. This makes it crucial to provide environments that support people's well-being. Restorative environments aim to help people recover the cognitive resources spent through intensive use of directed attention. Spending time in nature and taking a nap are two of the best ways to restore these resources; however, they are not possible most of the time. The problem is that many studies have revealed how nature and spending time in natural contexts can boost restoration, but fewer studies have been conducted to understand how cognitive resources can be restored in interior settings. This study aims to explore the answer to this question: which qualities of interiors increase the restorativeness of an interior setting, and how do they mediate it? To do this, a phenomenological qualitative study was conducted. The study was interested in the definition of attention restoration and the experience of the phenomenon. As themes emerged, they were analyzed and matched with the Attention Restoration Theory components (being away, extent, fascination, compatibility) to examine how interior design elements mediate the restorativeness of an interior. The data were gathered from semi-structured interviews with international residents of Minnesota. The interviewees represent young professionals who work in Minnesota and often experience mental fatigue. Also, they have fewer emotional connections with places in Minnesota, which enabled the data to be based on the physical qualities of a space rather than on emotional connections. In the interviews, participants were asked where they prefer to be when they experience mental fatigue. Next, they were asked to describe, with reasons, the physical qualities of the places they prefer to be. Four themes were derived from the analysis of the interviews; they are listed below in order of frequency.
The first and most common theme was “connection to outside”. The analysis showed that people need to be either physically or visually connected to recover from mental fatigue. A direct connection to nature was reported as preferable, whereas urban settings were the secondary preference, along with interiors. The second theme that emerged from the analysis was “the presence of artwork”, which was experienced differently by the interviewees. The third theme was “amenities”. The interviews pointed out that people prefer to have the amenities that support their desired activity during recovery from mental fatigue. The last theme was “aesthetics”. Interviewees stated that they prefer places that are pleasing to their eyes; additionally, they could not get rid of the feeling of being worn out in places that are not well designed. When we matched the themes with the four ART components (being away, extent, fascination, compatibility), some of the interior qualities overlapped, since they were experienced differently by the interviewees. In conclusion, this study showed that interior settings have restorative potential and that they are multidimensional in their experience.
Keywords: attention restoration, fatigue, interior design, qualitative study, restorative environments
Procedia PDF Downloads 265
14712 Investigation of an Alkanethiol Modified Au Electrode as Sensor for the Antioxidant Activity of Plant Compounds
Authors: Dana A. Thal, Heike Kahlert, Fritz Scholz
Abstract:
Thiol molecules are known to easily form self-assembled monolayers (SAMs) on Au surfaces. Depending on the thiol’s structure, surface modification via SAMs can be used for electrode sensor development. In the presented work, 1-decanethiol-coated polycrystalline Au electrodes were applied to indirectly assess the radical scavenging potential of plant compounds and extracts. Different plant compounds with reported antioxidant properties, as well as an extract from the plant Gynostemma pentaphyllum, were tested for their effectiveness in preventing SAM degradation on the sensor electrodes by photolytically generated radicals in aqueous media. The SAM degradation was monitored over time by differential pulse voltammetry (DPV) measurements. The results were compared to established antioxidant assays. The obtained data showed an exposure-time- and concentration-dependent degradation process of the SAM at the electrode surfaces. The tested substances differed in their capacity to prevent SAM degradation, and the calculated radical scavenging activities of the tested plant compounds differed between assays. The presented method offers a simple system for evaluating radical scavenging and, considering the importance of the test system in antioxidant activity evaluation, might serve as a bridging tool between in-vivo and in-vitro antioxidant assays in order to obtain more biologically relevant results in antioxidant research.
Keywords: alkanethiol SAM, plant antioxidant, polycrystalline Au, radical scavenger
Procedia PDF Downloads 300
14711 The Effect of Isokinetic Fatigue of Ankle, Knee, and Hip Muscles on the Dynamic Postural Stability Index
Authors: Masoumeh Shojaei, Natalie Gedayloo, Amir Sarshin
Abstract:
The purpose of the present study was to investigate the effect of isokinetic fatigue of the muscles around the ankle, knee, and hip on indicators of dynamic postural stability. Fifteen female university students (age 19.7 ± 0.6 years, weight 54.6 ± 9.4 kg, height 163.9 ± 5.6 cm) participated in a within-subjects design over 5 different days. In the first session, the postural stability indices (time to stabilization after jump-landing) were assessed without fatigue using a force plate; in each subsequent session, one of the lower-limb muscle groups (the muscles around the ankles, knees, or hips) was randomly exhausted with a Biodex isokinetic dynamometer, and the indices were assessed immediately after the fatigue of each muscle group. The method involved landing on a force plate from a dynamic state and transitioning balance into a static state. Results of ANOVA with repeated measures indicated no significant difference between the time to stabilization (TTS) before and after isokinetic fatigue of the muscles around the ankle, knee, and hip in the medial-lateral direction (p > 0.05), but in the anterior-posterior (AP) direction the difference was statistically significant (p < 0.05). Least Significant Difference (LSD) post hoc test results also showed a significant difference in TTS for the knee and hip muscles before and after isokinetic fatigue in the AP direction. On the other hand, the knee and hip muscle groups were affected by isokinetic fatigue only in the AP direction (p < 0.05).
Keywords: dynamic balance, fatigue, lower limb muscles, postural control
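Time to stabilization is commonly computed from the force-plate signal as the moment after which it settles within a band around its baseline; a minimal sketch on a synthetic ground-reaction-force trace (the signal shape, body weight, and the ±5% band are illustrative assumptions, not the study's protocol):

```python
import numpy as np

def time_to_stabilization(force, dt, baseline, band):
    """First time after which the signal stays within +/- band of baseline."""
    inside = np.abs(force - baseline) <= band
    outside_idx = np.where(~inside)[0]   # samples still outside the band
    if len(outside_idx) == 0:
        return 0.0
    # TTS = time of the first sample after the last out-of-band excursion
    return (outside_idx[-1] + 1) * dt

# synthetic vertical ground-reaction force after landing: a decaying oscillation
dt = 0.005
t = np.arange(0.0, 3.0, dt)
body_weight = 540.0  # N, hypothetical subject
force = body_weight + 300 * np.exp(-2.5 * t) * np.sin(2 * np.pi * 4 * t)

tts = time_to_stabilization(force, dt, baseline=body_weight, band=0.05 * body_weight)
print(round(tts, 2))  # seconds for the signal to settle within +/-5% of body weight
```

A fatigued landing would show larger, slower-decaying oscillations and hence a longer TTS, which is the contrast the repeated-measures ANOVA in the abstract tests.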
Procedia PDF Downloads 241
14710 Overview of Multi-Chip Alternatives for 2.5 and 3D Integrated Circuit Packagings
Authors: Ching-Feng Chen, Ching-Chih Tsai
Abstract:
With the size of the transistor gradually approaching the physical limit, the persistence of Moore’s Law is challenged by the development of high numerical aperture (high-NA) lithography equipment and by other issues such as short-channel effects. In the context of the ever-increasing technical requirements of portable devices and high-performance computing, relying on the continuation of the law to enhance chip density will no longer support the prospects of the electronics industry. Weighing a chip’s power consumption-performance-area-cost-cycle time to market (PPACC) is an updated benchmark for driving the evolution of advanced nanometer (nm) wafer nodes. The advent of two-and-a-half- and three-dimensional (2.5D and 3D) Very-Large-Scale Integration (VLSI) packaging based on Through-Silicon Via (TSV) technology has updated the traditional die assembly methods and provided a solution. This overview investigates up-to-date and cutting-edge packaging technologies for 2.5D and 3D integrated circuits (ICs) based on updated transistor structures and technology nodes. The authors conclude that multi-chip solutions for 2.5D and 3D IC packaging are feasible ways to prolong Moore’s Law.
Keywords: Moore’s law, high numerical aperture, power consumption-performance-area-cost-cycle time to market, 2.5 and 3D very-large-scale integration, packaging, through silicon via
Procedia PDF Downloads 117
14709 Identification of Microbial Community in an Anaerobic Reactor Treating Brewery Wastewater
Authors: Abimbola M. Enitan, John O. Odiyo, Feroz M. Swalaha
Abstract:
The study of microbial ecology and its function in anaerobic digestion processes is essential for controlling the biological processes, in order to understand the symbiotic relationships between the microorganisms involved in converting the complex organic matter in industrial wastewater into simple molecules. In this study, the diversity and quantity of the bacterial community in granular sludge taken from the different compartments of a full-scale upflow anaerobic sludge blanket (UASB) reactor treating brewery wastewater were investigated using polymerase chain reaction (PCR) and real-time quantitative PCR (qPCR). The phylogenetic analysis showed three major eubacterial phyla, belonging to Proteobacteria, Firmicutes, and Chloroflexi, in the full-scale UASB reactor, with different groups populating different compartments. The qPCR assay showed a high amount of eubacteria, with concentration increasing along the reactor’s compartments. This study extends our understanding of the diversity, topological distribution, and shifts in concentration of microbial communities in the different compartments of a full-scale UASB reactor treating brewery wastewater. The colonization of, and the trophic interactions among, these microbial populations in reducing and transforming complex organic matter within UASB reactors were established.
Keywords: bacteria, brewery wastewater, real-time quantitative PCR, UASB reactor
Procedia PDF Downloads 263
14708 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principals in Noisy Environments
Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic
Abstract:
Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs the transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in “weight space”, where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning, implemented by a sparse autoencoder, learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set, selected randomly from a single district. Each speaker has 10 sentences: two are used for training and 8 for testing. Atomic index probabilities are created for each training sentence and for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and those from the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB, and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm.
The accuracy is not affected by AWGN; the algorithm produces ~93% accuracy at 0 dB SNR.
Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder
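The greedy matching-pursuit step described above can be sketched as follows, with a random unit-norm dictionary standing in for the learned Gabor-like atoms (dictionary size, signal length, and the chosen atoms are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def matching_pursuit(signal, dictionary, n_iter):
    """Greedy matching pursuit over unit-norm atoms.

    Returns the selected atom indices and their amplitude weights, i.e.
    the sparse "weight space" representation described above.
    """
    residual = signal.copy()
    indices, weights = [], []
    for _ in range(n_iter):
        corr = dictionary @ residual          # correlation with each atom
        k = int(np.argmax(np.abs(corr)))      # best-matching atom
        w = float(corr[k])
        residual = residual - w * dictionary[k]
        indices.append(k)
        weights.append(w)
    return indices, weights

# random unit-norm dictionary (64 atoms of length 256)
n_atoms, n_samples = 64, 256
D = rng.standard_normal((n_atoms, n_samples))
D /= np.linalg.norm(D, axis=1, keepdims=True)

# build a test signal from three known atoms plus a little noise
signal = 2.0 * D[3] + 1.5 * D[17] + 1.0 * D[42] + 0.01 * rng.standard_normal(n_samples)

idx, w = matching_pursuit(signal, D, n_iter=3)
print(sorted(idx))  # the generating atoms should dominate the selection
```

The selected index set is the sparse feature the paper classifies on; here the indices plus weights also reconstruct the signal to within the noise floor.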
Procedia PDF Downloads 292
14707 The Impact of Distributed Epistemologies on Software Engineering
Authors: Thomas Smith
Abstract:
Many hackers worldwide would agree that, had it not been for linear-time theory, the refinement of Byzantine fault tolerance might never have occurred. After years of significant research into extreme programming, we validate the refinement of simulated annealing. Maw, our new framework for unstable theory, is the solution to all of these issues.Keywords: distributed, software engineering, DNS, DHCP
Procedia PDF Downloads 358
14706 Self-Tuning Dead-Beat PD Controller for Pitch Angle Control of a Bench-Top Helicopter
Authors: H. Mansor, S.B. Mohd-Noor, N. I. Othman, N. Tazali, R. I. Boby
Abstract:
This paper presents an improved robust proportional-derivative (PD) controller for a 3-Degree-of-Freedom (3-DOF) bench-top helicopter, using an adaptive methodology. A bench-top helicopter is a laboratory-scale helicopter used for experimental purposes, widely used in teaching laboratories and research. A PD controller has been developed for the 3-DOF bench-top helicopter by Quanser. Experiments showed that the transient response of the designed PD controller has a very large steady-state error, i.e., 50%, which is very serious. The objective of this research is to improve the performance of the existing PD pitch angle control on the bench-top helicopter by integrating the PD controller with an adaptive controller. A standard adaptive controller will usually produce zero steady-state error; however, the response time to reach the desired set point is large. Therefore, this paper proposes an adaptive deadbeat algorithm to overcome these limitations. An output response that is fast, robust, and updated online is expected. Performance comparisons have been performed between the proposed self-tuning deadbeat PD controller and the standard PD controller. The efficiency of the self-tuning deadbeat controller has been proven by the test results in terms of faster settling time, zero steady-state error, and the capability of the controller to be updated online.
Keywords: adaptive control, deadbeat control, bench-top helicopter, self-tuning control
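The steady-state error of a plain PD loop under a constant disturbance, the limitation the adaptive deadbeat scheme is meant to remove, can be reproduced on a simplified pitch model; a minimal sketch (the inertia, gains, and disturbance torque are hypothetical, not the Quanser plant's parameters):

```python
def simulate_pd(kp, kd, disturbance, t_end=20.0, dt=0.001, setpoint=1.0):
    """Euler simulation of J*theta'' = u + d under a PD law u = kp*e + kd*e'.

    Returns the remaining tracking error after t_end seconds.
    """
    J = 0.5                      # pitch-axis inertia (hypothetical)
    theta, omega = 0.0, 0.0
    prev_err = setpoint - theta
    for _ in range(int(t_end / dt)):
        err = setpoint - theta
        u = kp * err + kd * (err - prev_err) / dt
        prev_err = err
        omega += (u + disturbance) / J * dt
        theta += omega * dt
    return setpoint - theta

# a constant torque imbalance leaves plain PD with a nonzero offset:
# at equilibrium kp*e + d = 0, so e = -d/kp
e_ss = simulate_pd(kp=2.0, kd=1.5, disturbance=-1.0)
print(round(e_ss, 3))  # approx. 0.5, i.e. 50% of the unit setpoint
```

This is the kind of large steady-state error the abstract reports for the stock PD loop; an integral action, or the online self-tuning proposed in the paper, is what drives it to zero.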
Procedia PDF Downloads 327
14705 Evaluation of the Cities Specific Characteristics in the Formation of the Safavid Period Mints
Authors: Mahmood Seyyed, Akram Salehi Heykoei, Hamidreza Safakish Kashani
Abstract:
Among the resources remaining from the past, coins are considered authentic documents and are among the most important documentary sources. Coins were struck in a place called a mint. The number and position of the mints in each period reflect the degree of economic power, political security, and commercial growth, and their standing always fluctuated with changing political and economic conditions. Given that trade grew more during the Safavid period than in previous ones, the mint also gained greater importance. It seems, on the one hand, that the economic growth of the Safavid period is directly linked to the number and locations of the mints at that time and, on the other hand, that mints were established in certain places because of the specific characteristics of those cities and regions. The increase in the number of mints in the north of the country due to the growth of the silk trade, and in the west and northwest due to political and commercial relations with the Ottoman Empire, as well as characteristics such as the existence of mines and location on the Silk Road and communication routes, are all results of this investigation. Accordingly, this article examines the characteristics that gave a city priority for hosting a mint. Considering that in the various historical periods the mints were based in the most important cities in political and social terms, this article examines the specific characteristics of cities in the formation of the mints of the Safavid period.
Keywords: documentary sources, coins, mint, city, Safavid
Procedia PDF Downloads 269
14704 A Feasibility Study of Producing Biofuels from Textile Sludge by Torrefaction Technology
Authors: Hua-Shan Tai, Yu-Ting Zeng
Abstract:
In modern industrial society, enormous amounts of sludge are constantly produced by various industries; currently, most of this sludge is treated by landfill and incineration. However, neither treatment is ideal, because of the limited land available for landfill and the secondary pollution caused by incineration. Consequently, treating industrial sludge appropriately has become an urgent environmental protection issue. In order to address this massive sludge problem, this study uses textile sludge, the major source of waste sludge in Taiwan, as the raw material for torrefaction treatments. To investigate the feasibility of producing biofuels from textile sludge by torrefaction, experiments were conducted at temperatures of 150, 200, 250, 300, and 350°C, with heating rates of 15, 20, 25, and 30°C/min, and with residence times of 30 and 60 minutes. The results revealed mass yields after torrefaction of approximately 54.9 to 93.4%. The energy densification ratios were approximately in the range of 0.84 to 1.10, and the energy yields approximately in the range of 45.9 to 98.3%. The volumetric densities were approximately in the range of 0.78 to 1.14, and the volumetric energy densities approximately in the range of 0.65 to 1.18. In summary, the optimum energy yield (98.3%) was reached with a terminal temperature of 150°C, a heating rate of 20°C/min, and a residence time of 30 minutes; the corresponding mass yield, energy densification ratio, and volumetric energy density were 92.2%, 1.07, and 1.15, respectively. These results indicate that the solid products after torrefaction are easy to preserve, which not only enhances the quality of the product but also serves the purpose of developing the material into fuel.
Keywords: biofuel, biomass energy, textile sludge, torrefaction
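The energy yield figures above follow from the standard torrefaction relation, energy yield = mass yield × energy densification ratio; a minimal check against the reported optimum run (the closeness to 98.3% suggests this is the relation used, though the abstract does not state its formulas):

```python
def energy_yield(mass_yield_pct, energy_densification_ratio):
    """Energy yield (%) = mass yield (%) x energy densification ratio."""
    return mass_yield_pct * energy_densification_ratio

# figures reported for the optimum run (150 C, 20 C/min, 30 min)
my, edr = 92.2, 1.07
print(round(energy_yield(my, edr), 1))  # close to the reported 98.3%
```

The small gap from the reported value is consistent with the yield having been computed from unrounded heating values rather than from the rounded ratio quoted in the abstract.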
Procedia PDF Downloads 325