Search results for: modal decomposition
63 A Case Study of Wildlife Crime in Bangladesh
Authors: M. Golam Rabbi
Abstract:
The theme of wildlife crime is unique in Bangladesh. Before 2010, wildlife crime was not designated as a crime, unlike other offenses. The Forest Department and other enforcement agencies were not in full swing to uncover organized crime at that time and recorded few cases, lumped together with forest crime. However, after the establishment of the Wildlife Crime Control Unit in 2012, a total of 374 offenses had been detected, with 566 offenders, and 37,039 wildlife and trophies had been seized up to November 2016. Most offenses appear to be committed outside the forests, where the presence of forest staff is minimal. The total detection percentage of offenses is not known, but offenders are not identified in 60% of detected cases (UDOR). Only 20% of cases have been decided by the courts even after eight years; the conviction rate of the total disposals is 70.65%. Six months' imprisonment and a BDT 5,000 fine appears to be the modal penalty. The monetary value of wildlife crime in the country is approximately $0.72M per year, and the maximum value is counted for reptiles, around $0.45M, especially for high-level trafficking of geckos and turtles. The most common wildlife seizures are birds (mynas, munias, parakeets, lorikeets, water birds, etc.), which have domestic demand as pets. Other wildlife such as turtles, lizards and small mammals are also on the list. Venison and migratory waterbirds are often seized, for which there is large demand for consumption at an aristocratic level. Due to the porous border and weak enforcement in the border region, poachers use this route for trafficking geckos, turtles and tortoises, snakes, venom, tigers and body parts, spotted deerskin, pangolins, etc. These have very high demand in East Asian countries for so-called medicinal purposes. The recent survey also demonstrates new routes for illegal trade and trafficking: for instance, poached tiger and deer from the Sundarbans, the largest mangrove tract on the planet, move to Thailand through the Bay of Bengal, while shark fins and ray fish move through Chittagong seaport and directly by sea routes to Myanmar and Thailand. A good number of offense records also demonstrate the transit route from India to South and South East Asian countries. Star tortoises and Hamilton's turtles are smuggled in from India, mostly seized at the Benapole border of Jessore and Hazrat Shahjalal International Airport of Dhaka, in very large numbers for onward transmission to East Asian countries. Most wildlife trade routes lead to China, Thailand, Malaysia, and Myanmar. Most surprisingly, African ivory was recently seized in Bangladesh, meant to be trafficked to South-East Asia. The Forest Department is working to fight wildlife poaching, illegal trade and trafficking in collaboration with other law enforcement agencies. The department needs a clear mandate and to build technical capabilities for identifying, seizing and holding specimens. The department also needs to step out of the forests and must develop the capacity to surveil and patrol all sensitive locations across the country.
Keywords: Bangladesh forest department, Sundarban, tiger, wildlife crime, wildlife trafficking
Procedia PDF Downloads 307
62 Validating Quantitative Stormwater Simulations in Edmonton Using MIKE URBAN
Authors: Mohamed Gaafar, Evan Davies
Abstract:
Many municipalities within Canada and abroad use chloramination to disinfect drinking water so as to avert the production of the disinfection by-products (DBPs) that result from conventional chlorination processes and their consequential public health risks. However, the long-lasting monochloramine disinfectant (NH2Cl) can pose a significant risk to the environment, as it can be introduced into stormwater sewers from different water uses and thus into freshwater sources. Little research has been undertaken to monitor and characterize the decay of NH2Cl and to study the parameters affecting its decomposition in stormwater networks. Therefore, the current study was intended to investigate this decay, starting by building a stormwater model and validating its hydraulic and hydrologic computations, and then modelling water quality in the storm sewers and examining the effects of different parameters on chloramine decay. The work presented here is only the first stage of this study. The 30th Avenue basin in Southern Edmonton was chosen as a case study because the well-developed basin has various land-use types, including commercial, industrial, residential, parks and recreational. The City of Edmonton has already built a MIKE URBAN stormwater model for modelling floods. Nevertheless, this model was built to the trunk level, which means that only the main drainage features were represented. Additionally, this model was not calibrated and was known to consistently compute pipe flows higher than the observed values, which is not suitable for studying water quality. Thus, the first goal was to complete and update the modelling of all stormwater network components. Then, available GIS data were used to calculate different catchment properties such as slope, length and imperviousness. In order to calibrate and validate this model, data from two temporary pipe flow monitoring stations, collected during the last summer, were used along with records from two other permanent stations available for eight consecutive summer seasons. The effect of various hydrological parameters on model results was investigated. It was found that model results were affected by the ratio of impervious areas. The catchment length, although calculated, was also tested because it is an approximate representation of the catchment shape. Surface roughness coefficients were also calibrated. Consequently, computed flows at the two temporary locations had correlation coefficients of 0.846 and 0.815, where the lower value pertained to the larger attached catchment area. Other statistical measures, such as a peak error of 0.65%, a volume error of 5.6%, and maximum positive and negative differences of 2.17 and -1.63 respectively, were all found to be within acceptable ranges.
Keywords: stormwater, urban drainage, simulation, validation, MIKE URBAN
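A minimal sketch, assuming hypothetical observed and simulated flow series, of the kind of validation statistics reported for the MIKE URBAN model above (correlation coefficient, peak error and volume error); it is not the study's code, and the flow values are illustrative only.

```python
import numpy as np

# Hypothetical observed and simulated pipe flows at one monitoring station (m^3/s)
observed = np.array([0.12, 0.35, 0.80, 1.42, 1.10, 0.55, 0.20])
simulated = np.array([0.10, 0.33, 0.85, 1.41, 1.05, 0.58, 0.22])

r = np.corrcoef(observed, simulated)[0, 1]                               # Pearson correlation
peak_error = (simulated.max() - observed.max()) / observed.max() * 100   # % error in peak flow
volume_error = (simulated.sum() - observed.sum()) / observed.sum() * 100 # % error in total volume

print(f"r = {r:.3f}, peak error = {peak_error:.2f}%, volume error = {volume_error:.2f}%")
```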
Procedia PDF Downloads 298
61 Social Problems and Gender Wage Gap Faced by Working Women in Readymade Garment Sector of Pakistan
Authors: Narjis Kahtoon
Abstract:
The issue of wage discrimination on the basis of gender, together with the social problems faced by working women, has been a significant research problem for several decades. Whereas many studies have explored reasons for the persistence of inequality in the wages of males and females, none has successfully explained away the entire differential. Gender wage discrimination and the social problems of working women are global issues. Despite differences in the political, economic and social make-up of countries all over the world, gender wage discrimination and social constraints are present. The aim of the research is to examine gender wage discrimination and social constraints from an international perspective and to determine whether any pattern exists between the cultural dimensions of a country and the male-female remuneration gap in the readymade garment sector of Pakistan. Population growth rate is a significant indicator used to explain the change in population and plays a crucial role in the economic development of a country. In Pakistan, the readymade garment sector consists of small, medium and large-sized firms, with an estimated 30 percent of the textile-garment workforce being female. The readymade garment industry is labor-intensive, relies on the skills of individual workers and provides the highest value addition in the textile sector. In the garment sector, female workers are concentrated in poorly paid, labor-intensive downstream production (readymade garments, linen, towels, etc.), while male workers dominate capital-intensive (ginning, spinning and weaving) processes. Gender wage discrimination and social constraints are a reality in the Pakistani labor market. This research allows us not only to properly detect the size of gender wage discrimination and social constraints but also to fully understand their consequences in the readymade garment sector of Pakistan. Furthermore, the research evaluates this measure for the three main clusters of Lahore, Karachi, and Faisalabad. These data contain complete details of male and female workers and supervisors in the readymade garment sector of Pakistan. These sources of information provide a unique opportunity to reanalyze previous findings in the literature. The regression analysis focuses on the standard 'Mincerian' earnings equation and estimates it separately by gender; the research also applies the cultural dimensions developed by Hofstede (2001) to profile a country's cultural status and compares those cultural dimensions to the wage inequalities. The readymade garment sector of Pakistan is an important sector since its products are in huge demand at home and abroad. This research will have a major influence on the measures undertaken to design public policy regarding wage discrimination and social constraints in the readymade garment sector of Pakistan.
Keywords: gender wage differentials, decomposition, garment, cultural
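A minimal sketch, on simulated data, of the standard 'Mincerian' earnings equation mentioned above, ln(wage) = b0 + b1*schooling + b2*experience + b3*experience^2, estimated separately by gender, followed by a simple Oaxaca-type split of the mean log-wage gap. All variable names, coefficients and samples are hypothetical, not the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

def simulate(intercept):
    # Hypothetical schooling (years) and experience (years) with a common wage structure
    school = rng.integers(0, 14, n)
    exper = rng.integers(0, 30, n)
    lnw = intercept + 0.06 * school + 0.03 * exper - 0.0005 * exper**2 + rng.normal(0, 0.3, n)
    X = sm.add_constant(np.column_stack([school, exper, exper**2]))
    return lnw, X

lnw_m, X_m = simulate(1.9)   # hypothetical male sample
lnw_f, X_f = simulate(1.6)   # hypothetical female sample
fit_m = sm.OLS(lnw_m, X_m).fit()
fit_f = sm.OLS(lnw_f, X_f).fit()

# Oaxaca-type split of the mean log-wage gap into an "explained" (endowments) part,
# evaluated at the male coefficients, and a residual "unexplained" part.
gap = lnw_m.mean() - lnw_f.mean()
explained = (X_m.mean(axis=0) - X_f.mean(axis=0)) @ fit_m.params
print(f"gap = {gap:.3f}, explained = {explained:.3f}, unexplained = {gap - explained:.3f}")
```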
Procedia PDF Downloads 209
60 Factors Affecting Air Surface Temperature Variations in the Philippines
Authors: John Christian Lequiron, Gerry Bagtasa, Olivia Cabrera, Leoncio Amadore, Tolentino Moya
Abstract:
Changes in air surface temperature play an important role in the Philippines' economy, industry, health, and food production. While the increasing global mean temperature in the recent several decades has prompted a number of climate change and variability studies in the Philippines, most studies still focus on rainfall and tropical cyclones. This study aims to investigate the trend and variability of observed air surface temperature and determine its major influencing factor/s in the Philippines. A non-parametric Mann-Kendall trend test was applied to the monthly mean temperatures of 17 synoptic stations covering 56 years from 1960 to 2015, and a mean change of 0.58 °C, or a positive trend of 0.0105 °C/year (p < 0.05), was found. In addition, wavelet decomposition was used to determine the frequencies of temperature variability, which show 12-month, 30-80-month and more-than-120-month cycles. This indicates strong annual variations, interannual variations that coincide with ENSO events, and interdecadal variations that are attributed to the PDO and CO2 concentrations. Air surface temperature was also correlated with smoothed sunspot number and galactic cosmic rays; the results show little to no effect. The ENSO teleconnection's influence on temperature, wind pattern, cloud cover, and outgoing longwave radiation during different ENSO phases had significant effects on regional temperature variability. In particular, an anomalous anticyclonic (cyclonic) flow east of the Philippines during the peak and decay phase of El Niño (La Niña) events leads to the advection of warm southeasterly (cold northeasterly) air mass over the country. Furthermore, an apparent increasing cloud cover trend is observed over the West Philippine Sea, including portions of the Philippines, and this is believed to lessen the effect of the increasing air surface temperature. However, relative humidity was also found to be increasing, especially over the central part of the country, which results in a high positive trend in the heat index, exacerbating human discomfort. Finally, an assessment of gridded temperature datasets was done to look at the viability of using three high-resolution datasets in future climate analysis and model calibration and verification. Several error statistics (i.e. Pearson correlation, bias, MAE, and RMSE) were used for this validation. Results show that the gridded temperature datasets generally follow the observed surface temperature change and anomalies. In addition, they are more representative of regional temperature than a substitute for station-observed air temperature.
Keywords: air surface temperature, carbon dioxide, ENSO, galactic cosmic rays, smoothed sunspot number
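A minimal sketch of the non-parametric Mann-Kendall trend test described above, together with a Sen slope estimate, applied to a synthetic annual temperature series into which a warming trend of roughly 0.01 °C/year is injected for illustration; the ties correction is omitted and the data are not the study's observations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years = np.arange(1960, 2016)
temp = 27.0 + 0.0105 * (years - 1960) + rng.normal(0, 0.2, years.size)  # synthetic series

n = temp.size
s = sum(np.sign(temp[j] - temp[i]) for i in range(n - 1) for j in range(i + 1, n))
var_s = n * (n - 1) * (2 * n + 5) / 18.0          # variance of S, assuming no ties
z = (s - np.sign(s)) / np.sqrt(var_s)             # continuity-corrected test statistic
p = 2 * (1 - stats.norm.cdf(abs(z)))              # two-sided p-value

# Sen's slope: median of all pairwise slopes
sen_slope = np.median([(temp[j] - temp[i]) / (years[j] - years[i])
                       for i in range(n - 1) for j in range(i + 1, n)])
print(f"S = {s:.0f}, Z = {z:.2f}, p = {p:.4f}, Sen slope = {sen_slope:.4f} degC/yr")
```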
Procedia PDF Downloads 323
59 A Hybrid LES-RANS Approach to Analyse Coupled Heat Transfer and Vortex Structures in Separated and Reattached Turbulent Flows
Authors: C. D. Ellis, H. Xia, X. Chen
Abstract:
Experimental and computational studies investigating heat transfer in separated flows have been of increasing importance over the last 60 years, as efforts are being made to understand and improve the efficiency of components such as combustors, turbines, heat exchangers, nuclear reactors and cooling channels. Understanding not only the time-mean heat transfer properties but also the unsteady properties is vital for the design of these components. As computational power increases, more sophisticated methods of modelling these flows become available. The hybrid LES-RANS approach has been applied to a blunt leading edge flat plate, utilising a structured grid at a moderate Reynolds number of 20300 based on the plate thickness. In the region close to the wall, the RANS method is implemented with two turbulence models: the one-equation Spalart-Allmaras model and Menter's two-equation SST k-ω model. The LES region occupies the flow away from the wall and is formulated without any explicit subgrid scale LES modelling. Hybridisation between the two methods is achieved by blending on the nearest wall distance. Validation of the flow was obtained by assessing the mean velocity profiles in comparison to similar studies. The vortex structures of the flow were identified by utilising the λ2 criterion to locate vortex cores. The qualitative structure of the flow was compared with experiments at similar Reynolds numbers. This identified the 2D roll-up of the shear layer, breaking down via the Kelvin-Helmholtz instability. Through this instability the flow progressed into hairpin-like structures, elongating as they advanced downstream. Proper Orthogonal Decomposition (POD) analysis has been performed on the full flow field and on the surface temperature of the plate. As expected, the breakdown of POD modes for the full field revealed a relatively slow decay compared to the surface temperature field. Both POD fields identified that the most energetic fluctuations occurred in the separated and recirculating region of the flow. Later modes of the surface temperature show these fluctuations dominating the time-mean region of maximum heat transfer and flow reattachment. In addition to the current research, work will be conducted on tracking the movement of the vortex cores and the location and magnitude of temperature hot spots on the plate. This information will support the POD and statistical analysis performed to further identify qualitative relationships between the vortex dynamics and the response of the surface heat transfer.
Keywords: heat transfer, hybrid LES-RANS, separated and reattached flow, vortex dynamics
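A minimal sketch of snapshot Proper Orthogonal Decomposition (POD) via the singular value decomposition, the kind of modal analysis applied above to the flow field and surface temperature; the snapshot matrix here is random and purely illustrative, not CFD data.

```python
import numpy as np

n_points, n_snapshots = 2000, 100
snapshots = np.random.default_rng(2).normal(size=(n_points, n_snapshots))  # placeholder data

mean_field = snapshots.mean(axis=1, keepdims=True)
fluctuations = snapshots - mean_field                 # remove the time-mean field

# Columns of U are spatial POD modes; singular values measure modal energy content
U, svals, Vt = np.linalg.svd(fluctuations, full_matrices=False)
energy = svals**2 / np.sum(svals**2)

print("energy fraction of first 5 modes:", np.round(energy[:5], 3))
print("cumulative energy of first 10 modes:", round(float(energy[:10].sum()), 3))
```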
Procedia PDF Downloads 231
58 Investigation Studies of WNbMoVTa and WNbMoVTaCr₀.₅Al Refractory High Entropy Alloys as Plasma-Facing Materials
Authors: Burçak Boztemur, Yue Xu, Laima Luo, M. Lütfi Öveçoğlu, Duygu Ağaoğulları
Abstract:
Tungsten (W) is chiefly used as a plasma-facing material. However, it has some problems, such as brittleness after plasma exposure. Refractory high-entropy alloys (RHEAs) offer a new opportunity to address this deficiency. Therefore, the neutron shielding behavior of the WNbMoVTa and WNbMoVTaCr₀.₅Al compositions was examined against He⁺ irradiation in this study. The mechanical and irradiation properties of the WNbMoVTa base composition were investigated by adding the Al and Cr elements. Mechanical alloying (MA) for 6 hours was applied to obtain RHEA powders. According to the X-ray diffraction (XRD) method, the body-centered cubic (BCC) phase and an NbTa phase, with a small amount of WC impurity originating from the vials and balls, were determined after 6 h of MA. The RHEA powders were then consolidated with the spark plasma sintering (SPS) method (1500 ºC, 30 MPa, and 10 min). After SPS, (Nb,Ta)C and W₂C₀.₈₅ phases were obtained through the decomposition of the WC and the stearic acid added during MA, based on the XRD results. The BCC phase was also obtained for both samples. While an Al₂O₃ phase of small intensity was seen for the WNbMoVTaCr₀.₅Al sample, a Ta₂VO₆ phase was determined for the base sample. These phases were observed as three different regions according to scanning electron microscopy (SEM). All elements were distributed homogeneously in the white region, as measured by an electron probe micro-analyzer (EPMA) coupled with a wavelength dispersive spectroscope (WDS). The grey region of the WNbMoVTa sample was rich in the Ta, V, and O elements, whereas the amount of Al and O elements was higher in the grey region of the WNbMoVTaCr₀.₅Al sample. High amounts of the Nb, Ta, and C elements were determined for both samples. The Archimedes' densities, measured in alcohol media, were close to the theoretical densities of the RHEAs. These values are important for the microhardness and irradiation resistance of the compositions. While the Vickers microhardness value of the WNbMoVTa sample was measured as ~11 GPa, this value increased to nearly 13 GPa for the WNbMoVTaCr₀.₅Al sample. These values were compatible with the wear behavior. The wear volume loss was decreased from 1.25×10⁻⁴ to 0.16×10⁻⁴ mm³ by the addition of the Al and Cr elements to WNbMoVTa. The He⁺ irradiation was conducted on the samples to observe surface damage. After irradiation, the XRD patterns were shifted to the left because of defects and dislocations. He⁺ ions were implanted under the surface, creating lattice expansion. The peak shift of the WNbMoVTaCr₀.₅Al sample was smaller than that of the WNbMoVTa base sample, indicating less damage. A small amount of fuzz was observed for the base sample. This structure was removed and transformed into a wavy structure with the addition of the Cr and Al elements. Also, deformation hardening occurred after irradiation. A lower amount of hardening was obtained for the WNbMoVTaCr₀.₅Al sample, based on the changing microhardness values. The surface deformation was decreased in the WNbMoVTaCr₀.₅Al sample.
Keywords: refractory high entropy alloy, microhardness, wear resistance, He⁺ irradiation
Procedia PDF Downloads 65
57 Optimization of Operational Water Quality Parameters in a Drinking Water Distribution System Using Response Surface Methodology
Authors: Sina Moradi, Christopher W. K. Chow, John Van Leeuwen, David Cook, Mary Drikas, Patrick Hayde, Rose Amal
Abstract:
Chloramine is commonly used as a disinfectant in drinking water distribution systems (DWDSs), particularly in Australia and the USA. Maintaining a chloramine residual throughout the DWDS is important in ensuring microbiologically safe water is supplied at the customer's tap. In order to simulate how chloramine behaves when it moves through the distribution system, a water quality network model (WQNM) can be applied. In this work, the WQNM was based on mono-chloramine decomposition reactions, which enabled prediction of the mono-chloramine residual at different locations through a DWDS in Australia, using the Bentley commercial hydraulic package (WaterGEMS). The accuracy of WQNM predictions is influenced by a number of water quality parameters. Optimization of these parameters, in order to obtain the closest results in comparison with actual measured data in a real DWDS, would result in both cost reduction and reduced consumption of valuable resources such as energy and materials. In this work, the optimum operating conditions of the water quality parameters (i.e. temperature, pH, and initial mono-chloramine concentration) to maximize the accuracy of mono-chloramine residual predictions for two water supply scenarios in an entire network were determined using response surface methodology (RSM). To obtain feasible and economical water quality parameters for the highest model predictability, Design Expert 8.0 software (Stat-Ease, Inc.) was applied to conduct the optimization of the three independent water quality parameters. High and low levels of the water quality parameters were necessarily considered as explicit constraints in order to avoid extrapolation. The independent variables were pH, temperature and initial mono-chloramine concentration. The lower and upper limits of each variable for the two water supply scenarios were defined, and the experimental levels for each variable were selected based on the actual conditions in the studied DWDS. It was found that at a pH of 7.75, a temperature of 34.16 ºC, and an initial mono-chloramine concentration of 3.89 mg/L during peak water supply patterns, the root mean square error (RMSE) of the WQNM for the whole network would be minimized to 0.189, while the optimum conditions for averaged water supply occurred at a pH of 7.71, a temperature of 18.12 ºC, and an initial mono-chloramine concentration of 4.60 mg/L. The proposed methodology to predict the mono-chloramine residual can have great potential for water treatment plant operators in accurately estimating the mono-chloramine residual through a water distribution network. Additional studies from other water distribution systems are warranted to confirm the applicability of the proposed methodology for other water samples.
Keywords: chloramine decay, modelling, response surface methodology, water quality parameters
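A minimal sketch of the response-surface idea described above: fit a quadratic model of prediction error (RMSE) in pH, temperature and initial mono-chloramine concentration, then locate the minimizing setting within the variable bounds. The design points, responses and bounds below are hypothetical, not the study's data or its Design Expert workflow.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
# Hypothetical design points: columns are pH, temperature (degC), initial NH2Cl (mg/L)
X = rng.uniform([7.0, 15.0, 3.0], [8.5, 35.0, 5.0], size=(15, 3))
rmse = (0.2 + 0.05 * (X[:, 0] - 7.75)**2 + 0.0004 * (X[:, 1] - 34)**2
        + 0.03 * (X[:, 2] - 3.9)**2 + rng.normal(0, 0.005, 15))       # synthetic responses

def quad_features(x):
    x1, x2, x3 = x
    return np.array([1, x1, x2, x3, x1*x2, x1*x3, x2*x3, x1**2, x2**2, x3**2])

A = np.array([quad_features(row) for row in X])
coef, *_ = np.linalg.lstsq(A, rmse, rcond=None)        # least-squares quadratic surface fit

res = minimize(lambda x: quad_features(x) @ coef, x0=[7.7, 25.0, 4.0],
               bounds=[(7.0, 8.5), (15.0, 35.0), (3.0, 5.0)])
print("optimum (pH, T, NH2Cl):", np.round(res.x, 2), "predicted RMSE:", round(float(res.fun), 3))
```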
Procedia PDF Downloads 225
56 Generative Syntaxes: Macro-Heterophony and the Form of ‘Synchrony’
Authors: Luminiţa Duţică, Gheorghe Duţică
Abstract:
One of the most powerful language innovations in twentieth-century music was heterophony, a hypostasis of vertical syntax that entered the sphere of interest of many composers, such as George Enescu, Pierre Boulez, Mauricio Kagel, György Ligeti and others. Heterophonic syntax has a history of growth, meaning a succession of different concepts and writing techniques. The trajectory of settling this phenomenon does not necessarily follow the chronology: there are highly complex primary stages and advanced stages of returning to simple forms of writing. In folklore, the plurimelodic simultaneities are free or random and originate from the (unintentional) differences/'deviations' from the state of unison, through a variety of ornaments, melismas, imitations, elongations and abbreviations, all in a flexible rhythmic and non-periodic/immeasurable framework, proper to parlando-rubato rhythmics. Within the general framework of multivocal organization, heterophonic syntax in its elaborate (academic) version imposed itself relatively late compared with polyphony and homophony. Of course, the explanation is simple if we consider the causal relationship between the sound vocabulary elements – in this case, modalism – and the typologies of vertical organization appropriate for it. Therefore, completing the 'classic' pathway of writing typologies (monody – polyphony – homophony), heterophony – applied equally to the structures of modal, serial or synthesis vocabulary – necessarily claims its own macrotemporal form, in the sense of the analogies enshrined by the evolution of musical styles and languages: polyphony→fugue, homophony→sonata. Concerned with the prospect of edifying a new musical ontology, the composer Ştefan Niculescu experimented – along with the mathematical organization of heterophony according to his own original methods – with the possibility of extrapolating this phenomenon to the macrostructural plane, thus arriving at the unique form of 'synchrony'. Founded on the coincidentia oppositorum principle (involving the 'one-multiple' binomial), the sound architecture imagined by Ştefan Niculescu consists of one (temporal) model/algorithm of articulation of two sound states: 1. the monovocality state (principle of identity) and 2. the multivocality state (principle of difference). In this context, heterophony becomes an (auto)generative mechanism with macrotemporal amplitude, a strategy that the composer developed practically throughout his output (see the works Ison I, Ison II, Unisonos I, Unisonos II, Duplum, Triplum, Psalmus, and Héterophonies pour Montreux (Homages to Enescu and Bartók), etc.). For the present demonstration, we selected one of the most edifying works of Ştefan Niculescu – Symphony II, Opus dacicum – where the form of (heterophony-)synchrony acquires monumental-symphonic features, representing an emblematic case for the level of complexity achieved by this type of vertical syntax in twentieth-century music.
Keywords: heterophony, modalism, serialism, synchrony, syntax
Procedia PDF Downloads 345
55 An Evaluation of the Artificial Neural Network and Adaptive Neuro Fuzzy Inference System Predictive Models for the Remediation of Crude Oil-Contaminated Soil Using Vermicompost
Authors: Precious Ehiomogue, Ifechukwude Israel Ahuchaogu, Isiguzo Edwin Ahaneku
Abstract:
Vermicompost is the product of the decomposition process using various species of worms to create a mixture of decomposing vegetable or food waste, bedding materials, and vermicast. This process is called vermicomposting, while the rearing of worms for this purpose is called vermiculture. Several works have verified the adsorption of toxic metals using vermicompost, but its application for the retention of organic compounds is still scarce. This research brings to knowledge the effectiveness of earthworm waste (vermicompost) for the remediation of crude oil-contaminated soils. The remediation methods adopted in this study were two soil washing methods, namely the batch and column processes, which represent laboratory and in-situ remediation, respectively. Characterization of the vermicompost and the crude oil-contaminated soil was performed before and after the soil washing using Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), X-ray fluorescence (XRF), X-ray diffraction (XRD) and atomic absorption spectrometry (AAS). The optimization of washing parameters, using response surface methodology (RSM) based on the Box-Behnken design, was performed on the response from the laboratory experimental results. This study also investigated the application of machine learning models: the artificial neural network (ANN) and the adaptive neuro-fuzzy inference system (ANFIS), which were evaluated using the coefficient of determination (R²) and mean square error (MSE). The removal efficiency obtained from the Box-Behnken design experiment ranged from 29% to 98.9% for batch process remediation. Optimization of the experimental factors, carried out using numerical optimization techniques by applying the desirability function method of RSM, produced the highest removal efficiency of 98.9% at an adsorbent dosage of 34.53 grams, an adsorbate concentration of 69.11 g/ml, a contact time of 25.96 min, and a pH value of 7.71. The removal efficiency obtained from the multilevel general factorial design experiment ranged from 56% to 92% for column process remediation. The coefficient of determination (R²) for ANN was 0.9974 and 0.9852 for the batch and column processes, respectively, showing the agreement between experimental and predicted results. For the batch and column processes, respectively, the coefficient of determination (R²) for RSM was 0.9712 and 0.9614, which also demonstrates agreement between experimental and projected findings. For the batch and column processes, the ANFIS coefficient of determination was 0.7115 and 0.9978, respectively. It can be concluded that machine learning models can predict the removal of crude oil from polluted soil using vermicompost. Therefore, it is recommended to use machine learning models to predict the removal of crude oil from contaminated soil using vermicompost.
Keywords: ANFIS, ANN, crude-oil, contaminated soil, remediation and vermicompost
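A minimal sketch, on synthetic data, of the kind of ANN evaluation reported above: train a small feed-forward network on washing factors (dosage, concentration, contact time, pH) to predict removal efficiency and score it with R² and MSE. The factor ranges, target relationship and network size are assumptions for illustration, not the study's dataset or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(4)
# Hypothetical factors: adsorbent dosage (g), adsorbate concentration, contact time (min), pH
X = rng.uniform([5, 10, 5, 4], [40, 80, 30, 9], size=(200, 4))
y = 30 + 1.2 * X[:, 0] + 0.3 * X[:, 1] + 0.5 * X[:, 2] + 2 * X[:, 3] + rng.normal(0, 2, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("R2 =", round(r2_score(y_test, pred), 3), "MSE =", round(mean_squared_error(y_test, pred), 3))
```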
Procedia PDF Downloads 111
54 Exploitation Pattern of Atlantic Bonito in West African Waters: Case Study of the Bonito Stock in Senegalese Waters
Authors: Ousmane Sarr
Abstract:
The Senegalese coast has a high productivity of fishery resources due to the frequent, intense upwelling that occurs along it, caused by the maritime trade winds, which make its waters nutrient-rich. Fishing plays a primordial role in Senegal's socioeconomic plans and food security. However, a global diagnosis of the Senegalese maritime fishing sector has highlighted the challenges this sector encounters. Among these concerns, some significant stocks, a priority target for artisanal fishing, need further assessment. If no efforts are made in this direction, most stocks will be overexploited or even in decline. It is in this context that this research was initiated. This investigation aimed to apply a multi-model approach (LBB; the catch-only-based CMSY model and its most recent version, CMSY++; JABBA; and JABBA-Select) to assess the stock of Atlantic bonito, Sarda sarda (Bloch, 1793), in the Senegalese Exclusive Economic Zone (SEEZ). Available catch, effort, and size data for Atlantic bonito over 15 years (2004-2018) were used to calculate the nominal and standardized CPUE, the size-frequency distribution, and the lengths at retention (50% and 95% selectivity) of the species. These results were employed as input parameters for the stock assessment models mentioned above to define the stock status of this species in this region of the Atlantic Ocean. The LBB model indicated a healthy Atlantic bonito stock status, with B/BMSY values ranging from 1.3 to 1.6 and B/B0 values varying from 0.47 to 0.61 for the main scenarios performed (BON_AFG_CL, BON_GN_Length, and BON_PS_Length). The results estimated by LBB are consistent with those obtained by CMSY. The CMSY model results demonstrate that the SEEZ Atlantic bonito stock is in a sound condition in the final year of the main scenarios analyzed (BON, BON-bt, BON-GN-bt, and BON-PS-bt), with sustainable relative stock biomass (B2018/BMSY = 1.13 to 1.3) and fishing pressure levels (F2018/FMSY = 0.52 to 1.43). The B/BMSY and F/FMSY results for the JABBA model ranged from 2.01 to 2.14 and from 0.47 to 0.33, respectively. In contrast, the estimated B/BMSY and F/FMSY for JABBA-Select ranged from 1.91 to 1.92 and from 0.52 to 0.54. The Kobe plot results of the base case scenarios ranged from 75% to 89% probability in the green area, indicating sustainable fishing pressure and a healthy Atlantic bonito stock size capable of producing high yields close to the MSY. Based on the stock assessment results, this study highlighted scientific advice for temporary management measures. Based on the results of the length-based models, this study suggests an improvement of the selectivity parameters of longlines and purse seines and a temporary prohibition of the use of sleeping nets in the fishery for the Atlantic bonito stock in the SEEZ. Although these actions are temporary, they can be essential to reduce or avoid intense pressure on the Atlantic bonito stock in the SEEZ. However, it is necessary to establish harvest control rules to provide coherent and solid scientific information that leads to appropriate decision-making for rational and sustainable exploitation of Atlantic bonito in the SEEZ and the Eastern Atlantic Ocean.
Keywords: multi-model approach, stock assessment, atlantic bonito, SEEZ
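A minimal sketch of the surplus-production logic underlying CMSY/JABBA-type assessments referenced above: a Schaefer biomass dynamics model with assumed growth rate, carrying capacity and a hypothetical catch series, reporting the B/BMSY and F/FMSY reference ratios. It is not the CMSY or JABBA implementation, and all numbers are illustrative.

```python
import numpy as np

r, K = 0.8, 20000.0          # assumed intrinsic growth rate (1/yr) and carrying capacity (t)
catches = np.array([1500, 1800, 2100, 2400, 2200, 2000, 1900, 2300, 2500, 2400],
                   dtype=float)                      # hypothetical annual catches (t)

B = np.empty(catches.size + 1)
B[0] = 0.9 * K                                       # assumed initial depletion
for t, C in enumerate(catches):
    # Schaefer dynamics: surplus production minus catch, floored to avoid negative biomass
    B[t + 1] = max(B[t] + r * B[t] * (1 - B[t] / K) - C, 1e-3)

Bmsy, Fmsy = K / 2, r / 2                            # Schaefer reference points
F_last = catches[-1] / B[-2]                         # fishing mortality in the final year
print(f"B/Bmsy = {B[-1] / Bmsy:.2f}, F/Fmsy = {F_last / Fmsy:.2f}")
```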
Procedia PDF Downloads 62
53 Using the ISO 9705 Room Corner Test for Smoke Toxicity Quantification of Polyurethane
Authors: Gabrielle Peck, Ryan Hayes
Abstract:
Polyurethane (PU) foam is typically sold as acoustic foam that is often used as sound insulation in settings such as night clubs and bars. As a construction product, PU is tested by being glued to the walls and ceiling of the ISO 9705 room corner test room. However, when heat is applied to PU foam, it melts and burns as a pool fire because it is a thermoplastic. The current test layout is unable to accurately measure mass loss and does not allow the material to burn as a pool fire without it seeping out through the test room floor. The lack of mass loss measurement means gas yields pertaining to smoke toxicity analysis cannot be calculated, which makes data comparisons with any other material or test method difficult. Additionally, the heat release measurements are not representative, as a lot of the material seeps through the floor (when a tray to catch the melted material is not used). This research aimed to modify the ISO 9705 test to provide the ability to measure mass loss, allowing better calculation of gas yields and understanding of decomposition. It also aimed to accurately measure smoke toxicity in both the doorway and the duct and to enable dilution factors to be calculated. Finally, the study aimed to examine whether doubling the fuel loading would force under-ventilated flaming. The test layout was modified to be a combination of the SBI (single burning item) test set up inside of the ISO 9705 test room. Polyurethane was tested in two different ways with the aim of altering the ventilation condition of the tests. Test one was conducted using 1 x SBI test rig, aiming for well-ventilated flaming. Test two was conducted using 2 x SBI rigs (facing each other inside the test room, doubling the fuel loading), aiming for under-ventilated flaming. The two configurations were successful in achieving both well-ventilated and under-ventilated flaming, shown by the measured equivalence ratios (measured using a phi meter designed and created for these experiments). The findings show that doubling the fuel loading will successfully force under-ventilated flaming conditions to be achieved. This method can therefore be used when trying to replicate post-flashover conditions in future ISO 9705 room corner tests. The radiative heat generated by the two SBI rigs facing each other facilitated a much higher overall heat release, resulting in a more severe fire. The method successfully allowed accurate measurement of the smoke toxicity produced by the PU foam in terms of simple gases such as oxygen depletion, CO and CO2. Overall, the proposed test modifications improve the ability to measure the smoke toxicity of materials in different fire conditions on a large scale.
Keywords: flammability, ISO9705, large-scale testing, polyurethane, smoke toxicity
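A minimal sketch of the two quantities the modified layout makes accessible: a gas yield normalised by specimen mass loss, and the global equivalence ratio (phi) indicating ventilation condition. All numbers and the stoichiometric ratio are assumptions for illustration only, not measured values from these tests.

```python
# Gas yield from mass loss: kg of CO produced per kg of fuel burned
mass_loss_kg = 4.2            # specimen mass loss over the test (assumed)
co_produced_kg = 0.19         # mass of CO measured in the exhaust duct (assumed)
co_yield = co_produced_kg / mass_loss_kg

# Global equivalence ratio from fuel-to-air mass ratios; phi > 1 indicates under-ventilated flaming
fuel_air_actual = 0.11        # measured fuel-to-air mass ratio (assumed)
fuel_air_stoich = 0.105       # stoichiometric fuel-to-air ratio for the fuel (assumed)
phi = fuel_air_actual / fuel_air_stoich

print(f"CO yield = {co_yield:.3f} kg/kg, equivalence ratio = {phi:.2f}")
```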
Procedia PDF Downloads 76
52 Seismic Data Analysis of Intensity, Orientation and Distribution of Fractures in Basement Rocks for Reservoir Characterization
Authors: Mohit Kumar
Abstract:
Natural fractures are classified into two broad categories, joints and faults, on the basis of shear movement in the deposited strata. Natural fractures always have a strong structural relationship with extensional or non-extensional tectonics, and sometimes the result is seen in the form of micro-cracks. Geological evidence suggests that both large- and small-scale fractures help in analyzing seismic anisotropy, which essentially contributes to characterizing the petrophysical property behavior associated with directional fluid migration. We may ask why basement study is needed at all, as historically the basement has been treated as non-productive and geoscientists had no interest in exploring these basement rocks. Basement rock is subjected to high pressure and temperature and tends to be highly fractured because of the tectonic stresses applied to the formation, along with other geological factors such as depositional trend, internal stress of the rock body, rock rheology, pore fluid and capillary pressure. Sometimes carbonate rocks also play the role of basement, with an igneous body, e.g., basalt, deposited over the carbonate rocks, and fluid migrates from the carbonate to the igneous rock due to buoyancy forces and adequate permeability generated by fracturing. So, in order to analyze the complete petroleum system, FMC (Fluid Migration Characterization) through fractured media is necessary, including fracture intensity, orientation and distribution both in the basement rock and the country rock. Thus, a good understanding of fractures makes it possible to project the correct wellbore trajectory or path that passes through potential permeable zones generated by intensified P-T and tectonic stress conditions. This paper deals with the analysis of fracture properties such as intensity, orientation and distribution in basement rock. Large-scale fractures can be interpreted on seismic sections; however, small-scale fractures show ambiguity in interpretation because fractures in basement rock lie below the seismic wavelength and hence give erroneous identification results. Seismic attribute techniques also help to delineate seismic fractures and subtle changes in fracture zones, and these can be inferred from azimuthal anisotropy in velocity and amplitude and from spectral decomposition. Seismic azimuthal anisotropy derives fracture intensity and orientation from compressional-wave and converted-wave data and is based on the variation of amplitude or velocity with azimuth. Still, detailed analysis of fractured basement requires full isotropic and anisotropic analysis of the fracture matrix and the surrounding rock matrix in order to characterize the spatial variability of the basement fractures that support fluid migration from the basement to the overlying rock.
Keywords: basement rock, natural fracture, reservoir characterization, seismic attribute
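A minimal sketch of the amplitude-versus-azimuth idea referenced above: fit A(phi) = A0 + A1*cos(2*(phi - phi_fast)) to amplitudes binned by source-receiver azimuth, where A1 tracks fracture intensity and phi_fast the fracture strike. The amplitude samples are synthetic and the parameter values are assumptions, not results from real seismic data.

```python
import numpy as np
from scipy.optimize import curve_fit

def avaz(phi_deg, a0, a1, phi_fast_deg):
    # Two-theta cosine model of azimuthal amplitude variation
    return a0 + a1 * np.cos(2 * np.radians(phi_deg - phi_fast_deg))

azimuths = np.arange(0, 180, 15, dtype=float)                  # azimuth bins (degrees)
amps = avaz(azimuths, 1.0, 0.25, 40.0) \
       + np.random.default_rng(5).normal(0, 0.02, azimuths.size)  # synthetic amplitudes

params, _ = curve_fit(avaz, azimuths, amps, p0=[1.0, 0.2, 30.0])
a0, a1, phi_fast = params
print(f"background A0 = {a0:.2f}, anisotropy A1 = {a1:.2f}, fast direction ~ {phi_fast % 180:.0f} deg")
```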
Procedia PDF Downloads 197
51 The Impact of Glass Additives on the Functional and Microstructural Properties of Sand-Lime Bricks
Authors: Anna Stepien
Abstract:
The paper presents the results of research on modifications of sand-lime bricks, especially using glass additives (glass fiber and glass sand) as well as other additives (e.g., basalt and barite aggregate, lithium silicate and microsilica). The main goal of this paper is to answer the question 'How can glass additives be used in the sand-lime mass to get better bricks?' The article contains information on the modification of sand-lime bricks using glass fiber, glass sand, and microsilica (a different structure of silica). It also presents the results of the conducted tests, which focused on compressive strength, water absorption, bulk density, and microstructure. Scanning electron microscopy, EDS spectra, X-ray diffractometry and DTA analysis helped to define the microstructural changes of the modified products. The interpretation of the products' structure revealed the existence of diversified phases, i.e., C-S-H and tobermorite. The CaO-SiO2-H2O system is the object of intensive research due to its significance in the chemistry and technology of mineral binding materials. Because the blocks are autoclaved materials, the temperature of hydrothermal treatment of the products is around 200 °C, the pressure 1.6-1.8 MPa and the time up to 8 hours (i.e., 1 h heating + 6 h autoclaving + 1 h cooling). The microstructure of the products consists mostly of hydrated calcium silicates with different levels of structural arrangement. X-ray diffraction indicated that the type of sand used is an important factor in the manufacturing of sand-lime elements. Quartz sand of high hardness is also a substrate that hardly reacts with other possible modifiers, which may cause deterioration of certain physical and mechanical properties. The TG and DTA curves show the changes in the weight loss of the sand-lime brick specimens against time, as well as the endo- and exothermic reactions that took place. The endothermic effect with the maximum at T = 573 °C is related to the isomorphic transformation of quartz. This effect is not accompanied by a change in the specimen weight. The next endothermic effect, with the maximum at T = 730-760 °C, is related to the decomposition of the calcium carbonates. A bulk density of the brick of 1.73 kg/dm³, the presence of xonotlite in the microstructure and a significant weight loss during the DTA and TG tests (around 0.6% after 70 minutes) have been noticed. The silicate elements were assessed on the basis of their compressive properties. An orthogonal compositional plan of type 3^k (with k = 2), i.e., a full two-factor experiment, was applied in order to carry out the experiments in both the compressive strength test and the bulk density test. Some modifications (e.g., products with barite and basalt aggregate) have improved the compressive strength to around 41.3 MPa, and water absorption due to capillary rise has been limited to 12%. The next modification was adding glass fiber to the sand-lime mass, then glass sand. The results show that the compressive strength was higher than in the case of traditional bricks, while the modified bricks were lighter.
Keywords: bricks, fiber, glass, microstructure
Procedia PDF Downloads 347
50 Testing a Dose-Response Model of Intergenerational Transmission of Family Violence
Authors: Katherine Maurer
Abstract:
Background and purpose: Violence that occurs within families is a global social problem. Children who are victims of or witnesses to family violence are at risk of many negative effects, both proximally and distally. One of the most disconcerting long-term effects occurs when child victims become adult perpetrators: the intergenerational transmission of family violence (ITFV). Early identification of those children most at risk for ITFV is needed to inform interventions to prevent future family violence perpetration and victimization. Only about 25-30% of child family violence victims become perpetrators of adult family violence (either child abuse, partner abuse, or both). Prior research has primarily been conducted using dichotomous measures of exposure (yes; no) to predict ITFV, given the low incidence rate in community samples. It is often assumed that exposure to greater amounts of violence predicts greater risk of ITFV. However, no previous longitudinal study with a community sample has tested a dose-response model of exposure to physical child abuse and parental physical intimate partner violence (IPV) using count data of the frequency and severity of violence to predict adult ITFV. The current study used advanced statistical methods to test whether increased childhood exposure would predict greater risk of ITFV. Methods: The study utilized 3 panels of prospective data from a cohort of 15-year-olds (N = 338) from the Project on Human Development in Chicago Neighborhoods longitudinal study. The data comprised a stratified probability sample of seven ethnic/racial categories and three socio-economic status levels. Structural equation modeling was employed to test a hurdle regression model of dose-response to predict ITFV. A version of the Conflict Tactics Scale was used to measure physical violence victimization, witnessing parental IPV, and young adult IPV perpetration and victimization. Results: Consistent with previous findings, past-12-month incidence rates of the severity and frequency of interpersonal violence were highly skewed. While rates of parental and young adult IPV were about 40%, an unusually high rate of physical child abuse (57%) was reported. For the vast majority, the number of acts of violence, whether minor or severe, was in the 1-3 range in the past 12 months. Reported frequencies of more than 5 times in the past year were rare, with less than 10% reporting more than six acts of minor or severe physical violence. As expected, minor acts of violence were much more common than acts of severe violence. Overall, the regression analyses were not significant for the dose-response model of ITFV. Conclusions and implications: The results of the dose-response model were not significant due to a lack of power in the final sample (N = 338). Nonetheless, the value of the approach was confirmed for future research, given the bi-modal nature of the distributions, which suggests that in the context of both physical child abuse and physical IPV there are at least two classes when the frequency of acts is considered. Taking frequency into account in predictive models may help to better understand the relationship of exposure to ITFV outcomes. Further testing using hurdle regression models is suggested.
Keywords: intergenerational transmission of family violence, physical child abuse, intimate partner violence, structural equation modeling
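A minimal two-part (hurdle-style) sketch of the modelling strategy recommended above: a logistic model for whether any adult perpetration occurs, plus a count model fitted only to the positive counts. The data are simulated and the coefficients are arbitrary; this is not the study's structural equation model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 338
exposure = rng.poisson(2, n)                          # hypothetical childhood exposure count
X = sm.add_constant(exposure.astype(float))

# Simulated outcomes: whether any adult IPV perpetration occurs, and how many acts if so
any_ipv = rng.binomial(1, 1 / (1 + np.exp(-(-1.5 + 0.3 * exposure))))
counts = np.where(any_ipv == 1, 1 + rng.poisson(0.1 * exposure), 0)

hurdle = sm.Logit(any_ipv, X).fit(disp=0)             # hurdle part: P(any perpetration)
positive = counts > 0
count_part = sm.Poisson(counts[positive], X[positive]).fit(disp=0)  # frequency given any

print("logit coefficient on exposure:", round(float(hurdle.params[1]), 3))
print("count coefficient on exposure:", round(float(count_part.params[1]), 3))
```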
Procedia PDF Downloads 243
49 p-Type Multilayer MoS₂ Enabled by Plasma Doping for Ultraviolet Photodetectors Application
Authors: Xiao-Mei Zhang, Sian-Hong Tseng, Ming-Yen Lu
Abstract:
Two-dimensional (2D) transition metal dichalcogenides (TMDCs), such as MoS₂, have attracted considerable attention owing to the unique optical and electronic properties related to their ultrathin atomic layer structure. MoS₂ is becoming prevalent in post-silicon digital electronics and in highly efficient optoelectronics due to its extremely low thickness and its tunable band gap (Eg = 1-2 eV). For low-power, high-performance complementary logic applications, both p- and n-type MoS₂ FETs (PFETs and NFETs) must be developed. NFETs with an electron accumulation channel can be obtained using unintentionally doped n-type MoS₂. However, the fabrication of MoS₂ FETs with complementary p-type characteristics is challenging due to the significant difficulty of injecting holes into its inversion channel. Plasma treatments with different species (including CF₄, SF₆, O₂, and CHF₃) have also been found to achieve the desired property modifications of MoS₂. In this work, we demonstrated p-type multilayer MoS₂ enabled by selective-area doping using CHF₃ plasma treatment. Compared with single-layer MoS₂, multilayer MoS₂ can carry a higher drive current due to its lower bandgap and multiple conduction channels. Moreover, it has three times the density of states at its conduction band minimum. Large-area growth of MoS₂ films on a 300 nm thick SiO₂/Si substrate was carried out by thermal decomposition of ammonium tetrathiomolybdate, (NH₄)₂MoS₄, in a tube furnace. A two-step annealing process was conducted to synthesize the MoS₂ films. For the first step, the temperature was set to 280 °C for 30 min in an N₂-rich environment at 1.8 Torr. This was done to transform (NH₄)₂MoS₄ into MoS₃. To further reduce MoS₃ into MoS₂, the second annealing step was performed. For the second step, the temperature was set to 750 °C for 30 min in a reducing atmosphere consisting of 90% Ar and 10% H₂ at 1.8 Torr. The grown MoS₂ films were subjected to out-of-plane doping by CHF₃ plasma treatment using a dry-etching system (ULVAC original NLD-570). The radiofrequency power of this dry-etching system was set to 100 W and the pressure to 7.5 mTorr. The final thickness of the treated samples was obtained by etching for 30 s. Back-gated MoS₂ PFETs were presented with an on/off current ratio on the order of 10³ and a field-effect mobility of 65.2 cm²V⁻¹s⁻¹. The MoS₂ PFET photodetector exhibited ultraviolet (UV) photodetection capability with a rapid response time of 37 ms and showed modulation of the generated photocurrent by the back-gate voltage. This work suggests the potential application of mildly plasma-doped p-type multilayer MoS₂ in UV photodetectors for environmental monitoring, human health monitoring, and biological analysis.
Keywords: photodetection, p-type doping, multilayers, MoS₂
Procedia PDF Downloads 104
48 Unpacking the Spatial Outcomes of Public Transportation in a Developing Country Context: The Case of Johannesburg
Authors: Adedayo B. Adegbaju, Carel B. Schoeman, Ilse M. Schoeman
Abstract:
The unique urban contexts that emanated from the apartheid history of South Africa informed the transport landscape of the City of Johannesburg. Apartheid's divisive spatial planning and land use management policies promoted sprawl and separated workers from job opportunities. This was further exacerbated by poor funding of public transport and road designs that encouraged the use of private cars. However, the democratization of the country in 1994 and the hosting of the 2010 FIFA World Cup provided a new impetus to the city's public transport-oriented urban planning inputs. At the same time, the state's new approach to policy formulation, which entails the provision of public transport as one of the tools to end years of marginalization and inequality, soon began to be largely reflected in the planning decisions of other spheres of government. The Rea Vaya BRT and the Gautrain were respectively implemented by the municipal and provincial governments to demonstrate strong political will and commitment to the new policy direction. While the Gautrain was implemented to facilitate elite movement within Gauteng and to crowd investment and economic growth around station nodes, the BRT was provided for previously marginalized public transport users as a sustainable alternative to the dominant minibus taxi. The aim of this research is to evaluate the spatial impacts of the Gautrain and the Rea Vaya BRT on the City of Johannesburg and to inform future outcomes by determining the existing potential. Using the case study approach with a focus on BRT and fast rail in a metropolitan context, the triangulation research method, which combines various data collection methods, was used to determine the research outcomes. The use of interviews, questionnaires, field observation, and databases such as REX, Quantec, StatsSA, the GCRO observatory, the national and provincial household travel surveys, and the quality of life surveys provided the basis for data collection. The research concludes that the Gautrain has demonstrated that viable alternatives to the private car can be provided, given the satisfactory feedback from users; some of its station nodes (Sandton, Rosebank) have shown promise for transit-oriented development, one of the project's key objectives. The other stations have been unable to stimulate growth for reasons such as the non-implementation of their urban design frameworks and the lack of public sector investment required to attract private investors. The Rea Vaya BRT continues to be expanded in spite of both its inability to induce modal change and its low ridership figures. The research identifies factors such as the low peak-to-base ratio, pricing, and the city's disjointed urban fabric as some of the reasons for its below-average performance. Drawing from these highlights and limitations, the study recommends that public transport provision should be institutionally integrated across and within spheres of government. Similarly, harmonization of the funding structure and a better understanding of users' needs and travel patterns, underpinned by continuity of policy direction and objectives, will equally promote optimal outcomes.
Keywords: bus rapid transit, Gautrain, Rea Vaya, sustainable transport, spatial and transport planning, transit oriented development
Procedia PDF Downloads 114
47 Upgrade of Value Chains and the Effect on Resilience of Russia’s Coal Industry and Receiving Regions on the Path of Energy Transition
Authors: Sergey Nikitenko, Vladimir Klishin, Yury Malakhov, Elena Goosen
Abstract:
The transition to renewable energy sources (solar, wind, bioenergy, etc.) and the launch of alternative energy generation have weakened the role of coal as a source of energy. The Paris Agreement and the assumption of obligations by many nations to reduce CO₂ emissions in an orderly way by means of technological modernization and climate change adaptation have reduced coal demand yet more. This paper aims to assess the current resilience of the coal industry to stress and to define prospects for coal production optimization using high technologies, in line with the global challenges and requirements of the energy transition. Our research is based on the resilience concept adapted to the coal industry. It is proposed to divide the coal sector into segments depending on the prevailing value chains (VC). Four representative models of VC are identified in the coal sector. The most promising lines of upgrading VC in the coal industry include: • elongation of VC owing to the introduction of clean technologies for coal conversion and utilization; • creation of parallel VC by means of waste management; • branching of VC (conversion of a company's VC into a production network). The upgrade effectiveness is governed in many ways by the applicability of advanced coal processing technologies, the usability of waste, the expandability of production, entrance into non-rival markets and the localization of new segments of VC in receiving regions. It is also important that upgrading VC by forming agile high-tech inter-industry production networks within the framework of operating surface and underground mines can reduce the social, economic and ecological risks associated with the closure of coal mines. One such promising route of VC upgrade is the application of methanotrophic bacteria to produce protein to be used as feedstuff in fish, poultry and cattle breeding, or in the production of ferments, lipoids, sterols, antioxidants, pigments and polysaccharides. Closed mines can use recovered methane as a clean energy source. There exist methods of methane utilization from uncontrollable sources, including preliminary treatment and recovery of methane from the air-and-methane mixture, or the decomposition of methane into hydrogen and acetylene. The separated hydrogen is used in hydrogen fuel cells to generate power to feed the process of methane utilization and to supply external consumers. Despite the recent paradigm of carbon-free energy generation, it is possible to preserve the coal mining industry using a differentiated approach to upgrading value chains based on flexible technologies, with regard to the specificity of mining companies.
Keywords: resilience, resilience concept, resilience indicator, resilience in the Russian coal industry, value chains
Procedia PDF Downloads 107
46 Fast Detection of Local Fiber Shifts by X-Ray Scattering
Authors: Peter Modregger, Özgül Öztürk
Abstract:
Glass fabric reinforced thermoplastics (GFRTs) are composite materials which combine low weight and resilient mechanical properties, rendering them especially suitable for automobile construction. However, defects in the glass fabric as well as in the polymer matrix can occur during manufacturing, which may compromise component lifetime or even safety. One type of these defects is local fiber shifts, which can be difficult to detect. Recently, we experimentally demonstrated the reliable detection of local fiber shifts by X-ray scattering based on the edge-illumination (EI) principle. EI constitutes a novel X-ray imaging technique that utilizes two slit masks, one in front of the sample and one in front of the detector, in order to simultaneously provide absorption, phase, and scattering contrast. The principle of contrast formation is as follows. The incident X-ray beam is split into smaller beamlets by the sample mask. These are distorted by the interaction with the sample, and the distortions are scaled up by the detector mask, rendering them visible to a pixelated detector. In the experiment, the sample mask is laterally scanned, resulting in Gaussian-like intensity distributions in each pixel. The area under the curve represents absorption, the peak offset represents refraction, and the width of the curve represents the scattering occurring in the sample. Here, scattering is caused by the numerous glass fiber/polymer matrix interfaces. In our recent publication, we showed that the standard deviation of the absorption and scattering values over a selected field of view can be used to distinguish between intact samples and samples with local fiber shift defects. The quantification of defect detection performance was done using p-values (p = 0.002 for absorption and p = 0.009 for scattering) and contrast-to-noise ratios (CNR = 3.0 for absorption and CNR = 2.1 for scattering) between the two groups of samples. This was further improved for the scattering contrast to p = 0.0004 and CNR = 4.2 by utilizing a harmonic decomposition analysis of the images. Thus, we concluded that local fiber shifts can be reliably detected by the X-ray scattering contrasts provided by EI. However, a potential application in, for example, production monitoring requires fast data acquisition times. For the results above, the scanning of the sample mask was performed over 50 individual steps, which resulted in long total scan times. In this paper, we will demonstrate that reliable detection of local fiber shift defects is also possible using single images, which implies a speed-up of the total scan time by a factor of 50. Additional performance improvements will also be discussed, which opens the possibility of real-time acquisition. This contributes a vital step towards the translation of EI to industrial applications for a wide variety of materials consisting of numerous interfaces on the micrometer scale.
Keywords: defects in composites, X-ray scattering, local fiber shifts, X-ray edge Illumination
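A minimal sketch of how the three edge-illumination contrasts described above can be retrieved in a single pixel: fit a Gaussian to the sample-mask scan curve and compare it to the flat-field curve, then compute a contrast-to-noise ratio between two groups of per-sample statistics. The scan data, group values and the specific CNR definition are assumptions for illustration, not the authors' processing pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, area, mu, sigma):
    return area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

x = np.linspace(-10, 10, 50)                 # sample-mask positions (arbitrary units, assumed)
flat = gauss(x, 1.0, 0.0, 2.0)               # scan curve without the sample
sample = gauss(x, 0.7, 0.8, 2.6) + np.random.default_rng(7).normal(0, 0.002, x.size)

pf, _ = curve_fit(gauss, x, flat, p0=[1, 0, 2])
ps, _ = curve_fit(gauss, x, sample, p0=[1, 0, 2])
transmission = ps[0] / pf[0]                                 # area ratio -> absorption
refraction = ps[1] - pf[1]                                   # peak offset -> refraction
scattering = np.sqrt(max(ps[2] ** 2 - pf[2] ** 2, 0.0))      # added width -> scattering

# One common CNR definition between two groups of per-sample scattering statistics
intact = np.array([1.0, 1.1, 0.9, 1.05])
defective = np.array([1.6, 1.7, 1.5, 1.8])
cnr = abs(defective.mean() - intact.mean()) / np.sqrt(0.5 * (intact.var() + defective.var()))
print(f"T = {transmission:.2f}, refraction = {refraction:.2f}, "
      f"scatter width = {scattering:.2f}, CNR = {cnr:.1f}")
```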
Procedia PDF Downloads 63
45 Mathematical Modelling of Bacterial Growth in Products of Animal Origin in Storage and Transport: Effects of Temperature, Use of Bacteriocins and pH Level
Authors: Benjamin Castillo, Luis Pastenes, Fernando Cordova
Abstract:
Pathogen growth in animal source foods is a common problem in the food industry, causing monetary losses due to the spoiling of products or food intoxication outbreaks in the community. In this sense, the quality of the product is reflected by the population of deteriorating agents present in it, which are mainly bacteria. The factors likely associated with freshness in animal source foods are temperature and processing, storage, and transport times. However, the level of deterioration of products depends, in turn, on the characteristics of the bacterial population causing the decomposition or spoiling, such as pH level and toxins. Knowing the growth dynamics of the agents involved in product contamination allows monitoring for more efficient processing. This means better quality and reasonable costs, along with a better estimation of the time and temperature intervals necessary for transport and storage in order to preserve product quality. The objective of this project is to design a secondary model that allows measuring the impact of temperature on bacterial growth and the competition for pH adequacy and release of bacteriocins, in order to describe such phenomena and, thus, estimate food product half-life with the least possible risk of deterioration or spoiling. In order to achieve this objective, the authors propose an analysis of a three-dimensional system of ordinary differential equations which includes: logistic bacterial growth extended by the inhibitory action of bacteriocins, including the effect of the medium pH; change in the medium pH levels through an adaptation of the Luedeking-Piret kinetic model; and bacteriocin concentration modeled similarly to pH levels. These three dimensions are influenced by temperature at all times. Then, this differential system is expanded, taking into consideration the variable temperature and the concentration of pulsed bacteriocins, which represent characteristics inherent to the modeling, such as transport and storage, as well as the incorporation of substances that inhibit bacterial growth. The main results show that temperature changes in an early stage of transport increase the bacterial population significantly more than if the temperature had increased during the final stage. On the other hand, the incorporation of bacteriocins, as in other investigations, proved to be efficient in the short and medium term since, although the population of bacteria decreased, once the bacteriocins were depleted or degraded over time, the bacteria eventually returned to their regular growth rate. The efficacy of the bacteriocins at low temperatures decreased slightly, which is consistent with the fact that their natural degradation rate also decreased. In summary, the implementation of the mathematical model allowed the simulation of a set of possible bacteria present in animal-based products, along with their properties, in various transport and storage situations, which led us to state that, for inhibiting bacterial growth, the optimum is a combination of constant low temperatures and the initial use of bacteriocins.
Keywords: bacterial growth, bacteriocins, mathematical modelling, temperature
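The abstract does not give the explicit equations, so the sketch below is only a plausible minimal form of the described three-dimensional system (bacteria N, medium pH P, bacteriocin B) with a temperature-dependent growth rate and a Luedeking-Piret-type pH term; all parameter values and functional forms are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def model(t, y, T_of_t):
    """Illustrative 3-D system: N = bacteria (CFU), P = medium pH, B = bacteriocin concentration."""
    N, P, B = y
    T = T_of_t(t)
    mu = 0.5 * np.exp(-((T - 37.0) / 10.0) ** 2)         # temperature-dependent growth rate (assumed)
    growth = mu * N * (1 - N / 1e9)                       # logistic growth
    inhibition = 0.02 * B * N + 0.1 * abs(P - 6.5) * N    # bacteriocin and pH stress (assumed)
    dN = growth - inhibition
    dP = -(0.3 * growth + 1e-10 * N)                      # Luedeking-Piret-type growth + non-growth acidification
    dB = 0.2 * growth - 0.05 * B                          # production tied to growth, first-order decay
    return [dN, dP, dB]

# cold storage followed by a temperature abuse at 24 h (hypothetical transport scenario)
T_profile = lambda t: 4.0 + 10.0 * (t > 24)
sol = solve_ivp(model, (0, 72), [1e3, 6.8, 0.0], args=(T_profile,), dense_output=True)
```

Pulsed bacteriocin addition, as described in the abstract, could be represented by restarting the integration at the pulse times with an increased value of B.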
Procedia PDF Downloads 135
44 A Multi-Model Approach to Assess Atlantic Bonito (Sarda Sarda, Bloch 1793) in the Eastern Atlantic Ocean: A Case Study of the Senegalese Exclusive Economic Zone
Authors: Ousmane Sarr
Abstract:
The Senegalese coasts have high productivity of fishery resources due to the frequent, intense upwelling system that occurs along the coast, caused by the maritime trade winds making its waters nutrient-rich. Fishing plays a primordial role in Senegal's socioeconomic plans and food security. However, a global diagnosis of the Senegalese maritime fishing sector has highlighted the challenges this sector encounters. Among these concerns, some significant stocks, a priority target for artisanal fishing, need further assessment. If no efforts are made in this direction, most stocks will be overexploited or even in decline. It is in this context that this research was initiated. This investigation aimed to apply a multi-model approach (LBB; the catch-only-based CMSY model and its most recent version, CMSY++; JABBA; and JABBA-Select) to assess the stock of Atlantic bonito, Sarda sarda (Bloch, 1793), in the Senegalese Exclusive Economic Zone (SEEZ). Available catch, effort, and size data for Atlantic bonito over 15 years (2004-2018) were used to calculate the nominal and standardized CPUE, size-frequency distribution, and lengths at retention (50% and 95% selectivity) of the species. These results were employed as input parameters for the stock assessment models mentioned above to define the stock status of this species in this region of the Atlantic Ocean. The LBB model indicated a healthy Atlantic bonito stock status, with B/BMSY values ranging from 1.3 to 1.6 and B/B0 values varying from 0.47 to 0.61 for the main scenarios performed (BON_AFG_CL, BON_GN_Length, and BON_PS_Length). The results estimated by LBB are consistent with those obtained by CMSY. The CMSY model results demonstrate that the SEEZ Atlantic bonito stock is in a sound condition in the final year of the main scenarios analyzed (BON, BON-bt, BON-GN-bt, and BON-PS-bt), with sustainable relative stock biomass (B2018/BMSY = 1.13 to 1.3) and fishing pressure levels (F2018/FMSY = 0.52 to 1.43). The B/BMSY and F/FMSY results for the JABBA model ranged from 2.01 to 2.14 and from 0.47 to 0.33, respectively. In contrast, the estimated B/BMSY and F/FMSY for JABBA-Select ranged from 1.91 to 1.92 and from 0.52 to 0.54. The Kobe plot results of the base case scenarios showed 75% to 89% probability in the green area, indicating sustainable fishing pressure and a healthy Atlantic bonito stock size capable of producing high yields close to the MSY. Based on the stock assessment results, this study provides scientific advice on temporary management measures. Based on the results of the length-based models, this study suggests an improvement of the selectivity parameters of longlines and purse seines and a temporary prohibition of the use of sleeping nets in the fishery for the Atlantic bonito stock in the SEEZ. Although these actions are temporary, they can be essential to reduce or avoid intense pressure on the Atlantic bonito stock in the SEEZ. However, it is necessary to establish harvest control rules to provide coherent and solid scientific information that leads to appropriate decision-making for rational and sustainable exploitation of Atlantic bonito in the SEEZ and the Eastern Atlantic Ocean.
Keywords: multi-model approach, stock assessment, Atlantic bonito, healthy stock, sustainable, SEEZ, temporary management measures
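The relative quantities B/BMSY and F/FMSY reported above all derive from surplus-production reasoning. The sketch below is a minimal Schaefer surplus-production trajectory that shows how these ratios are formed; it is only an illustration of the core equation, not the CMSY or JABBA implementations (which add priors, Bayesian state-space estimation and process error), and the catch series and parameters are hypothetical.

```python
import numpy as np

def schaefer_trajectory(catches, r, K, b0_frac=1.0):
    """Schaefer dynamics: B_{t+1} = B_t + r*B_t*(1 - B_t/K) - C_t; returns B/BMSY and F/FMSY per year."""
    B = np.empty(len(catches) + 1)
    B[0] = b0_frac * K
    for t, C in enumerate(catches):
        B[t + 1] = max(B[t] + r * B[t] * (1 - B[t] / K) - C, 1e-6)
    Bmsy, Fmsy = K / 2.0, r / 2.0          # reference points of the Schaefer model
    F = catches / B[:-1]                   # exploitation rate as catch over start-of-year biomass
    return B[:-1] / Bmsy, F / Fmsy

# hypothetical catch series (tonnes) for 2004-2018, not the Senegalese data
catches = np.array([1200, 1300, 1250, 1400, 1500, 1450, 1600, 1550,
                    1500, 1480, 1520, 1490, 1510, 1470, 1450], dtype=float)
b_rel, f_rel = schaefer_trajectory(catches, r=0.8, K=20000.0)
```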
Procedia PDF Downloads 58
43 Differentiating Third Instar Larvae of Three Species of Flies (Family: Sarcophagidae) of Potential Forensic Importance in Jamaica, Using Morphological Characteristics
Authors: Rochelle Daley, Eric Garraway, Catherine Murphy
Abstract:
Crime is a major problem in Jamaica, as is the high number of unsolved violent crimes. The introduction of forensic entomology in criminal investigations has the potential to decrease the number of unsolved violent crimes through the estimation of PMI (post-mortem interval), or time since death. Though it has great potential, forensic entomology requires data from insects specific to a geographical location to be credibly applied in legal investigations. It is a relatively new area of study in the Caribbean, with multiple pioneer research opportunities. Of critical importance in forensic entomology is the ability to identify the species of interest. Larvae are commonly collected at crime scenes, and a means of rapid identification is crucial. Moreover, a low-cost method is critical in countries with limited budgets available for crime fighting. Sarcophagids are among the most important colonisers of a carcass; however, they are difficult to distinguish using morphology due to their similarities, and there is a lack of research on the larvae of this family. This research contributes to that effort, identifying the larvae of three species from the family Sarcophagidae: Peckia nicasia, Peckia chrysostoma and Blaesoxipha plinthopyga, important agents in flesh decomposition. Adults of Sarcophagidae are also difficult to differentiate, often requiring study of the genitalia; the use of larvae in species identification is important in such cases. Adult Sarcophagids were attracted using bottle traps baited with pig liver. These adults larviposited, and the larvae were collected and colonies (generations 2 and 3) reared at room temperature for morphological work (n=50). The posterior ends of the larvae from segments 9 or 10 were removed and mounted posterior end upwards to allow study using a light microscope at magnification ×200 (posterior cavity and intersegmental spine bands) and ×640 (anterior and posterior spiracles). The remaining sections of the larvae were cleared in 10% KOH and the cephalopharyngeal skeleton dissected out and measured at different points. The cephalopharyngeal skeletons show observable differences in the shapes and sizes of the mouth hooks as well as the length of the ventral cornua. The most notable difference between species is in the general shape of the anal segments and the shape of the posterior spiracles. Intersegmental spine bands of these larvae become less pigmented and visible as the larvae change instars. Spine bands, along with the anterior spiracle, are not recommended as features for species distinction. Larvae can potentially be used to distinguish Sarcophagids to the level of species, with observable differences in the anal segments and the cephalopharyngeal skeletons. However, this method of identification should be tested by comparing these morphological features with those of other Jamaican Sarcophagids to further support this conclusion.
Keywords: 3rd instar larval morphology, forensic entomology, Jamaica, Sarcophagidae
Procedia PDF Downloads 146
42 The Intensity of Root and Soil Respiration Is Significantly Determined by the Organic Matter and Moisture Content of the Soil
Authors: Zsolt Kotroczó, Katalin Juhos, Áron Béni, Gábor Várbíró, Tamás Kocsis, István Fekete
Abstract:
Soil organic matter plays an extremely important role in the functioning and regulation processes of ecosystems. It follows that the C content of organic matter in soil is one of the most important indicators of soil fertility. Part of the carbon stored in soils is returned to the atmosphere during soil respiration. Climate change and inappropriate land use can accelerate these processes. Our work aimed to determine how soil CO2 emissions change over ten years as a result of organic matter manipulation treatments. With the help of this, we were able to examine not only the effects of the different organic matter inputs but also the effects of the different microclimates that occur as a result of the treatments. We carried out our investigations in the area of the Síkfőkút DIRT (Detritus Input and Removal Treatment) Project. The research area is located in the southern, hilly landscape of the Bükk Mountains, northeast of Eger (Hungary). GPS coordinates of the project: 47°55′34′′ N and 20°26′29′′ E, altitude 320-340 m. The soils of the area are Luvisols. The 27-hectare protected forest area is now under the supervision of the Bükki National Park. The experimental plots in Síkfőkút were established in 2000. We established six litter manipulation treatments, each with three 7×7 m replicate plots, established under complete canopy cover. There were two detritus addition treatments (Double Wood, DW, and Double Litter, DL), three treatments in which detritus inputs were removed (No Litter, NL; No Roots, NR; No Inputs, NI), and the Controls (Co). After the establishment of the plots, during the drier periods, the NR and NI treatments showed the highest CO2 emissions. In the first few years, the effect of this process was evident because, due to the lack of living vegetation, the amount of evapotranspiration on the NR and NI plots was much lower, and transpiration practically ceased on these plots. In the wetter periods, the NL and NI treatments showed the lowest soil respiration values, which were significantly lower compared to the Co, DW, and DL treatments. Due to the lower organic matter content and the lack of surface litter cover, the water storage capacity of these soils was significantly limited; therefore, we measured the lowest average moisture content among the treatments after ten years. Soil respiration is significantly influenced by temperature values. Furthermore, the supply of nutrients to the soil microorganisms is also a determining factor, which in this case is influenced by the litter production dictated by the treatments. In the case of dry soils with a moisture content of less than 20% in the initial period, soil respiration in the litter removal treatments showed a strong correlation with soil moisture (r = 0.74). In very dry soils, a smaller increase in moisture does not cause a significant increase in soil respiration, while it does in a slightly higher moisture range. In wet soils, the temperature is the main regulating factor; above a certain moisture limit, water displaces soil air from the soil pores, which inhibits aerobic decomposition processes, and so heterotrophic soil respiration also declines.
Keywords: soil biology, organic matter, nutrition, DIRT, soil respiration
Procedia PDF Downloads 75
41 The Touch Sensation: Ageing and Gender Influences
Authors: A. Abdouni, C. Thieulin, M. Djaghloul, R. Vargiolu, H. Zahouani
Abstract:
A decline in the main sensory modalities (vision, hearing, taste, and smell) is well reported to occur with advancing age, and a similar change is expected to occur in touch sensation and perception. In this study, we focused on touch sensation, highlighting ageing and gender influences with in vivo systems. The touch process can be divided into two main phases: the first phase is the first contact between the finger and the object; during this contact, an adhesive force is created, which is the force needed to permit an initial movement of the finger. In the second phase, the finger's mechanical properties, together with its surface topography, play an important role in the obtained sensation. In order to understand the age and gender effects on the touch sense, we developed different ideas and systems for each phase. To better characterize the contact, the mechanical properties and the surface topography of the human finger, in vivo studies on the pulp of 40 subjects (20 of each gender) in four age groups of 26±3, 35±3, 45±2 and 58±6 years have been performed. To understand the first touch phase, a classical indentation system has been adapted to measure the finger contact properties. The normal force load, the indentation speed, the contact time, the penetration depth and the indenter geometry have been optimized. The penetration depth of a glass indenter is recorded as a function of the applied normal force. The main assessed parameter is the adhesive force F_ad. For the second phase, first, an innovative approach is proposed to characterize the dynamic mechanical properties of the finger. A contactless indentation test inspired by techniques used in ophthalmology has been used. The test principle is to blow an air blast at the finger and measure the resulting deformation with a linear laser. The advantage of this test is the real observation of the free return of the skin without any outside influence. The main obtained parameters are the wave propagation speed and the Young's modulus E. Second, negative silicone replicas of the subjects' fingerprints have been analyzed by laser probe defocusing. A laser diode transmits a light beam onto the surface to be measured, and the reflected signal is returned to a set of four photodiodes. This technology allows reconstructing three-dimensional images. In order to study the age and gender effects on the roughness properties, a multi-scale characterization of roughness has been realized by applying the continuous wavelet transform. After determining the decomposition of the surface, the method consists of quantifying the arithmetic mean of the surface topography at each scale (SMA). Significant differences in the main parameters are shown with ageing and gender. The comparison between the men's and women's groups reveals that the adhesive force is higher for women. The results for the mechanical properties show a Young's modulus that is higher for women and also increases with age. The roughness analysis shows a significant difference as a function of age and gender.
Keywords: ageing, finger, gender, touch
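The multi-scale roughness descriptor described above (continuous wavelet decomposition followed by the arithmetic mean at each scale) can be sketched as follows. The wavelet choice, the 1-D profile and the normalization are assumptions for illustration and do not reproduce the authors' exact settings.

```python
import numpy as np
import pywt

def sma_by_scale(profile, scales, wavelet="mexh"):
    """Continuous wavelet transform of a surface profile, then the arithmetic mean of the
    absolute coefficients at each scale (an SMA-like multi-scale roughness quantity)."""
    coeffs, _ = pywt.cwt(profile, scales, wavelet)
    return np.mean(np.abs(coeffs), axis=1)   # one SMA value per scale

# hypothetical 1-D fingerprint profile sampled from a replica surface (5 mm scan, metres)
x = np.linspace(0, 5e-3, 2000)
profile = 20e-6 * np.sin(2 * np.pi * x / 450e-6) + 2e-6 * np.random.randn(x.size)
sma = sma_by_scale(profile, scales=np.arange(1, 64))
```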
Procedia PDF Downloads 265
40 Development of DNDC Modelling Method for Evaluation of Carbon Dioxide Emission from Arable Soils in European Russia
Authors: Olga Sukhoveeva
Abstract:
Carbon dioxide (CO2) is the main component of the carbon biogeochemical cycle and one of the most important greenhouse gases (GHG). Agriculture, and arable soils in particular, is one of the largest sources of GHG emissions to the atmosphere, including CO2. Models may be used for the estimation of GHG emissions from agriculture if they can be adapted to different countries' conditions. The only model officially used at the national level in the United Kingdom and China for this purpose is DNDC (DeNitrification-DeComposition). In our research, the DNDC model is proposed for the estimation of GHG emissions from arable soils in Russia. The aim of our research was to create a method for using DNDC to evaluate CO2 emissions in Russia based on official statistical information. The target territory was the European part of Russia, where many field experiments are located. In the first step of the research, a database on climate, soil and cropping characteristics for the target region was created from governmental, statistical, and literature sources. The All-Russia Research Institute of Hydrometeorological Information – World Data Centre provides open daily data on average meteorological and climatic conditions. Spatial average values of maximum and minimum air temperature and precipitation must be calculated over the region. Spatial average values of soil characteristics (soil texture, bulk density, pH, soil organic carbon content) can be determined on the basis of the Union State Register of Soil Resources of Russia. Cropping technologies are published by agricultural research institutes and departments. We propose to define cropping system parameters (annual information about crop yields, amount and types of fertilizers and manure) on the basis of Federal State Statistics Service data. The carbon content of plant biomass may be calculated via formulas developed and published by the Ministry of Natural Resources and Environment of the Russian Federation. In the second step, CO2 emissions from soil in this region were calculated by DNDC. Modelling data were compared with empirical and literature data, and good results were obtained: modelled values were equivalent to the measured ones. It was revealed that the DNDC model may be used to evaluate and forecast CO2 emissions from arable soils in Russia based on official statistical information. It can also be used to create a program for decreasing GHG emissions from arable soils to the atmosphere. Financial Support: fundamental scientific research theme 0148-2014-0005 No 01201352499 ‘Solution of fundamental problems of analysis and forecast of Earth climatic system condition’ for 2014-2020; fundamental research program of the Presidium of RAS No 51 ‘Climate change: causes, risks, consequences, problems of adaptation and regulation’ for 2018-2020.
Keywords: arable soils, carbon dioxide emission, DNDC model, European Russia
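A small sketch of the spatial-averaging step described above (region-wide daily means of maximum/minimum temperature and precipitation for the DNDC climate input) might look as follows; the station data, column names and output file name are hypothetical.

```python
import pandas as pd

# hypothetical station-level daily records (real inputs would come from the daily station
# files of the All-Russia Research Institute of Hydrometeorological Information)
obs = pd.DataFrame({
    "date":       pd.to_datetime(["2015-06-01"] * 3 + ["2015-06-02"] * 3),
    "station_id": ["A", "B", "C", "A", "B", "C"],
    "t_max":      [21.4, 23.1, 20.8, 24.0, 25.2, 23.7],   # °C
    "t_min":      [11.2, 12.5, 10.9, 13.1, 14.0, 12.6],   # °C
    "precip":     [0.0, 1.2, 0.4, 3.5, 2.8, 4.1],          # mm
})

# region-wide daily means, as needed for the DNDC climate input file
regional = obs.groupby("date")[["t_max", "t_min", "precip"]].mean()
regional.to_csv("dndc_climate_input.csv")   # hypothetical output file name
```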
Procedia PDF Downloads 191
39 Uncertainty Quantification of Crack Widths and Crack Spacing in Reinforced Concrete
Authors: Marcel Meinhardt, Manfred Keuser, Thomas Braml
Abstract:
Cracking of reinforced concrete is a complex phenomenon induced by direct loads or restraints affecting reinforced concrete structures as soon as the tensile strength of the concrete is exceeded. Hence it is important to predict where cracks will be located and how they will propagate. The bond theory and the crack formulas in current design codes, for example, DIN EN 1992-1-1, are all based on the assumption that the reinforcement bars are embedded in homogeneous concrete, without taking into account the influence of transverse reinforcement and the real stress situation. However, it can often be observed that real structures such as walls, slabs or beams show a crack spacing that is oriented to the transverse reinforcement bars or to the stirrups. In most Finite Element Analysis studies, the smeared crack approach is used for crack prediction. The disadvantage of this model is that the typical strain localization of a crack at the element level cannot be seen. The crack propagation in concrete is a discontinuous process characterized by different factors, such as the initial random distribution of defects or the scatter of material properties. Such behavior presupposes the elaboration of adequate models and methods of simulation, because traditional mechanical approaches deal mainly with average material parameters. This paper is concerned with the modelling of the initiation and propagation of cracks in reinforced concrete structures, considering the influence of transverse reinforcement and the real stress distribution in reinforced concrete (R/C) beams/plates in bending. Therefore, a parameter study was carried out to investigate: (I) the influence of the transverse reinforcement on the stress distribution in concrete in bending and (II) the crack initiation as a function of the diameter and spacing of the transverse reinforcement. The numerical investigations of crack initiation and propagation were carried out on a 2D reinforced concrete structure subjected to quasi-static loading and given boundary conditions. To model the uncertainty in the tensile strength of concrete in the Finite Element Analysis, correlated normally and lognormally distributed random fields with different correlation lengths were generated. The paper also presents and discusses different methods to generate random fields, e.g. the Covariance Matrix Decomposition Method. For all computations, a plastic constitutive law with softening was used to model the crack initiation and the damage of the concrete in tension. It was found that the distributions of crack spacing and crack widths are highly dependent on the random field used. These distributions are validated against experimental studies on R/C panels, which were carried out at the Laboratory for Structural Engineering at the University of the German Armed Forces in Munich. Also, a recommendation for parameters of the random field for realistically modelling the uncertainty of the tensile strength is given. The aim of this research was to show a method in which the localization of strains and cracks, as well as the influence of transverse reinforcement on crack initiation and propagation, can be seen in Finite Element Analysis.
Keywords: crack initiation, crack modelling, crack propagation, cracks, numerical simulation, random fields, reinforced concrete, stochastic
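The Covariance Matrix Decomposition Method mentioned above can be sketched in a few lines: assemble the covariance matrix of the field values at the grid points, factor it (here with a Cholesky decomposition), and colour independent standard-normal samples with the factor. The exponential kernel, the parameter values and the lognormal mapping below are illustrative assumptions, not the values recommended in the paper.

```python
import numpy as np

def gaussian_random_field(x, mean, std, corr_length, lognormal=False, rng=None):
    """Generate one realization of a correlated 1-D random field on grid points x by
    decomposing the covariance matrix (Covariance Matrix Decomposition sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    d = np.abs(x[:, None] - x[None, :])                   # pairwise distances between grid points
    C = std ** 2 * np.exp(-d / corr_length)               # exponential covariance kernel (assumed)
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))    # decomposition of the covariance matrix
    field = mean + L @ rng.standard_normal(len(x))
    if lognormal:                                         # map to a lognormal field with matching moments
        s2 = np.log(1 + (std / mean) ** 2)
        field = np.exp(np.log(mean) - s2 / 2 + (field - mean) / std * np.sqrt(s2))
    return field

# e.g. tensile strength along a 4 m member: mean 2.9 MPa, CoV ~10 %, correlation length 0.4 m (assumed)
x = np.linspace(0.0, 4.0, 200)
f_ct = gaussian_random_field(x, mean=2.9, std=0.29, corr_length=0.4, lognormal=True)
```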
Procedia PDF Downloads 157
38 Soybean Oil Based Phase Change Material for Thermal Energy Storage
Authors: Emre Basturk, Memet Vezir Kahraman
Abstract:
In many developing countries, with rapid economic development, energy shortages and environmental issues have become a serious problem. Therefore, it has become a very critical issue to improve energy usage efficiency and also protect the environment. Thermal energy storage is an essential approach to matching thermal energy demand and supply. Thermal energy can be stored by heating, cooling or melting a material and then recovering the energy when the procedure is reversed. Thermal energy storage techniques are generally classified as latent heat or sensible heat storage. Among these methods, latent heat storage is the most effective method of collecting thermal energy. Latent heat thermal energy storage depends on the storage material absorbing or releasing heat as it undergoes a solid-to-liquid, solid-to-solid or liquid-to-gas phase change, or vice versa. Phase change materials (PCMs) are promising materials for latent heat storage applications due to their capacity to accumulate high latent heat storage per unit volume by phase change at an almost constant temperature. Phase change materials are utilized to absorb, collect and discharge thermal energy during the cycle of melting and freezing, converting from one phase to another. PCMs can generally be arranged into three classes: organic materials, salt hydrates and eutectics. Many kinds of organic and inorganic PCMs and their blends have been examined as latent heat storage materials. Organic PCMs are rather expensive, have average latent heat storage per unit volume and also have low density. Most organic PCMs are combustible in nature and also have a wide range of melting points. Organic PCMs can be categorized into two major categories: non-paraffinic and paraffin materials. Paraffin materials have been extensively used due to their high latent heat and suitable thermal characteristics, such as minimal supercooling, varying phase change temperature, low vapor pressure during melting, good chemical and thermal stability, and self-nucleating behavior. Ultraviolet (UV)-curing technology has been widely used because it has many advantages, such as low energy consumption, high speed, high chemical stability, room-temperature operation, low processing costs and environmental friendliness. For many years, PCMs have been used for heating and cooling industrial applications including textiles, refrigerators, construction, transportation packaging for temperature-sensitive products, a few solar energy based systems, and biomedical and electronic materials. In this study, UV-curable, fatty alcohol-containing, soybean oil-based phase change materials (PCMs) were obtained and characterized. The phase transition behaviors and thermal stability of the prepared UV-cured biobased PCMs were analyzed by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA). The heating process phase change enthalpy is measured between 30 and 68 J/g, and the freezing process phase change enthalpy is found between 18 and 70 J/g. The decomposition of UV-cured PCMs started at 260 °C and reached a maximum of 430 °C.
Keywords: fatty alcohol, phase change material, thermal energy storage, UV curing
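The enthalpy values quoted above (J/g) are obtained in DSC by integrating the heat-flow peak over time and dividing by the sample mass. A minimal sketch of that calculation is shown below; the Gaussian-shaped peak, constant baseline and sample mass are illustrative assumptions, not the measured data.

```python
import numpy as np

def dsc_enthalpy(time_s, heat_flow_mW, sample_mass_mg, baseline_mW=0.0):
    """Integrate a DSC heat-flow peak over time to obtain the phase-change enthalpy in J/g.
    A constant-baseline subtraction is used here; real DSC software fits a linear or sigmoidal baseline."""
    q_mJ = np.trapz(heat_flow_mW - baseline_mW, time_s)   # mW * s = mJ
    return q_mJ / sample_mass_mg                           # mJ/mg == J/g

# hypothetical melting endotherm of a UV-cured PCM sample (5 mg, ~60 s wide peak)
t = np.linspace(0, 120, 600)
hf = 2.5 * np.exp(-((t - 60) / 15) ** 2)                   # assumed Gaussian-shaped peak, mW
delta_h = dsc_enthalpy(t, hf, sample_mass_mg=5.0)          # ≈ 13 J/g for this synthetic peak
```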
Procedia PDF Downloads 382
37 Reconstruction of Signal in Plastic Scintillator of PET Using Tikhonov Regularization
Authors: L. Raczynski, P. Moskal, P. Kowalski, W. Wislicki, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, L. Kaplon, A. Kochanowski, G. Korcyl, J. Kowal, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, Z. Rudy, O. Rundel, P. Salabura, N.G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, M. Zielinski, N. Zon
Abstract:
The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The J-PET detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of a few nanoseconds is not feasible, novel front-end electronics allowing for sampling in the voltage domain at four thresholds were developed. To take full advantage of these fast signals, a novel scheme for recovery of the signal waveform, based on ideas from Tikhonov regularization (TR) and Compressive Sensing methods, is presented. The prior distribution of the sparse representation is evaluated based on the linear transformation of the training set of signal waveforms using the Principal Component Analysis (PCA) decomposition. Besides the advantage of including the additional information from training signals, a further benefit of the TR approach is that the problem of signal recovery has an optimal solution which can be determined explicitly. Moreover, from Bayesian theory, the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial to introduce and prove the formula for calculation of the signal recovery error. It has been proven that the average recovery error is approximately inversely proportional to the number of samples at voltage levels. The method is tested using signals registered by means of a single detection module of the J-PET detector built out of a 30 cm long BC-420 plastic scintillator strip. It is demonstrated that the experimental and theoretical functions describing the recovery errors in the J-PET scenario are largely consistent. The specificity and limitations of the signal recovery method in this application are discussed. It is shown that the PCA basis offers a high level of information compression and an accurate recovery with just eight samples, from four voltage levels, for each signal waveform. Moreover, it is demonstrated that using the recovered waveform of the signals, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction. The experiment shows that the spatial resolution evaluated based on information from four voltage levels, without recovery of the signal waveform, is equal to 1.05 cm. After applying the information from the four voltage levels to the recovery of the signal waveform, the spatial resolution is improved to 0.94 cm. Moreover, the obtained result is only slightly worse than the one evaluated using the original raw signal. The spatial resolution calculated under these conditions is equal to 0.93 cm. This is very important, since limiting the number of threshold levels in the electronic devices to four leads to a significant reduction of the overall cost of the scanner. The developed recovery scheme is general and may be incorporated in any other investigation where prior knowledge about the signals of interest may be utilized.
Keywords: plastic scintillators, positron emission tomography, statistical analysis, Tikhonov regularization
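The following sketch illustrates the general idea of a Tikhonov-regularized (MAP) recovery in a PCA basis learned from training waveforms: the measured samples at the threshold crossings constrain a small number of PCA coefficients whose prior variances act as the regularization. It is an illustration of the technique under assumed shapes and noise levels, not the J-PET collaboration's implementation.

```python
import numpy as np

def fit_pca_prior(training_waveforms, n_components=8):
    """Learn a PCA basis and coefficient variances from fully sampled training waveforms (rows)."""
    mean = training_waveforms.mean(axis=0)
    X = training_waveforms - mean
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:n_components]                        # principal components, shape (k, T)
    var = (S[:n_components] ** 2) / (len(X) - 1)     # prior variances of the PCA coefficients
    return mean, basis, var

def recover_waveform(samples, sample_times, time_axis, mean, basis, var, noise_var=1e-4):
    """Tikhonov/MAP recovery: y = A c + noise with a Gaussian prior on PCA coefficients c.
    Closed form: c = (A^T A / sigma^2 + diag(1/var))^{-1} A^T (y - mean_s) / sigma^2."""
    idx = np.searchsorted(time_axis, sample_times)
    A = basis[:, idx].T                               # maps coefficients to the few measured samples
    P = np.diag(1.0 / var)                            # Tikhonov (prior precision) matrix
    rhs = A.T @ (samples - mean[idx]) / noise_var
    c = np.linalg.solve(A.T @ A / noise_var + P, rhs)
    return mean + basis.T @ c                         # recovered full waveform
```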
Procedia PDF Downloads 445
36 Linguistic Insights Improve Semantic Technology in Medical Research and Patient Self-Management Contexts
Authors: William Michael Short
Abstract:
‘Semantic Web’ technologies such as the Unified Medical Language System Metathesaurus, SNOMED-CT, and MeSH have been touted as transformational for the way users access online medical and health information, enabling both the automated analysis of natural-language data and the integration of heterogeneous health-related resources distributed across the Internet through the use of standardized terminologies that capture concepts and relationships between concepts that are expressed differently across datasets. However, the approaches that have so far characterized ‘semantic bioinformatics’ have not yet fulfilled the promise of the Semantic Web for medical and health information retrieval applications. This paper argues, from the perspective of cognitive linguistics and cognitive anthropology, that four features of human meaning-making must be taken into account before the potential of semantic technologies can be realized for this domain. First, many semantic technologies operate exclusively at the level of the word. However, texts convey meanings in ways beyond lexical semantics. For example, transitivity patterns (distributions of active or passive voice) and modality patterns (configurations of modal constituents like may, might, could, would, should) convey experiential and epistemic meanings that are not captured by single words. Language users also naturally associate stretches of text with discrete meanings, so that whole sentences can be ascribed senses similar to the senses of words (so-called ‘discourse topics’). Second, natural language processing systems tend to operate according to the principle of ‘one token, one tag’. For instance, occurrences of the word sound must be disambiguated for part of speech: in context, is sound a noun or a verb or an adjective? In syntactic analysis, deterministic annotation methods may be acceptable. But because natural language utterances are typically characterized by polyvalency and ambiguities of all kinds (including intentional ambiguities), such methods leave the meanings of texts highly impoverished. Third, ontologies tend to be disconnected from everyday language use and so struggle in cases where single concepts are captured through complex lexicalizations that involve profile shifts or other embodied representations. More problematically, concept graphs tend to capture ‘expert’ technical models rather than ‘folk’ models of knowledge and so may not match users’ common-sense intuitions about the organization of concepts in prototypical structures rather than Aristotelian categories. Fourth, and finally, most ontologies do not recognize the pervasively figurative character of human language. However, since the time of Galen the widespread use of metaphor in the linguistic usage of both medical professionals and lay persons has been recognized. In particular, metaphor is a well-documented linguistic tool for communicating experiences of pain. Because semantic medical knowledge-bases are designed to help capture variations within technical vocabularies – rather than the kinds of conventionalized figurative semantics that practitioners as well as patients actually utilize in clinical description and diagnosis – they fail to capture this dimension of linguistic usage. The failure of semantic technologies in these respects degrades the efficiency and efficacy not only of medical research, where information retrieval inefficiencies can lead to direct financial costs to organizations, but also of care provision, especially in contexts of patients’ self-management of complex medical conditions.
Keywords: ambiguity, bioinformatics, language, meaning, metaphor, ontology, semantic web, semantics
Procedia PDF Downloads 132
35 Discovering Causal Structure from Observations: The Relationships between Technophile Attitude, Users Value and Use Intention of Mobility Management Travel App
Authors: Aliasghar Mehdizadeh Dastjerdi, Francisco Camara Pereira
Abstract:
The increasing complexity and demand of transport services strain transportation systems, especially in urban areas with limited possibilities for building new infrastructure. The solution to this challenge requires changes of travel behavior. One of the proposed means to induce such change is multimodal travel apps. This paper describes a study of the intention to use a real-time multimodal travel app aimed at motivating travel behavior change in the Greater Copenhagen Region (Denmark) toward promoting sustainable transport options. The proposed app is a multi-faceted smartphone app including both travel information and persuasive strategies such as health and environmental feedback, tailoring travel options, self-monitoring, tunneling users toward green behavior, social networking, nudging and gamification elements. The prospect for mobility management travel apps to stimulate sustainable mobility rests not only on the original and proper employment of the behavior change strategies, but also on explicitly anchoring them in established theoretical constructs from behavioral theories. The theoretical foundation is important because it positively and significantly influences the effectiveness of the system. However, there is a gap in current knowledge regarding the study of mobility management travel apps grounded in behavioral theories, which should be explored further. This study addresses this gap through a social cognitive theory-based examination. However, compared to conventional methods in technology adoption research, this study adopts a reverse approach in which the associations between theoretical constructs are explored by the Max-Min Hill-Climbing (MMHC) algorithm as a hybrid causal discovery method. A technology-use preference survey was designed to collect data. The survey elicited different groups of variables including (1) three groups of users' motives for using the app, namely gain motives (e.g., saving travel time and cost), hedonic motives (e.g., enjoyment) and normative motives (e.g., less travel-related CO2 production), (2) technology-related self-concepts (i.e. technophile attitude) and (3) use intention of the travel app. The questionnaire items provided the input for causal discovery to learn the causal structure of the data. Causal discovery from observational data is a critical challenge with applications in different research fields. The estimated causal structure shows that the two constructs of gain motives and technophilia have a causal effect on adoption intention. Likewise, there is a causal relationship from technophilia to both gain and hedonic motives. In line with the findings of prior studies, this highlights the importance of the functional value of the travel app as well as the technology self-concept as two important variables for adoption intention. Furthermore, the results indicate the effect of technophile attitude on developing gain and hedonic motives. The causal structure shows hierarchical associations between the three groups of users' motives. They can be explained by the 'frustration-regression' principle of Alderfer's ERG (Existence, Relatedness and Growth) theory of needs, meaning that when a higher-level need remains unfulfilled, a person may regress to lower-level needs that appear easier to satisfy. To conclude, this study shows the capability of causal discovery methods to learn the causal structure of a theoretical model and, accordingly, to interpret the established associations.
Keywords: travel app, behavior change, persuasive technology, travel information, causality
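As a hedged illustration of how MMHC-style causal discovery can be run on survey constructs, the sketch below generates hypothetical (not the study's) Likert-type data and estimates a structure; it assumes the MmhcEstimator interface of the pgmpy library and synthetic dependencies chosen only to make the demo interesting.

```python
import numpy as np
import pandas as pd
from pgmpy.estimators import MmhcEstimator  # assumed API: pgmpy's Max-Min Hill-Climbing estimator

# hypothetical survey responses (construct scores on a 1-7 scale); dependencies are assumed for the demo
rng = np.random.default_rng(42)
n = 300
technophilia = rng.integers(1, 8, n)
gain = np.clip(technophilia + rng.integers(-2, 3, n), 1, 7)
hedonic = np.clip(technophilia + rng.integers(-3, 3, n), 1, 7)
normative = rng.integers(1, 8, n)
intention = np.clip((gain + technophilia) // 2 + rng.integers(-1, 2, n), 1, 7)

data = pd.DataFrame({"technophilia": technophilia, "gain": gain, "hedonic": hedonic,
                     "normative": normative, "intention": intention})

# MMHC: constraint-based skeleton (Max-Min Parents and Children) followed by score-based hill climbing
dag = MmhcEstimator(data).estimate()
print(sorted(dag.edges()))
```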
Procedia PDF Downloads 141
34 Sugarcane Trash Biochar: Effect of the Temperature in the Porosity
Authors: Gabriela T. Nakashima, Elias R. D. Padilla, Joao L. Barros, Gabriela B. Belini, Hiroyuki Yamamoto, Fabio M. Yamaji
Abstract:
Biochar can be an alternative use for sugarcane trash. Biochar is a solid material obtained from pyrolysis, that is, thermal degradation of biomass with low or no O₂ concentration. Pyrolysis transforms the carbon that is commonly found in other organic structures into a carbon with more stability that can resist microbial decomposition. Biochar has a variety of uses, such as soil fertility, carbon sequestration, energy generation, ecological restoration, and soil remediation. Biochar has a great ability to retain water and nutrients in the soil, so this material can improve the efficiency of irrigation and fertilization. The aim of this study was to characterize biochar produced from sugarcane trash at three different pyrolysis temperatures and to determine the lowest temperature with high yield and carbon content. Physical characterization of this biochar was performed to help evaluate the best production conditions. Sugarcane (Saccharum officinarum) trash was collected at Corredeira Farm, located in Ibaté, São Paulo State, Brazil. The farm has 800 hectares of planted area with an average yield of 87 t·ha⁻¹. The sugarcane varieties planted on the farm are RB 855453, RB 867515, RB 855536, SP 803280, and SP 813250. Sugarcane trash was dried and crushed into 50 mm pieces. Crucibles and lids were used to contain the sugarcane trash samples. The crucible was filled with as much sugarcane trash as possible to minimize the O₂ concentration. Biochar production was performed at three different pyrolysis temperatures (200°C, 325°C, 450°C) with a 2-hour residence time in the muffle furnace. The gravimetric yield of biochar was obtained. Proximate analysis of biochar was done using ASTM E-872 and ABNT NBR 8112. Volatile matter and ash content were calculated by direct weight loss and fixed carbon content by difference. Porosity was evaluated using an automatic gas adsorption device, Autosorb-1, with CO₂, as described by Nakatani. Approximately 0.5 g of biochar in 2 mm particle sizes was used for each measurement. Vacuum outgassing was performed as a pre-treatment under different conditions for each biochar temperature. The pore size distribution of micropores was determined using the Horváth-Kawazoe method. Biochar presented different colors for each treatment. The biochar produced at 200°C presented a larger number of pieces of 10 mm or more and did not show the dark black color of the other treatments after the 2 h residence time in the muffle furnace. This treatment also had the highest content of volatiles and the lowest amount of fixed carbon. In the porosity analysis, as the treatment temperature increased, the amount of pores also increased. The increase in temperature resulted in a biochar with better quality. The pores in biochar can help in soil aeration, adsorption, and water retention. Acknowledgment: This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brazil – PROAP-CAPES, PDSE and CAPES - Finance Code 001.
Keywords: proximate analysis, pyrolysis, soil amendment, sugarcane straw
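The proximate analysis described above (volatile matter and ash by direct weight loss, fixed carbon by difference) reduces to simple arithmetic; the sketch below shows the calculation with hypothetical masses, not the measured sugarcane-trash data.

```python
def proximate_analysis(m_dry, m_after_volatiles, m_ash):
    """Proximate analysis on a dry basis (ASTM E-872 / NBR 8112 style, illustrative only):
    volatile matter and ash from direct weight loss, fixed carbon by difference."""
    volatile_matter = 100.0 * (m_dry - m_after_volatiles) / m_dry   # % of dry mass
    ash = 100.0 * m_ash / m_dry                                     # % of dry mass
    fixed_carbon = 100.0 - volatile_matter - ash                    # by difference
    return volatile_matter, ash, fixed_carbon

# hypothetical masses (g) for one biochar sample
vm, ash, fc = proximate_analysis(m_dry=1.000, m_after_volatiles=0.350, m_ash=0.060)
```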
Procedia PDF Downloads 214