Search results for: federated averaging

27 Investigation of Existing Guidelines for Four-Legged Angular Telecommunication Tower

Authors: Sankara Ganesh Dhoopam, Phaneendra Aduri

Abstract:

Lattice towers are lightweight structures whose design is primarily governed by the effects of wind loading. Ensuring a precise assessment of wind loads on the tower structure, antennas, and associated equipment is therefore vital for the safety and efficiency of tower design. Until recently, no Indian standard was available specifically for the design of telecom towers. Instead, the industry conventionally relied on the general building wind loading standard for calculating loads on tower components and on the transmission line tower design standard for designing the angular members of the towers. Subsequently, the Bureau of Indian Standards (BIS) revised both the wind loading standard and the angular member design standard. Transmission line towers designed to these standards are validated by full-scale model tests, whereas telecom angular towers are designed using the same standards with an overload factor/factor of safety but without full-scale tower model testing. The general construction in steel design code offers a limit state design approach and is applicable to general structures involving angles and tubes, but it is not used for the angle member design of towers. Recently, in response to evolving industry needs, the BIS introduced a new standard, "Isolated Towers, Masts, and Poles using structural steel - Code of practice", for the design of telecom towers. This study focuses on a 40 m four-legged angular tower to compare loading calculations and member designs between the old and new standards. Additionally, a comparative analysis aligning the new code provisions with international loading and design standards, with a specific focus on American standards, has been carried out. This paper elaborates the code-based provisions used for load and member design calculations, including the influence of the "Ka" area averaging factor introduced in the new wind load case.
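
As an illustration of where the area averaging factor enters the load calculation, the sketch below follows the IS 875 (Part 3):2015 design wind pressure chain as it is commonly applied; it is not taken from the paper, and every numeric value is a placeholder.

```python
# Minimal sketch of the IS 875 (Part 3):2015 design wind pressure chain,
# showing where the area averaging factor Ka enters. All numeric values
# below are illustrative placeholders, not values from the paper.

def design_wind_pressure(Vb, k1, k2, k3, k4, Kd, Ka, Kc):
    """Return design wind pressure pd in N/m^2 (Vb in m/s)."""
    Vz = Vb * k1 * k2 * k3 * k4          # design wind speed at height z
    pz = 0.6 * Vz ** 2                   # wind pressure at height z
    pd = Kd * Ka * Kc * pz               # design pressure including Ka area averaging
    return pd

# Example: a hypothetical panel of a 40 m tower (assumed factor values)
pd = design_wind_pressure(Vb=44.0, k1=1.0, k2=1.10, k3=1.0, k4=1.0,
                          Kd=0.9, Ka=0.9, Kc=0.9)
print(f"design wind pressure ~ {pd:.0f} N/m2")
```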

Keywords: telecom, angular tower, PLS tower, GSM antenna, microwave antenna, IS 875(Part-3):2015, IS 802(Part-1/sec-2):2016, IS 800:2007, IS 17740:2022, ANSI/TIA-222G, ANSI/TIA-222H.

Procedia PDF Downloads 49
26 Dependence of the Photoelectric Exponent on the Source Spectrum of the CT

Authors: Rezvan Ravanfar Haghighi, V. C. Vani, Suresh Perumal, Sabyasachi Chatterjee, Pratik Kumar

Abstract:

The X-ray attenuation coefficient µ(E) of any substance, at energy E, is the sum of contributions from Compton scattering, µCom(E), and the photoelectric effect, µPh(E). In terms of the electron density (ρe) and the effective atomic number (Zeff), µCom(E) is proportional to ρe·fKN(E), while µPh(E) is proportional to (ρe·Zeff^x)/E^y, where fKN(E) is the Klein-Nishina formula and x and y are the exponents for the photoelectric effect. By taking the sample's HU at two different excitation voltages (V = V1, V2) of the CT machine, we can solve for X = ρe and Y = ρe·Zeff^x from these two independent equations, as is attempted in DECT inversion. Since µCom(E) and µPh(E) are both energy dependent, the coefficients of inversion also depend on (a) the source spectrum S(E,V) and (b) the detector efficiency D(E) of the CT machine. In the present paper we tabulate these coefficients of inversion for different practical manifestations of S(E,V) and D(E). The HU(V) values from the CT follow <µ(V)> = <µw(V)>[1 + HU(V)/1000], where the subscript 'w' refers to water and the averaging process <…> accounts for the source spectrum S(E,V) and the detector efficiency D(E). Linearity of µ(E) with respect to X and Y implies that (a) <µ(V)> is a linear combination of X and Y and (b) for inversion, X and Y can be written as linear combinations of two independent observations <µ(V1)> and <µ(V2)> with V1 ≠ V2. These coefficients of inversion naturally depend upon S(E,V) and D(E). We numerically investigate this dependence for some practical cases, taking V = 100 and 140 kVp, as used for cardiological investigations. The S(E,V) are generated using the Boone-Seibert source spectrum superposed on aluminium filters of different thickness lAl, with 7 mm ≤ lAl ≤ 12 mm, and D(E) is taken to be that of a typical Si[Li] solid state detector and of a GdOS scintillator detector. In the values of X and Y found using the calculated inversion coefficients, errors are below 2% for data with solutions of glycerol, sucrose and glucose. For low-Zeff materials like propionic acid, Zeff^x is overestimated by 20% while X is within 1%. For high-Zeff materials like KOH, the value of Zeff^x is underestimated by 22% while the error in X is +15%. These results imply that the source may have additional filtering beyond the aluminium filter specified by the manufacturer. It is also found that the difference in the values of the inversion coefficients for the two types of detectors is negligible; the type of detector does not affect the DECT inversion algorithm used to find the unknown chemical characteristics of the scanned materials. The effect of the source, however, should be considered an important factor when calculating the coefficients of inversion.
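
A minimal numerical sketch of the two-voltage inversion described above is given below; the coefficient matrix and observed values are placeholders, since the actual coefficients are the spectrum- and detector-dependent quantities tabulated in the paper.

```python
# Minimal sketch of the two-energy (DECT) inversion: <mu(V1)> and <mu(V2)>
# are modelled as linear combinations of X = rho_e and Y = rho_e * Zeff^x,
# so X and Y follow from a 2x2 solve. The coefficients a_ij below are
# placeholders; in the paper they depend on S(E,V) and D(E).
import numpy as np

A = np.array([[0.180, 0.020],    # coefficients at V1 = 100 kVp (assumed)
              [0.175, 0.012]])   # coefficients at V2 = 140 kVp (assumed)

mu_obs = np.array([0.195, 0.182])   # spectrum-averaged <mu(V1)>, <mu(V2)> (assumed)

X, Y = np.linalg.solve(A, mu_obs)   # invert for rho_e and rho_e * Zeff^x
print(f"X = rho_e ~ {X:.3f}, Y = rho_e*Zeff^x ~ {Y:.3f}")
```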

Keywords: attenuation coefficient, computed tomography, photoelectric effect, source spectrum

Procedia PDF Downloads 372
25 Energy Content and Spectral Energy Representation of Wave Propagation in a Granular Chain

Authors: Rohit Shrivastava, Stefan Luding

Abstract:

A mechanical wave is the propagation of vibration with transfer of energy and momentum. Studying the energy as well as the spectral energy characteristics of a wave propagating through disordered granular media can assist in understanding the overall properties of wave propagation through inhomogeneous materials like soil. The study of these properties is aimed at modeling wave propagation for oil, mineral or gas exploration (seismic prospecting) or at non-destructive testing for the study of the internal structure of solids. The study of the energy content (kinetic, potential and total energy) of a pulse propagating through an idealized one-dimensional discrete particle system, such as a mass-disordered granular chain, can assist in understanding energy attenuation due to disorder as a function of propagation distance. Spectral analysis of the energy signal can assist in understanding dispersion as well as attenuation due to scattering at different frequencies (scattering attenuation). The selection of a one-dimensional granular chain also restricts the study to the P-wave attributes of the wave and removes the influence of shear or rotational waves. Granular chains with different mass distributions have been studied by randomly selecting masses from normal, binary and uniform distributions; the standard deviation of the distribution is taken as the disorder parameter, with a higher standard deviation meaning higher disorder and a lower standard deviation meaning lower disorder. For obtaining macroscopic/continuum properties, ensemble averaging has been used. Interpreting information from a total energy signal turned out to be much easier than from displacement, velocity or acceleration signals of the wave, hence indicating a better analysis method for wave propagation through granular materials. Increasing disorder leads to faster attenuation of the signal and decreases the energy transmitted at higher frequencies, but at the same time the energy of spatially localized high frequencies also increases. An ordered granular chain exhibits ballistic propagation of energy, whereas a disordered granular chain exhibits diffusive-like propagation, which eventually becomes localized at long times.
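
To make the ensemble-averaging step concrete, here is a minimal sketch (not the authors' code) of a mass-disordered chain excited by a pulse; it uses linear springs instead of the Hertzian granular contacts of the paper, and the per-particle total energy is ensemble-averaged over random mass realisations.

```python
# Minimal sketch: 1D mass-disordered chain with linear nearest-neighbour
# springs and free ends, excited by a velocity pulse at one end. The total
# energy is ensemble-averaged over random mass realisations whose standard
# deviation plays the role of the disorder parameter. All values are assumed.
import numpy as np

def run_chain(n=200, steps=2000, dt=0.05, k=1.0, mass_std=0.2, seed=0):
    rng = np.random.default_rng(seed)
    m = np.clip(rng.normal(1.0, mass_std, n), 0.1, None)  # disordered masses
    x = np.zeros(n)                  # displacements
    v = np.zeros(n)
    v[0] = 1.0                       # initial velocity pulse at the first particle
    energy = np.zeros((steps, n))
    for t in range(steps):
        f = np.zeros(n)              # nearest-neighbour spring forces (free ends)
        f[:-1] += -k * (x[:-1] - x[1:])
        f[1:]  += -k * (x[1:] - x[:-1])
        a = f / m
        x += v * dt + 0.5 * a * dt**2          # velocity Verlet: position update
        f2 = np.zeros(n)
        f2[:-1] += -k * (x[:-1] - x[1:])
        f2[1:]  += -k * (x[1:] - x[:-1])
        v += 0.5 * (a + f2 / m) * dt           # velocity Verlet: velocity update
        ke = 0.5 * m * v**2                    # kinetic energy per particle
        pe_bond = 0.5 * k * (x[:-1] - x[1:])**2
        pe = np.zeros(n)                       # split each bond's energy 50/50
        pe[:-1] += 0.5 * pe_bond
        pe[1:]  += 0.5 * pe_bond
        energy[t] = ke + pe
    return energy

# Ensemble average of the total energy over several disorder realisations
ensemble = np.mean([run_chain(seed=s) for s in range(10)], axis=0)
print("ensemble-averaged energy at particle 100, final step:", ensemble[-1, 100])
```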

Keywords: discrete elements, energy attenuation, mass disorder, granular chain, spectral energy, wave propagation

Procedia PDF Downloads 261
24 Last ca 2500 Yr History of the Harmful Algal Blooms in South China Reconstructed on Organic-Walled Dinoflagellate Cysts

Authors: Anastasia Poliakova

Abstract:

Harmful algal blooms (HABs) are a well-known negative phenomenon caused both by natural factors and by anthropogenic influence. HABs can result in a series of deleterious effects, such as beach fouling, paralytic shellfish poisoning, mass mortality of marine species, and a threat to human health, especially if toxins pollute drinking water or occur near public resorts. In South China, the problem of HABs is of particular importance. For this study, we used a 1.5 m sediment core, LAX-2018-2, collected in 2018 from the Zhanjiang Mangrove National Nature Reserve (109°03´E, 20°30´N), Guangdong Province, South China. A high-resolution reconstruction of the coastal environment, with a specific focus on the HAB history during the last ca 2500 yrs, was attempted. Age control was performed with five radiocarbon dates obtained from benthic foraminifera. A total of 71 dinoflagellate cyst types was recorded. The most common types, found consistently throughout the sediment sequence, were the autotrophic Spiniferites spp., Spiniferites hyperacanthus and S. mirabilis, S. ramosus, Operculodinium centrocarpum sensu Wall and Dale 1966, and Polysphaeridium zoharyi, and the heterotrophic Brigantedinium spp., cysts of Gymnodinium catenatum and a mixture of Protoperidinium cysts. Three local dinoflagellate zones, LAX-1 to LAX-3, were established based on the results of constrained cluster analysis and data ordination; additionally, the middle zone LAX-2 was divided into two subzones, LAX-2a and LAX-2b, based on the dynamics of toxic and heterotrophic cysts as well as on significant changes (probability, P=0.89) in the percentages of eutrophic indicators. The total cyst count varied from 106 to 410 dinocysts per slide, with 177 cysts on average. Dinocyst assemblages are characterized by high values of the post-depositional degradation index (kt), which varies between 3.6 and 7.6 (averaging 5.4); this is relatively high and is very typical of areas with selective dinoflagellate cyst preservation related to bottom-water oxygen concentrations.

Keywords: reconstruction of palaeoenvironment, harmful algal blooms, anthropogenic influence on coastal zones, South China Sea

Procedia PDF Downloads 58
23 Tick Induced Facial Nerve Paresis: A Narrative Review

Authors: Jemma Porrett

Abstract:

Background: We present a literature review examining the research surrounding tick paralysis resulting in facial nerve palsy. A case of an intra-aural paralysis tick bite resulting in unilateral facial nerve palsy is also discussed. Methods: A novel case of otoacariasis with associated ipsilateral facial nerve involvement is presented. Additionally, we conducted a review of the literature, searching the MEDLINE and EMBASE databases for relevant literature published between 1915 and 2020. Utilising the keywords 'Ixodes', 'Facial paralysis', 'Tick bite', and 'Australia', 18 articles were deemed relevant to this study. Results: The eighteen articles included in the review comprised a total of 48 patients. Patients' ages ranged from one year to 84 years. Ten studies estimated the possible duration between tick bite and facial nerve palsy, averaging 8.9 days. Forty-one patients presented with a single tick within the external auditory canal, three had a single tick located on the temple or forehead region, three had post-auricular ticks, and one patient had a remarkable 44 ticks removed from the face, scalp, neck, back, and limbs. A complete ipsilateral facial nerve palsy was present in 45 patients; notably, in 16 patients this occurred following tick removal. The House-Brackmann classification was utilised in 7 patients: four patients with grade 4, one patient with grade 3, and two patients with grade 2 facial nerve palsy. Thirty-eight patients had complete recovery of facial palsy. Thirteen studies were analysed for time to recovery, with an average time of 19 days. Six patients had partial recovery at the time of follow-up. One article reported improvement in facial nerve palsy at 24 hours, but no further follow-up was reported. One patient was lost to follow-up, and one article failed to mention any resolution of facial nerve palsy. One patient died from respiratory arrest following generalized paralysis. Conclusions: Tick paralysis is a severe but preventable disease. Careful examination of the face, scalp, and external auditory canal should be conducted in patients presenting with otalgia and facial nerve palsy, particularly in tropical areas, to exclude the possibility of tick infestation.

Keywords: facial nerve palsy, tick bite, intra-aural, Australia

Procedia PDF Downloads 79
22 Estimating Groundwater Seepage Rates: Case Study at Zegveld, Netherlands

Authors: Wondmyibza Tsegaye Bayou, Johannes C. Nonner, Joost Heijkers

Abstract:

This study aimed to identify and estimate dynamic groundwater seepage rates using four comparative methods: the Darcian approach, the water balance approach, the tracer method, and modeling. The theoretical background to these methods is brought together in this study. The methodology was applied to a case study area at Zegveld following the advice of the Water Board Stichtse Rijnlanden. Data were collected from various offices and through a field campaign in the winter of 2008/09. In the complex confining layer of the study area, the phreatic groundwater table lies at a shallow depth compared to the piezometric water level. Data were available for the model years 1989 to 2000 and for winter 2008/09. The higher groundwater table produces predominantly downward seepage in the study area. Results of the study indicated that net recharge to the groundwater table (precipitation excess) and the ditch system are the principal sources for seepage across the complex confining layer. Especially in the summer season, the contribution from the ditches is significant. Water is supplied from the River Meije through a pumping system to meet the ditches' water demand. The groundwater seepage rate was distributed unevenly throughout the study area at the nature reserve, averaging 0.60 mm/day for the model years 1989 to 2000 and 0.70 mm/day for winter 2008/09. Due to data restrictions, the seepage rates were mainly determined with the Darcian method. Furthermore, the water balance approach and the tracer method were applied to compute the flow exchange within the ditch system. Various validated data sources for groundwater levels and vertical flow resistance were available for the site. Compared with TNO-DINO groundwater level data, the phreatic groundwater level map overestimated the groundwater level depth by 28 cm. The hydraulic resistance values obtained from the 3D geological map and compared with the TNO-DINO data agreed with the model values before calibration. On the other hand, the calibrated model significantly underestimated the downward seepage in the area compared with the field-based computations following the Darcian approach.
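
As a minimal sketch of the Darcian approach referred to above, vertical seepage across the confining layer can be written as the head difference divided by the hydraulic resistance of the layer; the head and resistance values below are illustrative placeholders, not site data.

```python
# Minimal sketch of the Darcian approach for vertical seepage across a
# confining layer: flux = head difference / hydraulic resistance.
# All input values are assumed for illustration only.

def seepage_rate_mm_per_day(h_phreatic_m, h_piezometric_m, resistance_days):
    """Downward seepage (positive) in mm/day across the confining layer."""
    q_m_per_day = (h_phreatic_m - h_piezometric_m) / resistance_days
    return q_m_per_day * 1000.0

# Example: phreatic table 0.9 m above the piezometric level and a
# hydraulic resistance of 1500 days (assumed values)
print(f"{seepage_rate_mm_per_day(-1.2, -2.1, 1500):.2f} mm/day")
```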

Keywords: groundwater seepage, phreatic water table, piezometric water level, nature reserve, Zegveld, The Netherlands

Procedia PDF Downloads 59
21 Tip-Apex Distance as a Long-Term Risk Factor for Hospital Readmission Following Intramedullary Fixation of Intertrochanteric Fractures

Authors: Brandon Knopp, Matthew Harris

Abstract:

Purpose: Tip-apex distance (TAD) has long been discussed as a metric for determining the risk of failure in the fixation of peritrochanteric fractures. TAD measurements over 25 millimeters (mm) have been associated with higher rates of screw cut-out and other complications in the first several months after surgery. However, there is limited evidence for the efficacy of this measurement in predicting the long-term risk of negative outcomes following hip fixation surgery. The purpose of our study was to investigate risk factors, including TAD, for hospital readmission, loss of pre-injury ambulation and development of complications within 1 year after hip fixation surgery. Methods: A retrospective review of proximal hip fractures treated with single-screw intramedullary devices between 2016 and 2020 was performed at a 327-bed regional medical center. Patients included had a postoperative follow-up of at least 12 months or surgery-related complications developing within that time. Results: 44 of the 67 patients in this study met the inclusion criteria with adequate follow-up post-surgery. There were a total of 10 males (22.7%) and 34 females (77.3%) meeting inclusion criteria, with a mean age of 82.1 (± 12.3) at the time of surgery. The average TAD in our study population was 19.57 mm and the 1-year readmission rate was 15.9%. 3 out of 6 patients (50%) with a TAD > 25 mm were readmitted within one year due to surgery-related complications. In contrast, 3 out of 38 patients (7.9%) with a TAD < 25 mm were readmitted within one year due to surgery-related complications (p=0.0254). Individual TAD measurements, averaging 22.05 mm in patients readmitted within 1 year of surgery and 19.18 mm in patients not readmitted within 1 year of surgery, were not significantly different between the two groups (p=0.2113). Conclusions: Our data indicate a significant improvement in hospital readmission rates up to one year after hip fixation surgery in patients with a TAD < 25 mm, with a decrease in readmissions of over 40% (50% vs 7.9%). This result builds upon past investigations by extending the follow-up time to 1 year after surgery and utilizing hospital readmissions as a metric for surgical success. With the well-documented physical and financial costs of hospital readmission after hip surgery, our study highlights a reduction of TAD to < 25 mm as an effective method of improving patient outcomes and reducing financial costs to patients and medical institutions. No relationship was found between TAD measurements and secondary outcomes, including loss of pre-injury ambulation and development of complications.
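
The abstract does not state which test produced p=0.0254; as a sketch, a two-sided Fisher's exact test on the reported readmission counts reproduces a value close to it.

```python
# Sketch only: a two-sided Fisher's exact test on the reported counts
# (3/6 readmitted for TAD > 25 mm vs 3/38 for TAD < 25 mm) gives a p-value
# close to the quoted 0.0254. The choice of test is an assumption.
from scipy.stats import fisher_exact

table = [[3, 3],    # TAD > 25 mm: readmitted, not readmitted
         [3, 35]]   # TAD < 25 mm: readmitted, not readmitted
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")  # p ~ 0.025
```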

Keywords: hip fractures, hip reductions, readmission rates, open reduction internal fixation

Procedia PDF Downloads 125
20 Altering Surface Properties of Magnetic Nanoparticles with Single-Step Surface Modification with Various Surface Active Agents

Authors: Krupali Mehta, Sandip Bhatt, Umesh Trivedi, Bhavesh Bharatiya, Mukesh Ranjan, Atindra D. Shukla

Abstract:

Owing to dominating surface forces and large-scale surface interactions, nano-scale particles are difficult to keep suspended in various media. Magnetic nanoparticles of iron oxide offer a great deal of promise due to their ease of preparation, reasonable magnetic properties, low cost and environmental compatibility. We intend to modify the surface of magnetic Fe₂O₃ nanoparticles with selected surface modifying agents using simple and effective single-step chemical reactions in order to enhance the dispersibility of the magnetic nanoparticles in non-polar media. Magnetic particles were prepared by hydrolysis of Fe²⁺/Fe³⁺ chlorides and their subsequent oxidation in aqueous medium. The dried particles were then treated separately with octadecyl quaternary ammonium silane (Terrasil™), stearic acid and the gallic acid ester of stearyl alcohol in ethanol to yield S-2 to S-4, respectively. The untreated Fe₂O₃ was designated S-1. The surface modified nanoparticles were then analysed by Dynamic Light Scattering (DLS), Fourier Transform Infrared spectroscopy (FTIR), X-Ray Diffraction (XRD), Thermogravimetric Analysis (TGA) and Scanning Electron Microscopy with Energy Dispersive X-Ray analysis (SEM-EDAX). Characterization reveals particle sizes averaging 20-50 nm with and without modification. However, the crystallite size in all cases remained ~7.0 nm, with the diffractogram matching the Fe₂O₃ crystal structure. FT-IR suggested the presence of surfactants on the nanoparticles' surface, which was also confirmed by SEM-EDAX, where elemental mapping proved their presence. TGA indicated weight losses in S-2 to S-4 from 300°C onwards, suggesting the presence of organic moieties. The hydrophobic character of the modified surfaces was confirmed by contact angle analysis; all modified nanoparticles showed superhydrophobic behaviour, with average contact angles of ~129° for S-2, ~139.5° for S-3 and ~151° for S-4. This indicates that the surface modified particles are superhydrophobic and easily dispersible in non-polar media. These modified particles could be ideal candidates for suspension in oil-based fluids, polymer matrices, etc. We are pursuing elaborate suspension/sedimentation studies of these particles in various oils to establish this conjecture.

Keywords: iron nanoparticles, modification, hydrophobic, dispersion

Procedia PDF Downloads 120
19 An Exploratory Study on the Impact of Climate Change on Design Rainfalls in the State of Qatar

Authors: Abdullah Al Mamoon, Niels E. Joergensen, Ataur Rahman, Hassan Qasem

Abstract:

The Intergovernmental Panel on Climate Change (IPCC), in its Fourth Assessment Report (AR4), predicts a more extreme climate towards the end of the century, which is likely to impact the design of engineering infrastructure projects with a long design life. A recent study in 2013 developed new design rainfalls for Qatar, which provide an improved design basis for drainage infrastructure in the State of Qatar under the current climate. The current design standards in Qatar do not consider increased rainfall intensity caused by climate change. The focus of this paper is to update the recently developed design rainfalls in Qatar under changing climatic conditions based on IPCC's AR4, allowing a later revision of the proposed design standards relevant for projects with a longer design life. The future climate has been investigated based on the climate models released in IPCC's AR4 and the A2 storyline of the Special Report on Emissions Scenarios (SRES), using a stationary approach. Annual maximum series (AMS) of predicted 24-hour rainfall data for both a wet (NCAR-CCSM) scenario and a dry (CSIRO-MK3.5) scenario were extracted for the Qatari grid points in the climate models for three periods: current climate (2010-2039), medium-term climate (2040-2069) and end-of-century climate (2070-2099). A homogeneous region of the Qatari grid points was formed, and an L-moments based regional frequency approach was adopted to derive design rainfalls. The results indicate no significant changes in the design rainfall in the medium term (2040-2069), but significant changes are expected towards the end of the century (2070-2099). New design rainfalls have been developed taking climate change into account for the 2070-2099 scenario, by averaging the results from the two scenarios. IPCC's AR4 predicts that the rainfall intensity for a 5-year return period rain with a duration of 1 to 2 hours will increase by 11% in 2070-2099 compared to the current climate. Similarly, the rainfall intensity for more extreme rainfall, with a return period of 100 years and a duration of 1 to 2 hours, will increase by 71% in 2070-2099 compared to the current climate. Infrastructure with a design life exceeding 60 years should include safety factors taking the predicted effects of climate change into due consideration.
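
As a sketch of the first step in the L-moments based regional frequency approach mentioned above, the sample L-moments of an annual maximum series can be computed from probability weighted moments; the rainfall values in the example are invented, not Qatari data.

```python
# Minimal sketch: sample L-moments of an annual maximum series (AMS) via
# probability weighted moments, the building block of L-moments regional
# frequency analysis. The 24-hour rainfall values below are placeholders.
import numpy as np

def sample_l_moments(x):
    """Return (l1, l2, L-CV, L-skewness) of a sample."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum(x * (i - 1) / (n - 1)) / n
    b2 = np.sum(x * (i - 1) * (i - 2) / ((n - 1) * (n - 2))) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l2 / l1, l3 / l2

ams_24h_mm = [34.0, 51.5, 22.8, 60.2, 41.0, 28.7, 75.3, 19.9, 47.6, 55.1]
print(sample_l_moments(ams_24h_mm))
```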

Keywords: climate change, design rainfalls, IDF, Qatar

Procedia PDF Downloads 368
18 An Approach for Estimating Open Education Resources Textbook Savings: A Case Study

Authors: Anna Ching-Yu Wong

Abstract:

Introduction: Textbooks account for a sizable portion of the overall cost of higher education for students. It is broadly agreed that open education resources (OER) reduce textbook costs and provide students a way to receive high-quality learning materials at little or no cost to them. However, there is less agreement over exactly how much. This study presents an approach for calculating OER savings by using SUNY Canton non-OER courses (N=233) to estimate the potential textbook savings for one semester - Fall 2022. The purpose of collecting the data is to understand how much could potentially be saved by using OER materials and to have a record for further future studies. Literature Review: In past years, researchers have identified how the rising cost of textbooks disproportionately harms students in higher education institutions and have estimated the average cost of a textbook. For example, Nyamweya (2018) found, using a simple formula, that on average students save $116.94 per course when OER are adopted in place of traditional commercial textbooks. Student PIRGs (2015) used reports of per-course savings when transforming a course from a commercial textbook to OER to reach an estimate of $100 average cost savings per course. Allen and Wiley (2016) presented multiple cost-savings studies at the 2016 Open Education Conference and concluded that $100 was a reasonable per-course savings estimate. Ruth (2018) calculated the average cost of a textbook at $79.37 per course. Hilton et al. (2014) conducted a study with seven community colleges across the nation and found the average textbook cost to be $90.61. There is less agreement over exactly how much would be saved by adopting an OER course. This study used SUNY Canton as a case study to create an approach for estimating OER savings. Methodology: Step one: identify non-OER courses from the UcanWeb Class Schedule. Step two: view the textbook lists for the classes (campus bookstore prices). Step three: calculate the average textbook price by averaging the new book and used book prices. Step four: multiply the average textbook price by the number of students in the course. Findings: The result of this calculation was straightforward. The average price of a traditional textbook is $132.45. Students potentially saved $1,091,879.94. Conclusion: (1) The result confirms what we have known: adopting OER in place of traditional textbooks and materials achieves significant savings for students, as well as for the parents and taxpayers who support them through grants and loans. (2) The average textbook savings from adopting an OER course varies depending on the size of the college as well as the number of enrolled students.
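
A minimal sketch of the four-step calculation is shown below; the course names, prices and enrolments are invented placeholders, not SUNY Canton data.

```python
# Minimal sketch of the four-step savings estimate described above.
# Every course, price and enrolment figure is a placeholder.
courses = [
    # (course, new price $, used price $, enrolled students)
    ("BIOL101", 150.00, 95.00, 60),
    ("MATH121", 120.00, 80.00, 45),
    ("PSYC125", 180.00, 130.00, 80),
]

total_savings = 0.0
for name, new_price, used_price, students in courses:
    avg_price = (new_price + used_price) / 2      # step three: average of new and used prices
    total_savings += avg_price * students         # step four: multiply by enrolment

print(f"potential textbook savings: ${total_savings:,.2f}")
```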

Keywords: textbook savings, open textbooks, textbook costs assessment, open access

Procedia PDF Downloads 45
17 Modeling of Turbulent Flow for Two-Dimensional Backward-Facing Step Flow

Authors: Alex Fedoseyev

Abstract:

This study investigates a simplified model based on the generalized hydrodynamic equations (GHE) for the simulation of turbulent flow over a two-dimensional backward-facing step (BFS) at Reynolds number Re=132000. The GHE were derived from the generalized Boltzmann equation (GBE). The GBE was obtained from first principles from the chain of Bogolubov kinetic equations and considers particles of finite dimensions. The GHE has additional terms, temporal and spatial fluctuations, compared to the Navier-Stokes equations (NSE). These terms have a timescale multiplier τ, and the GHE becomes the NSE when τ is zero. The nondimensional τ is a product of the Reynolds number and the squared length scale ratio, τ=Re*(l/L)², where l is the apparent Kolmogorov length scale and L is a hydrodynamic length scale. The BFS flow modeling results obtained by 2D calculations cannot match the experimental data for Re>450. One or two additional equations are required for a turbulence model to be added to the NSE, which typically has two to five parameters to be tuned for specific problems. It is shown that the GHE does not require an additional turbulence model, and the turbulent velocity results are in good agreement with the experimental results. A review of several studies on the simulation of flow over the BFS from 1980 to 2023 is provided. Most of these studies used different turbulence models when Re>1000. In this study, the 2D turbulent flow over a BFS with height H=L/3 (where L is the channel height) at Reynolds number Re=132000 was investigated using numerical solutions of the GHE (by a finite-element method) and compared to the solutions of the Navier-Stokes equations, the k-ε turbulence model, and experimental results. The comparison included the velocity profiles at X/L=5.33 (near the end of the recirculation zone, available from the experiment), the recirculation zone length, and the velocity flow field. The mean velocity of the NSE was obtained by averaging the solution over the number of time steps. The solution with a standard k-ε model shows a velocity profile at X/L=5.33 which has no backward flow. A standard k-ε model underpredicts the experimental recirculation zone length X/L=7.0±0.5 by a substantial amount of 20-25%, and a more sophisticated turbulence model is needed for this problem. The obtained data confirm that the GHE results are in good agreement with the experimental results for turbulent flow over a two-dimensional BFS. A turbulence model was not required in this case. The computations were stable. The solution time for the GHE is the same as or less than that for the NSE, and significantly less than that for the NSE with a turbulence model. The proposed approach was limited to 2D and only one Reynolds number. Further work will extend this approach to 3D flow and higher Re.

Keywords: backward-facing step, comparison with experimental data, generalized hydrodynamic equations, separation, reattachment, turbulent flow

Procedia PDF Downloads 27
16 Hypersonic Propulsion Requirements for Sustained Hypersonic Flight for Air Transportation

Authors: James Rate, Apostolos Pesiridis

Abstract:

In this paper, the propulsion requirements needed to achieve sustained hypersonic flight for commercial air transportation are evaluated. In addition, a design methodology is developed and used to determine the propulsive capabilities of both ramjet and scramjet engines. Twelve configurations are proposed for hypersonic flight using varying combinations of turbojet, turbofan, ramjet and scramjet engines. The optimal configuration was determined based on how well each of the configurations met the projected requirements for hypersonic commercial transport. The configurations were separated into four sub-configurations, each comprising three unique derivations. The first sub-configuration comprised four afterburning turbojets and either one or two ramjets idealised for Mach 5 cruise. The number of ramjets required was dependent on the thrust required to accelerate the vehicle from the speed at which the turbojets cut out to Mach 5 cruise. The second comprised four afterburning turbojets and either one or two scramjets, similar to the first configuration. The third used four turbojets, one scramjet and one ramjet to aid acceleration from Mach 3 to Mach 5. The fourth configuration was the same as the third, but instead of turbojets it implemented turbofan engines for the preliminary acceleration of the vehicle. From calculations which determined the fuel consumption at incremental Mach numbers, this paper found that the ideal solution would require four turbojet engines and two scramjet engines. The ideal mission profile was determined to be an 8000 km sortie, based on averaging popular long-haul flights with strong business ties, including Los Angeles to Tokyo, London to New York and Dubai to Beijing. This paper deemed that these routes would benefit from hypersonic transport links based on the previously mentioned factors. It was found that this configuration would be sufficient for the 8000 km flight to be completed in approximately two and a half hours and would consume less fuel than Concorde in doing so. However, this propulsion configuration still results in a greater fuel cost than a conventional passenger aircraft. In this regard, this investigation contributes towards the specification of the engine requirements throughout the mission profile of a hypersonic passenger vehicle. A number of assumptions had to be made for this theoretical approach, but the authors believe that the investigation lays the groundwork for appropriately framing the propulsion requirements for sustained hypersonic flight, and it provides a methodology and a focus for the development of the propulsion systems that would be required for commercial air transportation at hypersonic speeds.

Keywords: hypersonic, ramjet, propulsion, Scramjet, Turbojet, turbofan

Procedia PDF Downloads 290
15 Variation of Carbon Isotope Ratio (δ13C) and Leaf-Productivity Traits in Aquilaria Species (Thymelaeceae)

Authors: Arlene López-Sampson, Tony Page, Betsy Jackes

Abstract:

The genus Aquilaria produces a highly valuable fragrant oleoresin known as agarwood. Agarwood forms in a few trees in the wild in response to injury or pathogen attack. The resin is used in the perfume and incense industries and in medicine. Cultivation of Aquilaria species as a sustainable source of the resin is now a common strategy. Physiological traits are frequently used as a proxy for crop and tree productivity. Aquilaria species growing in Queensland, Australia, were studied to investigate the relationship between leaf productivity traits and tree growth. Specifically, 28 trees, representing 12 plus trees and 16 trees from yield plots, were selected for carbon isotope analysis (δ13C) and monitoring of six leaf attributes. Trees were grouped into four diameter classes (diameter at 150 mm above ground level), ensuring that the variability in growth of the whole population was sampled. A model averaging technique based on the Akaike information criterion (AIC) was used to identify whether leaf traits could assist in diameter prediction. Carbon isotope values were correlated with height classes and leaf traits to determine any relationship. On average, four leaves per shoot were recorded. Approximately one new leaf per week is produced by a shoot. The rate of leaf expansion was estimated at 1.45 mm day⁻¹. There were no statistical differences among diameter classes in leaf expansion rate or the number of new leaves per week (p > 0.05). The range of δ13C values in leaves of Aquilaria species was from -25.5 ‰ to -31 ‰, with an average of -28.4 ‰ (± 1.5 ‰). Only 39% of the variability in height can be explained by leaf δ13C. Leaf δ13C and nitrogen content values were positively correlated. This relationship implies that leaves with higher photosynthetic capacities also had lower intercellular carbon dioxide concentrations (ci/ca) and less depleted 13C values. Most of the predictor variables have a weak correlation with diameter (D). However, analysis of the 95% confidence set of best-ranked regression models indicated that the predictors most likely to explain growth in Aquilaria species are petiole length (PeLen), δ13C (true13C) and δ15N (true15N) values, leaf area (LA), specific leaf area (SLA) and the number of new leaves produced per week (NL.week). The model constructed with PeLen, true13C, true15N, LA, SLA and NL.week could explain 45% (R² = 0.4573) of the variability in D. The leaf traits studied gave a better understanding of the leaf attributes that could assist in the selection of high-productivity trees in Aquilaria.
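
To make the AIC-based model averaging step concrete, here is a minimal sketch of Akaike weights computed from candidate model AIC scores; the model names and AIC values are hypothetical, not those of the study.

```python
# Minimal sketch of AIC-based model averaging (Akaike weights); the
# candidate models and their AIC scores are placeholders.
import numpy as np

def akaike_weights(aic_values):
    """Akaike weights w_i = exp(-0.5*Delta_i) / sum_j exp(-0.5*Delta_j)."""
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical AIC scores for candidate diameter models built from leaf traits
models = {"PeLen + d13C": 152.3, "PeLen + d13C + SLA": 150.1, "LA + NL.week": 158.7}
weights = akaike_weights(list(models.values()))
for (name, aic), w in zip(models.items(), weights):
    print(f"{name:22s} AIC={aic:6.1f} weight={w:.3f}")

# A model-averaged prediction is then the weighted mean of the candidate
# models' predictions for a given tree, using these weights.
```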

Keywords: 13C, petiole length, specific leaf area, tree growth

Procedia PDF Downloads 470
14 Criticality of Adiabatic Length for a Single Branch Pulsating Heat Pipe

Authors: Utsav Bhardwaj, Shyama Prasad Das

Abstract:

To meet the extensive thermal management requirements of circuit card assemblies (CCAs), satellites, PCBs, microprocessors and other electronic circuitry, pulsating heat pipes (PHPs) have emerged in the recent past as one of the best technical solutions. However, industrial application of PHPs remains largely unexplored due to their poor reliability. There are several system as well as operational parameters which not only affect the performance of an operating PHP but also decide whether the PHP can operate sustainably or not. Functioning may be halted completely for particular combinations of the values of the system and operational parameters. Among the system parameters, adiabatic length is one of the most important. In the present work, the simplest single-branch PHP system with an adiabatic section has been considered. It is assumed to have only one vapour bubble and one liquid plug. First, the system has been mathematically modeled using a film evaporation/condensation model, followed by the steps of recognition of the equilibrium zone, non-dimensionalization and linearization. Then, proceeding with a periodic solution of the linearized and reduced differential equations, a stability analysis has been performed. Slow and fast variables have been identified, and an averaging approach has been used for the slow ones. Ultimately, the temporal evolution of the PHP is predicted by numerically solving the averaged equations, to determine whether the oscillations are likely to sustain or decay in time. A stability threshold has also been determined in terms of non-dimensional numbers formed by different groupings of system and operational parameters. A combined analytical and numerical approach has been used, and it has been found that for each combination of all other parameters there exists a maximum length of the adiabatic section beyond which the PHP cannot function at all. This length has been called the "Critical Adiabatic Length (L_ac)". For adiabatic lengths greater than L_ac, oscillations are found to always decay sooner or later. The dependence of L_ac on other parameters has also been checked and correlated at certain evaporator and condenser section temperatures. L_ac has been found to increase linearly with increasing evaporator section length (L_e), whereas the condenser section length (L_c) has almost no effect on it up to a certain limit. At considerably large condenser section lengths, however, L_ac is expected to decrease with increasing L_c due to increased wall friction. A rise in the static pressure (p_r) exerted by the working fluid reservoir makes L_ac rise exponentially, whereas L_ac increases cubically with increasing inner diameter (d) of the PHP. The physics underlying all these variations is also discussed. Thus, a methodology for quantifying the critical adiabatic length for any possible set of all other PHP parameters has been established.

Keywords: critical adiabatic length, evaporation/condensation, pulsating heat pipe (PHP), thermal management

Procedia PDF Downloads 194
13 Forecasting Residential Water Consumption in Hamilton, New Zealand

Authors: Farnaz Farhangi

Abstract:

Many people in New Zealand believe that access to water is inexhaustible, a belief that comes from a history of virtually unrestricted access to it. For a region like Hamilton, one of New Zealand's fastest growing cities, it is crucial for policy makers to know about future water consumption and the implementation of rules and regulations such as universal water metering. Hamilton residents use water freely and have little idea of how much water they use. Hence, one of the proposed objectives of this research is to forecast water consumption using different methods. Residential water consumption time series exhibit seasonal and trend variations. Seasonality is the pattern caused by repeating events such as weather conditions in summer and winter, public holidays, etc. The problem with this seasonal fluctuation is that it dominates the other time series components and makes it difficult to determine other variations (such as the effect of educational campaigns, regulation, etc.) in the time series. Apart from seasonality, a stochastic trend is also combined with the seasonality and has its own effect on the forecasting results. According to the forecasting literature, preprocessing (de-trending and de-seasonalization) is essential for better forecasting results, while some other researchers argue that seasonally non-adjusted data should be used. Hence, I address the question: is pre-processing essential? A wide range of forecasting methods exists, each with different pros and cons. In this research, I apply double seasonal ARIMA and an Artificial Neural Network (ANN), considering diverse elements such as seasonality and calendar effects (public and school holidays), and combine their results to find the best predicted values. My hypothesis is examined by comparing the results of the combined method (hybrid model) with those of the individual methods in terms of accuracy and robustness. In order to use ARIMA, the data should be stationary. ANNs also have successful forecasting applications for seasonal and trended time series. Using a hybrid model is a way to improve the accuracy of the methods. Because water demand is dominated by different seasonalities, I combine different methods in order to find their sensitivity to weather conditions, calendar effects or other seasonal patterns. The advantage of this combination is the reduction of errors by averaging the individual models. It is also useful when we are not sure about the accuracy of each forecasting model, and it can ease the problem of model selection. Using daily residential water consumption data from January 2000 to July 2015 in Hamilton, I show how the predictions of the different methods vary. The ANN has more accurate forecasting results than the other methods, and preprocessing is essential when using seasonal time series. Using the hybrid model reduces average forecasting errors and increases performance.
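
A minimal sketch of the hybrid idea described above is given below: two individual forecasts (standing in for the double seasonal ARIMA and ANN outputs) are combined by averaging and their errors compared; all series are invented placeholders.

```python
# Minimal sketch of forecast combination by averaging two individual
# forecasts and comparing errors. The observation and forecast arrays
# are invented placeholders, not Hamilton data.
import numpy as np

def rmse(obs, pred):
    return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(pred)) ** 2)))

observed       = np.array([152.0, 148.5, 160.2, 171.3, 165.8])  # daily demand (assumed units)
arima_forecast = np.array([150.1, 151.0, 158.7, 168.9, 170.2])
ann_forecast   = np.array([154.3, 147.2, 162.5, 173.8, 163.1])

hybrid_forecast = (arima_forecast + ann_forecast) / 2.0  # simple average combination

for name, pred in [("ARIMA", arima_forecast), ("ANN", ann_forecast), ("Hybrid", hybrid_forecast)]:
    print(f"{name:6s} RMSE = {rmse(observed, pred):.2f}")
```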

Keywords: artificial neural network (ANN), double seasonal ARIMA, forecasting, hybrid model

Procedia PDF Downloads 297
12 A Versatile Data Processing Package for Ground-Based Synthetic Aperture Radar Deformation Monitoring

Authors: Zheng Wang, Zhenhong Li, Jon Mills

Abstract:

Ground-based synthetic aperture radar (GBSAR) represents a powerful remote sensing tool for deformation monitoring of various geohazards, e.g. landslides, mudflows, avalanches, infrastructure failures, and the subsidence of residential areas. Unlike spaceborne SAR with a fixed revisit period, GBSAR data can be acquired with an adjustable temporal resolution through either continuous or discontinuous operation. However, challenges arise from processing high temporal-resolution continuous GBSAR data, including the extreme cost of computational random-access memory (RAM), the delay of displacement maps, and the loss of temporal evolution. Moreover, repositioning errors between discontinuous campaigns impede the accurate measurement of surface displacements. Therefore, a versatile package with two complete chains is developed in this study in order to process both continuous and discontinuous GBSAR data and address the aforementioned issues. The first chain is based on a small-baseline subset concept and processes continuous GBSAR images unit by unit. Images within a window form a basic unit. With this strategy, the RAM requirement is reduced to only one unit of images, and the chain can theoretically process an infinite number of images. The evolution of surface displacements can be detected as the chain keeps temporarily coherent pixels which are present only in certain units rather than in the whole observation period. The chain supports real-time processing of the continuous data, and the delay in creating displacement maps can be shortened without waiting for the entire dataset. The other chain aims to measure deformation between discontinuous campaigns. Temporal averaging is carried out on a stack of images from a single campaign in order to improve the signal-to-noise ratio of the discontinuous data and minimise the loss of coherence. The temporally averaged images are then processed by a particular interferometry procedure integrated with advanced interferometric SAR algorithms such as robust coherence estimation, non-local filtering, and selection of partially coherent pixels. Experiments are conducted using both synthetic and real-world GBSAR data. Displacement time series at the sub-millimetre level are achieved in several applications (e.g. a coastal cliff, a sand dune, a bridge, and a residential area), indicating the feasibility of the developed GBSAR data processing package for deformation monitoring in a wide range of scientific and practical applications.
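
A minimal sketch of the temporal-averaging step for a discontinuous campaign is shown below, using a synthetic complex image stack; it is illustrative only and not the package's actual implementation.

```python
# Minimal sketch: coherent temporal averaging of a synthetic complex image
# stack from one campaign to raise the signal-to-noise ratio before
# interferometry. The stack, phase ramp and noise level are all assumed.
import numpy as np

rng = np.random.default_rng(1)
n_images, rows, cols = 20, 64, 128

# synthetic complex stack: a common phase ramp plus random noise
phase = np.linspace(0, np.pi / 4, cols)[None, None, :]
stack = np.exp(1j * phase) + 0.5 * (rng.standard_normal((n_images, rows, cols))
                                    + 1j * rng.standard_normal((n_images, rows, cols)))

averaged = stack.mean(axis=0)          # temporal average over the campaign
print("noise std before:", np.std(stack.real - np.cos(phase)))
print("noise std after: ", np.std(averaged.real - np.cos(phase)))
```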

Keywords: ground-based synthetic aperture radar, interferometry, small baseline subset algorithm, deformation monitoring

Procedia PDF Downloads 132
11 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models

Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg

Abstract:

Storm surge is an abnormal water level caused by a storm. Accurate prediction of a storm surge is a challenging problem. Researchers have developed various ensemble modeling techniques to combine several individual forecasts into an overall, presumably better, forecast. Some simple ensemble modeling techniques exist in the literature. For instance, Model Output Statistics (MOS) and running mean-bias removal are widely used techniques in the storm surge prediction domain. However, these methods have some drawbacks. For instance, MOS is based on multiple linear regression and needs a long period of training data. To overcome the shortcomings of these simple methods, researchers have proposed more advanced methods. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting. This application creates a better forecast of sea level using a combination of several instances of Bayesian Model Averaging (BMA). An ensemble dressing method is based on identifying the best member forecast and using it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether the ensemble models perform better than any single forecast. To do so, we need to identify the single best forecast, and we present a methodology based on a simple Bayesian selection method to select it. Second, we present several new and simple ways to construct ensemble models. We use correlation and standard deviation as weights in combining different forecast models. Third, we use these ensembles and compare them with several existing models in the literature to forecast the storm surge level. We then investigate whether developing a complex ensemble model is indeed needed. To achieve this goal, we use a simple average (one of the simplest and most widely used ensemble models) as a benchmark. Predicting the peak level of surge during a storm, as well as the precise time at which this peak level takes place, is crucial; thus, we develop a statistical platform to compare the performance of the various ensemble methods. This statistical analysis is based on the root mean square error of the ensemble forecast during the testing period and on the magnitude and timing of the forecasted peak surge compared to the actual peak and its time. In this work, we analyze four hurricanes: hurricanes Irene and Lee in 2011, hurricane Sandy in 2012, and hurricane Joaquin in 2015. Since hurricane Irene developed at the end of August 2011 and hurricane Lee started just after Irene at the beginning of September 2011, in this study we consider them as a single contiguous hurricane event. The data set used for this study was generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally perform better than the simple average ensemble technique.
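
As a minimal sketch of the kind of weighting described above (not the authors' exact scheme), several surge forecasts can be combined with a simple average or with correlation-based weights derived from past observations; all series here are invented.

```python
# Minimal sketch: combine surge forecasts with (a) a simple average and
# (b) correlation-based weights from a training period. All numbers are
# placeholders, not NYHOPS data.
import numpy as np

obs_train = np.array([0.4, 0.9, 1.6, 2.3, 1.8, 1.1])         # past surge (m, assumed)
forecasts_train = np.array([[0.5, 1.0, 1.4, 2.0, 1.9, 1.0],   # model A
                            [0.3, 0.7, 1.7, 2.6, 1.6, 1.3],   # model B
                            [0.6, 1.1, 1.8, 2.1, 2.0, 0.9]])  # model C

# correlation of each model with observations, normalised to weights
corr = np.array([np.corrcoef(obs_train, f)[0, 1] for f in forecasts_train])
weights = corr / corr.sum()

forecasts_new = np.array([1.2, 1.5, 1.0])        # each model's forecast for a new time
simple_avg   = forecasts_new.mean()              # simple average benchmark
weighted_avg = np.dot(weights, forecasts_new)    # correlation-weighted ensemble
print(f"simple average = {simple_avg:.2f} m, weighted ensemble = {weighted_avg:.2f} m")
```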

Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction

Procedia PDF Downloads 286
10 Technology Management for Early Stage Technologies

Authors: Ming Zhou, Taeho Park

Abstract:

Early stage technologies have been particularly challenging to manage due to their numerous and substantial uncertainties. Most results coming directly out of a research lab tend to be at an early, if not infant, stage. A long and uncertain commercialization process awaits these lab results. The majority of such lab technologies go nowhere and never get commercialized for various reasons, and any effort or financial resources put into managing them turn fruitless. The high stakes naturally call for better results, which makes the patenting decision harder. A good and well protected patent goes a long way towards commercialization of the technology. Our preliminary research showed that there was no simple yet productive procedure for such valuation. Most studies to date have been theoretical and overly comprehensive, with practical suggestions largely non-existent. Hence, we attempted to develop a simple and highly implementable procedure for efficient and scalable valuation. We thoroughly reviewed the existing research, interviewed practitioners in the Silicon Valley area, and surveyed university technology offices. Instead of presenting another theoretical and exhaustive study, we aimed at developing practical guidance that a government agency and/or university office could easily deploy to get things moving to the later steps of managing early stage technologies. We provide a procedure to value technologies thriftily and make the patenting decision. A patenting index was developed using survey data and expert opinions. We identified the most important factors to be used in the patenting decision using survey ratings. The ratings then assisted us in generating relative weights for the subsequent scoring and weighted averaging step. More importantly, we validated our procedure by testing it with our practitioner contacts. Their inputs produced a general yet highly practical cut schedule; such a schedule of realistic practices has not yet been witnessed in the current research. Although a technology office may choose to deviate from our cuts, what we offer here at least provides a simple and meaningful starting point. This procedure was welcomed by practitioners in our expert panel and by university officers in our interview group. This research contributes to our current understanding and practice of managing early stage technologies by instating a heuristically simple yet theoretically solid method for the patenting decision. Our findings generated top decision factors, decision processes and decision thresholds for key parameters. This research offers a more practical perspective which further completes our extant knowledge. Our results could be affected by our sample size and even biased a little by our focus on the Silicon Valley area. Future research, blessed with a bigger data size and more insights, may want to further train and validate our parameter values in order to obtain more consistent results and to analyze our decision factors for different industries.
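
A minimal sketch of the scoring and weighted averaging step is given below; the factor names, weights, scores and cut-off are hypothetical placeholders, not the values developed in the study.

```python
# Minimal sketch of a survey-weighted patenting index: factor importance
# ratings give relative weights, a technology's factor scores are combined
# by weighted averaging, and the result is compared against a cut-off.
# All names and numbers are hypothetical.
survey_importance = {"market size": 8.6, "novelty": 9.1,
                     "enforceability": 7.4, "development cost": 6.2}
total = sum(survey_importance.values())
weights = {k: v / total for k, v in survey_importance.items()}  # normalised weights

tech_scores = {"market size": 7, "novelty": 9, "enforceability": 6, "development cost": 4}
index = sum(weights[k] * tech_scores[k] for k in weights)        # weighted average

PATENT_THRESHOLD = 6.5   # hypothetical cut from the expert panel
print(f"patenting index = {index:.2f} ->",
      "file a patent" if index >= PATENT_THRESHOLD else "hold")
```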

Keywords: technology management, early stage technology, patent, decision

Procedia PDF Downloads 312
9 Effect of Different By-Products on Growth Performance, Carcass Characteristics and Serum Parameters of Growing Simmental Crossbred Cattle

Authors: Fei Wang, Jie Meng, Qingxiang Meng

Abstract:

China is rich in straw and by-product resources, whose utilization has always been a hot topic. The objective of this study was to investigate the effect of feeding soybean straw and wine distiller's grain as a replacement for corn stover on the performance of beef cattle. Sixty Simmental × local crossbred bulls, averaging 12 months of age and 335.7 ± 39.1 kg of body weight (BW), were randomly assigned to four groups (15 animals per group) and allocated to a diet with 40% maize stover (MSD), a diet with 40% wrapped package maize silage (PMSD), a diet with 12% soybean straw plus 28% maize stover (SSD), or a diet with 12% wine distiller's grain plus 28% maize stover (WDD). Bulls were fed ad libitum a TMR consisting of 36.0% maize, 12.5% DDGS, 5.0% cottonseed meal, 4.0% soybean meal and 40.0% of the by-product as described above. The treatment period lasted 22 weeks, including 1 week of dietary adaptation. The results showed that dry matter intake (DMI) was significantly higher (P < 0.01) for the PMSD group than for the MSD and SSD groups during weeks 0-7 and 8-14, and the PMSD and WDD groups had higher (P < 0.05) DMI values than the MSD and SSD groups during the whole period. Average daily gain (ADG) values were 1.56, 1.72, 1.68 and 1.58 kg for the MSD, PMSD, SSD and WDD groups, respectively, although the differences were not significant (P > 0.05). The blood sugar concentration was significantly higher (P < 0.01) for the MSD group than for the WDD group, and the blood urea nitrogen concentration of the SSD group was lower (P < 0.05) than that of the MSD and WDD groups. No significant difference (P > 0.05) in serum total cholesterol, triglycerides or total protein content was observed among the groups. Ten bulls with similar body weight were selected at the end of the feeding trial and slaughtered for measurement of slaughter performance, carcass quality and meat chemical composition. The SSD group had significantly lower (P < 0.05) shear force and cooking loss values than the MSD and PMSD groups. The pH values of the MSD and SSD groups were lower (P < 0.05) than those of the PMSD and WDD groups. The WDD group had a higher fat color brightness (L*) value than the PMSD and SSD groups. There were no significant differences in dressing percentage, meat percentage, top grade meat weight, ribeye area, marbling score, meat color or meat chemical composition among the dietary treatments. Based on these results, the packed maize stover silage showed potential for improving the average daily gain and feed intake of beef cattle. Soybean straw had a significant effect on improving the tenderness and reducing the cooking loss of beef. In general, soybean straw and packed maize stover silage would be beneficial to nitrogen deposition and show potential to substitute for maize stover in beef cattle diets.

Keywords: beef cattle, by-products, carcass quality, growth performance

Procedia PDF Downloads 478
8 Racial Distress in the Digital Age: A Mixed-Methods Exploration of the Effects of Social Media Exposure to Police Brutality on Black Students

Authors: Amanda M. McLeroy, Tiera Tanksley

Abstract:

The 2020 movement for Black Lives, ignited by anti-Black police brutality and exemplified by the public execution of George Floyd, underscored the dual potential of social media for political activism and perilous exposure to traumatic content for Black students. This study employs Critical Race Technology Theory (CRTT) to scrutinize algorithmic anti-blackness and its impact on Black youth's lives and educational experiences. The research investigates the consequences of vicarious exposure to police brutality on social media among Black adolescents through qualitative interviews and quantitative scale data. The findings reveal an unprecedented surge in exposure to viral police killings since 2020, resulting in profound physical, socioemotional, and educational effects on Black youth. CRTT forms the theoretical basis, challenging the notion of digital technologies as post-racial and neutral, aiming to dismantle systemic biases within digital systems. Black youth, averaging over 13 hours of daily social media use, face constant exposure to graphic images of Black individuals dying. The study connects this exposure to a range of physical, socioemotional, and mental health consequences, emphasizing the urgent need for understanding and support. The research proposes questions to explore the extent of police brutality exposure and its effects on Black youth. Qualitative interviews with high school and college students and quantitative scale data from undergraduates contribute to a nuanced understanding of the impact of police brutality exposure on Black youth. Themes of unprecedented exposure to viral police killings, physical and socioemotional effects, and educational consequences emerge from the analysis. The study uncovers how vicarious experiences of negative police encounters via social media lead to mistrust, fear, and psychosomatic symptoms among Black adolescents. Implications for educators and counselors are profound, emphasizing the cultivation of empathy, provision of mental health support, integration of media literacy education, and encouragement of activism. Recognizing family and community influences is crucial for comprehensive support. Professional development opportunities in culturally responsive teaching and trauma-informed approaches are recommended for educators. In conclusion, creating a supportive educational environment that addresses the emotional impact of social media exposure to police brutality is crucial for the well-being and development of Black adolescents. Counselors, through safe spaces and collaboration, play a vital role in supporting Black youth facing the distressing effects of social media exposure to police brutality.

Keywords: black youth, mental health, police brutality, social media

Procedia PDF Downloads 26
7 Ruminal Fermentation of Biologically Active Nitrate- and Nitro-Containing Forages

Authors: Robin Anderson, David Nisbet

Abstract:

Nitrate, 3-nitro-1-propionic acid (NPA), and 3-nitro-1-propanol (NPOH) are biologically active chemicals that can accumulate naturally in rangeland grasses and forages consumed by grazing cattle, sheep, and goats. While toxic to livestock if accumulations and amounts consumed are high enough, particularly in animals with no recent exposure to the forages, these chemicals are known to be potent inhibitors of the methane-producing bacteria inhabiting the rumen. Consequently, there is interest in examining their potential use as anti-methanogenic compounds to decrease methane emissions by grazing ruminants. In the present study, rumen microbes, collected fresh from a cannulated Holstein cow maintained on a 50:50 corn-based concentrate:alfalfa diet, were mixed (10 mL fluid) in 18 x 150 mm crimp-top tubes with 0.5 g of high-nitrate barley (Hordeum vulgare; containing 272 µmol nitrate per g forage dry matter) or with NPA- or NPOH-containing milkvetch forages (Astragalus canadensis and Astragalus miser, containing 80 and 174 µmol soluble NPA or NPOH per g forage dry matter, respectively). Incubations containing 0.5 g alfalfa (Medicago sativa) were used as controls. Tubes (3 per forage) were capped and incubated anaerobically (under oxygen-free carbon dioxide) for 24 h at 39 °C, after which time the amount of total gas produced was measured via volume displacement and headspace samples were analyzed by gas chromatography to determine concentrations of hydrogen and methane. Fluid samples were analyzed by gas chromatography to measure accumulations of fermentation acids. A completely randomized analysis of variance revealed that the nitrate-containing barley and both the NPA- and NPOH-containing milkvetches significantly decreased methane production, by more than 50%, compared with populations incubated similarly with alfalfa (70.4 ± 3.6 µmol/mL incubation fluid). Accumulations of hydrogen, which typically increase when methane production is inhibited, did not differ between incubations with the nitrate-containing barley or the NPA- and NPOH-containing milkvetches and the alfalfa controls (0.09 ± 0.04 µmol/mL incubation fluid). Accumulations of fermentation acids in the incubations containing the high-nitrate barley and the NPA- and NPOH-containing milkvetches likewise did not differ from those observed in incubations containing alfalfa (123.5 ± 10.8, 36.0 ± 3.0, 17.1 ± 1.5, 3.5 ± 0.3, 2.3 ± 0.2, and 2.2 ± 0.2 µmol/mL incubation fluid for acetate, propionate, butyrate, valerate, isobutyrate, and isovalerate, respectively). This finding indicates that the microbial populations did not offset the decreased methane production through changes in the production of fermentation acids. Stoichiometric estimation of the fermentation balance revealed that more than 77% of the reducing equivalents generated during fermentation of the forages were recovered in fermentation products, and the recoveries did not differ between the alfalfa incubations and those with the high-nitrate barley or the NPA- or NPOH-containing milkvetches. Stoichiometric estimates of the amount of hexose fermented similarly did not differ between the nitrate-, NPA-, and NPOH-containing incubations and those with alfalfa, averaging 99.6 ± 37.2 µmol hexose consumed per mL of incubation fluid. These results suggest that forages containing nitrate, NPA, or NPOH may be useful for reducing methane emissions of grazing ruminants, provided the risks of toxicity can be effectively managed.
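
As a minimal illustration of the completely randomized analysis of variance described above, the following Python sketch compares 24-h methane accumulations across the four forages (three tubes each). The values, treatment labels, and follow-up test mentioned in the comments are assumptions for demonstration only, not the study's data.

```python
# Illustrative one-way ANOVA (completely randomized design) on invented methane data.
from scipy import stats

methane_umol_per_ml = {
    "alfalfa (control)":   [67.1, 70.9, 73.2],
    "high-nitrate barley": [30.5, 28.1, 33.0],
    "NPA milkvetch":       [32.8, 29.4, 31.6],
    "NPOH milkvetch":      [27.9, 34.2, 30.3],
}

f_stat, p_value = stats.f_oneway(*methane_umol_per_ml.values())
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")
# A significant P value would then be followed by pairwise comparisons of each
# forage against the alfalfa control (e.g., Dunnett's or Tukey's test).
```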

Keywords: nitrate, nitropropanol, nitropropionic acid, rumen methane emissions

Procedia PDF Downloads 97
6 Population Diversity of Dalmatian Pyrethrum Based on Pyrethrin Content and Composition

Authors: Filip Varga, Nina Jeran, Martina Biosic, Zlatko Satovic, Martina Grdisa

Abstract:

Dalmatian pyrethrum (Tanacetum cinerariifolium /Trevir./ Sch. Bip.), a species endemic to the eastern Adriatic coast, is the source of the natural insecticide pyrethrin. Pyrethrin is a mixture of six compounds (pyrethrin I and II, cinerin I and II, jasmolin I and II) that exhibits high insecticidal activity with no detrimental effects on the environment. A recently optimized matrix solid-phase dispersion (MSPD) method, using florisil as the sorbent, acetone-ethyl acetate (1:1, v/v) as the elution solvent, and anhydrous sodium sulfate as the drying agent, was used to extract pyrethrins from 10 wild populations (20 individuals per population) distributed along the Croatian coast. All six components in the extracts were qualitatively and quantitatively determined by high-performance liquid chromatography with a diode array detector (HPLC-DAD). Pearson's correlation coefficients were calculated between pyrethrin compounds, and differences between populations were tested using analysis of variance. Additionally, the correlation of each pyrethrin component with spatio-ecological variables (bioclimate, soil properties, elevation, solar radiation, and distance from the coastline) was calculated. Total pyrethrin content ranged from 0.10% to 1.35% of dry flower weight, averaging 0.58% across all individuals. Analysis of variance revealed significant differences between populations for all six pyrethrin compounds and for total pyrethrin content. On average, the lowest total pyrethrin content was found in the population from the Pelješac peninsula (0.22% of dry flower weight), in which total pyrethrin content lower than 0.18% was detected in 55% of individuals. The highest average total pyrethrin content was observed in the population from the island of Zlarin (0.87% of dry flower weight), in which total pyrethrin content higher than 1.00% was recorded in only 30% of individuals. The pyrethrin I/pyrethrin II ratio, a measure of extract quality, ranged from 0.21 (population from the island of Čiovo) to 5.88 (population from the island of Mali Lošinj), with an average of 1.77 across all individuals. By far the lowest extract quality was found in the population from Mt. Biokovo (pyrethrin I/II ratio lower than 0.72 in 40% of individuals) due to the high pyrethrin II content typical of this population. Pearson's correlation analysis revealed a highly significant positive correlation between pyrethrin I content and total pyrethrin content and a strong negative correlation between pyrethrin I and pyrethrin II. These results clearly indicate high intra- and interpopulation diversity of Dalmatian pyrethrum with regard to pyrethrin content and composition. The information obtained has potential use in plant genetic resources conservation and biodiversity monitoring. Possibly the largest potential lies in designing breeding programs aimed at increasing pyrethrin content in commercial breeding lines and in reintroducing the crop into Croatian agriculture. Acknowledgment: This work has been fully supported by the Croatian Science Foundation under the project 'Genetic background of Dalmatian pyrethrum (Tanacetum cinerariifolium /Trevir/ Sch. Bip.) insecticidal potential' (PyrDiv) (IP-06-2016-9034).
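
To make the quantitative workflow concrete, the sketch below shows how total pyrethrin content, the pyrethrin I/II quality ratio, and Pearson correlations among components could be computed from HPLC-DAD results. The column names, population labels, and values are hypothetical placeholders, not the measured data.

```python
# Hypothetical HPLC-DAD results: pyrethrin components as % of dry flower weight.
import pandas as pd

df = pd.DataFrame({
    "population":   ["Zlarin", "Zlarin", "Peljesac", "Peljesac", "Biokovo", "Biokovo"],
    "pyrethrin_I":  [0.55, 0.61, 0.10, 0.14, 0.18, 0.21],
    "pyrethrin_II": [0.20, 0.24, 0.08, 0.06, 0.30, 0.27],
    "cinerin_I":    [0.04, 0.05, 0.01, 0.02, 0.02, 0.03],
    "cinerin_II":   [0.03, 0.04, 0.01, 0.01, 0.02, 0.02],
    "jasmolin_I":   [0.01, 0.02, 0.00, 0.01, 0.01, 0.01],
    "jasmolin_II":  [0.01, 0.01, 0.00, 0.00, 0.01, 0.01],
})

components = [c for c in df.columns if c != "population"]
df["total_pyrethrin"] = df[components].sum(axis=1)
df["ratio_I_II"] = df["pyrethrin_I"] / df["pyrethrin_II"]   # extract-quality measure

print(df.groupby("population")[["total_pyrethrin", "ratio_I_II"]].mean())
print(df[components + ["total_pyrethrin"]].corr(method="pearson").round(2))
```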

Keywords: Dalmatian pyrethrum, HPLC, MSPD, pyrethrin

Procedia PDF Downloads 111
5 Photonic Dual-Microcomb Ranging with Extreme Speed Resolution

Authors: R. R. Galiev, I. I. Lykov, A. E. Shitikov, I. A. Bilenko

Abstract:

Dual-comb interferometry is based on the mixing of two optical frequency combs with slightly different line spacings, which maps the optical spectrum into the radio-frequency domain for subsequent digitizing and numerical processing. The dual-comb approach enables diverse applications, including metrology, fast high-precision spectroscopy, and distance ranging. Ordinary frequency-modulated continuous-wave (FMCW) laser-based light detection and ranging systems (LIDARs) suffer from two main disadvantages: a slow and unreliable mechanical spatial scan and the rather wide linewidth of conventional lasers, which limits speed-measurement resolution. Dual-comb distance measurements with Allan deviations down to 12 nanometers at averaging times of 13 microseconds, along with ultrafast ranging at acquisition rates of 100 megahertz allowing in-flight sampling of gun projectiles moving at 150 meters per second, were previously demonstrated. Nevertheless, the pump lasers with EDFA amplifiers made the device bulky and expensive. An alternative approach is direct coupling of the laser to a reference microring cavity. Backscattering can tune the laser to an eigenfrequency of the cavity via the so-called self-injection locking (SIL) effect. Moreover, the nonlinearity of the cavity allows solitonic frequency-comb generation in the very same cavity. In this work, we developed a fully integrated, power-efficient, electrically driven dual-microcomb source based on semiconductor lasers self-injection locked to high-quality integrated Si3N4 microresonators. We obtained robust comb generation in the 1400-1700 nm range with 150 GHz or 1 THz line spacing and measured Lorentzian linewidths below 1 kHz for stable, MHz-spaced beat notes in a GHz band using two separate chips, each pumped by its own self-injection-locked laser. A detailed investigation of the SIL dynamics allowed us to identify a turn-key operation regime even for affordable Fabry-Perot multifrequency lasers used as pumps. Importantly, such lasers are usually more powerful than the DFB lasers that were also tested in our experiments. To test the advantages of the proposed technique, we experimentally measured the minimum detectable speed of a reflective object. The narrow line of the laser locked to the microresonator provided markedly better velocity accuracy, with velocity resolution down to 16 nm/s, whereas the unlocked (non-SIL) diode laser only allowed 160 nm/s with good accuracy. The results obtained agree with our estimates and open the way to LIDARs based on compact and inexpensive lasers. Our implementation uses affordable components, including semiconductor laser diodes and commercially available silicon nitride photonic circuits with microresonators.
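
The velocity-resolution comparison rests on the Doppler relation f_D = 2v/λ, so the smallest detectable line-of-sight speed scales with the smallest resolvable beat-frequency shift, which is governed by the effective laser linewidth and averaging time. The sketch below evaluates this relation for an assumed 1550 nm operating wavelength and two illustrative frequency resolutions; the numbers are not taken from the measurement.

```python
# Doppler-limited velocity resolution: delta_v = lambda * delta_f / 2 (assumed values).
WAVELENGTH_M = 1.55e-6  # telecom-band operation within the 1400-1700 nm comb span

def velocity_resolution(delta_f_hz, wavelength_m=WAVELENGTH_M):
    """Minimum detectable line-of-sight speed for a resolvable Doppler shift delta_f."""
    return wavelength_m * delta_f_hz / 2.0

for delta_f in (0.02, 0.2):  # Hz; narrower (locked) vs. broader (free-running) effective linewidth
    print(f"delta_f = {delta_f:.2f} Hz -> delta_v = {velocity_resolution(delta_f) * 1e9:.0f} nm/s")
```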

Keywords: dual-comb spectroscopy, LIDAR, optical microresonator, self-injection locking

Procedia PDF Downloads 44
4 Wind Tunnel Tests on Ground-Mounted and Roof-Mounted Photovoltaic Array Systems

Authors: Chao-Yang Huang, Rwey-Hua Cherng, Chung-Lin Fu, Yuan-Lung Lo

Abstract:

Solar energy is one of the renewable options for reducing the CO2 emissions produced by conventional power plants. As an island frequently struck by strong typhoons and earthquakes, Taiwan urgently needs to revise its local regulations to strengthen the safety design of photovoltaic systems. Currently, the Taiwanese code for wind-resistant design of structures does not give clear provisions for photovoltaic systems, especially when they are arranged in an array format. Furthermore, when an arrayed photovoltaic system is mounted on a rooftop, the approaching flow is significantly altered by the building, leading to different pressure patterns in different areas of the photovoltaic system. In this study, an L-shaped arrayed photovoltaic system is mounted first on the wind tunnel floor and then on a building rooftop. The system consists of 60 PV panel models, each equivalent to a full-scale panel 3.0 m in depth and 10.0 m in length. Six pressure taps are installed on the upper surface of each panel model and another six on the bottom surface to measure net pressures. The wind attack angle is varied from 0° to 360° in 10° intervals to capture the worst-case wind direction. The sampling rate of the pressure scanning system is set high enough to precisely estimate peak pressures, and at least 20 samples are recorded to ensure stable ensemble averages. Each sample is equivalent to a 10-minute record in full scale. All scale factors, including the time, length, and velocity scales, are properly verified by similarity rules in the low-wind-speed wind tunnel environment. The L-shaped array is chosen to examine the pressure characteristics in the corner area. Extreme value analysis is applied to obtain the design pressure coefficient for each net pressure. The commonly used Cook-and-Mayne non-exceedance probability of 78% is set as the target for design pressure coefficients under the Gumbel distribution. The best linear unbiased estimator (BLUE) method is used for Gumbel parameter identification, and careful moving-average processing is applied to the data. Results show that when the arrayed photovoltaic system is mounted on the ground, the first row of panels experiences stronger positive pressure than when mounted on the rooftop. Owing to the flow separation occurring at the building edge, the first row of panels on the rooftop is mostly under negative pressure, whereas the last row shows positive pressures because of flow reattachment. Different areas also show different pressure patterns, which corresponds well to the area divisions for design values described in ASCE 7-16. Parametric studies reveal several secondary effects, such as rooftop edge, parapet, building aspect ratio, and row spacing effects. General comments are then made to support the proposed revision of the Taiwanese code.
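
A minimal sketch of the extreme value step is given below: peak pressure coefficients from repeated 10-minute (full-scale) samples are fitted to a Gumbel distribution, and the Cook-and-Mayne 78% non-exceedance fractile is taken as the design value. The sample values are invented, and maximum likelihood fitting is substituted for the BLUE method used in the study.

```python
# Gumbel fit of peak suction coefficients and the 78% non-exceedance design value.
import numpy as np
from scipy.stats import gumbel_r

peak_cp = np.array([-1.8, -2.1, -1.9, -2.4, -2.0, -2.2, -1.7, -2.3, -2.0, -1.9,
                    -2.5, -2.1, -1.8, -2.2, -2.0, -2.3, -1.9, -2.1, -2.4, -2.0])

loc, scale = gumbel_r.fit(-peak_cp)                    # fit the magnitudes of the suction peaks
design_cp = -gumbel_r.ppf(0.78, loc=loc, scale=scale)  # Cook-and-Mayne fractile, back to suction
print(f"Design (suction) pressure coefficient: {design_cp:.2f}")
```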

Keywords: aerodynamic force coefficient, ground-mounted, roof-mounted, wind tunnel test, photovoltaic

Procedia PDF Downloads 100
3 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra

Authors: Bitewulign Mekonnen

Abstract:

Context: This paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep learning model for classifying NIR spectra. Methodology: The research employs machine learning and deep learning techniques. Six machine learning regression models, including support vector machine regression (SVMR), partial least squares regression, extra tree regression (ETR), random forest regression, extreme gradient boosting, and principal component analysis-neural network (PCA-NN), are used to predict glucose concentration. The NIR spectral data are randomly divided into training and test sets, and the process is repeated ten times to improve generalization. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²) > 0.985. The deep learning model achieves high macro-averaged scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy. Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of otherwise indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding reference glucose concentrations are measured in increments of 20 mg/dl. The data are randomly divided into training and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can improve the prediction accuracy of glucose-relevant features. Further research is needed to explore their clinical utility in analyzing complex matrices such as blood.
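
As a minimal sketch of the repeated random-split evaluation described above, the snippet below trains several scikit-learn regressors and averages the correlation coefficient over ten random splits. Synthetic arrays stand in for the NIR spectra and reference concentrations, and a gradient boosting regressor and an MLP after PCA are used as stand-ins for the extreme gradient boosting and PCA-neural network models; none of this is the authors' code.

```python
# Repeated random-split evaluation of regression models on placeholder NIR data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor, GradientBoostingRegressor
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 256))             # placeholder spectra: 200 samples x 256 wavelengths
y = 20.0 * rng.integers(1, 20, size=200)    # reference glucose in 20 mg/dl increments

models = {
    "SVMR":   make_pipeline(StandardScaler(), SVR(C=10.0)),
    "PLSR":   PLSRegression(n_components=10),
    "ETR":    ExtraTreesRegressor(n_estimators=200, random_state=0),
    "RFR":    RandomForestRegressor(n_estimators=200, random_state=0),
    "GBR":    GradientBoostingRegressor(random_state=0),
    "PCA-NN": make_pipeline(StandardScaler(), PCA(n_components=20),
                            MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)),
}

for name, model in models.items():
    r_values = []
    for seed in range(10):                  # ten random train/test splits, as described
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
        model.fit(X_tr, y_tr)
        pred = np.ravel(model.predict(X_te))
        r_values.append(np.corrcoef(pred, y_te)[0, 1])
    print(f"{name}: mean R = {np.mean(r_values):.3f}")
```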

Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network

Procedia PDF Downloads 57
2 Analysis of Short Counter-Flow Heat Exchanger (SCFHE) Using Non-Circular Micro-Tubes Operated on Water-CuO Nanofluid

Authors: Avdhesh K. Sharma

Abstract:

A key to developing energy-efficient micro-scale heat exchanger devices is selecting a large heat-transfer-surface-to-volume ratio without spending excessively on recirculation pumps. The increased interest in short heat exchangers (SHEs) is due to the availability of advanced technologies for manufacturing micro-tubes in the range of 1 µm to 1 mm. Such SHEs using micro-tubes are highly effective for high-flux heat transfer technologies. Nanofluids are used to enhance the thermal conductivity of the recirculated coolant and thus further increase the heat transfer rate, but the higher viscosity associated with nanofluids demands more pumping power. Thus, there is a trade-off between heat transfer rate and pressure drop that depends on the micro-tube geometry. Herein, a novel design of a short counter-flow heat exchanger (SCFHE) using non-circular micro-tubes flooded with CuO-water nanofluid is conceptualized by varying the ratio of surface area to cross-sectional area of the micro-tubes, and a framework for its comparative analysis is presented. In the SCFHE concept, micro-tubes of various geometrical shapes (triangular, rectangular, and trapezoidal) are arranged row-wise to facilitate two aspects: (1) an easy flow distribution for the cold and hot streams, and (2) maximized thermal interaction with neighboring channels; an adequate distribution of rows for the cold and hot streams enables both. For the comparative analysis, a fixed cross-sectional area (comprising the flow area and the half-wall-thickness area) is assigned to each elemental cell, while the surface area is varied by selecting different micro-tube geometries in the SCFHE. An effective thermal conductivity model for the CuO-water nanofluid is adopted, while the viscosity values for the water-based nanofluid are obtained empirically. Correlations for the Nusselt number (Nu) and Poiseuille number (Po) of the micro-tubes have been derived or adopted, and entrance effects are accounted for. The thermal and hydrodynamic performances of the SCFHE are expressed in terms of effectiveness and pressure drop (or pumping power), respectively. To define the overall performance index of the SCFHE, two ratios are employed: the first relates the heat transferred between the fluid streams q to the pumping power PP (q_j/PP_j), while the second relates the effectiveness eff to the pressure drop dP (eff_j/dP_j). For the analysis, the inlet temperatures of the hot and cold streams are varied in the usual range of 20 °C to 65 °C. A fully turbulent regime is seldom encountered in micro-tubes, and the flow-regime transition occurs much earlier (at about Re = 1000); thus, Re is fixed at 900, and the uncertainty in Re due to the addition of nanoparticles to the base fluid is quantified by averaging Re. Moreover, to minimize error, the volumetric concentration is limited to at most 4%. Such a framework may help exploit the maximum peripheral surface area of the SCFHE without a severe penalty in pumping power and guide the development of advanced short heat exchangers.
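
The abstract adopts an effective thermal conductivity model for the CuO-water nanofluid without naming it; as a purely illustrative stand-in, the sketch below applies the classical Maxwell model over the 0-4% volume-fraction range and evaluates the two performance-index ratios defined above. The property values and the sample heat transfer, pumping power, effectiveness, and pressure drop figures are assumptions.

```python
# Maxwell effective conductivity (illustrative stand-in) and the two performance indices.
def maxwell_k_eff(k_f, k_p, phi):
    """Effective thermal conductivity of a dilute suspension with particle volume fraction phi."""
    return k_f * (k_p + 2 * k_f + 2 * phi * (k_p - k_f)) / (k_p + 2 * k_f - phi * (k_p - k_f))

K_WATER = 0.613   # W/(m K), assumed base-fluid conductivity near room temperature
K_CUO = 20.0      # W/(m K), assumed CuO particle conductivity (literature values vary widely)

for phi in (0.0, 0.01, 0.02, 0.04):
    print(f"phi = {phi:.0%}: k_eff = {maxwell_k_eff(K_WATER, K_CUO, phi):.4f} W/(m K)")

# Performance indices as defined in the abstract, with assumed sample values:
q, pumping_power = 150.0, 2.0   # W transferred vs. W of pumping power
eff, dP = 0.72, 8.0e3           # effectiveness (-) and pressure drop (Pa)
print("q/PP   =", q / pumping_power)
print("eff/dP =", eff / dP)
```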

Keywords: CuO-water nanofluid, non-circular micro-tubes, performance index, short counter flow heat exchanger

Procedia PDF Downloads 188
1 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion

Authors: Ali Kazemi

Abstract:

In volatile and interconnected international financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning strategies have made significant strides in forecasting market movements; however, the complex and networked nature of financial data calls for more sophisticated approaches. This study presents a method for financial market prediction that leverages the synergistic potential of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. The proposed algorithm is designed to forecast the trends of stock market indices and cryptocurrency prices, using a comprehensive dataset spanning January 1, 2015, to December 31, 2023. This period, marked by substantial volatility and transformation in financial markets, provides a solid basis for training and testing the predictive model. The algorithm integrates diverse data sources to construct a dynamic financial graph that reflects market intricacies. Daily opening, closing, high, and low prices are collected for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insight into buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, critical macroeconomic indicators, including interest rates, inflation rates, GDP growth, and unemployment rates, are integrated into the model. The GCN component learns the relational patterns among financial instruments represented as nodes in a comprehensive market graph. Edges in this graph encapsulate relationships based on co-movement patterns and sentiment correlations, enabling the model to capture the complex network of influences governing market movements. Complementing this, the LSTM component is trained on sequences of the spatial-temporal representations learned by the GCN, enriched with historical price and volume data, allowing it to capture and predict temporal market trends accurately. In a comprehensive evaluation across the stock market and cryptocurrency datasets, the GCN-LSTM algorithm demonstrated superior predictive accuracy and profitability compared with conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting daily price movements. The RMSE was 1.2%, underscoring the model's effectiveness in limiting large prediction errors, which is vital in volatile markets. Furthermore, when assessing predictive performance on directional market movements, the model achieved an accuracy of 78%, significantly outperforming the benchmark models, which averaged 65%. This level of accuracy is instrumental for strategies that depend on predicting the direction of price movements. This study showcases the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven evaluation framework. The findings promise to revolutionize investment strategies and risk management practices, offering investors and financial analysts a powerful tool for navigating the complexities of modern financial markets.
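
A minimal sketch of the GCN-LSTM fusion idea follows: at each trading day, a graph convolution mixes node features across an asset graph whose edges stand for co-movement relationships, and an LSTM then models the resulting daily embeddings to predict the next movement. The graph, feature set, layer sizes, and pooling choice are assumptions for illustration and do not reproduce the authors' architecture.

```python
# Toy GCN-LSTM fusion on made-up market data (PyTorch).
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: X' = ReLU(A_hat @ X @ W), A_hat = D^-1/2 (A + I) D^-1/2."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        a = adj + torch.eye(adj.size(0))                             # add self-loops
        deg = a.sum(dim=1)
        a_hat = a / torch.sqrt(deg.unsqueeze(0) * deg.unsqueeze(1))  # symmetric normalization
        return torch.relu(self.lin(a_hat @ x))

class GCNLSTM(nn.Module):
    def __init__(self, n_feats, gcn_dim=32, lstm_dim=64):
        super().__init__()
        self.gcn = GCNLayer(n_feats, gcn_dim)
        self.lstm = nn.LSTM(gcn_dim, lstm_dim, batch_first=True)
        self.head = nn.Linear(lstm_dim, 1)                           # e.g., next-day return of a target

    def forward(self, seq_x, adj):
        # seq_x: (T, n_assets, n_feats) daily node features (prices, volume, macro indicators, ...)
        daily = torch.stack([self.gcn(x_t, adj).mean(dim=0) for x_t in seq_x])  # (T, gcn_dim)
        out, _ = self.lstm(daily.unsqueeze(0))                       # (1, T, lstm_dim)
        return self.head(out[:, -1])                                 # prediction from the last day

torch.manual_seed(0)
adj = (torch.rand(4, 4) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()       # symmetric co-movement graph over 4 assets (made up)
seq = torch.randn(30, 4, 5)               # 30 trading days, 4 assets, 5 features each
print(GCNLSTM(n_feats=5)(seq, adj))       # one scalar prediction
```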

Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting

Procedia PDF Downloads 19