Search results for: active and reactive power
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10151

101 Backward-Facing Step Measurements at Different Reynolds Numbers Using Acoustic Doppler Velocimetry

Authors: Maria Amelia V. C. Araujo, Billy J. Araujo, Brian Greenwood

Abstract:

The flow over a backward-facing step is characterized by the presence of flow separation, recirculation and reattachment, for a simple geometry. This type of fluid behaviour takes place in many practical engineering applications, hence the interest in investigating it. Historically, fluid flows over a backward-facing step have been examined in many experiments using a variety of measuring techniques such as laser Doppler velocimetry (LDV), hot-wire anemometry, particle image velocimetry or hot-film sensors. However, some of these techniques cannot conveniently be used in separated flows or are too complicated and expensive. In this work, the applicability of the acoustic Doppler velocimetry (ADV) technique to this type of flow is investigated, at various Reynolds numbers corresponding to different flow regimes. The use of this measuring technique in separated flows is rarely reported in the literature, and most evaluations of the Reynolds number effect in separated flows rely on numerical modelling. The ADV technique has the advantage of providing nearly non-invasive measurements, which is important in resolving turbulence. The ADV Nortek Vectrino+ was used to characterize the flow in a recirculating laboratory flume at several Reynolds numbers (Reh = 3738, 5452, 7908 and 17388) based on the step height (h), in order to capture different flow regimes, and the results were compared to those obtained using other measuring techniques. To enable comparison with other researchers, the step height, expansion ratio and the measurement positions upstream and downstream of the step were reproduced. The post-processing of the ADV records was performed using a customized numerical code, which implements several filtering techniques. Subsequently, the Vectrino noise level was evaluated by computing the power spectral density of the stream-wise horizontal velocity component. The normalized mean stream-wise velocity profiles, skin-friction coefficients and reattachment lengths were obtained for each Reh. Turbulent kinetic energy, Reynolds shear stresses and normal Reynolds stresses were determined for Reh = 7908. An uncertainty analysis was carried out for the measured variables using the moving block bootstrap technique. Low noise levels were obtained after implementing the post-processing techniques, showing their effectiveness, and the errors obtained in the uncertainty analysis were generally low. For Reh = 7908, the normalized mean stream-wise velocity and turbulence profiles were compared directly with those acquired by other researchers using the LDV technique, and good agreement was found. The ADV technique proved able to characterize the flow properly over a backward-facing step, although additional caution should be taken for measurements very close to the bottom. The ADV measurements showed reliable results regarding: a) the stream-wise velocity profiles; b) the turbulent shear stress; c) the reattachment length; d) the identification of the transition from transitional to turbulent flow. Despite being a relatively inexpensive technique, acoustic Doppler velocimetry can be used with confidence in separated flows and is thus very useful for numerical model validation. However, it is very important to perform adequate post-processing of the acquired data to obtain low noise levels and thereby decrease the uncertainty.
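As an illustration of the uncertainty analysis mentioned above, the following minimal Python sketch applies a moving block bootstrap to a synthetic velocity record to estimate the standard error of its mean; the block length, number of resamples, and the synthetic signal are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def moving_block_bootstrap(signal, block_len=50, n_boot=1000, seed=0):
    """Estimate the uncertainty of the sample mean of a correlated time
    series (e.g., an ADV velocity record) with the moving block bootstrap."""
    rng = np.random.default_rng(seed)
    n = len(signal)
    n_blocks_needed = int(np.ceil(n / block_len))
    starts_max = n - block_len
    means = np.empty(n_boot)
    for b in range(n_boot):
        # Draw overlapping blocks at random start positions and stitch them together
        starts = rng.integers(0, starts_max + 1, size=n_blocks_needed)
        resample = np.concatenate([signal[s:s + block_len] for s in starts])[:n]
        means[b] = resample.mean()
    return means.mean(), means.std(ddof=1)  # bootstrap mean and standard error

if __name__ == "__main__":
    # Synthetic correlated record standing in for a stream-wise ADV velocity series (m/s)
    rng = np.random.default_rng(1)
    noise = rng.normal(0.0, 0.05, 20000)
    u = 0.30 + np.convolve(noise, np.ones(25) / 25, mode="same")
    mean_u, se_u = moving_block_bootstrap(u)
    print(f"mean u = {mean_u:.4f} m/s, bootstrap standard error = {se_u:.5f} m/s")
```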

Keywords: ADV, experimental data, multiple Reynolds number, post-processing

Procedia PDF Downloads 148
100 Protocol for Dynamic Load Distributed Low Latency Web-Based Augmented Reality and Virtual Reality

Authors: Rohit T. P., Sahil Athrij, Sasi Gopalan

Abstract:

Currently, the content entertainment industry is dominated by mobile devices. As the trends slowly shift towards Augmented/Virtual Reality applications, the computational demands on these devices are increasing exponentially, and we are already reaching the limits of hardware optimizations. This paper proposes a software solution to this problem. By leveraging the capabilities of cloud computing, we can offload the work from mobile devices to dedicated rendering servers that are far more powerful. But this introduces the problem of latency. This paper introduces a protocol that can achieve a high-performance, low-latency Augmented/Virtual Reality experience. There are two parts to the protocol. 1) In-flight compression: The main cause of latency in the system is the time required to transmit the camera frame from client to server. The round trip time is directly proportional to the amount of data transmitted, and can therefore be reduced by compressing the frames before sending. Standard compression algorithms like JPEG result in only minor size reductions. Since the images to be compressed are consecutive camera frames, there will not be many changes between two consecutive images, so inter-frame compression is preferred. Inter-frame compression can be implemented efficiently using WebGL, but the implementation of WebGL limits the precision of floating point numbers to 16 bits on most devices. This can introduce noise to the image due to rounding errors, which will add up eventually. This can be solved using an improved inter-frame compression algorithm. The algorithm detects changes between frames and reuses unchanged pixels from the previous frame. This eliminates the need for floating point subtraction, thereby cutting down on noise. The change detection is also improved drastically by taking the weighted average difference of pixels instead of the absolute difference. The kernel weights for this comparison can be fine-tuned to match the type of image to be compressed. 2) Dynamic load distribution: Conventional cloud computing architectures work by offloading as much work as possible to the servers, but this approach can cause a hit on bandwidth and server costs. The most optimal solution is obtained when the device utilizes 100% of its resources and the rest is done by the server. The protocol balances the load between the server and the client by doing a fraction of the computing on the device, depending on the power of the device and network conditions. The protocol is responsible for dynamically partitioning the tasks. Special flags are used to communicate the workload fraction between the client and the server and are updated at a constant interval of time (or frames). The whole protocol is designed so that it can be client-agnostic. Flags are available to the client for resetting the frame, indicating latency, switching mode, etc. The server can react to client-side changes on the fly and adapt accordingly by switching to different pipelines. The server is designed to effectively spread the load and thereby scale horizontally. This is achieved by isolating client connections into different processes.
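A minimal sketch of the inter-frame change-detection idea described above is shown below; the 3×3 kernel weights, threshold, and synthetic frames are illustrative assumptions rather than the protocol's actual parameters (which would run on the GPU via WebGL).

```python
import numpy as np
from scipy.ndimage import convolve

def changed_mask(prev, curr, kernel=None, threshold=8.0):
    """Flag pixels whose locally weighted average difference from the previous
    frame exceeds a threshold; unchanged pixels can be reused from `prev`."""
    if kernel is None:
        # Illustrative 3x3 weights emphasising the centre pixel
        kernel = np.array([[1, 2, 1],
                           [2, 4, 2],
                           [1, 2, 1]], dtype=float)
        kernel /= kernel.sum()
    diff = np.abs(curr.astype(float) - prev.astype(float))
    weighted = convolve(diff, kernel, mode="nearest")
    return weighted > threshold

def compress_frame(prev, curr, threshold=8.0):
    """Return only the changed pixels (flat indices + values); the receiver
    keeps its copy of the previous frame and patches the changes in."""
    mask = changed_mask(prev, curr, threshold=threshold)
    idx = np.flatnonzero(mask)
    return idx, curr.reshape(-1)[idx]

def decompress_frame(prev, idx, values):
    out = prev.reshape(-1).copy()
    out[idx] = values
    return out.reshape(prev.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.integers(0, 256, (120, 160)).astype(np.uint8)
    curr = prev.astype(int)
    curr[40:60, 50:90] += 30                      # simulate a moving object
    curr = np.clip(curr, 0, 255).astype(np.uint8)
    idx, vals = compress_frame(prev, curr)
    restored = decompress_frame(prev, idx, vals)
    print("pixels transmitted:", idx.size, "of", curr.size)
```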

Keywords: 2D kernelling, augmented reality, cloud computing, dynamic load distribution, immersive experience, mobile computing, motion tracking, protocols, real-time systems, web-based augmented reality application

Procedia PDF Downloads 74
99 Application of Alumina-Aerogel in Post-Combustion CO₂ Capture: Optimization by Response Surface Methodology

Authors: S. Toufigh Bararpour, Davood Karami, Nader Mahinpey

Abstract:

Dependence of the global economy on fossil fuels has led to a large growth in the emission of greenhouse gases (GHGs). Among the various GHGs, carbon dioxide is the main contributor to the greenhouse effect due to its huge emission amount. To mitigate the threatening effect of CO₂, carbon capture and sequestration (CCS) technologies have been studied widely in recent years. For combustion processes, three main CO₂ capture routes have been proposed: post-combustion, pre-combustion and oxyfuel combustion. Post-combustion is the most commonly used CO₂ capture process, as it can be readily retrofitted into existing power plants. Multiple advantages have been reported for post-combustion capture with solid sorbents, such as high CO₂ selectivity, high adsorption capacity, and low required regeneration energy. Chemical adsorption of CO₂ over alkali-metal-based solid sorbents such as K₂CO₃ is a promising method for the selective capture of dilute CO₂ from the huge amount of nitrogen present in the flue gas. To improve the CO₂ capture performance, K₂CO₃ is supported on a stable and porous material. Al₂O₃ has commonly been employed as the support and has enhanced the cyclic CO₂ capture efficiency of K₂CO₃. Different phases of alumina can be obtained by setting the calcination temperature of boehmite at 300, 600 (γ-alumina), 950 (δ-alumina) and 1200 °C (α-alumina). As the calcination temperature increases, the regeneration capacity of alumina increases, while the surface area decreases. However, sorbents with lower surface areas also have lower CO₂ capture capacity (except for sorbents prepared with hydrophilic support materials). To resolve this issue, a highly efficient alumina-aerogel support was synthesized with a BET surface area of over 2000 m²/g and then calcined at a high temperature. The synthesized alumina-aerogel was impregnated with K₂CO₃ at 50 wt% support/K₂CO₃, which resulted in a sorbent with remarkable CO₂ capture performance. The effect of synthesis conditions, such as the type of alcohol, solvent-to-co-solvent ratio, and aging time, on the performance of the support was investigated. The best support was synthesized using methanol as the solvent, after five days of aging, and at a solvent-to-co-solvent (methanol-to-toluene) ratio (v/v) of 1/5. Response surface methodology was used to investigate the effect of operating parameters, such as carbonation temperature and H₂O-to-CO₂ flowrate ratio, on the CO₂ capture capacity. The maximum CO₂ capture capacity, at the optimum values of the operating parameters, was 7.2 mmol CO₂ per gram of K₂CO₃. The cyclic behavior of the sorbent was examined over 20 carbonation and regeneration cycles. The alumina-aerogel-supported K₂CO₃ showed a great performance compared to unsupported K₂CO₃ and γ-alumina-supported K₂CO₃. Fundamental performance analyses and long-term thermal and chemical stability tests will be performed on the sorbent in the future. The applicability of the sorbent for a bench-scale process will be evaluated, and a corresponding process model will be established. The fundamental material knowledge and respective process development will be delivered to industrial partners for the design of a pilot-scale testing unit, thereby facilitating the industrial application of alumina-aerogel.
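The response surface methodology step can be illustrated with the short sketch below, which fits a second-order model of capture capacity in carbonation temperature and H₂O-to-CO₂ ratio and locates its stationary point; the design points and response values are synthetic placeholders, not the study's measurements.

```python
import numpy as np

def fit_quadratic_surface(T, R, y):
    """Least-squares fit of y = b0 + b1*T + b2*R + b3*T^2 + b4*R^2 + b5*T*R."""
    X = np.column_stack([np.ones_like(T), T, R, T**2, R**2, T * R])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def stationary_point(coeffs):
    """Solve grad(y) = 0 for the fitted second-order model."""
    b0, b1, b2, b3, b4, b5 = coeffs
    A = np.array([[2 * b3, b5], [b5, 2 * b4]])
    rhs = -np.array([b1, b2])
    return np.linalg.solve(A, rhs)   # (T*, R*)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic design: carbonation temperature (°C) and H2O/CO2 flowrate ratio (-)
    T = np.array([60, 60, 80, 80, 70, 70, 70, 56, 84, 70, 70], dtype=float)
    R = np.array([0.5, 1.5, 0.5, 1.5, 1.0, 0.3, 1.7, 1.0, 1.0, 1.0, 1.0])
    true = 7.0 - 0.002 * (T - 72)**2 - 1.5 * (R - 1.1)**2   # mmol CO2 / g K2CO3
    y = true + rng.normal(0, 0.05, T.size)
    c = fit_quadratic_surface(T, R, y)
    T_opt, R_opt = stationary_point(c)
    print(f"optimum near T = {T_opt:.1f} °C, H2O/CO2 = {R_opt:.2f}")
```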

Keywords: alumina-aerogel, CO₂ capture, K₂CO₃, optimization

Procedia PDF Downloads 116
98 Functional Plasma-Spray Ceramic Coatings for Corrosion Protection of RAFM Steels in Fusion Energy Systems

Authors: Chen Jiang, Eric Jordan, Maurice Gell, Balakrishnan Nair

Abstract:

Nuclear fusion, one of the most promising options for reliably generating large amounts of carbon-free energy in the future, has seen a plethora of ground-breaking technological advances in recent years. An efficient and durable “breeding blanket”, needed to ensure a reactor’s self-sufficiency by maintaining the optimal coolant temperature as well as by minimizing radiation dosage behind the blanket, remains a technological challenge for the various reactor designs for commercial fusion power plants. A relatively new dual-coolant lead-lithium (DCLL) breeder design has exhibited great potential for high-temperature (>700 °C), high-thermal-efficiency (>40%) fusion reactor operation. However, the structural material, namely reduced-activation ferritic-martensitic (RAFM) steel, is not chemically stable in contact with the molten Pb-17%Li coolant. Thus, effective corrosion-resistant coatings on RAFM steels are a pressing need if this promising reactor design is to be utilized. Solution Spray Technologies LLC (SST) is developing a double-layer ceramic coating design to address the corrosion protection of RAFM steels, using a novel solution and solution/suspension plasma spray technology through a US Department of Energy-funded project. Plasma spray is a coating deposition method widely used in many energy applications. Novel derivatives of the conventional powder plasma spray process, known as the solution-precursor and solution/suspension-hybrid plasma spray processes, are powerful methods to fabricate thin, dense ceramic coatings with the complex compositions necessary for corrosion protection in DCLL breeders. These processes can produce ultra-fine molten splats and allow fine adjustment of the coating chemistry. A thin, dense ceramic coating with a chemistry chosen for superior chemical stability in molten Pb-Li, low activation properties, and good radiation tolerance is ideal for corrosion protection of RAFM steels. A key challenge is to accommodate the coating's CTE mismatch with the RAFM substrate through the selection and incorporation of appropriate bond layers, thus allowing for enhanced coating durability and robustness. Systematic process optimization is being used to define the optimal plasma spray conditions for both the topcoat and the bond layer, and X-ray diffraction and SEM-EDS are applied to validate the chemistry and phase composition of the coatings. The plasma-sprayed double-layer corrosion-resistant coatings were also deposited onto simulated RAFM steel substrates, which are being tested separately under thermal cycling, high-temperature moist-air oxidation, and molten Pb-Li capsule corrosion conditions. Results from this testing on coated samples and comparisons with bare RAFM reference samples will be presented, together with conclusions assessing whether the new ceramic coatings are viable corrosion-prevention systems for DCLL breeders in commercial nuclear fusion reactors.

Keywords: breeding blanket, corrosion protection, coating, plasma spray

Procedia PDF Downloads 309
97 Bio-Hub Ecosystems: Profitability through Circularity for Sustainable Forestry, Energy, Agriculture and Aquaculture

Authors: Kimberly Samaha

Abstract:

The Bio-Hub Ecosystem model was developed to address a critical area of concern within the global energy market regarding biomass as a feedstock for power plants, where the lack of an economically viable business model for bioenergy facilities has resulted in the continuation of idled and decommissioned plants. This study analyzed data and submittals to the Born Global Maine Innovation Challenge, a global competition to identify process innovations supporting a ‘whole-tree’ approach of maximizing the products, byproducts, energy value and process slip-streams into a circular zero-waste design. Participating companies were at various stages of developing bioproducts, including biofuels, lignin-based products, carbon capture platforms and biochar used both as a filtration medium and as a soil amendment product. This case study presents the QCA (Qualitative Comparative Analysis) methodology of the prequalification process and the resulting techno-economic model developed for maximizing the profitability of the Bio-Hub Ecosystem through continuous conversion of system waste streams into valuable process inputs for co-hosts. A full site plan is presented for the integration of co-hosts (a biorefinery, land-based shrimp and salmon aquaculture farms, a tomato greenhouse and a hops farm) at an operating forestry-based biomass-to-energy plant in West Enfield, Maine, USA. This model and process for evaluating profitability not only proposes the integration of forestry, aquaculture and agriculture in cradle-to-cradle linkages of what have typically been linear systems, but also allows for early measurement of the circularity and impact of resource use, and for investment risk mitigation, in these systems. In this particular study, profitability is assessed at two levels: CAPEX (capital expenditures) and OPEX (operating expenditures). Given that these projects start by repurposing facilities where the industrial-level infrastructure is already built, permitted and interconnected to the grid, the addition of co-hosts first realizes a dramatic reduction in permitting, development times and costs. In addition, using the biomass energy plant’s waste streams, such as heat, hot water, CO₂ and fly ash, as valuable inputs to the co-hosts' operations significantly decreases their OPEX, increasing overall profitability for each co-host's bottom line. This case study utilizes a proprietary techno-economic model to demonstrate how utilizing the waste streams of a biomass energy plant and/or biorefinery results in a significant reduction in OPEX for both the biomass plant and the agriculture and aquaculture co-hosts. Economically viable Bio-Hubs with favorable environmental and community impacts may prove critical in garnering local and federal government support for pilot programs and more wide-scale adoption, especially for those living in severely economically depressed rural areas where aging industrial sites have been shuttered and local economies devastated.
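The CAPEX/OPEX comparison described above can be sketched, in simplified form, as follows; the cost figures, lifetime, and discount rate are purely illustrative assumptions and are not taken from the proprietary techno-economic model.

```python
def annualized_cost(capex, opex_per_year, lifetime_years, discount_rate):
    """Spread CAPEX over the project lifetime with a capital recovery factor
    and add the yearly OPEX."""
    crf = (discount_rate * (1 + discount_rate) ** lifetime_years /
           ((1 + discount_rate) ** lifetime_years - 1))
    return capex * crf + opex_per_year

if __name__ == "__main__":
    # Illustrative figures only (USD): a co-host built on a greenfield site versus one
    # co-located at an existing biomass plant supplying waste heat, hot water, CO2 and fly ash.
    standalone = annualized_cost(capex=12_000_000, opex_per_year=2_500_000,
                                 lifetime_years=20, discount_rate=0.08)
    co_hosted = annualized_cost(capex=7_000_000,          # permitted site, existing grid tie
                                opex_per_year=1_600_000,  # waste streams replace purchased inputs
                                lifetime_years=20, discount_rate=0.08)
    print(f"standalone: {standalone/1e6:.2f} M$/yr, co-hosted: {co_hosted/1e6:.2f} M$/yr")
```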

Keywords: bio-economy, biomass energy, financing, zero-waste

Procedia PDF Downloads 134
96 Effect of the Incorporation of Modified Starch on the Physicochemical Properties and Consumer Acceptance of Puff Pastry

Authors: Alejandra Castillo-Arias, Santiago Amézquita-Murcia, Golber Carvajal-Lavi, Carlos M. Zuluaga-Domínguez

Abstract:

The intricate relationship between health and nutrition has driven the food industry to seek healthier and more sustainable alternatives. A key strategy currently employed is the reduction of saturated fats and the incorporation of ingredients that align with new consumer trends. Modified starch, a polysaccharide widely used in baking, also serves as a functional ingredient to boost dietary fiber content. However, its use in puff pastry remains challenging due to the technological difficulties in achieving a buttery pastry with the necessary strength to create thin, flaky layers. This study explored the potential of incorporating modified starch into puff pastry formulations. To evaluate the physicochemical properties of wheat flour mixed with modified starch, five different flour samples were prepared: T1, T2, T3, and T4, containing 10 g, 20 g, 30 g, and 40 g of modified starch per 100 g of mixture, respectively, alongside a control sample (C) with no added starch. The analysis focused on various physicochemical indices, including the Water Absorption Index (WAI), Water Solubility Index (WSI), Swelling Power (SP), and Water Retention Capacity (WRC). The puff pastry was further characterized by color measurement and sensory analysis. For the preparation of the puff pastry dough, the flour, modified starch, and salt were mixed, followed by the addition of water until a homogeneous dough was achieved. The margarine was later incorporated into the dough, which was folded and rolled multiple times to create the characteristic layers of puff pastry. The dough was then cut into equal pieces, baked at 170°C, and allowed to cool. The results indicated that the addition of modified starch did not significantly alter the specific volume or texture of the puff pastries, as reflected by the stable WAI and SP values across the samples. However, the WRC increased with higher starch content, highlighting the hydrophilic nature of the modified starch, which necessitated additional water during dough preparation. Color analysis revealed significant variations in the L* (lightness) and a* (red-green) parameters, with no consistent relationship between the modified starch treatments and the control. However, the b* (yellow-blue) parameter showed a strong correlation across most samples, except for treatment T3. Thus, modified starch affected the a* component of the CIELAB color spectrum, influencing the reddish hue of the puff pastries. Variations in baking time due to increased water content in the dough likely contributed to differences in lightness among the samples. Sensory analysis revealed that consumers preferred the sample with a 20% starch substitution (T2), which was rated similarly to the control in terms of texture. However, treatment T3 exhibited unusual behavior in texture analysis, and the color analysis showed that treatment T1 most closely resembled the control, indicating that starch addition is most noticeable to consumers in the visual aspect of the product. In conclusion, while the modified starch successfully maintained the desired texture and internal structure of puff pastry, its impact on water retention and color requires careful consideration in product formulation. This study underscores the importance of balancing product quality with consumer expectations when incorporating modified starches in baked goods.
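For readers unfamiliar with the flour indices named above, the short sketch below shows how WAI, WSI, and SP are typically computed from a centrifugation assay; the formulas follow commonly used (Anderson-type) definitions and the sample masses are placeholders, since the study's exact protocol is not reproduced here.

```python
def flour_indices(dry_sample_g, sediment_g, dissolved_solids_g):
    """Water Absorption Index (WAI), Water Solubility Index (WSI) and
    Swelling Power (SP) from a centrifugation assay (common definitions)."""
    wai = sediment_g / dry_sample_g                     # g gel per g dry sample
    wsi = 100.0 * dissolved_solids_g / dry_sample_g     # % dissolved solids
    sp = sediment_g / (dry_sample_g - dissolved_solids_g)
    return wai, wsi, sp

if __name__ == "__main__":
    # Placeholder masses for one hypothetical flour/modified-starch blend
    wai, wsi, sp = flour_indices(dry_sample_g=2.5, sediment_g=5.1,
                                 dissolved_solids_g=0.15)
    print(f"WAI = {wai:.2f} g/g, WSI = {wsi:.1f} %, SP = {sp:.2f} g/g")
```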

Keywords: consumer preferences, modified starch, physicochemical properties, puff pastry

Procedia PDF Downloads 27
95 Common Space Production as a Solution to the Affordable Housing Problem: Its Relationship with the Squatting Process in Turkey

Authors: Gözde Arzu Sarıcan

Abstract:

Contemporary urbanization processes and spatial transformations are intensely debated across various fields of social sciences. One prominent concept in these discussions is "common spaces." Common spaces offer a critical theoretical framework, particularly for addressing the social and economic inequalities brought about by urbanization. This study examines the processes of commoning and their impacts through the lens of squatter neighborhoods in Turkey, emphasizing the importance of affordable housing. It focuses on the role and significance of these neighborhoods in the formation of common spaces, analyzing the collective actions and resistance strategies of residents. This process, which began with the construction of shelters to meet the shelter needs of low-income households migrating from rural to urban areas, has turned into low-quality squatter settlements over time. For low-income households lacking the economic power to rent or buy homes in the city, these areas provided an affordable housing solution. Squatter neighborhoods reflect the efforts of local communities to protect and develop their communal living spaces through collective actions and resistance strategies. This collective creation process involves the appropriation of occupied land as a common resource through the rules established by the commons. Organized occupations subdivide these lands, shaped through collective creation processes. For the squatter communities striving for economic and social adaptation, these areas serve as buffer zones for urban integration. In squatter neighborhoods, bonds of friendship, kinship, and compatriotism are strong, playing a significant role in the creation and dissemination of collective knowledge. Squatter areas can be described as common spaces that emerge out of necessity for low-income and marginalized groups. The design and construction of housing in squatter neighborhoods are shaped by the collective participation and skills of the residents. Streets are formed through collective decision-making and labor. Over time, the demands for housing are communicated to local authorities, enhancing the potential for commoning. Common spaces are shaped by collective needs and demands, appropriated, and transformed into potential new spaces. Common spaces are continually redefined and recreated. In this context, affordable housing becomes an essential aspect of these common spaces, providing a foundation for social and economic stability. This study evaluates the processes of commoning and their effects through the lens of squatter neighborhoods in Turkey. Communities living in squatter neighborhoods have managed to create and protect communal living spaces, especially in situations where official authorities have been inadequate. Common spaces are built on values such as solidarity, cooperation, and collective resistance. In urban planning and policy development processes, it is crucial to consider the concept of common spaces. Policies that support the collective efforts and resistance strategies of communities can contribute to more just and sustainable living conditions in urban areas. In this context, the concept of common spaces is considered an important tool in the fight against urban inequalities and in the expression and defense mechanisms of communities. By emphasizing the importance of affordable housing within these spaces, this study highlights the critical role of common spaces in addressing urban social and economic challenges.

Keywords: affordable housing, common space, squatting process, Turkey

Procedia PDF Downloads 33
94 Deep Learning for SAR Images Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR Systems are often considered to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and experimented reconstruction techniques are undeniable, the existing methods are, however, mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. From the experiments, the reconstruction performance of the proposed framework is superior to conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
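A minimal sketch of the kind of network and composite loss described above is given below (in PyTorch); the channel counts, layer widths, loss terms, and their weights are illustrative assumptions and do not reproduce the authors' actual architecture or cost function.

```python
import torch
import torch.nn as nn

class PolAugmentCNN(nn.Module):
    """Toy fully convolutional network mapping dual/hybrid-pol input channels
    to pseudo full-pol output channels (channel counts are assumptions)."""
    def __init__(self, in_ch=4, out_ch=9, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def composite_loss(pred, target, alpha=1.0, beta=0.1):
    """Combine several terms: a pixel-wise term plus a term on total
    backscattered power, standing in for the scattering-property terms
    described in the abstract."""
    pixel = nn.functional.l1_loss(pred, target)
    span = nn.functional.mse_loss(pred.sum(dim=1), target.sum(dim=1))
    return alpha * pixel + beta * span

if __name__ == "__main__":
    model = PolAugmentCNN()
    x = torch.randn(2, 4, 64, 64)      # dual/hybrid-pol patches (synthetic)
    y = torch.randn(2, 9, 64, 64)      # full-pol target terms (synthetic)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss = composite_loss(model(x), y)
    loss.backward()
    opt.step()
    print("one training step done, loss =", float(loss))
```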

Keywords: SAR image, polarimetric SAR image, convolutional neural network, deep learning, deep neural network

Procedia PDF Downloads 70
93 Digital Holographic Interferometric Microscopy for the Testing of Micro-Optics

Authors: Varun Kumar, Chandra Shakher

Abstract:

Micro-optical components such as microlenses and microlens arrays have numerous engineering and industrial applications: collimation of laser diodes, imaging devices for sensor systems (CCD/CMOS, document copier machines, etc.), homogenizing the beams of high-power lasers, critical components in Shack-Hartmann sensors, and fiber-optic coupling and optical switching in communication technology. Micro-optical components have also become an alternative for applications where miniaturization and reduction of alignment and packaging costs are necessary. Compliance with high-quality standards in the manufacturing of micro-optical components is a precondition for competitiveness in worldwide markets; therefore, high demands are put on quality assurance. For the quality assurance of these lenses, an economical measurement technique is needed. For cost and time reasons, the technique should be fast, simple (for production reasons), and robust, with high resolution. It should provide non-contact, non-invasive and full-field information about the shape of the micro-optical component under test. Interferometric techniques are non-contact and non-invasive and provide full-field information about the shape of optical components. Conventional interferometric techniques such as holographic interferometry or Mach-Zehnder interferometry are available for the characterization of microlenses; however, these techniques require more experimental effort and are also time-consuming. Digital holography (DH) overcomes these problems. Digital holographic microscopy (DHM) allows one to extract both the amplitude and phase information of a wavefront transmitted through a transparent object (microlens or microlens array) from a single recorded digital hologram by using numerical methods. One can also reconstruct the complex object wavefront at different depths thanks to numerical reconstruction. Digital holography provides axial resolution in the nanometer range, while the lateral resolution is limited by diffraction and the size of the sensor. In this paper, a Mach-Zehnder-based digital holographic interferometric microscope (DHIM) is used for the testing of transparent microlenses. The advantage of using the DHIM is that distortions due to aberrations in the optical system are avoided by the interferometric comparison of the reconstructed phase with and without the object (microlens array). In the experiment, a digital hologram is first recorded in the absence of the sample (microlens array) as a reference hologram. A second hologram is recorded in the presence of the microlens array. The presence of the transparent microlens array induces a phase change in the transmitted laser light. The complex amplitude of the object wavefront in the presence and absence of the microlens array is reconstructed using the Fresnel reconstruction method. From the reconstructed complex amplitude, one can evaluate the phase of the object wave in the presence and absence of the microlens array. The phase difference between the two states of the object wave provides information about the optical path length change due to the shape of the microlens. With knowledge of the refractive indices of the microlens array material and air, the surface profile of the microlens array is evaluated. The sag and radius of curvature of the microlenses are evaluated and reported. The measured sag agrees well, within experimental limits, with the manufacturer's specification.
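The final step described above, converting the measured phase difference into a surface profile, can be sketched as follows; the wavelength, refractive index, lens size, and synthetic phase profile are placeholder assumptions, and the relations used are the standard ones for a thin transmissive element (height from optical path difference, radius of curvature from sag and semi-diameter).

```python
import numpy as np

def height_from_phase(delta_phi, wavelength_m, n_lens, n_medium=1.0):
    """Surface height profile from the unwrapped phase difference of a thin
    transmissive element: h = (lambda / (2*pi*(n_lens - n_medium))) * dphi."""
    return delta_phi * wavelength_m / (2 * np.pi * (n_lens - n_medium))

def radius_of_curvature(sag_m, semi_diameter_m):
    """Spherical-cap relation between sag, semi-diameter and radius."""
    return (semi_diameter_m**2 + sag_m**2) / (2 * sag_m)

if __name__ == "__main__":
    wavelength = 632.8e-9      # He-Ne laser wavelength (assumed)
    n_lens = 1.457             # fused-silica-like refractive index (assumed)
    r = 0.5 * 150e-6           # microlens semi-diameter (assumed)
    # Synthetic unwrapped phase difference across one microlens
    x = np.linspace(-r, r, 201)
    phase = 2 * np.pi * (n_lens - 1) / wavelength * 2.5e-6 * (1 - (x / r) ** 2)
    h = height_from_phase(phase, wavelength, n_lens)
    sag = h.max() - h.min()
    print(f"sag = {sag*1e6:.2f} um, R = {radius_of_curvature(sag, r)*1e6:.1f} um")
```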

Keywords: micro-optics, microlens array, phase map, digital holographic interferometric microscopy

Procedia PDF Downloads 499
92 The Effects of Branding on Profitability of Banks in Ghana

Authors: Evans Oteng, Clement Yeboah, Alexander Otechere-Fianko

Abstract:

In today’s economy, despite achievements and advances in banking and financial institutions, there are challenges that will require intensive efforts on the part of the banks in Ghana. The perceived decline in the profitability of banks seems to have emanated from ineffective branding. Hence, the purpose of this quantitative descriptive-correlational study was to examine the effects of branding on the profitability of banks in Ghana. The researchers purposively sampled 116 banks in Ghana. Self-developed Likert-scale questionnaires were administered to the finance officers of the financial institutions. The results were found to be statistically significant, F(1, 114) = 4.50, p = .036. This indicates that those banks in Ghana with good branding practices have strong marketing tools to identify and sell their products and services and, as such, have a big market share. The correlation coefficients indicate that branding has a positive and statistically significant correlation with profitability (r = .207, p < 0.05), which signifies that as branding increases, the return-on-equity profitability indicator improves, and vice versa. Future researchers can consider other factors beyond branding, such as online banking. The study has significant implications for the success and competitive advantage of banks, in that effective branding allows them to differentiate themselves from their competitors. A strong and unique brand identity can help a bank stand out in a crowded market, attract customers, and build customer loyalty. This can lead to increased market share and profitability. Branding influences customer perception and trust. A well-established and reputable brand can create a positive image in the minds of customers, enhancing their confidence in the bank's products and services. This can result in increased customer acquisition, customer retention and a positive impact on profitability. Banks with strong brands can leverage their reputation and customer trust to cross-sell additional products and services. When customers have confidence in the brand, they are more likely to explore and purchase other offerings from the same institution. Cross-selling can boost revenue streams and profitability. Successful branding can open up opportunities for brand extensions and diversification into new products or markets. Banks can leverage their trusted brand to introduce new financial products or expand their presence into related areas, such as insurance or investment services. This can lead to additional revenue streams and improved profitability. The study also has implications for education: increased profitability of banks due to effective branding can result in more financial resources being available for corporate social responsibility (CSR) activities. Banks may invest in educational initiatives, such as scholarships, grants, research projects, and sponsorships, to support the education sector in Ghana. The study likewise has implications for logistics and supply chain management: strong branding can create trust and credibility among customers, leading to increased customer loyalty. This loyalty can positively impact the bank's relationships with its suppliers and logistics partners. It can result in better negotiation power, improved supplier relationships, and enhanced supply chain coordination, ultimately leading to more efficient and cost-effective logistics operations.
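The form of the statistical analysis reported above (a Pearson correlation and a simple regression with an F(1, n-2) test) can be reproduced in outline with the hedged sketch below; the data are simulated, and only the structure of the test matches the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 116                                  # number of sampled banks
branding = rng.normal(3.5, 0.8, n)       # Likert-scale branding score (simulated)
roe = 0.05 + 0.02 * branding + rng.normal(0, 0.05, n)   # return on equity (simulated)

# Pearson correlation between branding and profitability
r, p_corr = stats.pearsonr(branding, roe)

# Simple linear regression; F(1, n-2) is equivalent to the squared t-statistic
res = stats.linregress(branding, roe)
F = res.rvalue**2 * (n - 2) / (1 - res.rvalue**2)

print(f"r = {r:.3f} (p = {p_corr:.3f})")
print(f"slope = {res.slope:.4f}, F(1, {n - 2}) = {F:.2f}, p = {res.pvalue:.3f}")
```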

Keywords: branding, profitability, competitors, customer loyalty, customer retention, corporate social responsibility, cost-effective, logistics operations

Procedia PDF Downloads 77
91 Deep Learning Based Polarimetric SAR Images Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring . SAR Systems are often considered to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and experimented reconstruction techniques are undeniable, the existing methods are, however, mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. From the experiments, the reconstruction performance of the proposed framework is superior to conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.

Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry

Procedia PDF Downloads 91
90 Optical-Based Lane-Assist System for Rowing Boats

Authors: Stephen Tullis, M. David DiDonato, Hong Sung Park

Abstract:

Rowing boats (shells) are often steered by a small rudder operated by one of the backward-facing rowers; the attention required of that athlete then slightly decreases the power that the athlete can provide. Reducing the steering distraction would therefore increase the overall boat speed. Races are straight 2000 m courses with each boat in a 13.5 m wide lane marked by small (~15 cm) widely-spaced (~10 m) buoys, and the boat trajectory is affected by both cross-currents and winds. An optical buoy recognition and tracking system has been developed that provides the boat’s location and orientation with respect to the lane edges. This information is provided to the steering athlete either as a simple overlay on a video display, or fed to a simplified autopilot system giving steering directions to the athlete or directly controlling the rudder. The system is then effectively a “lane-assist” device, but with small, widely-spaced lane markers viewed from a very shallow angle due to constraints on camera height. The image is captured with a lightweight 1080p webcam, and most of the image analysis is done in OpenCV. The colour RGB image is converted to grayscale using the difference of the red and blue channels, which provides good contrast between the red/yellow buoys and the water, sky and land background, white reflections and noise. Buoy detection is done with thresholding within a tight mask applied to the image. Robust linear regression, using Tukey’s biweight estimator on the previously detected buoy locations, is used to develop the mask; this avoids the false detection of noise such as waves (reflections) and, in particular, buoys in other lanes. The robust regression also provides the current lane edges in the camera frame, which are used to calculate the displacement of the boat from the lane centre (lane location) and its yaw angle. The intersection of the detected lane edges provides a lane vanishing point, and the yaw angle can be calculated simply from the displacement of this vanishing point from the camera axis and the image plane distance. Lane location is simply based on the lateral displacement of the vanishing point from any horizontal cut through the lane edges. The boat lane position and yaw are currently fed to what is essentially a stripped-down marine autopilot system. Currently, only the lane location is used as input to a PID controller of a rudder actuator, with integrator anti-windup to deal with saturation of the rudder angle. Low Kp and Kd values avoid unnecessarily fast returns to the lane centreline and limit the response to noise, and limiters can be used to avoid lane departure and disqualification. Yaw is not used as a control input, as cross-winds and currents can cause a straight course with considerable yaw or crab angle. Mapping of the controller against rudder angle “overall effectiveness” has not been finalized: very large rudder angles stall and have decreased turning moments, while at less extreme angles the increased rudder drag slows the boat and upsets boat balance. The full system has many features similar to automotive lane-assist systems, but with the added constraints of the lane markers, camera positioning, control response and noise increasing the challenge.
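A minimal sketch of the rudder controller described above, with output saturation and integrator anti-windup by conditional integration, is given below; the gains, rudder limit, and the crude boat-response stand-in are illustrative assumptions.

```python
class RudderPID:
    """PID with output saturation and conditional-integration anti-windup,
    driving rudder angle from the boat's lateral displacement in the lane."""
    def __init__(self, kp=0.8, ki=0.05, kd=0.3, limit_deg=15.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.limit = limit_deg
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, lane_offset_m, dt):
        error = -lane_offset_m                 # steer back toward the centreline
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        unsat = self.kp * error + self.ki * self.integral + self.kd * derivative
        out = max(-self.limit, min(self.limit, unsat))
        # Anti-windup: only accumulate the integral while the rudder is not saturated
        if out == unsat:
            self.integral += error * dt
        return out

if __name__ == "__main__":
    pid = RudderPID()
    offset, dt = 1.2, 0.1                      # boat starts 1.2 m off-centre
    for step in range(5):
        rudder = pid.update(offset, dt)
        offset *= 0.8                          # crude stand-in for boat response
        print(f"step {step}: rudder = {rudder:+.2f} deg, offset = {offset:.2f} m")
```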

Keywords: auto-pilot, lane-assist, marine, optical, rowing

Procedia PDF Downloads 132
89 Techno-Economic Assessment of Distributed Heat Pumps Integration within a Swedish Neighborhood: A Cosimulation Approach

Authors: Monica Arnaudo, Monika Topel, Bjorn Laumert

Abstract:

Within the Swedish context, the current trend of relatively low electricity prices promotes the electrification of the energy infrastructure. The residential heating sector takes part in this transition through a proposed switch from a centralized district heating system towards a distributed heat pump-based setting. When it comes to urban environments, two issues arise. The first, seen from an electricity-sector perspective, is that existing networks are limited with regard to their installed capacities; additional electric loads, such as heat pumps, can cause severe overloads on crucial network elements. The second, seen from a heating-sector perspective, is that indoor comfort conditions can become difficult to maintain when the operation of the heat pumps is limited by the risk of overloading the distribution grid. Furthermore, the uncertainty of future electricity market prices introduces an additional variable. This study aims at assessing the extent to which distributed heat pumps can penetrate an existing heat energy network while respecting the technical limitations of the electricity grid and the thermal comfort levels in the buildings. In order to account for the multi-disciplinary nature of this research question, a cosimulation modeling approach was adopted, in which each energy technology is modeled in its customized simulation environment. As part of the cosimulation methodology, a steady-state power flow analysis in pandapower was used to model the electrical distribution grid, a thermal balance model of a reference building was implemented in EnergyPlus to account for space heating, and a fluid-cycle model of a heat pump was implemented in JModelica to account for the actual heating technology. With the models in place, different scenarios based on forecasted electricity market prices were developed for both present and future conditions of Hammarby Sjöstad, a neighborhood located in the south-east of Stockholm (Sweden). For each scenario, the technical and comfort conditions were assessed. Additionally, the average cost of heat generation was estimated in terms of the levelized cost of heat. This indicator enables a techno-economic comparison among the different scenarios. In order to evaluate the levelized cost of heat, a yearly performance simulation of the energy infrastructure was implemented. The scenarios based on current electricity prices show that distributed heat pumps can replace the district heating system by covering up to 30% of the heating demand. By lowering the minimum accepted indoor temperature of the apartments by 2°C, this level of penetration can increase to 40%. In the future scenarios, if electricity prices increase, as is most likely expected within the next decade, the penetration of distributed heat pumps may be limited to 15%. In terms of the levelized cost of heat, residential heat pump technology becomes competitive only in a scenario of decreasing electricity prices; in this case, the district heating system is characterized by an average cost of heat generation 7% higher than the distributed heat pump option.
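The grid-side check described above, whether added heat pump load overloads a distribution feeder, can be illustrated with a minimal pandapower sketch; the feeder layout, line type, and load values are placeholders and do not represent the Hammarby Sjöstad network.

```python
import pandapower as pp

# Toy low-voltage feeder: external grid -> line -> load bus
net = pp.create_empty_network()
b0 = pp.create_bus(net, vn_kv=0.4)
b1 = pp.create_bus(net, vn_kv=0.4)
pp.create_ext_grid(net, bus=b0)
pp.create_line(net, from_bus=b0, to_bus=b1, length_km=0.3, std_type="NAYY 4x50 SE")

base_load_mw = 0.08          # existing residential load (placeholder)
heat_pump_mw = 0.04          # added electric heat pump load (placeholder)
pp.create_load(net, bus=b1, p_mw=base_load_mw + heat_pump_mw, q_mvar=0.01)

pp.runpp(net)                # steady-state power flow
loading = net.res_line.loading_percent.iloc[0]
v_pu = net.res_bus.vm_pu.iloc[1]
print(f"line loading = {loading:.1f} %, voltage at load bus = {v_pu:.3f} p.u.")
```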

Keywords: cosimulation, distributed heat pumps, district heating, electrical distribution grid, integrated energy systems

Procedia PDF Downloads 150
88 Official Seals on the Russian-Qing Treaties: Material Manifestations and Visual Enunciations

Authors: Ning Chia

Abstract:

Each of the three different language texts (Manchu, Russian, and Latin) of the 1689 Treaty of Nerchinsk bore official seals from Imperial Russia and Qing China. These seals have received no academic attention, yet they can reveal a layered and shared material, cultural, political, and diplomatic world of the time in Eastern Eurasia. The very different seal selections made by the two empires when ratifying the Treaty of Beijing in 1860 have likewise received no scholarly attention; they too can explicate a tremendously changed relationship with visual and material manifestation. Exploring primary sources in the Manchu, Russian, and Chinese languages as well as the images of the seals themselves, this study investigates the reasons and purposes of utilizing official seals for treaty agreements. A refreshed understanding of Russian-Qing diplomacy will be developed by pursuing the following aspects: (i) analyzing the iconographic meanings of each seal insignia and unearthing a competitive, yet symbol-delivered and seal-generated, 'dialogue' between the two empires; (ii) contextualizing treaty seals within the historical seal cultures, and discovering how the domestic seal system in each empire’s political institutions developed into treaty-defined bilateral relations; (iii) expounding the confidence placed in seals in each empire’s daily governing routines, and interpreting the trust in the seal as a pledge sought from the opposing negotiator to fulfill the treaty terms; (iv) contrasting the two seal traditions along two civilizational lines, Eastern vs. Western, and dissecting how the two styles of seal emblems affected the cross-cultural understanding or misunderstanding between the two empires; (v) comprehending history-making events from substantial resources such as the treaty seals, and grasping why the seals for the two treaties, so different in both visual design and symbolic value, were chosen in the two eras of the relationship; (vi) correlating the materialized seal 'expression' with the imperial worldviews based on each empire’s national or power identity, and probing the seal-represented 'rule under Heaven' assumption of China and Russia's rising role in 'European-American imperialism … centered on East Asia' (Victor Shmagin, 2020). In conclusion, the impact of official seals on diplomatic treaties requires profound knowledge of seal history, insignia culture, and emblem belief to comprehend. The official seals of both Imperial Russia and Qing China belonged to a particular art of statecraft in a specific material and visual form. Once utilized in diplomatic treaties, the meticulously decorated and politically institutionalized seals were transformed from the determinant means of domestic administration and social control into markers of an empire’s sovereign authority. Overlooked in historical scholarship, the insignia seal created a thread of 'visual contest' between the two rival powers. Through this material lens, scholarly knowledge of the Russian-Qing diplomatic relationship will be significantly upgraded. Connecting Russian studies, Qing/Chinese studies, and Eurasian studies, this study also ties material culture, political culture, and diplomatic culture together, and it promotes the study of official seals and emblem symbols in worldwide diplomatic history.

Keywords: Russia-Qing diplomatic relation, Treaty of Beijing (1860), Treaty of Nerchinsk (1689), Treaty seals

Procedia PDF Downloads 207
87 Signature Bridge Design for the Port of Montreal

Authors: Juan Manuel Macia

Abstract:

The Montreal Port Authority (MPA) wanted to build a new road link via Souligny Avenue to increase the fluidity of goods transported by truck in the Viau Street area of Montreal and to mitigate the current traffic problems on Notre-Dame Street. To achieve better integration and acceptance of the project within the neighboring residential surroundings, the project needed to include architectural integration, bringing artistic and landscaping components to the bridge design. The MPA primarily required direct truck access to the Port of Montreal with a direct connection to the future Assomption Boulevard planned by the City of Montreal and, thus, direct access to Souligny Avenue. The MPA also required other key aspects to be considered in the proposal and development of the project, such as the layout of road and rail configurations, the reconstruction of underground structures, the relocation of power lines, the installation of lighting systems, improvements to traffic signage and communication systems, the construction of new access ramps, pavement reconstruction, and a summary assessment of the structural capacity of an existing service tunnel. The identification of the various possible scenarios began by identifying all the constraints related to the numerous infrastructures located in the area of the future link between the port and the future extension of Souligny Avenue, involving interaction among several disciplines and technical specialties. Several viaduct- and tunnel-type geometries were studied to link the port road to the right-of-way north of Notre-Dame Street and to improve traffic flow at the railway corridor. The proposed design took into account the existing access points to the Port of Montreal, the built environment of the MPA site, the provincial and municipal rights-of-way, and the future Notre-Dame Street layout planned by the City of Montreal. These considerations required an engineering structure with a span of over 60 m to free up a corridor for the future urban fabric of Notre-Dame Street. The best option for crossing this span was the design and construction of a curved bridge over Notre-Dame Street, essentially a structure with a deck formed by a reinforced concrete slab on steel box girders with a single span of 63.5 m. The foundation units were defined as pier-cap-type abutments on drilled shafts socketed into bedrock, with MSE-type walls at the approaches. The configuration of a single-span curved structure posed significant design and construction challenges, considering the major constraints of the project site, a design-for-durability approach, and the need to guarantee optimum performance over a 75-year service life in accordance with the client's needs and the recommendations and requirements defined by the standards used for the project. These aspects, together with the need to include architectural and artistic components, made it possible to design, build, and integrate a signature infrastructure project with a sustainable approach, from which the MPA, commuters, and the city of Montreal and its residents will benefit.

Keywords: curved bridge, steel box girder, medium span, simply supported, industrial and urban environment, architectural integration, design for durability

Procedia PDF Downloads 71
86 Exploring the Cultural Values of Nursing Personnel Utilizing Hofstede's Cultural Dimensions

Authors: Ma Chu Jui

Abstract:

Culture plays a pivotal role in shaping societal responses to change and fostering adaptability. In the realm of healthcare provision, hospitals serve as dynamic settings molded by the cultural consciousness of healthcare professionals. This intricate interplay extends to their expectations of leadership, communication styles, and attitudes towards patient care. Recognizing the cultural inclinations of healthcare professionals becomes imperative in navigating this complex landscape. This study will utilize Hofstede's Value Survey Module 2013 (VSM 2013) as a comprehensive analytical tool. The targeted participants for this research are in-service nursing professionals with a tenure of at least three months, specifically employed in the nursing department of an Eastern hospital. This quantitative approach seeks to quantify diverse cultural tendencies among the targeted nursing professionals, elucidating not only abstract cultural concepts but also revealing their cultural inclinations across different dimensions. The study anticipates gathering between 400 and 500 responses, ensuring a robust dataset for a comprehensive analysis. The focused approach on nursing professionals within the Eastern hospital setting enhances the relevance and specificity of the cultural insights obtained. The research aims to contribute valuable knowledge to the understanding of cultural tendencies among in-service nursing personnel in the nursing department of this specific Eastern hospital. The VSM 2013 will initially be distributed to this specific group to collect responses, aiming to calculate scores on each of Hofstede's six cultural dimensions—Power Distance Index (PDI), Individualism vs. Collectivism (IDV), Uncertainty Avoidance Index (UAI), Masculinity vs. Femininity (MAS), Long-Term Orientation vs. Short-Term Normative Orientation (LTO), and Indulgence vs. Restraint (IVR). The study unveils a significant correlation between the cultural dimensions and healthcare professionals' tendencies: understanding leadership expectations through PDI, grasping behavioral patterns via IDV, acknowledging risk acceptance through UAI, and understanding long-term and short-term behaviors through LTO. These tendencies extend to communication styles and attitudes towards patient care. These findings provide valuable insights into the nuanced interconnections between cultural factors and healthcare practices. Through a detailed analysis of the varying levels of these cultural dimensions, we gain a comprehensive understanding of the predominant inclinations among the majority of healthcare professionals. This nuanced perspective adds depth to our comprehension of how cultural values shape their approach to leadership, communication, and patient care, contributing to a more holistic understanding of the healthcare landscape. A profound comprehension of the cultural paradigms embraced by healthcare professionals holds transformative potential. Beyond mere understanding, it acts as a catalyst for elevating the caliber of healthcare services. This heightened awareness fosters cohesive collaboration among healthcare teams, paving the way for the establishment of a unified healthcare ethos. By cultivating shared values, our study envisions a healthcare environment characterized by enhanced quality, improved teamwork, and ultimately, a more favorable and patient-centric healthcare landscape. In essence, our research underscores the critical role of cultural awareness in shaping the future of healthcare delivery.

Keywords: hofstede's cultural, cultural dimensions, cultural values in healthcare, cultural awareness in nursing

Procedia PDF Downloads 65
85 Dietary Diversification and Nutritional Education: A Strategy to Improve Child Food Security Status in Rural Mozambique

Authors: Rodriguez Diego, Del Valle Martin, Hargreaves Matias, Riveros Jose Luis

Abstract:

Nutrient deficiencies due to a diet that is poor in both quantitative and qualitative terms are prevalent throughout the developing world, especially in sub-Saharan Africa. Children and women of childbearing age are especially vulnerable. Limited availability, access, and intake of animal foods at home, and a lack of knowledge about their value in the diet and the role they play in health, contribute to poor diet quality. Poor bioavailability of micronutrients in diets based on foods high in fiber and phytates, and the low content of some micronutrients in these foods, are further factors to consider. Goats are deeply embedded in almost every sub-Saharan African rural culture, generally kept for their milk, meat, hair, or leather. Goats have played an important role in African social life, especially in food security. Goat meat has good properties for human wellbeing, with a special role in lower-income households. It provides high-quality protein (20 g protein/100 g meat) including all essential amino acids, a favorable unsaturated/saturated fatty acid ratio, and it is an important source of B vitamins with high micronutrient bioavailability. Mozambique has major food security problems, with poor food access and utilization, undiversified diets, chronic poverty, and child malnutrition. Our objective was to design a nutritional intervention based on dietary diversification, nutritional education, cultural beliefs, and local resources, aimed at strengthening the food security of children at Barrio Broma village (15°43'58.78"S; 32°46'7.27"E) in Chitima, Mozambique. Two surveys were conducted, first of socio-productive local databases and then of 100 rural households, covering livelihoods, food diversity, and anthropometric measurements in children under 5 years. Our results indicate that the main economic activity is goat production, based on a native breed with two deliveries per year in the absence of any management. Adult goats weighed 27.2±10.5 kg and reached a height of 63.5±3.8 cm. The data showed high levels of poverty, with a food diversity score of 2.3 (on a 0-12 point scale), where only 30% of households consume protein and 13% consume iron, zinc, and vitamin B12. The main constraints to food security were poor access to water and low income to buy food. Our dietary intervention was based on improving diet quality by increasing access to dried goat meat, fresh vegetables, and legumes, and improving their utilization through a nutritional education program. This proposal was based on local culture and living conditions, characterized by the absence of electric power and drinkable water. The proposed drying process would preserve the food under local conditions, guaranteeing food safety for a longer period. Additionally, an ancient local drying technique was rescued and used. Moreover, this kind of dietary intervention would be the most efficient way to improve infant nutrition by delivering macro- and micronutrients on time to these vulnerable populations.
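
As one way to make the reported 0-12 food diversity score concrete, the following sketch assumes an FAO-style household dietary diversity score counted over twelve food groups; the actual grouping used in the household survey may differ.

```python
# Minimal sketch of a 0-12 household dietary diversity score, assuming the
# FAO-style twelve food groups; the exact grouping used in the survey may differ.
FOOD_GROUPS = [
    "cereals", "roots_tubers", "vegetables", "fruits", "meat", "eggs",
    "fish_seafood", "legumes_nuts", "milk_products", "oils_fats",
    "sugar_honey", "miscellaneous",
]

def dietary_diversity_score(consumed_24h):
    """Count how many of the reference food groups were consumed (0-12)."""
    return sum(1 for group in FOOD_GROUPS if group in consumed_24h)

# Hypothetical household recall: only cereals and vegetables in the last 24 h.
household = {"cereals", "vegetables"}
print(dietary_diversity_score(household))   # -> 2, close to the 2.3 mean reported
```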

Keywords: child malnutrition, dietary diversification, food security, goat meat

Procedia PDF Downloads 303
84 An Interdisciplinary Maturity Model for Accompanying Sustainable Digital Transformation Processes in a Smart Residential Quarter

Authors: Wesley Preßler, Lucie Schmidt

Abstract:

Digital transformation is playing an increasingly important role in the development of smart residential quarters. In order to accompany and steer this process, and ultimately make the success of the transformation efforts measurable, it is helpful to use an appropriate maturity model. However, conventional maturity models for digital transformation focus primarily on the evaluation of processes and neglect the information and power imbalances between the stakeholders, which affects the validity of the results. The Multi-Generation Smart Community (mGeSCo) research project is developing an interdisciplinary maturity model that integrates the dimensions of digital literacy, interpretive patterns, and technology acceptance to address this gap. As part of the mGeSCo project, the technological development of selected dimensions in the Smart Quarter Jena-Lobeda (Germany) is being investigated. A specific maturity model, based on Cohen's Smart Cities Wheel, evaluates the central dimensions Working, Living, Housing, and Caring. To improve the reliability and relevance of the maturity assessment, the factors digital literacy, interpretive patterns, and technology acceptance are integrated into the developed model. The digital literacy dimension examines stakeholders' skills in using digital technologies, which influence their perception and assessment of technological maturity. Digital literacy is measured by means of surveys, interviews, and participant observation, using the European Commission's Digital Competence Framework (DigComp) as a basis. Interpretive patterns of digital technologies provide information about how individuals perceive technologies and ascribe meaning to them. These are not mere assessments, prejudices, or stereotyped perceptions, but collective patterns, rules, attributions of meaning, and the cultural repertoire that leads to these opinions and attitudes. Understanding these interpretive patterns helps in assessing the overarching readiness of stakeholders to digitally transform their neighborhood. This involves examining people's attitudes, beliefs, and values regarding technology adoption, as well as their perceptions of the benefits and risks associated with digital tools. These insights provide important data for a holistic view and inform the steps needed to prepare individuals in the neighborhood for a digital transformation. Technology acceptance, the willingness of individuals to adopt and use new technologies, is another crucial factor for a successful digital transformation. Surveys or questionnaires based on Davis' Technology Acceptance Model can be used alongside interpretive patterns to measure neighborhood acceptance of digital technologies. Integrating the dimensions of digital literacy, interpretive patterns, and technology acceptance enables the development of a roadmap with clear prerequisites for initiating a digital transformation process in the neighborhood. During the process, maturity is measured at different points in time and compared with changes in the aforementioned dimensions to ensure a sustainable transformation. Participation, co-creation, and co-production are essential concepts for a successful and inclusive digital transformation in the neighborhood context. This interdisciplinary maturity model helps to improve the assessment and monitoring of sustainable digital transformation processes in smart residential quarters. It enables a more comprehensive recording of the factors that influence the success of such processes and supports the development of targeted measures to promote digital transformation in the neighborhood context.
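
Purely as an illustration of how such an integrated assessment might be operationalized, the sketch below combines per-dimension maturity levels with survey-derived digital literacy and technology acceptance indices. The dimension names follow the abstract, but the weighting scheme, scales, and class names are assumptions, not the mGeSCo model itself.

```python
# Illustrative aggregation only - not the mGeSCo model. It shows one plausible
# way to combine per-dimension maturity levels (Working, Living, Housing,
# Caring) with survey-derived digital literacy and technology acceptance
# indices; all names, scales and weights are assumptions.
from dataclasses import dataclass

@dataclass
class QuarterAssessment:
    maturity: dict            # dimension -> level on a 1-5 scale
    digital_literacy: float   # 0-1, e.g. aggregated DigComp self-assessment
    tech_acceptance: float    # 0-1, e.g. aggregated TAM questionnaire score

    def composite_score(self, readiness_weight: float = 0.5) -> float:
        base = sum(self.maturity.values()) / len(self.maturity)   # mean level
        readiness = (self.digital_literacy + self.tech_acceptance) / 2
        # Discount raw technical maturity by stakeholder readiness, so a high
        # score requires both deployed technology and people able to use it.
        return base * ((1 - readiness_weight) + readiness_weight * readiness)

quarter = QuarterAssessment(
    maturity={"Working": 3, "Living": 4, "Housing": 2, "Caring": 3},
    digital_literacy=0.6,
    tech_acceptance=0.7,
)
print(round(quarter.composite_score(), 2))
```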

Keywords: digital transformation, interdisciplinary, maturity model, neighborhood

Procedia PDF Downloads 77
83 Green Building Risks: Limits on Environmental and Health Quality Metrics for Contractors

Authors: Erica Cochran Hameen, Bobuchi Ken-Opurum, Mounica Guturu

Abstract:

The United States (U.S.) populace spends the majority of its time indoors, in spaces where building codes and voluntary sustainability standards provide clear Indoor Environmental Quality (IEQ) metrics. The existing sustainable building standards and codes are aimed towards improving IEQ, the health of occupants, and reducing the negative impacts of buildings on the environment. While they address the post-occupancy stage of buildings, there are fewer standards on the pre-occupancy stage, thereby placing a large labor population in environments that are much less regulated. Construction personnel are often exposed to a variety of uncomfortable and unhealthy elements while on construction sites, primarily thermal, visual, acoustic, and air quality related. Construction site power generators, equipment, and machinery generate noise on average 9 decibels (dBA) above U.S. OSHA limits, creating uncomfortable noise levels. Research has shown that frequent exposure to high noise levels leads to chronic physiological issues and increases noise-induced stress, yet beyond OSHA no other metric focuses directly on the impacts of noise on contractors' well-being. Research has also associated natural light with higher productivity and attention span, and lower rates of fatigue in construction workers. However, daylight is not always available, as construction workers often perform tasks in cramped spaces, dark areas, or at nighttime. In these instances the use of artificial light is necessary, yet lighting standards for use during lengthy tasks and arduous activities are not specified. Additionally, ambient air, contaminants, and material off-gassing expelled at construction sites are among the causes of serious health effects in construction workers. Coupled with extreme hot and cold temperatures in different climate zones, health and productivity can be seriously compromised. This research evaluates the impact of existing green building metrics on construction and risk management by analyzing two codes and nine standards, including LEED, WELL, and BREEAM. These metrics were chosen based on their relevance to the U.S. construction industry. This research determined that less than 20% of the sustainability content within the standards and codes (texts) is related to the pre-occupancy building sector. The research also investigated the impact of construction personnel's health and well-being on construction management through two surveys of project managers' and on-site contractors' perception of their work environment and its effect on productivity. To fully understand the risks of limited Environmental and Health Quality (EHQ) metrics for contractors, this research evaluated the connection between EHQ factors, such as inefficient lighting, and construction workers, and investigated the correlation between various site coping strategies for comfort and productivity. Outcomes from this research are three-pronged. The first fosters a discussion about the existing conditions of EHQ elements, i.e. thermal, lighting, ergonomic, acoustic, and air quality, for the construction labor force. The second identifies gaps in sustainability standards and codes during the pre-occupancy stage of building construction, from ground-breaking to substantial completion. The third identifies opportunities for improvements and mitigation strategies to improve EHQ, such as increased monitoring of effects on the productivity and health of contractors and increased inclusion of the pre-occupancy stage in green building standards.

Keywords: construction contractors, health and well-being, environmental quality, risk management

Procedia PDF Downloads 132
82 Continued Usage of Wearable Fitness Technology: An Extended UTAUT2 Model Perspective

Authors: Rasha Elsawy

Abstract:

Aside from the rapid growth of global information technology and the Internet, another key trend is the swift proliferation of wearable technologies, an emerging revolution with a very bright future. Beyond this, individual continuance intention toward IT is an important area that has drawn academics' and practitioners' attention. The literature shows that continued usage is an important concern that needs to be addressed for any technology to be advantageous and for consumers to benefit. However, consumers noticeably abandon their wearable devices soon after purchase, losing all subsequent benefits that can only be achieved through continued usage. Purpose: This thesis aims to develop an integrated model designed to explain and predict consumers' behavioural intention (BI) and continued use (CU) of wearable fitness technology (WFT), and to identify the determinants of continued usage. The question therefore arises as to whether there are differences between technology adoption and post-adoption (CU) factors. Design/methodology/approach: The study employs the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2), which has strong explanatory power, as an underpinning framework, extending it with further factors along with user-specific personal characteristics as moderators. All items will be adapted from previous literature and slightly modified according to the WFT/smartwatch context. A longitudinal investigation will be carried out to examine the research model, with a survey covering the constructs involved in the conceptual model. A quantitative approach based on a questionnaire survey will collect data from existing wearable technology users. Data will be analysed using the structural equation modelling (SEM) method in IBM SPSS Statistics and AMOS 28.0. Findings: The research findings will provide unique perspectives on user behaviour, intention, and actual continued usage when accepting WFT. Originality/value: Unlike previous works, the current thesis comprehensively explores the factors that affect consumers' decisions to continue using wearable technology, decisions influenced by technological/utilitarian, affective, emotional, psychological, and social factors, along with the role of the proposed moderators. The novel research framework is proposed by extending the UTAUT2 model with additional contextual variables classified into Performance Expectancy, Effort Expectancy, Social Influence (societal pressure regarding body image), Facilitating Conditions, Hedonic Motivation (split into two concepts: perceived enjoyment and perceived device annoyance), Price Value, and Habit-forming techniques, and by adding technology upgradability as a determinant of consumers' behavioural intention and continued usage of Information Technology (IT). Further, personality trait theory is used to propose relevant user-specific personal characteristics (openness to technological innovativeness, conscientiousness in health, extraversion, neuroticism, and agreeableness) as moderators of the research model. Thus, the present thesis aims at a more convincing explanation that is expected to provide theoretical foundations for future research on emerging IT (such as wearable fitness devices) from a behavioural perspective.
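
As a rough illustration of the planned SEM analysis, the sketch below specifies the structural paths of a UTAUT2-style model with the open-source semopy package (a stand-in for AMOS) on simulated data; the construct list, path specification, and data are simplified assumptions, not the thesis model.

```python
# Illustrative path-model sketch using the open-source semopy package as a
# stand-in for AMOS; construct names follow UTAUT2, but the specification,
# simulated data and column names are assumptions, not the thesis model.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
n = 300
cols = ["PE", "EE", "SI", "FC", "HM", "PV", "HB", "BI", "CU"]
data = pd.DataFrame(rng.normal(size=(n, len(cols))), columns=cols)

# Structural part only: behavioural intention (BI) and continued use (CU).
description = """
BI ~ PE + EE + SI + FC + HM + PV + HB
CU ~ BI + FC + HB
"""
model = semopy.Model(description)
model.fit(data)
print(model.inspect())   # path coefficients, standard errors, p-values
```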

Keywords: wearable technology, wearable fitness devices/smartwatches, continuance use, behavioural intention, upgradability, longitudinal study

Procedia PDF Downloads 114
81 From Intuitive to Constructive Audit Risk Assessment: A Complementary Approach to CAATTs Adoption

Authors: Alon Cohen, Jeffrey Kantor, Shalom Levy

Abstract:

The use of the audit risk model in auditing has faced limitations and difficulties, leading auditors to rely on a conceptual level of its application. The qualitative approach to assessing risks has resulted in different risk assessments, affecting the quality of audits and decision-making on the adoption of CAATTs. This study aims to investigate risk factors impacting the implementation of the audit risk model and propose a complementary risk-based instrument (KRIs) to form substance risk judgments and mitigate against heightened risk of material misstatement (RMM). The study addresses the question of how risk factors impact the implementation of the audit risk model, improve risk judgments, and aid in the adoption of CAATTs. The study uses a three-stage scale development procedure involving a pretest and subsequent study with two independent samples. The pretest involves an exploratory factor analysis, while the subsequent study employs confirmatory factor analysis for construct validation. Additionally, the authors test the ability of the KRIs to predict audit efforts needed to mitigate against heightened RMM. Data was collected through two independent samples involving 767 participants. The collected data was analyzed using exploratory factor analysis and confirmatory factor analysis to assess scale validity and construct validation. The suggested KRIs, comprising two risk components and seventeen risk items, are found to have high predictive power in determining audit efforts needed to reduce RMM. The study validates the suggested KRIs as an effective instrument for risk assessment and decision-making on the adoption of CAATTs. This study contributes to the existing literature by implementing a holistic approach to risk assessment and providing a quantitative expression of assessed risks. It bridges the gap between intuitive risk evaluation and the theoretical domain, clarifying the mechanism of risk assessments. It also helps improve the uniformity and quality of risk assessments, aiding audit standard-setters in issuing updated guidelines on CAATT adoption. A few limitations and recommendations for future research should be mentioned. First, the process of developing the scale was conducted in the Israeli auditing market, which follows the International Standards on Auditing (ISAs). Although ISAs are adopted in European countries, for greater generalization, future studies could focus on other countries that adopt additional or local auditing standards. Second, this study revealed risk factors that have a material impact on the assessed risk. However, there could be additional risk factors that influence the assessment of the RMM. Therefore, future research could investigate other risk segments, such as operational and financial risks, to bring a broader generalizability to our results. Third, although the sample size in this study fits acceptable scale development procedures and enables drawing conclusions from the body of research, future research may develop standardized measures based on larger samples to reduce the generation of equivocal results and suggest an extended risk model.
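
The exploratory-factor-analysis stage of such a scale development could look like the following sketch, which uses the factor_analyzer package on simulated 17-item responses and targets two factors to mirror the two risk components; the data, retention threshold, and rotation choice are illustrative assumptions.

```python
# Sketch of the exploratory-factor-analysis step used in scale development,
# via the factor_analyzer package; the two-factor target mirrors the paper's
# two risk components, but the simulated 17-item data are purely illustrative.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(1)
n_respondents, n_items = 400, 17
items = pd.DataFrame(
    rng.integers(1, 6, size=(n_respondents, n_items)).astype(float),
    columns=[f"risk_item_{i+1}" for i in range(n_items)],
)

efa = FactorAnalyzer(n_factors=2, rotation="varimax")
efa.fit(items)
loadings = pd.DataFrame(efa.loadings_, index=items.columns,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2))   # items loading above ~0.4 would be retained per factor
```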

Keywords: audit risk model, audit efforts, CAATTs adoption, key risk indicators, sustainability

Procedia PDF Downloads 77
80 Influence of Cryo-Grinding on Antioxidant Activity and Amount of Free Phenolic Acids, Rutin and Tyrosol in Whole Grain Buckwheat and Pumpkin Seed Cake

Authors: B. Voucko, M. Benkovic, N. Cukelj, S. Drakula, D. Novotni, S. Balbino, D. Curic

Abstract:

Oxidative stress is considered one of the causes leading to metabolic disorders in humans. Therefore, the ability of antioxidants to inhibit free radical production is their primary role in the human organism. Antioxidants originating from cereals, especially flavonoids and polyphenols, are mostly bound and indigestible. Micronization damages the cell wall, which consequently makes bioactive material more accessible in vivo. In order to ensure complete fragmentation, micronization is often combined with high temperatures (e.g., for bran 200°C), which can lead to degradation of bioactive compounds. The innovative non-thermal technology of cryo-milling is an ultra-fine micronization method that uses liquid nitrogen (LN2) at a temperature of -195°C to freeze and cool the sample during milling. Freezing at such low temperatures causes the material to become brittle, which ensures the generation of fine particles while preserving the bioactive content of the material. The aim of this research was to determine whether the production of ultra-fine material by cryo-milling would result in an augmentation of the available bioactive compounds of buckwheat and pumpkin seed cake. For that reason, buckwheat and pumpkin seed cake were ground in a ball mill (CryoMill, Retsch, Germany) with and without the use of LN2 for 8 minutes, in a 50 mL stainless steel jar containing one grinding ball (Ø 25 mm) at an oscillation frequency of 30 Hz. The cryo-milled samples were cooled with LN2 for 2 minutes prior to milling, followed by the first cycle of milling (4 minutes), intermediary cooling (2 minutes), and finally the second cycle of milling (a further 4 minutes). A continuous process of milling was applied to the samples ground without freezing with LN2. Particle size distribution was determined using the Scirocco 2000 dry dispersion unit (Malvern Instruments, UK). Antioxidant activity was determined by the 2,2-diphenyl-1-picrylhydrazyl (DPPH) test and the ferric reducing antioxidant power (FRAP) assay, while the total phenol content was determined using the Folin-Ciocalteu method, using an ultraviolet-visible spectrophotometer (Specord 50 Plus, Germany). The content of the free phenolic acids, rutin in buckwheat and tyrosol in pumpkin seed cake, was determined with an HPLC-PDA method (Agilent 1200 series, Germany). Cryo-milling resulted in buckwheat particles 11 times smaller, and pumpkin seed particles 3 times smaller, than milling without the use of LN2, but also in a lower uniformity of the particle size distribution. The lack of freezing during milling of pumpkin seed cake caused the formation of agglomerates due to its high fat content (21%). Cryo-milling caused an augmentation of the buckwheat flour antioxidant activity measured by the DPPH test (23.9%) and an increase in available rutin content (14.5%). It also resulted in an augmentation of the total phenol content (36.9%) and available tyrosol content (12.5%) of pumpkin seed cake. Antioxidant activity measured with the FRAP test, as well as the content of phenolic acids, remained unchanged independent of the milling process. The results of this study show the potential of cryo-milling for complete raw material utilization in the food industry, as well as its use as a tool for the extraction of targeted bioactive components.

Keywords: bioactive, ball-mill, buckwheat, cryo-milling, pumpkin seed cake

Procedia PDF Downloads 132
79 Developing a Machine Learning-based Cost Prediction Model for Construction Projects using Particle Swarm Optimization

Authors: Soheila Sadeghi

Abstract:

Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.
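
A minimal sketch of the PSO-plus-ANN idea is given below: a small particle swarm searches over two ANN hyperparameters (hidden units and learning rate) to minimize validation RMSE on synthetic tabular data. The bounds, swarm settings, and features are assumptions for illustration, not the configuration used in the study.

```python
# Minimal sketch of PSO-tuned ANN hyperparameters for cost prediction,
# assuming a generic tabular dataset; feature set, bounds and swarm settings
# are illustrative, not those of the road-reconstruction case study.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=400, n_features=8, noise=10.0, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

def fitness(p):
    """Validation RMSE of an ANN with hidden units p[0] and learning rate p[1]."""
    model = MLPRegressor(hidden_layer_sizes=(int(round(p[0])),),
                         learning_rate_init=float(p[1]),
                         max_iter=500, random_state=0)
    model.fit(X_tr, y_tr)
    return mean_squared_error(y_val, model.predict(X_val)) ** 0.5

lo, hi = np.array([4.0, 1e-4]), np.array([64.0, 1e-1])
rng = np.random.default_rng(0)
pos = rng.uniform(lo, hi, size=(10, 2))          # 10 particles, 2 hyperparameters
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(15):                              # PSO iterations
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best (hidden units, learning rate):", int(round(gbest[0])), gbest[1])
print("validation RMSE:", round(pbest_f.min(), 2))
```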

Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction

Procedia PDF Downloads 60
78 A Postmodern Framework for Quranic Hermeneutics

Authors: Christiane Paulus

Abstract:

Post-Islamism assumes that the Quran should not be viewed in terms of what Lyotard identifies as a ‘meta-narrative’. However, its socio-ethical content can be viewed as critical of power discourse (Foucault). Practicing religion seems to be limited to rites and individual spirituality, taqwa. Alternatively, can we build on Muhammad Abduh's classic-modern reform and develop it through a postmodernist frame? This is the main question of this study. Through his general and vague remarks on the context of the Quran, Abduh was the first to refer to the historical and cultural distance of the text as an obstacle for interpretation. His application, however, corresponded to the modern absolute idea of authentic sharia. He was followed by Amin al-Khuli, who hermeneutically linked the content of the Quran to the theory of evolution. Fazlur Rahman and Nasr Hamid Abu Zeid remain reluctant to go beyond the general level in terms of context. The hermeneutic circle therefore remains a challenge: how to step out of it and overcome one's own assumptions. The insight into, and acceptance of, the lasting ambivalence of understanding can be grasped as a postmodern approach; it is documented in Derrida's account of the shift in text meanings, différance, and also in Lyotard's theory of the différend. The resulting mixture of meanings (Wolfgang Welsch) can be read together with the classic ambiguity of the premodern interpreters of the Quran (Thomas Bauer). Confronting hermeneutic difficulties in general, Niklas Luhmann shows that every description is an attribution, a tautology, i.e., it remains within the circle. ‘De-tautologization’ is possible, namely by analyzing the distinctions, in the sense of objective, temporal and social information, that every text contains. This could be expanded with the Kantian aesthetic dimension of reason (the critique of judgment), corresponding to the i'jaz of the Quran. Luhmann asks, ‘What distinction does the observer/author make?’ The Quran, as a speech from God to the first listeners, could be seen as a discourse responding to the problems of everyday life of that time, which can be viewed as the general aim of the entire Quran. Through a detailed reconstruction of Quranic lifeworlds (Alfred Schütz), the social structure crystallizes, revealing the socio-economic differences and the enormous poverty. The Quranic instruction to provide for the basic needs of the neglected groups, which often intersect (the old, the poor, slaves, women, children), can be seen immediately in the text. First, the references to lifeworlds, social problems and discourses in longer Quranic passages should be hypothesized. Subsequently, information from the classic commentaries could be extracted; the classical tafsir in particular contains rich narrative material for this reconstruction. By selecting and assigning suitable, specific context information, the meaning of the description becomes condensed (Clifford Geertz). In this manner, the text necessarily acquires an alienation and becomes newly accessible. The socio-ethical implications can thus be grasped from the difference between the original problem and the revealed, improved order or procedure; this small step can be materialized as such, not as an absolute solution but as offering plausible patterns for today's challenges, such as Agenda 2030.

Keywords: postmodern hermeneutics, condensed description, sociological approach, small steps of reform

Procedia PDF Downloads 219
77 Medicompills Architecture: A Mathematically Precise Tool to Reduce the Risk of Diagnosis Errors in Precise Medicine

Authors: Adriana Haulica

Abstract:

Powered by Machine Learning, precise medicine is now tailored to use genetic and molecular profiling, with the aim of optimizing the therapeutic benefits for cohorts of patients. As the majority of Machine Learning algorithms come from heuristics, the outputs have contextual validity. This is not very restrictive, in the sense that medicine itself is not an exact science. Meanwhile, the progress made in Molecular Biology, Bioinformatics, Computational Biology, and Precise Medicine, correlated with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatments, by processing as much as possible of the available information. The purpose of this paper is to present a deeper vision for the future of Artificial Intelligence in Precise medicine. In fact, current Machine Learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from the classical methods prevents obtaining 100% evidence in the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept tool for information processing in Precise medicine that delivers diagnosis and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine in a direct or indirect manner, but also technical databases, Natural Language Processing algorithms, and strong class optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known “needle in a haystack” approach usually used when Machine Learning algorithms have to process differential genomic or molecular data to find biomarkers. Also, even if the input is drawn from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool. This approach deciphers the biological meaning of input data up to the metabolic and physiologic mechanisms, based on a compiler with grammars issued from bio-algebra-inspired mathematics. It translates input data into bio-semantic units with the help of contextual information, iteratively, until Bio-Logical operations can be performed on the basis of the “common denominator” rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical “proofs”. The major impact of this architecture is expressed by the high accuracy of the diagnosis. Delivered as a multiple-condition diagnostic, constituted by some main diseases along with unhealthy biological states, this format is highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture is highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in Precise medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures. It will also contribute to better design of clinical trials and speed them up.

Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics

Procedia PDF Downloads 70
76 Sustainable Agricultural and Soil Water Management Practices in Relation to Climate Change and Disaster: A Himalayan Country Experience

Authors: Krishna Raj Regmi

Abstract:

A “Climate change adaptation and disaster risk management for sustainable agriculture” project was implemented in Nepal, a Himalayan country, from 2008 to 2013, sponsored jointly by the Food and Agriculture Organization (FAO) and the United Nations Development Programme (UNDP), Nepal. This paper is based on the results and findings of this joint pilot project. Climate change events such as the increased intensity of erratic rains in short spells, a trend of prolonged drought, a gradual rise in temperature at higher elevations, and the occurrence of cold and hot waves in the Terai (lower plains) have led to flash floods, massive erosion in the hills, particularly in the Churia range, and the drying of water sources. These recurring natural and climate-induced disasters are causing heavy damage through sedimentation and inundation of agricultural lands, crops, livestock, infrastructure and rural settlements in the downstream plains, thus reducing agricultural productivity and food security in the country. About 65% of the cultivated land in Nepal is rainfed with drought-prone characteristics, and stabilization of agricultural production and productivity in these tracts will be possible through the adoption of rainfed and drought-tolerant technologies as well as efficient soil-water management by the local communities. The adaptation and mitigation technologies and options identified by the project for soil erosion, flash flood and landslide control are on-farm watershed management, sloping land agriculture technologies (SALT), agro-forestry practices, agri-silvi-pastoral management, hedge-row contour planting, bio-engineering along slopes and river banks, plantation of multi-purpose trees, and management of degraded waste land including sandy river-bed flood plains. The stress-tolerant technologies with respect to drought, floods and temperature stress, for efficient utilization of nutrients, soil, water and other resources for increased productivity, are the adoption of stress-tolerant crop varieties and breeds of animals, indigenous proven technologies, mixed and inter-cropping systems, the system of rice/wheat intensification (SRI), direct rice seeding, double transplanting of rice, off-season vegetable production, and regular management of nurseries, orchards and animal sheds. The alternative energy use options and resource conservation practices for use by local communities are the installation of bio-gas plants and clean stoves (Chulla range) for mitigation of greenhouse gas (GHG) emissions, use of organic manures and bio-pesticides, jatropha cultivation, green manuring in rice fields, and minimum/zero tillage practices for marshy lands. The efficient water management practices for increasing the productivity of crops and livestock are the use of micro-irrigation practices, construction of water conservation and water harvesting ponds, use of overhead water tanks and Thai jars for rainwater harvesting, and rehabilitation of on-farm irrigation systems. Initial work on community-based early warning systems, the strengthening of meteorological stations and disaster database management has helped provide disaster-tailored early warning, meteorological and insurance services to the local communities. Contingency planning is recommended to develop coping strategies and the capacity of local communities to adopt necessary changes in cropping patterns and practices under adverse climatic and disaster risk conditions. Finally, awareness-raising and capacity-development activities (technical and institutional) and networking on climate-induced disasters and risks are being promoted through training, visits and knowledge-sharing workshops, dissemination of technical know-how and technologies, farmers' field schools, and the development and display of extension materials. However, there is still a need for strong coordination and linkage between agriculture, environment, forestry, meteorology, irrigation, climate-induced pro-active disaster preparedness and research at the ministry, department and district levels for the up-scaling, implementation and institutionalization of climate change and disaster risk management activities and adaptation-mitigation options in agriculture for the sustainable livelihoods of the communities.

Keywords: climate change adaptation, disaster risk management, soil-water management practices, sustainable agriculture

Procedia PDF Downloads 510
75 Development and Implementation of An "Electric Island" Monitoring Infrastructure for Promoting Energy Efficiency in Schools

Authors: Vladislav Grigorovitch, Marina Grigorovitch, David Pearlmutter, Erez Gal

Abstract:

The concept of an “electric island” involves achieving a balance between the self-generation capability of each educational institution and its energy consumption demand. A photovoltaic (PV) solar system installed on the roofs of educational buildings is a common way to absorb the available solar energy and generate electricity for self-consumption and even for returning to the grid. The main objective of this research is to develop and implement an “electric island” monitoring infrastructure for promoting energy efficiency in educational buildings. A microscale monitoring methodology will be developed to provide a platform to estimate energy consumption performance classified by rooms and subspaces, rather than the more common macroscale monitoring of the whole building. The monitoring platform will be established at the experimental sites, enabling estimation and further analysis of a variety of environmental and physical conditions. For each building, separate measurement configurations will be applied, taking into account the specific requirements, restrictions, location and infrastructure issues. The direct results of the measurements will be analyzed to provide a deeper understanding of the impact of environmental conditions and sustainable construction standards, not only on the energy demand of public buildings, but also on the energy consumption habits of the children who study in those schools and of the educational and administrative staff who are responsible for providing thermal comfort conditions and a healthy studying atmosphere for the children. The monitoring methodology being developed in this research provides online access to real-time data of Interferential Therapy (IFTs) from any mobile phone or computer by simply browsing the dedicated website, offering powerful tools for policy makers for better decision making while developing PV production infrastructure to achieve “electric islands” in educational buildings. A detailed measurement configuration was technically designed based on the specific conditions and restrictions of each of the pilot buildings. The monitoring and analysis methodology includes a large variety of environmental parameters inside and outside the schools, to investigate the impact of environmental conditions both on the energy performance of the school and on the educational abilities of the children. Indoor measurements are mandatory to acquire the energy consumption data, temperature, humidity, carbon dioxide and other air quality conditions in different parts of the building. In addition, we aim to study users' awareness of energy considerations and thus the impact on their energy consumption habits. The monitoring of outdoor conditions is vital for the proper design of the off-grid energy supply system and validation of its sufficient capacity. The suggested outcomes of this research include: 1. designing both experimental sites to have PV production and storage capabilities; 2. developing an online information feedback platform that provides consumer-dedicated information to academic researchers, municipality officials, educational staff and students; 3. designing an environmental work path for educational staff regarding optimal conditions and efficient hours for operating air conditioning, natural ventilation, closing of blinds, etc.
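
As an illustration of the kind of indicator such an "electric island" platform could report, the sketch below computes a daily self-sufficiency ratio from hourly PV generation and consumption series; the load and generation profiles are invented for demonstration only.

```python
# Illustrative "electric island" balance check, assuming hourly PV generation
# and consumption series for one building; the figures are made up and serve
# only to show the kind of indicator a monitoring platform could report.
import numpy as np

hours = 24
pv_generation_kwh = np.clip(np.sin(np.linspace(-np.pi/2, 3*np.pi/2, hours)), 0, None) * 30
consumption_kwh = np.full(hours, 12.0)           # flat school-day load profile

self_consumed = np.minimum(pv_generation_kwh, consumption_kwh)
self_sufficiency = self_consumed.sum() / consumption_kwh.sum()   # demand covered by PV
export_to_grid = (pv_generation_kwh - self_consumed).sum()

print(f"self-sufficiency: {self_sufficiency:.0%}, exported: {export_to_grid:.1f} kWh")
```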

Keywords: sustainability, electric island, IOT, smart building

Procedia PDF Downloads 179
74 Testing a Dose-Response Model of Intergenerational Transmission of Family Violence

Authors: Katherine Maurer

Abstract:

Background and purpose: Violence that occurs within families is a global social problem. Children who are victims of or witnesses to family violence are at risk for many negative effects, both proximally and distally. One of the most disconcerting long-term effects occurs when child victims become adult perpetrators: the intergenerational transmission of family violence (ITFV). Early identification of those children most at risk for ITFV is needed to inform interventions to prevent future family violence perpetration and victimization. Only about 25-30% of child family violence victims become perpetrators of adult family violence (either child abuse, partner abuse, or both). Prior research has primarily been conducted using dichotomous measures of exposure (yes; no) to predict ITFV, given the low incidence rate in community samples. It is often assumed that exposure to greater amounts of violence predicts greater risk of ITFV. However, no previous longitudinal study with a community sample has tested a dose-response model of exposure to physical child abuse and parental physical intimate partner violence (IPV) using count data on the frequency and severity of violence to predict adult ITFV. The current study used advanced statistical methods to test whether increased childhood exposure predicts greater risk of ITFV. Methods: The study utilized three panels of prospective data from a cohort of 15-year-olds (N=338) from the Project on Human Development in Chicago Neighborhoods longitudinal study. The data comprised a stratified probability sample of seven ethnic/racial categories and three socio-economic status levels. Structural equation modeling was employed to test a hurdle regression model of dose-response to predict ITFV. A version of the Conflict Tactics Scale was used to measure physical violence victimization, witnessing parental IPV, and young adult IPV perpetration and victimization. Results: Consistent with previous findings, past-12-month incidence rates of the severity and frequency of interpersonal violence were highly skewed. While rates of parental and young adult IPV were about 40%, an unusually high rate of physical child abuse (57%) was reported. The vast majority of reported acts of violence, whether minor or severe, fell in the 1-3 range in the past 12 months. Reported frequencies of more than five times in the past year were rare, with less than 10% reporting more than six acts of minor or severe physical violence. As expected, minor acts of violence were much more common than acts of severe violence. Overall, regression analyses were not significant for the dose-response model of ITFV. Conclusions and implications: The results of the dose-response model were not significant due to a lack of power in the final sample (N=338). Nonetheless, the value of the approach was confirmed for future research, given the bi-modal nature of the distributions, which suggests that in the context of both child physical abuse and physical IPV there are at least two classes when the frequency of acts is considered. Taking frequency into account in predictive models may help to better understand the relationship of exposure to ITFV outcomes. Further testing using hurdle regression models is suggested.
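
The suggested hurdle-style analysis can be sketched as a two-part model: a logit for whether any violence is reported, and a count regression for frequency among those reporting at least one act. The sketch below uses statsmodels on simulated data; a strict hurdle model would use a zero-truncated count distribution for the second part, so this is a simplified stand-in.

```python
# Two-part (hurdle-style) sketch with statsmodels: a logit for whether any
# violence is reported, and a count model for frequency among those reporting
# at least one act. Simulated data only; a strict hurdle model would use a
# zero-truncated count distribution for the second part.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 338
exposure = rng.poisson(2.0, size=n).astype(float)     # childhood exposure count
X = sm.add_constant(exposure)

# Simulated outcome: many zeros, small positive counts otherwise.
any_itfv = rng.binomial(1, 0.3, size=n)
counts = any_itfv * (1 + rng.poisson(1.5, size=n))

hurdle_part = sm.Logit((counts > 0).astype(int), X).fit(disp=False)
positive = counts > 0
count_part = sm.GLM(counts[positive], X[positive],
                    family=sm.families.Poisson()).fit()

print(hurdle_part.params)   # does exposure predict crossing the zero hurdle?
print(count_part.params)    # does exposure predict frequency once non-zero?
```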

Keywords: intergenerational transmission of family violence, physical child abuse, intimate partner violence, structural equation modeling

Procedia PDF Downloads 243
73 Enhancing Strategic Counter-Terrorism: Understanding How Familial Leadership Influences the Resilience of Terrorist and Insurgent Organizations in Asia

Authors: Andrew D. Henshaw

Abstract:

The research examines the influence of familial and kinship based leadership on the resilience of politically violent organizations. Organizations of this type frequently fight in the same conflicts though are called 'terrorist' or 'insurgent' depending on political foci of the time, and thus different approaches are used to combat them. The research considers them correlated phenomena with significant overlap and identifies strengths and vulnerabilities in resilience processes. The research employs paired case studies to examine resilience in organizations under significant external pressure, and achieves this by measuring three variables. 1: Organizational robustness in terms of leadership and governance. 2. Bounce-back response efficiency to external pressures and adaptation to endogenous and exogenous shock. 3. Perpetuity of operational and attack capability, and political legitimacy. The research makes three hypotheses. First, familial/kinship leadership groups have a significant effect on organizational resilience in terms of informal operations. Second, non-familial/kinship organizations suffer in terms of heightened security transaction costs and social economics surrounding recruitment, retention, and replacement. Third, resilience in non-familial organizations likely stems from critical external supports like state sponsorship or powerful patrons, rather than organic resilience dynamics. The case studies pair familial organizations with non-familial organizations. Set 1: The Haqqani Network (HQN) - Pair: Lashkar-e-Toiba (LeT). Set 2: Jemaah Islamiyah (JI) - Pair: The Abu Sayyaf Group (ASG). Case studies were selected based on three requirements, being: contrasting governance types, exposure to significant external pressures and, geographical similarity. The case study sets were examined over 24 months following periods of significantly heightened operational activities. This enabled empirical measurement of the variables as substantial external pressures came into force. The rationale for the research is obvious. Nearly all organizations have some nexus of familial interconnectedness. Examining familial leadership networks does not provide further understanding of how terrorism and insurgency originate, however, the central focus of the research does address how they persist. The sparse attention to this in existing literature presents an unexplored yet important area of security studies. Furthermore, social capital in familial systems is largely automatic and organic, given at birth or through kinship. It reduces security vetting cost for recruits, fighters and supporters which lowers liabilities and entry costs, while raising organizational efficiency and exit costs. Better understanding of these process is needed to exploit strengths into weaknesses. Outcomes and implications of the research have critical relevance to future operational policy development. Increased clarity of internal trust dynamics, social capital and power flows are essential to fracturing and manipulating kinship nexus. This is highly valuable to external pressure mechanisms such as counter-terrorism, counterinsurgency, and strategic intelligence methods to penetrate, manipulate, degrade or destroy the resilience of politically violent organizations.

Keywords: Counterinsurgency (COIN), counter-terrorism, familial influence, insurgency, intelligence, kinship, resilience, terrorism

Procedia PDF Downloads 313
72 Combination of Modelling and Environmental Life Cycle Assessment Approach for Demand Driven Biogas Production

Authors: Juan A. Arzate, Funda C. Ertem, M. Nicolas Cruz-Bournazou, Peter Neubauer, Stefan Junne

Abstract:

One of the biggest challenges the world faces today is global warming, caused by greenhouse gases (GHGs) from the combustion of fossil fuels for energy generation. In order to mitigate climate change, the European Union has committed to reducing GHG emissions to 80-95% below 1990 levels by the year 2050. Renewable technologies are vital to diminish energy-related GHG emissions. Since water and biomass are limited resources, the largest contributions to renewable energy (RE) systems will have to come from wind and solar power. Nevertheless, high proportions of fluctuating RE will present a number of challenges, especially regarding the need to balance the variable energy demand with the weather-dependent fluctuation of energy supply. Biogas plants would therefore play an important role in this context, since they are easily adaptable. Feedstock availability varies locally and seasonally; however, there is a lack of knowledge of how biogas plants should be operated in a stable manner on local feedstock. This problem may be prevented through suitable control strategies. Such strategies require the development of convenient mathematical models which fairly describe the main processes. Modelling allows us to predict the system behavior of biogas plants when different feedstocks are used at different loading rates. Life cycle assessment (LCA) is a technique for analyzing the environmental aspects of a product from its creation to its disposal, and it is highly recommended as a decision-making tool. In order to achieve suitable strategies, a flexible energy generation provided by biogas plants, a secure production process and the maximization of environmental benefits can be obtained by combining process modelling and LCA approaches. For this reason, this study focuses on a biogas plant which flexibly generates the required energy from the co-digestion of maize, grass and cattle manure, while emitting the lowest amount of GHGs. To achieve this goal, the AMOCO model was combined with LCA. The program was structured in Matlab to simulate any biogas process based on the AMOCO model and combined with the equations necessary to obtain the climate change, acidification and eutrophication potentials of the whole production system based on the ReCiPe midpoint v1.06 methodology. The developed simulation was optimized based on real data from operating biogas plants and existing literature. The results show that the AMOCO model can successfully reproduce the system behavior of biogas plants and the time required for the process to adapt in order to generate the demanded energy from the available feedstock. Combination with the LCA approach provided the opportunity to keep the resulting emissions from operation at the lowest possible level. This would allow a prediction of the process when the feedstock utilization supports the establishment of closed material circles within a smart bio-production grid, under the constraint of minimal drawbacks for the environment and maximal sustainability.
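
For readers unfamiliar with the AMOCO structure, the sketch below integrates a two-step anaerobic digestion model of this type (Monod-limited acidogenesis followed by Haldane-limited methanogenesis) with SciPy; the parameter values and initial conditions are illustrative placeholders rather than the calibrated values used in the study.

```python
# Sketch of a two-step AMOCO-type anaerobic digestion model (acidogenesis
# followed by methanogenesis) integrated with SciPy; parameter values are
# illustrative placeholders, not the calibrated values of the study.
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters: yields k1-k3 and k6, kinetic constants, dilution rate D.
mu1_max, KS1 = 1.2, 7.1                  # acidogens: Monod kinetics on substrate S1
mu2_max, KS2, KI2 = 0.74, 9.28, 256.0    # methanogens: Haldane kinetics on S2
k1, k2, k3, k6 = 42.1, 116.5, 268.0, 453.0
alpha, D, S1_in, S2_in = 0.5, 0.05, 10.0, 50.0

def amoco(t, y):
    X1, X2, S1, S2 = y
    mu1 = mu1_max * S1 / (S1 + KS1)
    mu2 = mu2_max * S2 / (S2 + KS2 + (S2 ** 2) / KI2)
    dX1 = (mu1 - alpha * D) * X1                      # acidogenic biomass
    dX2 = (mu2 - alpha * D) * X2                      # methanogenic biomass
    dS1 = D * (S1_in - S1) - k1 * mu1 * X1            # organic substrate
    dS2 = D * (S2_in - S2) + k2 * mu1 * X1 - k3 * mu2 * X2   # volatile fatty acids
    return [dX1, dX2, dS1, dS2]

sol = solve_ivp(amoco, (0, 60), [0.5, 0.8, 5.0, 15.0], dense_output=True)
X2_end, S2_end = sol.y[1, -1], sol.y[3, -1]
mu2_end = mu2_max * S2_end / (S2_end + KS2 + S2_end ** 2 / KI2)
q_ch4 = k6 * mu2_end * X2_end    # methane flow rate at the end of the horizon
print(f"CH4 production rate (arbitrary units): {q_ch4:.1f}")
```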

Keywords: AMOCO model, GHG emissions, life cycle assessment, modelling

Procedia PDF Downloads 188