Search results for: conceptual domain of law
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2823

303 Determination of Optimal Stress Locations in 2D–9 Noded Element in Finite Element Technique

Authors: Nishant Shrivastava, D. K. Sehgal

Abstract:

In the finite element technique, nodal stresses are calculated from the displacements computed at the nodes. The displacements obtained at the nodes are sufficiently accurate, but the stresses computed at the nodes are not. The accuracy of stress computation in displacement-based FEM models is therefore a matter of concern, notably for the computational cost of shape optimization in engineering problems. The present work focuses on finding unique points within the element, as well as on its boundary, at which good accuracy in stress computation can be achieved. Generally, the major optimal stress points are located in the interior of the element, but some points have also been located on the element boundary where stresses are fairly accurate compared to nodal values. It follows that there exist unique points within the element where stresses have higher accuracy than at other points. The main aim is therefore to develop a generalized procedure for determining the optimal stress locations inside the element as well as on its boundaries, and to verify it against results from numerical experimentation. Results for quadratic 9-noded serendipity elements are presented, and the locations of distinct optimal stress points are determined inside the element as well as on the boundaries. The theoretical results indicate that the optimal stress locations, in local coordinates, lie at the origin and at a distance of 0.577 (approximately 1/√3, the 2x2 Gauss points) from the origin in both directions. On the boundaries, the optimal stress locations are at the midpoints of the element edges and at a distance of 0.577 from the origin in both directions. These findings were verified through numerical experimentation: five engineering problems were identified, and the results of the 9-noded element were compared to those obtained using 25-noded Lagrangian elements, which were taken as the standard. Root mean square errors were then plotted for various locations within the elements as well as on the boundaries, and conclusions were drawn. The numerical verification shows that, in a 9-noded element, the origin and the locations at a distance of 0.577 from the origin in both directions are the best sampling points for the stresses. Stresses calculated on the boundary segment enclosed by the 0.577 points and the edge midpoints are also very good, and the error found there is very small; as sampling points move away from these locations, the error increases rapidly. It is thus established that there are unique points, including on the element boundary, where stresses are accurate, which can be utilized in solving various engineering problems and is also useful in shape optimization.
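
The reported interior offsets of 0.577 coincide with the 2x2 Gauss quadrature points at ±1/√3, the classical superconvergent (Barlow) sampling points for quadratic quadrilaterals. The sketch below, a minimal illustration rather than the authors' code, evaluates the stress of a 9-noded quadrilateral at those points; the tensor-product node ordering, the nodal data and the elasticity matrix D are assumed inputs.

```python
import numpy as np

GP = 1.0 / np.sqrt(3.0)  # 0.5773..., the reported optimal sampling offset

def dshape9(xi, eta):
    """Derivatives of the 9 biquadratic shape functions at (xi, eta).
    Node (i, j) of the assumed 3x3 tensor-product grid maps to index 3*i + j."""
    L = lambda t: np.array([0.5 * t * (t - 1), 1 - t * t, 0.5 * t * (t + 1)])
    dL = lambda t: np.array([t - 0.5, -2 * t, t + 0.5])
    return np.outer(dL(xi), L(eta)).ravel(), np.outer(L(xi), dL(eta)).ravel()

def stress_at(xi, eta, xy, u, D):
    """sigma = D B u at a natural-coordinate point. xy: (9, 2) nodal coordinates,
    u: (18,) nodal displacements [u1, v1, ...], D: (3, 3) elasticity matrix."""
    dN_dxi, dN_deta = dshape9(xi, eta)
    J = np.array([[dN_dxi @ xy[:, 0], dN_dxi @ xy[:, 1]],
                  [dN_deta @ xy[:, 0], dN_deta @ xy[:, 1]]])
    dN_dx, dN_dy = np.linalg.solve(J, np.vstack([dN_dxi, dN_deta]))
    B = np.zeros((3, 18))
    B[0, 0::2], B[1, 1::2] = dN_dx, dN_dy
    B[2, 0::2], B[2, 1::2] = dN_dy, dN_dx
    return D @ (B @ u)

# Interior optimal points: the origin plus the four Gauss points (+-0.577, +-0.577)
points = [(0.0, 0.0)] + [(sx * GP, sy * GP) for sx in (-1, 1) for sy in (-1, 1)]
```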

Keywords: finite elements, Lagrangian, optimal stress location, serendipity

Procedia PDF Downloads 89
302 Bidirectional Pendulum Vibration Absorbers with Homogeneous Variable Tangential Friction: Modelling and Design

Authors: Emiliano Matta

Abstract:

Passive resonant vibration absorbers are among the most widely used dynamic control systems in civil engineering. They typically consist of a single-degree-of-freedom mechanical appendage of the main structure, tuned to one structural target mode through frequency and damping optimization. One classical scheme is the pendulum absorber, whose mass is constrained to move along a curved trajectory and is damped by viscous dashpots. Even though the principle is well known, the search for improved arrangements is still under way. In recent years this investigation inspired a type of bidirectional pendulum absorber (BPA), consisting of a mass constrained to move along an optimal three-dimensional (3D) concave surface. For such a BPA, the surface principal curvatures are designed to ensure a bidirectional tuning of the absorber to both principal modes of the main structure, while damping is produced either by horizontal viscous dashpots or by vertical friction dashpots connecting the BPA to the main structure. In this paper, a variant of the BPA is proposed in which damping originates from the variable tangential friction force which develops between the pendulum mass and the 3D surface as a result of a spatially varying friction coefficient pattern. Namely, a friction coefficient is proposed that varies along the pendulum surface in proportion to the modulus of the 3D surface gradient. Under this assumption, the dissipative model of the absorber can be proven to be nonlinear homogeneous in the small-displacement domain. The resulting homogeneous BPA (HBPA) has a fundamental advantage over conventional friction-type absorbers, because its equivalent damping ratio is independent of the amplitude of oscillation, and therefore its optimal performance does not depend on the excitation level. On the other hand, the HBPA is more compact than viscously damped BPAs because it does not require the installation of dampers. This paper presents the analytical model of the HBPA and a methodology for its optimal design. Numerical simulations of single- and multi-story building structures under wind and earthquake loads are presented to compare the HBPA with classical viscously damped BPAs. It is shown that the HBPA is a promising alternative to existing BPA types and that homogeneous tangential friction is an effective means of realizing systems with amplitude-independent damping.
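
The governing design rule is a friction coefficient proportional to the local slope of the pendulum surface, mu(x, y) ∝ |∇z(x, y)|. A minimal sketch, assuming for illustration a paraboloid surface with principal radii Rx, Ry and an arbitrary gain k (the paper derives the actual surface from bidirectional tuning to the structure's two principal modes; all values here are invented):

```python
import numpy as np

def friction_coefficient(x, y, Rx=2.0, Ry=3.0, k=0.5):
    """mu(x, y) = k * |grad z| for an illustrative paraboloid-like surface
    z = x**2 / (2*Rx) + y**2 / (2*Ry); Rx, Ry and k are assumed values."""
    dz_dx = x / Rx          # partial derivatives of the surface
    dz_dy = y / Ry
    return k * np.hypot(dz_dx, dz_dy)

print(friction_coefficient(0.05, 0.02))  # grows linearly with displacement
```

Near the origin |∇z| grows linearly with the displacement, so the tangential friction force scales with the oscillation amplitude; this is what renders the dissipative model homogeneous and the equivalent damping ratio amplitude-independent.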

Keywords: amplitude-independent damping, homogeneous friction, pendulum nonlinear dynamics, structural control, vibration resonant absorbers

Procedia PDF Downloads 120
301 Injunctions, Disjunctions, Remnants: The Reverse of Unity

Authors: Igor Guatelli

Abstract:

The universe of aesthetic perception entails impasses about sensitive divergences that each text or visual object may be subjected to. If approached through intertextuality that is not based on the misleading notion of kinships or similarities a priori admissible, the possibility of anachronistic, heterogeneous - and non-diachronic - assemblies can enhance the emergence of interval movements, intermediate and conflicting, conducive to a method of reading, interpreting, and assigning meaning that escapes the rigid antinomies of the mere being and non-being of things. In the negative, they operate in a relationship built by the lack of an adjusted meaning set by their positive existences, with no remainders; the generated interval becomes the remnant of each of them; it is the opening that obscures the stable positions of each one. Without the negative of absence, of that which is always missing or must be missing in a text, concept, or image made positive by history, nothing is perceived beyond what has already been given. Pairings or binary oppositions cannot lead only to functional syntheses; on the contrary, methodological disturbances accumulated by the approximation of signs and entities can initiate a process of becoming, as an opening to an unforeseen other, a transformation lasting until a moment when the difficulties of [re]conciliation become the mainstay of a future of that sign/entity, not envisioned a priori. A counter-history can emerge from these unprecedented, misadjusted approaches: beginnings of unassigned injunctions and disjunctions, in short, difficult alliances that open cracks in a supposedly cohesive history, chained in its apparent linearity with no remains, understood as a categorical historical imperative. Interstices are minority fields that, because of their opening, are capable of causing opacity in that which, apparently, presents itself with irreducible clarity. Resulting from an incomplete and maladjusted [at the least dual] marriage between the signs/entities that originate them, this interval may destabilize and cause disorder in these entities and their own meanings. The interstitial offers a hyphenated relationship: a simultaneous union and separation, a spacing between the entity's identity and its otherness, or alterity. One and the other may no longer be seen without the crack or fissure that now separates them while uniting them by a space-time lapse. Ontological and semantic shifts are caused by this fissure, an absence between one and the other, one with and against the other. Based on an improbable approximation between some conceptual and semantic shifts within the design production of architect Rem Koolhaas and the textual production of the philosopher Jacques Derrida, this article questions the notions of unity, coherence, affinity, and complementarity in the process of construction of thought, starting from these ontological, epistemological, and semiological fissures that rattle the signs/entities and their stable meanings. Fissures in a thought considered coherent, cohesive, and formatted are the negativity that constitutes the interstices that allow us to move towards what still remains as non-identity, which allows us to begin another story.

Keywords: clearing, interstice, negative, remnant, spectrum

Procedia PDF Downloads 115
300 Listening to Circles, Playing Lights: A Study of Cross-Modal Perception in Music

Authors: Roni Granot, Erica Polini

Abstract:

Music is often described in terms of non-auditory adjectives, such as a rising melody, a bright sound, or a zigzagged contour. Such cross-modal associations have been studied with simple isolated musical parameters, but only rarely in rich musical contexts. The current study probes cross-sensory associations with polarity-based dimensions by means of pairings of 10 adjectives: blunt-sharp, relaxed-tense, heavy-light, low (in space)-high, low (pitch)-high, big-small, hard-soft, active-passive, bright-dark, sad-happy. Thirty participants (randomly assigned to one of two groups) were asked to rate each of 27 short saxophone improvisations on a 1-to-6 scale, where 1 and 6 correspond to the opposite poles of each dimension. The 27 improvisations included three exemplars for each of three dimensions (size, brightness, sharpness), played by three different players. Here we focus on whether the scales corresponding to the improvised musical dimension were consistently rated as such (e.g., music improvised to represent a white circle rated as bright, in contrast to music improvised to represent a dark circle rated as dark). Overall, the average scores by dimension showed an upward trend on the equivalent verbal scale, with low ratings for small, bright and sharp musical improvisations and higher scores for large, dark and blunt improvisations. Friedman tests indicate a statistically significant difference for the brightness (χ2(2) = 19.704, p < .001) and sharpness dimensions (χ2(2) = 15.750, p < .001), but not for size (χ2(2) = 1.444, p = .486). Post hoc analysis with Wilcoxon signed-rank tests showed significant differences among all possible pairings within the brightness dimension: 'bright' vs. 'dark' (Z = -3.310, p = .001), 'bright' vs. 'medium' (Z = -2.438, p = .015) and 'dark' vs. 'medium' (Z = -2.714, p = .007). Within the sharpness dimension, only the extreme contrasts differed: 'sharp' vs. 'blunt' (Z = -3.147, p = .002) and 'sharp' vs. 'medium' (Z = -3.054, p = .002), but not 'medium' vs. 'blunt' (Z = -.982, p = .326). In summary, our study suggests a privileged link between music and the perceptual and semantic domain of brightness. In contrast, size seems to be very difficult to convey in music, whereas sharpness seems to be mapped onto the two extremes (sharp vs. blunt) rather than continuously. This is nicely reflected in the musical literature, in titles and texts that stress the association between music and concepts of light or darkness rather than sharpness or size.
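
The reported statistics follow a standard design: a Friedman test across the three related conditions per dimension, then pairwise Wilcoxon signed-rank tests. A minimal sketch with scipy on simulated ratings (the data are invented for illustration, not the study's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 1-6 brightness ratings of 'bright', 'medium' and 'dark'
# improvisations by the same 30 listeners (simulated, not the study's data).
bright = rng.integers(1, 4, 30)   # tends toward one pole of the scale
medium = rng.integers(2, 5, 30)
dark = rng.integers(3, 7, 30)     # tends toward the other pole

chi2, p = stats.friedmanchisquare(bright, medium, dark)
print(f"Friedman chi2(2) = {chi2:.3f}, p = {p:.4f}")

# Post hoc pairwise Wilcoxon signed-rank tests, as in the study
for a, b, label in [(bright, dark, "bright vs dark"),
                    (bright, medium, "bright vs medium"),
                    (dark, medium, "dark vs medium")]:
    w, pw = stats.wilcoxon(a, b)
    print(f"{label}: W = {w}, p = {pw:.4f}")
```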

Keywords: audiovisual, brightness, cross-modal perception, cross-sensory correspondences, size, visual angularity

Procedia PDF Downloads 195
299 Cross Cultural Adaptation and Content Validation of the Assessment Instrument Preschooler Awareness of Stuttering Survey

Authors: Catarina Belchior, Catarina Martins, Sara Mendes, Ana Rita S. Valente, Elsa Marta Soares

Abstract:

Introduction: The negative feelings and attitudes that a person who stutters can develop are extremely relevant to assessment and intervention in speech and language therapy, because the person who stutters can experience feelings such as shame, fear and negative beliefs when communicating. Considering the complexity and importance of integrating diverse aspects in stuttering intervention, it is essential to identify those emotions as early as possible. Therefore, this research aimed to translate and adapt to European Portuguese, and to analyze the content validation of, the Preschooler Awareness Stuttering Survey (Abbiati, Guitar & Hutchins, 2015), an instrument that assesses the impact of stuttering on preschool children who stutter in terms of feelings and attitudes. Methodology: Cross-sectional descriptive qualitative research. The following methodological procedures were followed: translation, back-translation, panel of experts and pilot study. This abstract describes the results of the first three phases of this process. The translation was accomplished by two speech and language therapists (SLTs). Both professionals have more than five years of experience and are users of the English language; one of them has broad experience in the field of stuttering. Back-translation was conducted by two bilingual individuals with no experience in health and no knowledge of the instrument. The panel of experts was composed of three other SLTs, experts in the field of stuttering. Results and Discussion: In the translation and back-translation process, it was possible to verify differences in the semantic and idiomatic equivalence of several concepts and expressions, as well as the need to include new information to enhance understanding of the application of the instrument. A meeting between the two translators and the researchers allowed a consensus version to be reached, which was then used in back-translation. Concerning adaptation and content validation, the main change made by the experts was the conceptual equivalence of the questions and answers on the instrument's sheets. Considering that in the translated consensus version the questions began with various words, such as 'is' or 'the cow', and that the answers did not contain the adverb 'much' as in the original instrument, the panel agreed that it would be more appropriate if the questions all started with 'how' and all the answers included the adverb 'much'. This decision was made to ensure that the translated instrument would remain similar to the original, so that results obtained with the two versions could be compared. One semantic equivalence between concepts was also established. The panel of experts found all other items and specificities of the instrument adequate, concluding that the instrument is appropriate for its objectives and intended target population. Conclusion: This research aims to diversify the validated resources available in this field, adding a new instrument for the assessment of preschool children who stutter. It is hoped that this instrument will provide a real and reliable assessment that can lead to appropriate therapeutic intervention according to the characteristics and needs of each child.

Keywords: stuttering, assessment, feelings and attitudes, speech language therapy

Procedia PDF Downloads 122
298 Rapid Soil Classification Using Computer Vision, Electrical Resistivity and Soil Strength

Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, Lionel L. J. Ang, Algernon C. S. Hong, Danette S. E. Tan, Grace H. B. Foo, K. Q. Hong, L. M. Cheng, M. L. Leong

Abstract:

This paper presents a novel rapid soil classification technique that combines computer vision with the four-probe soil electrical resistivity method and the cone penetration test (CPT) to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from local construction projects are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual surveys, so that proper treatment and usage can be exercised. However, this process is time-consuming and labour-intensive; thus, a rapid classification method is needed at the SGs. Computer vision, four-probe soil electrical resistivity and CPT were combined into an innovative, non-destructive and near-instantaneous classification method for this purpose. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). Complementing the computer vision technique, the apparent electrical resistivity of the soil (ρ) is measured using a set of four probes arranged in a Wenner array. A previous study found that the ANN model coupled with ρ can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% based on selected representative soil images. To further improve the technique, the soil strength is measured using a modified mini cone penetrometer, and w is measured using a set of time-domain reflectometry (TDR) probes. A laboratory proof of concept was conducted through a series of seven tests with three types of soils: “Good Earth”, “Soft Clay” and an even mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that the ρ, w and CPT measurements can be collectively analyzed to classify soils into “Good Earth” or “Soft Clay”, and that these parameters can be integrated with the computer vision technique on-site to complete the rapid soil classification in less than three minutes.
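
The GLCM step has a standard implementation; a minimal sketch with scikit-image (the distances, angles and the four texture properties are illustrative choices, as the abstract does not list the exact configuration used):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_img):
    """GLCM textural parameters of an 8-bit grayscale soil image (2-D uint8 array).
    Returns a flat feature vector suitable as ANN input."""
    glcm = graycomatrix(gray_img,
                        distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
```

The resulting vector, together with the measured resistivity ρ (and, in the extended scheme, w and the penetrometer reading), would feed the ANN classifier.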

Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification

Procedia PDF Downloads 185
297 Nanoimprinted-Block Copolymer-Based Porous Nanocone Substrate for SERS Enhancement

Authors: Yunha Ryu, Kyoungsik Kim

Abstract:

Raman spectroscopy is one of the most powerful techniques for chemical detection, but its low sensitivity, originating from the extremely small cross-section of Raman scattering, limits its practical use. To overcome this problem, Surface Enhanced Raman Scattering (SERS) has been intensively studied for several decades. Because the SERS effect is mainly induced by the strong electromagnetic near-field enhancement resulting from localized surface plasmon resonance of metallic nanostructures, it is important to design plasmonic structures with a high density of electromagnetic hot spots for SERS substrates. One useful fabrication method is to use a porous nanomaterial as a template for the metallic structure. Internal pores on a scale of tens of nanometers can act as strong EM hot spots by confining the incident light. Also, porous structures can capture more target molecules than non-porous structures in the same detection spot thanks to their large surface area. Herein we report a facile method for fabricating a porous SERS substrate by integrating solvent-assisted nanoimprint lithography with selective etching of a block copolymer. We obtained nanostructures with high porosity via simple selective etching of one microdomain of the diblock copolymer. Furthermore, we imprinted nanocone patterns into the spin-coated flat block copolymer film to make a three-dimensional SERS substrate offering a high density of SERS hot spots as well as a large surface area. We used solvent-assisted nanoimprint lithography (SAIL) to reduce the fabrication time and cost of patterning the BCP film, taking advantage of a solvent which dissolves both the polystyrene and the poly(methyl methacrylate) domains of the block copolymer; the film was thus molded at low temperature and atmospheric pressure in a short time. After Ag deposition, we measured the Raman intensity of dye molecules adsorbed on the fabricated structure. Compared to the Raman signal of Ag-coated solid nanocones, the porous nanocones showed 10 times higher Raman intensity at the 1510 cm⁻¹ band. In conclusion, we fabricated porous metallic nanocone arrays with a high density of electromagnetic hot spots by templating a nanoimprinted diblock copolymer with selective etching, and demonstrated its capability as an effective SERS substrate.

Keywords: block copolymer, porous nanostructure, solvent-assisted nanoimprint, surface-enhanced Raman spectroscopy

Procedia PDF Downloads 599
296 Patterns and Predictors of Intended Service Use among Frail Older Adults in Urban China

Authors: Yuanyuan Fu

Abstract:

Background and Purpose: Along with social and economic change, the traditional role of the family home in supporting old people has gradually weakened in contemporary China. Acknowledging this situation, and in order to better meet old people's needs for formal services and improve the quality of later life, this study seeks to identify patterns of intended service use among frail old people living in the community, and examines the determinants that explain heterogeneous variations in these patterns. Additionally, the study tests the relationship between cultural values and intended service use patterns, and the mediating role of enabling factors between the two. Methods: Participants were recruited from Haidian District, Beijing, China, in 2015. A multi-stage sampling method was adopted to select sub-districts, communities, and old people aged 70 or older. After screening, 577 old people with limitations in daily life were successfully interviewed; after data cleaning, 550 cases were included in the analysis. The study establishes a conceptual framework based on the Andersen model (comprising predisposing, enabling, and need factors) and further develops it by adding cultural value factors (attitudes towards filial piety and attitudes towards social face). Using latent class analysis (LCA), the study classifies overall patterns of old people's formal service utilization. Fourteen types of formal services were taken into account, including housework, voluntary support, transportation, home-delivered meals, home-delivered medical care, elderly canteens, and day-care/respite centers. Structural equation modeling (SEM) was used to examine the direct effect of cultural values on service use patterns and the mediating effect of enabling factors. Results: The LCA identified a hierarchical structure of service use patterns: multiple intended service use (N=69, 13%), selective intended service use (N=129, 23%), and light intended service use (N=352, 64%). The SEM results, after controlling for predisposing and need factors, showed a significant direct effect of cultural values on older people's intended service use patterns; enabling factors had a partial mediation effect on the relationship between cultural values and the patterns. Conclusions and Implications: Differentiation of formal services may be important for meeting frail old people's service needs and for distributing program resources by identifying target populations for intervention, informing specific interventions to better support frail old people. Additionally, cultural values had a unique direct effect on the intended service use patterns of frail old people in China, enriching our theoretical understanding of the sources of cultural values and their impacts. The findings also highlight the mediation effect of enabling factors on the relationship between cultural value factors and intended service use patterns. This study suggests that researchers and service providers should pay more attention to the important role of cultural value factors in shaping intended service use patterns, and be sensitive to the mediating effect of enabling factors when discussing the relationship between cultural values and these patterns.
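
LCA assigns each respondent to an unobserved class from categorical indicators via an EM-fitted mixture. A bare-bones sketch of the technique for binary use/non-use indicators (a toy illustration with random data; the authors would have used dedicated LCA software):

```python
import numpy as np

def lca_em(X, k, iters=200, seed=0):
    """Minimal EM for a latent class model with binary indicators.
    X: (n, d) 0/1 matrix (e.g., intended use of d service types); k: classes."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                   # class weights
    theta = rng.uniform(0.25, 0.75, (k, d))    # P(item = 1 | class)
    for _ in range(iters):
        # E-step: responsibilities from Bernoulli log-likelihoods
        logp = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights and item probabilities
        nk = r.sum(axis=0)
        pi = nk / n
        theta = np.clip((r.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, r

# Example: 550 respondents x 14 binary service-use indicators, 3 classes
X = (np.random.default_rng(1).random((550, 14)) < 0.3).astype(float)
pi, theta, resp = lca_em(X, k=3)
print("class sizes:", np.round(pi * 550).astype(int))
```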

Keywords: frail old people, intended service use pattern, cultural values, enabling factors, contemporary China, latent class analysis

Procedia PDF Downloads 207
295 Numerical Modeling and Experimental Analysis of a Pallet Isolation Device to Protect Selective Type Industrial Storage Racks

Authors: Marcelo Sanhueza Cartes, Nelson Maureira Carsalade

Abstract:

This research evaluates the effectiveness of a pallet isolation device for protecting selective-type industrial storage racks. The device works only in the longitudinal direction of the aisle and consists of a platform installed on the rack beams. At both ends, the platform is connected to the rack structure by a spring-damper system working in parallel. A system of wheels is arranged between the isolation platform and the rack beams in order to reduce friction, decouple the movement, and improve the effectiveness of the device. The latter is evaluated by the reduction of the maximum dynamic responses of basal shear load and story drift relative to those of the same rack with the traditional construction system. In the first stage, numerical simulations of industrial storage racks were carried out with and without the pallet isolation device. The numerical results allowed us to identify the archetypes for which experimental tests would be most appropriate, thus limiting the number of trials. In the second stage, experimental tests were carried out on a shaking table on a select group of full-scale racks with and without the proposed device. The motion imposed by the shaking table was based on the Mw 8.8 earthquake of February 27, 2010, in Chile, as registered at the San Pedro de la Paz station. The record was scaled in the frequency domain so that its response spectrum fits the design spectrum of NCh433. The experimental setup contemplates the installation of sensors to measure relative displacement and absolute acceleration: the movement of the shaking table with respect to the ground, the inter-story drift of the rack, and the movement of the pallets with respect to the rack structure were recorded. Accelerometers redundantly measured all of the above in order to corroborate measurements and adequately capture both low- and high-frequency vibrations, for which displacement and acceleration sensors are, respectively, the more reliable. The numerical and experimental results identify the pallet isolation period as the variable with the greatest influence on the dynamic responses considered. They also show that the proposed device significantly reduces both the basal shear and the maximum inter-story drift, by up to one order of magnitude.
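
Fitting a record's response spectrum to a design spectrum in the frequency domain typically means iteratively rescaling the Fourier amplitudes by the ratio of target to computed spectral ordinates. The basic ingredient is the response spectrum itself; a minimal sketch of that computation (unit-mass SDOF oscillators, Newmark average-acceleration integration; an illustration, not the authors' matching code):

```python
import numpy as np

def response_spectrum(ag, dt, periods, zeta=0.05):
    """Pseudo-acceleration response spectrum of a ground acceleration record ag
    (one unit-mass SDOF per period, Newmark average-acceleration scheme)."""
    p = -np.asarray(ag, dtype=float)          # effective force for unit mass
    Sa = np.empty(len(periods))
    for j, T in enumerate(periods):
        wn = 2 * np.pi / T
        k, c = wn * wn, 2 * zeta * wn
        keff = k + 2 * c / dt + 4 / dt**2
        u = v = 0.0
        a = p[0]                              # initial acceleration (u = v = 0)
        umax = 0.0
        for i in range(len(p) - 1):
            dpe = p[i + 1] - p[i] + (4 / dt + 2 * c) * v + 2 * a
            du = dpe / keff
            dv = 2 * du / dt - 2 * v
            da = 4 * du / dt**2 - 4 * v / dt - 2 * a
            u, v, a = u + du, v + dv, a + da
            umax = max(umax, abs(u))
        Sa[j] = wn * wn * umax                # pseudo-acceleration ordinate
    return Sa

# Example: spectrum of a synthetic record at 5% damping
t = np.arange(0, 20, 0.01)
ag = np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.1 * t)
print(response_spectrum(ag, 0.01, periods=[0.2, 0.5, 1.0, 2.0]))
```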

Keywords: pallet isolation system, industrial storage racks, basal shear load, interstory drift

Procedia PDF Downloads 54
294 Tuning the Emission Colour of Phenothiazine by Introduction of Withdrawing Electron Groups

Authors: Andrei Bejan, Luminita Marin, Dalila Belei

Abstract:

Phenothiazine, with its electron-rich nitrogen and sulfur heteroatoms, has a high electron-donating ability which promotes good conjugation and therefore a low band gap, with consequent improvements in charge-carrier mobility and a shift of the light emission into the visible domain. Moreover, its non-planar butterfly conformation inhibits molecular aggregation and thus preserves the fluorescence quantum yield in the solid state quite well compared to solution. Phenothiazine and its derivatives are therefore promising hole-transport materials for organic electronic and optoelectronic devices such as light-emitting diodes, photovoltaic cells, integrated circuit sensors and driving circuits for large-area display devices. The objective of this work was to obtain a series of new phenothiazine derivatives by introducing different electron-withdrawing substituents, namely formyl, carboxyl and cyanoacryl units, in order to create a push-pull system with the potential to improve the electronic and optical properties. A bromine atom was used as an electron-donating moiety to further extend the existing conjugation. The compounds under study were structurally characterized by FTIR and 1H-NMR spectroscopy and by single-crystal X-ray diffraction; the latter also brought information regarding the supramolecular architecture of the compounds. Photophysical properties were monitored by UV-vis and photoluminescence spectroscopy, while the electrochemical behavior was established by cyclic voltammetry. The absorption maxima of the studied compounds vary over a large range (322-455 nm), reflecting different degrees of electronic delocalization depending on the nature of the substituent. In a similar manner, the emission spectra reveal different colors of emitted light, with a red shift evident for the groups with higher electron-withdrawing ability. The emitted light is pure and saturated for the compounds containing the strongly withdrawing formyl or cyanoacryl units, and the quantum yield reaches its highest value of 71% for the compound containing bromine and cyanoacrylic units. Electrochemical studies show reversible oxidation and reduction processes for all the compounds and a close correlation of the HOMO-LUMO band gap with the nature of the substituent. All these findings suggest the obtained compounds as promising materials for optoelectronic devices.

Keywords: electrochemical properties, phenothiazine derivatives, photoluminescence, quantum yield

Procedia PDF Downloads 309
293 Criminal Law and Internet of Things: Challenges and Threats

Authors: Celina Nowak

Abstract:

The development of information and communication technologies (ICT) and the consequent growth of cyberspace have become a reality of modern societies. The newest addition to this complex structure is the Internet of Things (IoT), which owes its existence to the appearance of smart devices. IoT creates a new dimension of the network, as communication is no longer the domain of humans alone but has also become possible between devices themselves. The possibility of communication between devices, devoid of human intervention and real-time supervision, has generated new societal and legal challenges, some of which may, and certainly will, eventually be connected to criminal law. Legislators at both national and international level have been struggling to cope with this technologically evolving environment in order to address the new threats created by ICT. There are legal instruments on cybercrime, however imperfect and not of universal scope, some of them referring to specific types of prohibited behaviors undertaken by criminals, such as money laundering or sex offences. However, criminal law seems largely unprepared for the challenges which may arise from the development of IoT. This is largely due to the fact that criminal law, at both national and international level, is still based on the concept of perpetration of an offence by a human being. This is a traditional approach, historically and factually justified. Over time, some legal systems have developed or accepted the possibility of the commission of an offence by a corporation, a legal person. This is in fact a legal fiction, as a legal person cannot commit an offence as such; it needs humans to behave in a certain way on its behalf. Yet legislators have come to understand that corporations have their own interests and may benefit from crime, and therefore need to be penalized. This realization, however, has not been welcomed by all states and still gives rise to doubts of an ontological and theoretical nature in many legal systems. For this reason, in many legislations the liability of legal persons for the commission of an offence has not been recognized as criminal responsibility. With technological progress and the growing use of IoT, discussions limited to the criminal responsibility of corporations seem rather inadequate. The world is now facing new challenges and new threats related to 'smart' things, which will eventually have to be addressed by legislators if they want, as they should, to keep up with the pace of technological and societal evolution. This will, however, require a re-evaluation and possibly a restructuring of the most fundamental notions of modern criminal law, such as perpetration, guilt, and participation in crime. It remains unclear at this point what norms and legal concepts will or may be established. The main goal of this research is to point out the challenges facing national and international legislators in this context and to attempt to formulate some indications as to the direction of changes, bearing in mind the serious threats to privacy and security related to the use of IoT.

Keywords: criminal law, internet of things, privacy, security threats

Procedia PDF Downloads 133
292 A Comparative Analysis of an All-Optical Switch Using Chalcogenide Glass and Gallium Arsenide Based on Nonlinear Photonic Crystal

Authors: Priyanka Kumari Gupta, Punya Prasanna Paltani, Shrivishal Tripathi

Abstract:

This paper proposes a nonlinear photonic crystal ring-resonator-based all-optical 2 × 2 switch. The nonlinear Kerr effect is used to realize the essential 2 × 2 switching states, namely the bar and cross states. The photonic crystal comprises a two-dimensional square lattice of dielectric rods in an air background, and two different rod materials are compared in this study: chalcogenide glass and gallium arsenide (GaAs). For both materials, the operating wavelength, band gap diagram, operating power intensities, and the performance parameters of an optical switch, namely the extinction ratio, insertion loss, and cross-talk, have been estimated using the plane wave expansion and finite-difference time-domain methods. The chalcogenide glass (Ag20As32Se48) has a high refractive index of 3.1, which is highly suitable for switching operations, and a nonlinear Kerr coefficient of 9.1 x 10⁻¹⁷ m²/W. Its resonance wavelength is at 1552 nm, with operating power intensities at the cross and bar states around 60 W/μm² and 690 W/μm². The extinction ratio, insertion loss, and cross-talk for the chalcogenide glass at the cross state are 17.19 dB, 0.051 dB, and -17.14 dB; at the bar state, the values are 11.32 dB, 0.025 dB, and -11.35 dB, respectively. GaAs has a high refractive index of 3.4, is a direct-bandgap semiconductor material highly preferred nowadays for switching operations, and has a nonlinear Kerr coefficient of 3.1 x 10⁻¹⁶ m²/W. Its resonance wavelength is at 1558 nm, with operating power intensities at the cross and bar states around 110 W/μm² and 200 W/μm². The extinction ratio, insertion loss, and cross-talk for GaAs at the cross state are found to be 3.36 dB, 2.436 dB, and -5.8 dB; for the bar state, the values are 15.60 dB, 0.985 dB, and -16.59 dB, respectively. In both cases the resonance wavelength lies within the photonic band gap. The presented structure is potentially applicable in optical integrated circuits and information processing.
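
The three figures of merit follow the usual dB definitions from the powers at the intended and unintended output ports; a small sketch (conventional formulas, since the paper does not spell out its exact definitions; the example inputs are chosen to roughly reproduce the reported chalcogenide cross-state values):

```python
import numpy as np

def switch_metrics(p_through, p_coupled):
    """Extinction ratio, insertion loss and cross-talk (dB) for one switch state,
    given output powers normalized to the input power."""
    er = 10 * np.log10(p_through / p_coupled)   # extinction ratio
    il = -10 * np.log10(p_through)              # insertion loss
    ct = 10 * np.log10(p_coupled / p_through)   # cross-talk (here simply -ER)
    return er, il, ct

er, il, ct = switch_metrics(p_through=0.988, p_coupled=0.019)
print(f"ER = {er:.2f} dB, IL = {il:.3f} dB, CT = {ct:.2f} dB")
# -> roughly 17.2 dB, 0.05 dB, -17.2 dB, close to the reported cross-state values
```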

Keywords: photonic crystal, FDTD, ring resonator, optical switch

Procedia PDF Downloads 60
291 An Adjoint-Based Method to Compute Derivatives with Respect to Bed Boundary Positions in Resistivity Measurements

Authors: Mostafa Shahriari, Theophile Chaumont-Frelet, David Pardo

Abstract:

Resistivity measurements are used to characterize the Earth’s subsurface. They are categorized into two different groups: (a) those acquired on the Earth’s surface, for instance, controlled source electromagnetic (CSEM) and Magnetotellurics (MT), and (b) those recorded with borehole logging instruments such as Logging-While-Drilling (LWD) devices. LWD instruments are mostly used for geo-steering purposes, i.e., to adjust dip and azimuthal angles of a well trajectory to drill along a particular geological target. Modern LWD tools measure all nine components of the magnetic field corresponding to three orthogonal transmitter and receiver orientations. In order to map the Earth’s subsurface and perform geo-steering, we invert measurements using a gradient-based method that utilizes the derivatives of the recorded measurements with respect to the inversion variables. For resistivity measurements, these inversion variables are usually the constant resistivity value of each layer and the bed boundary positions. It is well-known how to compute derivatives with respect to the constant resistivity value of each layer using semi-analytic or numerical methods. However, similar formulas for computing the derivatives with respect to bed boundary positions are unavailable. The main contribution of this work is to provide an adjoint-based formulation for computing derivatives with respect to the bed boundary positions. The key idea to obtain the aforementioned adjoint state formulations for the derivatives is to separate the tangential and normal components of the field and treat them differently. This formulation allows us to compute the derivatives faster and more accurately than with traditional finite differences approximations. In the presentation, we shall first derive a formula for computing the derivatives with respect to the bed boundary positions for the potential equation. Then, we shall extend our formulation to 3D Maxwell’s equations. Finally, by considering a 1D domain and reducing the dimensionality of the problem, which is a common practice in the inversion of resistivity measurements, we shall derive a formulation to compute the derivatives of the measurements with respect to the bed boundary positions using a 1.5D variational formulation. Then, we shall illustrate the accuracy and convergence properties of our formulations by comparing numerical results with the analytical derivatives for the potential equation. For the 1.5D Maxwell’s system, we shall compare our numerical results based on the proposed adjoint-based formulation vs those obtained with a traditional finite difference approach. Numerical results shall show that our proposed adjoint-based technique produces enhanced accuracy solutions while its cost is negligible, as opposed to the finite difference approach that requires the solution of one additional problem per derivative.
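
As a toy illustration of the finite-difference baseline the authors benchmark against, consider a 1D two-layer potential problem and a measurement differentiated with respect to the layer boundary (everything here, from the model to the "measurement", is an invented stand-in; the adjoint formulation itself is the paper's contribution and is not reproduced):

```python
import numpy as np

def measurement(z_bnd, sigma=(1.0, 0.1), n=400, depth=2.0):
    """Toy 1D potential problem -(sigma(z) u'(z))' = 0 on (0, depth),
    u(0) = 1, u(depth) = 0, two layers separated at z_bnd. The cell cut by
    the boundary gets a blended conductivity so the solution varies smoothly
    with z_bnd. Returns u at mid-depth as a stand-in 'measurement'."""
    h = depth / n
    z_left = np.linspace(0.0, depth - h, n)
    frac = np.clip((z_bnd - z_left) / h, 0.0, 1.0)   # fraction of cell in layer 1
    s = frac * sigma[0] + (1.0 - frac) * sigma[1]    # per-cell conductivity
    A = np.zeros((n - 1, n - 1))
    b = np.zeros(n - 1)
    for i in range(n - 1):                           # interior nodes 1..n-1
        A[i, i] = (s[i] + s[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -s[i] / h**2
        if i < n - 2:
            A[i, i + 1] = -s[i + 1] / h**2
    b[0] = s[0] / h**2                               # from the u(0) = 1 condition
    u = np.linalg.solve(A, b)
    return u[n // 2 - 1]                             # u at the mid-depth node

# Centered finite-difference derivative w.r.t. the bed boundary position
zb, dz = 0.8, 1e-4
print((measurement(zb + dz) - measurement(zb - dz)) / (2 * dz))
```

Each such derivative costs two extra forward solves, which is precisely the overhead the adjoint-based formulation avoids.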

Keywords: inverse problem, bed boundary positions, electromagnetism, potential equation

Procedia PDF Downloads 159
290 Allylation of Active Methylene Compounds with Cyclic Baylis-Hillman Alcohols: Why Is It Direct and Not Conjugate?

Authors: Karim Hrrath, Khaled Essalah, Christophe Morell, Henry Chermette, Salima Boughdiri

Abstract:

Among carbon-carbon bond formation reactions, the allylation of active methylene compounds with cyclic Baylis-Hillman (BH) alcohols is a reliable and widely used method, and a very attractive tool in the organic synthesis of biological and biodiesel compounds. Thus, in view of the insistent demand for an efficient and straightforward method for synthesizing the desired product, a thorough analysis of the various aspects of the reaction process is an important task. The product afforded by the reaction of an active methylene compound with a BH alcohol depends largely on the experimental conditions, notably on the catalyst properties. All experiments reported that catalysis is needed for this reaction type, because the hydroxyl group of the alcohol is a poor leaving group. Among the catalysts, several transition-metal-based ones have been used, such as palladium in the presence of acid or base, and have been considered reliable methods. Furthermore, acid catalysts such as BF3.OEt2, BiX3 (X = Cl, Br, I), Bi(OTf)3, InCl3, Yb(OTf)3, FeCl3, p-TsOH and H-montmorillonite have been employed to activate C-C bond formation through the alkylation of active methylene compounds. Interestingly, a report recently appeared of a smooth process in which 4-dimethylaminopyridine (DMAP) catalyzes the allylation of active methylene compounds with a cyclic BH alcohol. However, the reaction mechanism remains ambiguous, since the C-allylation process leads to an unexpected product (denoted P1), corresponding to direct rather than conjugate allylation, which would involve the most electrophilic center according to the electron-withdrawing effect of the CO group. The main objective of the present theoretical study is to better understand the role of the DMAP catalytic activity as well as the process leading to the end product (P1) in the catalytic reaction of a cyclic BH alcohol with active methylene compounds. For that purpose, we have carried out computations on a set of active methylene compounds, varying R1 and R2, toward the same alcohol, and we have attempted to rationalize the mechanisms using the acid-base approach and conceptual DFT tools such as the chemical potential, hardness, Fukui functions, electrophilicity index and the dual descriptor, as these approaches have shown good prediction of reaction products. The present work is organized as follows: first, some computational details are given, introducing the reactivity indices used in this work; Section 3 is dedicated to the discussion of the prediction of the selectivity and regioselectivity; the paper ends with some concluding remarks. In this work, we have shown, through the DFT method at the B3LYP/6-311++G(d,p) level of theory, that the allylation of active methylene compounds with a cyclic BH alcohol is governed by orbital control; hence the end product, denoted P1, is generated by direct allylation.
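
The global and local reactivity indices named above have standard working definitions; a small sketch in the finite-difference/Koopmans approximation (textbook formulas, not values or code from the paper):

```python
import numpy as np

def global_indices(E_homo, E_lumo):
    """Global conceptual-DFT indices from frontier orbital energies (eV),
    using I = -E_HOMO and A = -E_LUMO (Koopmans-type approximation)."""
    I, A = -E_homo, -E_lumo
    mu = -(I + A) / 2            # electronic chemical potential
    eta = (I - A) / 2            # chemical hardness (Parr-Pearson convention)
    omega = mu**2 / (2 * eta)    # electrophilicity index
    return mu, eta, omega

def condensed_fukui(q_N, q_Nplus1, q_Nminus1):
    """Condensed Fukui functions and dual descriptor from atomic partial charges
    of the N, N+1 and N-1 electron systems (one entry per atom)."""
    f_plus = np.asarray(q_N) - np.asarray(q_Nplus1)     # site for nucleophilic attack
    f_minus = np.asarray(q_Nminus1) - np.asarray(q_N)   # site for electrophilic attack
    return f_plus, f_minus, f_plus - f_minus            # dual descriptor = f+ - f-

print(global_indices(-6.5, -1.8))  # illustrative orbital energies, not the paper's
```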

Keywords: DFT calculation, gas phase pKa, theoretical mechanism, orbital control, charge control, Fukui function, transition state

Procedia PDF Downloads 281
289 Corporate Social Responsibility and Corporate Reputation: A Bibliometric Analysis

Authors: Songdi Li, Louise Spry, Tony Woodall

Abstract:

Corporate social responsibility (CSR) has become a buzzword, and more and more academics are putting effort into CSR studies. It is believed that CSR can influence corporate reputation (CR), and a favourable view holds that CSR leads to a positive CR. Specifically, CSR-related activities in the reputational context have been regarded as associated with excellent financial performance, value creation, and the like. It has also been argued that CSR and CR are two sides of the same coin, so that, to some extent, doing CSR is equal to establishing a good reputation. Still, there is no consensus on the CSR-CR relationship in the literature; thus, a systematic literature review is much needed. This research conducts a systematic literature review using both bibliometric and content analysis. Data were selected from English-language sources and academic journal articles only, and keyword combinations were applied to identify relevant sources. Data from Scopus and Web of Science (WoS) were gathered for bibliometric analysis: Scopus search results were saved in RIS and CSV formats, and WoS data in TXT and CSV formats, in order to process the data in the Bibexcel software, with the results later visualised in VOSviewer. Content analysis was applied to analyse the data clusters and the key articles. On the topic of CSR-CR, this literature review with bibliometric analysis makes four contributions. First, it develops a systematic study which quantitatively depicts the knowledge structure of CSR and CR by identifying terms closely related to CSR-CR (such as 'corporate governance') and by clustering the subtopics that emerged in the co-citation analysis. Second, content analysis is performed to gain insight into the findings of the bibliometric analysis in the discussion section, highlighting implications for the future research agenda: for example, a psychological link between CSR and CR is identified in the results, and emerging economies and qualitative research methods emerge as new elements in the CSR-CR big picture. Third, a multidisciplinary perspective runs through the whole bibliometric mapping and the co-word and co-citation analyses; this work thus builds an interdisciplinary structure which may lead to an integrated conceptual framework in the future. Finally, Scopus and WoS are compared and contrasted, and Scopus, which offers deeper and more comprehensive data, is suggested as a tool for future bibliometric studies. Overall, this paper fulfils its initial purposes and contributes to the literature. To the authors' best knowledge, it is the first literature review of CSR-CR research to apply both bibliometric analysis and content analysis, thereby achieving methodological originality. This dual approach brings the advantage of a comprehensive and semantic exploration of the CSR-CR area in a scientific and realistic manner. Admittedly, the work may contain subjective bias in the selection of search terms and papers; triangulation could reduce this bias to some degree.
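
Co-word analysis of the kind run in Bibexcel/VOSviewer boils down to counting keyword co-occurrences across records; a toy sketch (the records are invented; real input would be parsed from the Scopus/WoS exports):

```python
from itertools import combinations
from collections import Counter

# Each record is the keyword list of one article (illustrative, not real data)
records = [
    ["corporate social responsibility", "corporate reputation", "corporate governance"],
    ["corporate social responsibility", "financial performance"],
    ["corporate reputation", "corporate governance"],
]

cooc = Counter()
for kws in records:
    for a, b in combinations(sorted(set(kws)), 2):  # unordered keyword pairs
        cooc[(a, b)] += 1

for pair, n in cooc.most_common():
    print(n, pair)
```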

Keywords: corporate social responsibility, corporate reputation, bibliometric analysis, software program

Procedia PDF Downloads 107
288 Reconceptualizing Evidence and Evidence Types for Digital Journalism Studies

Authors: Hai L. Tran

Abstract:

In the digital age, evidence-based reporting is touted as a best practice for seeking the truth and keeping the public well informed. Journalists are expected to rely on evidence to demonstrate the validity of a factual statement and lend credence to an individual account. Evidence can be obtained from various sources, and because of the rich supply of evidence types available, the definition of this important concept varies semantically. To promote clarity and understanding, it is necessary to break down the various types of evidence and categorize them in a more coherent, systematic way. There is a wide array of devices that digital journalists deploy as proof to back up or refute a truth claim. Evidence can take various formats, including verbal and visual materials. Verbal evidence encompasses quotes, soundbites, talking heads, testimonies, voice recordings, anecdotes, and statistics communicated through written or spoken language. In some instances evidence is simply non-verbal, such as when natural sound is provided without any verbalized words. On the other hand, language-free items exhibited in photos, video footage, data visualizations, infographics, and illustrations can serve as visual evidence. Moreover, there are different sources from which evidence can be cited. Supporting materials, such as public or leaked records and documents, data, research studies, surveys, polls, and reports compiled by governments, organizations, and other entities, are frequently included as informational evidence. Proof can also come from human sources via interviews, recorded conversations, public and private gatherings, or press conferences; expert opinions, eyewitness insights, insider observations, and official statements are common examples of testimonial evidence. Digital journalism studies tend to make broad references when comparing qualitative and quantitative forms of evidence, while limited effort has been made to distinguish between sister terms such as 'data,' 'statistical,' and 'base-rate' on one side of the spectrum and 'narrative,' 'anecdotal,' and 'exemplar' on the other. The present study seeks to develop an evidence taxonomy, which classifies evidence through the quantitative-qualitative juxtaposition and in hierarchical order from broad to specific. According to this scheme, data, statistics, and base rate belong to the quantitative evidence group, whereas narrative, anecdote, and exemplar fall into the qualitative evidence group. The taxonomical classification then places data versus narrative at the top of the hierarchy of evidence types, followed by statistics versus anecdote and base rate versus exemplar. This research reiterates the central role of evidence in how journalists describe and explain social phenomena and issues. By defining the various types of evidence and delineating their logical connections, it helps remove a significant degree of conceptual inconsistency, ambiguity, and confusion in digital journalism studies.
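
The proposed hierarchy is compact enough to be written down directly as a data structure; a minimal sketch, with the pairings and ordering exactly as stated above:

```python
# Quantitative/qualitative pairs, ordered from broad (level 1) to specific (level 3)
EVIDENCE_TAXONOMY = [
    {"level": 1, "quantitative": "data",       "qualitative": "narrative"},
    {"level": 2, "quantitative": "statistics", "qualitative": "anecdote"},
    {"level": 3, "quantitative": "base rate",  "qualitative": "exemplar"},
]

def classify(term):
    """Return (group, level) for an evidence type in the taxonomy."""
    for row in EVIDENCE_TAXONOMY:
        for group in ("quantitative", "qualitative"):
            if row[group] == term:
                return group, row["level"]
    raise KeyError(term)

print(classify("anecdote"))  # -> ('qualitative', 2)
```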

Keywords: evidence, evidence forms, evidence types, taxonomy

Procedia PDF Downloads 40
287 Advanced Techniques in Semiconductor Defect Detection: An Overview of Current Technologies and Future Trends

Authors: Zheng Yuxun

Abstract:

This review critically assesses advancements and prospective developments in defect detection methodologies within the semiconductor industry, an essential domain that significantly affects the operational efficiency and reliability of electronic components. As semiconductor devices continue to decrease in size and increase in complexity, the precision and efficacy of defect detection strategies become increasingly critical. Tracing the evolution from traditional manual inspection to advanced technologies employing automated vision systems, artificial intelligence (AI), and machine learning (ML), the paper highlights the significance of precise defect detection in semiconductor manufacturing. It discusses various defect types, such as crystallographic errors, surface anomalies, and chemical impurities, which profoundly influence the functionality and durability of semiconductor devices, underscoring the necessity of their precise identification. The narrative then turns to the technological evolution of defect detection, depicting a shift from rudimentary methods like optical microscopy and basic electronic tests to more sophisticated techniques, including electron microscopy, X-ray imaging, and infrared spectroscopy. The incorporation of AI and ML marks a pivotal advance towards more adaptive, accurate, and rapid defect detection mechanisms. The paper addresses current challenges, particularly the constraints imposed by the diminutive scale of contemporary semiconductor devices, the elevated costs associated with advanced imaging technologies, and the demand for processing speeds that align with mass production standards. A critical gap is identified between the capabilities of existing technologies and the industry's requirements, especially concerning scalability and processing velocity. Future research directions are proposed to bridge these gaps, suggesting enhancements in the computational efficiency of AI algorithms, the development of novel materials to improve imaging contrast in defect detection, and the seamless integration of these systems into semiconductor production lines. By offering a synthesis of existing technologies and forecasting upcoming trends, this review aims to foster the dialogue on, and development of, more effective defect detection methods, thereby facilitating the production of more dependable and robust semiconductor devices. The analysis not only elucidates the current technological landscape but also paves the way for forthcoming innovations in semiconductor defect detection.

Keywords: semiconductor defect detection, artificial intelligence in semiconductor manufacturing, machine learning applications, technological evolution in defect analysis

Procedia PDF Downloads 16
286 Analysis of the Strategic Value at the Usage of Green IT Application for the Organizational Product or Service in Order to Gain the Competitive Advantage; Case: E-Money of a Telecommunication Firm in Indonesia

Authors: I Putu Deny Arthawan Sugih Prabowo, Eko Nugroho, Rudy Hartanto

Abstract:

Green IT is a concept concerning how to use information technology (IT) wisely, efficiently, and in an environmentally sound way; it has emerged as a consequence of the current rapid growth of technology, and of IT in particular. Beyond the environmental benefits, the use of Green IT applications, e.g. cloud computing (cloud storage) and E-Money (e-cash), also benefits an organization's business strategy, especially its product/service strategy, in the pursuit of competitive advantage (market leadership). This paper takes as its case E-Money as a value-added service (VAS) of a telecommunication firm in Indonesia, which competes with similar products and services offered by its competitors. Although E-Money has become a popular telecommunication product/service, its strategic value for the organization is still unknown; the aim of this paper is therefore to analyze its strategic value for gaining organizational competitive advantage. The strategic value analysis considers how to assess the strategic benefits of E-Money and how to manage the challenges and risks of its implementation as an organizational product/service. The paper uses a research model that investigates the influence of both perceived risks and organizational culture on the usage of Green IT applications in the organization, and the influence of both that usage and the threats and challenges facing the organizational products/services on the competitive advantage of those products/services. The study applies a quantitative research method, collecting information from field respondents using research questionnaires; the primary data are analyzed with both descriptive and inferential statistics, using SmartPLS. The quantitative method is complemented by qualitative methods, such as interviews with field respondents and direct field observation, to confirm the quantitative results in certain domains, e.g. the organizational culture and internal processes that support the usage of Green IT applications for the organizational product/service (E-Money in this case). The research is still at an early, in-progress stage. Its results may serve as a reference for organizations developing business strategies, especially concerning products/services related to Green IT applications. Future work may examine, for example, the influence of knowledge transfer about E-Money and other Green IT application-based products/services on the organizational service performance related to the product/service, in pursuit of competitive advantage.

Keywords: Green IT, competitive advantage, strategic value, organization (firm or company), organizational product (service)

Procedia PDF Downloads 288
285 A Microwave and Millimeter-Wave Transmit/Receive Switch Subsystem for Communication Systems

Authors: Donghyun Lee, Cam Nguyen

Abstract:

Multi-band systems offer a great deal of benefit in modern communication and radar systems. In particular, multi-band antenna-array radar systems, with their extended frequency diversity, provide numerous advantages in detecting, identifying, locating and tracking a wide range of targets, including enhanced detection coverage, accurate target location, reduced survey time and cost, increased resolution, improved reliability and richer target information. Accurate calibration is a critical issue in antenna array systems: amplitude and phase errors in multi-band and multi-polarization antenna-array transceivers result in inaccurate target detection, deteriorated resolution and reduced reliability. Furthermore, a digital beamformer without RF-domain phase shifting is less immune to unfiltered interference signals, which can lead to receiver saturation in array systems. Implementing an integrated front-end architecture which can support a calibration function, with low insertion loss and a filtering function at the farthest end of an array transceiver, is therefore of great interest. We report a dual K/Ka-band T/R/Calibration switch module with a quasi-elliptic dual-bandpass filtering function implemented with a Q-enhanced metamaterial transmission line. A unique dual-band frequency response is incorporated in the reception and calibration paths of the proposed switch module, utilizing a composite right/left-handed metamaterial transmission line coupled with a Colpitts-style negative-resistance generation circuit. The fully integrated T/R/Calibration switch module, fabricated in 0.18-μm BiCMOS technology, exhibits an insertion loss of 4.9-12.3 dB and an isolation of more than 45 dB in the reception, transmission and calibration modes of operation. In the reception and calibration modes, the dual-band frequency response centered at 24.5 and 35 GHz exhibits an out-of-band rejection of more than 30 dB, relative to the passbands, below 10.5 GHz and above 59.5 GHz; the rejection between the passbands reaches more than 50 dB. In all modes of operation, the IP1dB is between 4 and 11 dBm. Acknowledgement: This paper was made possible by NPRP grant # 6-241-2-102 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.

Keywords: microwaves, millimeter waves, T/R switch, wireless communications

Procedia PDF Downloads 142
284 Rapid Soil Classification Using Computer Vision with Electrical Resistivity and Soil Strength

Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, P. L. Goh, Grace H. B. Foo, M. L. Leong

Abstract:

This paper presents the evaluation of soil testing methods, such as the four-probe soil electrical resistivity method and the cone penetration test (CPT), that can complement a newly developed rapid soil classification scheme based on computer vision, with the aim of improving the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from the local construction industry are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual surveys, so that proper treatment and usage can be exercised. However, this process is time-consuming and labor-intensive, and a rapid classification method is therefore needed at the SGs. Four-probe soil electrical resistivity and CPT were evaluated for their feasibility as additions to the computer vision system, to further develop this non-destructive and near-instantaneous classification method. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). A previous study found that the ANN model, coupled with the apparent electrical resistivity of soil (ρ), can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% on selected representative soil images. To further improve the technique, three measurements were targeted for addition to the computer vision scheme: ρ, measured using a set of four probes arranged in a Wenner array; soil strength, measured using a modified mini cone penetrometer; and w, measured using a set of time-domain reflectometry (TDR) probes. A laboratory proof-of-concept was conducted through a series of seven tests with three types of soils: “Good Earth”, “Soft Clay”, and a mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w and CPT measurements can be analyzed collectively to classify soils into “Good Earth” or “Soft Clay” and are feasible as complementary inputs to the computer vision system.
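As a rough illustration of the image-analysis and decision-making steps described above, the sketch below computes GLCM textural parameters with scikit-image and feeds them, together with a Wenner-array resistivity value (ρ = 2πaV/I) and a water-content reading, into a small scikit-learn neural network. The synthetic textures, probe readings, and network size are illustrative assumptions, not the study's data or trained model.

```python
# GLCM texture features + illustrative resistivity/water content -> ANN.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
GLCM_PROPS = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation"]

def glcm_features(gray_u8):
    """GLCM textural parameters for one 8-bit grayscale soil image,
    averaged over the chosen distances and angles."""
    glcm = graycomatrix(gray_u8, distances=[1, 3],
                        angles=[0, np.pi / 4, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, p).mean() for p in GLCM_PROPS]

def wenner_resistivity(v, i, a):
    """Apparent resistivity (ohm.m) of a Wenner four-probe array:
    rho = 2 * pi * a * V / I."""
    return 2 * np.pi * a * v / i

# Synthetic stand-ins for soil photographs: a coarse, high-contrast texture
# ("Good Earth") versus a smooth, uniform one ("Soft Clay").
coarse = rng.integers(0, 256, (64, 64), dtype=np.uint8)
smooth = np.clip(rng.normal(120, 10, (64, 64)), 0, 255).astype(np.uint8)

X, y = [], []
for img, label in [(coarse, 0)] * 10 + [(smooth, 1)] * 10:
    noisy = np.clip(img.astype(int) + rng.integers(-5, 6, img.shape),
                    0, 255).astype(np.uint8)
    # Texture features plus illustrative resistivity and water-content readings.
    X.append(glcm_features(noisy) + [wenner_resistivity(1.2, 0.05, 0.5), 0.30])
    y.append(label)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                  random_state=0)).fit(X, y)
print(clf.predict([X[0], X[-1]]))   # expect [0 1]
```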

Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification

Procedia PDF Downloads 214
283 The Protection of Artificial Intelligence (AI)-Generated Creative Works Through Authorship: A Comparative Analysis Between the UK and Nigerian Copyright Experience to Determine Lessons to Be Learnt from the UK

Authors: Esther Ekundayo

Abstract:

The nature of AI-generated works makes it difficult to identify an author. Some scholars have suggested that all the players involved in a work's creation should be allocated authorship according to their respective contributions: from the programmer who creates and designs the AI, to the investor who finances it, to the user who ultimately prompts the creation of the work in question. Others have suggested that the issue may be resolved by the UK computer-generated works (CGW) provision under Section 9(3) of the Copyright, Designs and Patents Act 1988 (CDPA). However, under both UK and Nigerian copyright law, only human-created works are recognised, usually assessed on the basis of originality: the work must result from its author's creative and intellectual abilities and must not be copied. Such works include the literary, dramatic, musical and artistic works that have recently been at the centre of discussion regarding generative artificial intelligence (generative AI). Unlike Nigerian law, the UK CDPA recognises computer-generated works and vests authorship in the human who made the arrangements necessary for the work's creation. However, making the necessary arrangements was interpreted in Nova Productions Ltd v Mazooma Games Ltd along the lines of the traditional authorship principle, which requires the creator's skill to establish originality. Some commentators argue that computer-generated works complicate this issue and that AI-generated works should enter the public domain, since authorship cannot be allocated to the AI itself. Recognising these issues amid the growing AI trend, the UKIPO, in a public consultation launched in 2022, considered whether computer-generated works should be protected at all and why, and, if not, whether a new right with a different scope and term of protection should be introduced; it concluded that the issue would be revisited, as AI was still in its early stages. Given recent developments in generative AI systems such as ChatGPT, Midjourney, DALL-E and AIVA, which can produce human-like copyright creations, it is important to examine the issues that could alter traditional copyright principles as we know them. Considering that the UK and Nigeria are both common law jurisdictions with slightly differing approaches in this area, this research seeks to answer the following questions by comparative analysis: 1) Who is the author of an AI-generated work? 2) Is the UK's CGW provision worthy of emulation in Nigerian law? 3) Would a sui generis law be capable of protecting AI-generated works and their authors in both jurisdictions? The research further examines possible barriers to the implementation of such a law in Nigeria, including limited technical expertise and a lack of awareness among policymakers.

Keywords: authorship, artificial intelligence (AI), generative AI, computer-generated works, copyright, technology

Procedia PDF Downloads 60
282 Quantifying the Aspect of ‘Imagining’ in the Map of Dialogical Inquiry

Authors: Chua Si Wen Alicia, Marcus Goh Tian Xi, Eunice Gan Ghee Wu, Helen Bound, Lee Liang Ying, Albert Lee

Abstract:

In a world of rapid change, people need a set of skills to help them navigate an ever-changing workscape. These skills, often known as “future-oriented skills,” include learning to learn, critical thinking, understanding multiple perspectives, and knowledge creation. Future-oriented skills are typically assumed to be domain-general, applicable across multiple domains, and can be cultivated through a learning approach called Dialogical Inquiry. Dialogical Inquiry is known for its benefits in making sense of multiple perspectives, encouraging critical thinking, and developing learners' capability to learn. However, it currently exists as a qualitative tool, which makes it hard to track and compare learning processes over time. With these concerns in mind, the present research aimed to develop and validate a quantitative tool for the Map of Dialogical Inquiry, focusing on the Imagining aspect of learning. The Imagining aspect has four dimensions: 1) speculative/look for alternatives, 2) risk-taking/break rules, 3) create/design, and 4) vision/imagine. An exploratory literature review was first conducted to better understand the dimensions of Imagining, including the history of the creation of the Map of Dialogical Inquiry and a review of how “Imagining” has been conceptually defined in social psychology, education, and beyond. We then gathered validated scales measuring the dimensions of Imagining and related concepts such as creativity, divergent thinking, regulatory focus, and instrumental risk, and adapted items from these scales to form the preliminary version of the Imagining Scale. For scale validation, 250 participants were recruited. A Confirmatory Factor Analysis (CFA) sought to establish the dimensionality of the Imagining Scale, using an iterative item-removal procedure; reliability and validity were assessed through Cronbach's alpha, convergent validity, and discriminant validity. While the CFA could not validate the distinction between Imagining's four dimensions, the scale showed high reliability, with a Cronbach's alpha of .96, and its convergent validity was established. The lack of strong discriminant validity may point to overlaps with other components of the Map of Dialogical Inquiry as a measure of learning. A holistic approach to forming the tool, encompassing all eight components, may therefore be preferable.
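As an illustration of the reliability figure reported above, the sketch below computes Cronbach's alpha from simulated Likert-style responses. The item count, the sample of 250, and the response model are assumptions for demonstration, not the study's actual instrument or data.

```python
# Cronbach's alpha over simulated Likert items sharing one latent factor.
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals),
    where items is an (n_respondents, k_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(250, 1))                     # one shared factor
responses = np.clip(np.rint(4 + 1.5 * latent + rng.normal(0, 0.7, (250, 12))),
                    1, 7)                              # 12 items, 7-point scale
print(round(cronbach_alpha(responses), 2))             # high alpha: items correlate
```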

Keywords: learning, education, imagining, pedagogy, dialogical teaching

Procedia PDF Downloads 68
281 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

Over recent years, with rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on across industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited by a particular challenge: numerous regulations require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with higher accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes, because they are often considered a black box and fail to explain why a certain risk score is given to a customer. To bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model was developed at Dun and Bradstreet that blends Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to incorporate domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns that scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables, and feature interactions with high information value for predicting customer risk are identified. These features are then used to introduce the observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of computing the Weight of Evidence (WoE) directly from good and bad data points, the Hybrid Model matches the score distribution generated by a Machine Learning algorithm, which yields an estimate of the WoE for each bin. This capability helps build powerful scorecards from sparse cases, which cannot be achieved with traditional approaches. The proposed Hybrid Model was tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The analysis shows that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models. In some scenarios, the Hybrid Model is almost as predictive as the Machine Learning techniques while remaining as transparent as traditional scorecards. It is therefore concluded that, with the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern over the difficulty of explaining the models for regulatory purposes.
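As a rough sketch of the WoE-matching idea described above (one plausible reading of it, not Dun and Bradstreet's actual implementation), the code below fits a gradient boosting model to simulated good/bad outcomes and, for each bin of a feature, compares the traditional WoE computed from observed counts with a model-implied WoE taken from the bin's mean predicted log-odds. The data, binning scheme, and model settings are all invented for illustration.

```python
# Observed-count WoE per bin vs. a model-implied WoE from predicted log-odds.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=(n, 3))
# Non-linear true risk, so the ML model has a pattern to uncover.
p_bad = 1 / (1 + np.exp(-(0.8 * x[:, 0] ** 2 - 1.5 + 0.5 * x[:, 1])))
y = rng.binomial(1, p_bad)                 # 1 = bad, 0 = good

model = GradientBoostingClassifier(random_state=0).fit(x, y)
p_hat = np.clip(model.predict_proba(x)[:, 1], 1e-6, 1 - 1e-6)

edges = np.quantile(x[:, 0], np.linspace(0, 1, 6))   # 5 bins on one feature
idx = np.clip(np.digitize(x[:, 0], edges[1:-1]), 0, 4)
base_logodds = np.log(y.mean() / (1 - y.mean()))

for b in range(5):
    m = idx == b
    good, bad = (y[m] == 0).sum(), (y[m] == 1).sum()
    # Traditional WoE: log of the bin's share of goods over its share of bads.
    woe_obs = np.log((good / (y == 0).sum()) / (bad / (y == 1).sum()))
    # Model-implied WoE: base-rate log-odds minus mean predicted log-odds.
    woe_ml = base_logodds - np.log(p_hat[m] / (1 - p_hat[m])).mean()
    print(f"bin {b}: observed WoE {woe_obs:+.2f}, ML-implied WoE {woe_ml:+.2f}")
```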

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 112
280 A Sociological Study of the Potential Role of Retired Soldiers in the Post War Development and Reconstruction in Sri Lanka

Authors: Amunupura Kiriwandeiye Gedara, Asintha Saminda Gnanaratne

Abstract:

The security forces can be described as a workforce whose role goes beyond ensuring national security and extends to the country's development process. During their tenure, soldiers follow combatant training courses and are also equipped with a variety of vocational training to satisfy the needs of the army, to contribute to the country's development and reconstruction goals, and to serve society in emergencies. With retirement, however, their relationship with the military is severed, and they become responsible for the future of their own lives. The main purpose of this study was to examine how such professional capabilities can contribute to the development of the country, the current socio-economic status of retired soldiers, and how effectively the vocational skills they mastered in the army are currently applied to developing and rebuilding the country. After a review of the available research literature in this field, a conceptual framework was developed; following a qualitative research methodology, data obtained from case studies and interviews were analyzed using thematic analysis. Factors influencing early retirement include a lack of understanding of benefits, delays in promotions, not being properly valued for one's work, marrying on hasty decisions, and not having enough time for family and household responsibilities. Most soldiers are unaware of the various programs and benefits available to retirees, and they do not hold a satisfactory attitude towards the retirement guidance they receive from the army at the time of retirement. Because many do not understand how to use their vocational capabilities to pursue a successful retirement, the majority are employed in temporary jobs, while some succeed in post-retirement life through effective use of the training they received. Some live on pensions without engaging in any income-generating activity, and those who retire after 12 years of service face severe economic hardship because they receive no pension. Although they have been trained in various fields, they do not use that training to their benefit for lack of proper guidance; and although the government implements support programs, retirees are not clearly aware of them. Barriers to utilizing their training include the absence of a system to identify the professional skills of retired soldiers, limited engagement with civil society affairs, limited exploration of opportunities in the civil and private sectors, and the politicization of services. Given the opportunity, retired soldiers could contribute to the development and reconstruction process. The findings further show that this would bring social, economic, political, and psychological benefits not only to individuals but to the country as a whole. Entrepreneurship training for all retired soldiers, improving officers' understanding, streamlining existing mechanisms and creating new ones, setting up a separate unit for retirees and supporting their adaptation to civil society, private and non-governmental contributions, and training courses can be identified as potential means of improving the current situation.

Keywords: development, reconstruction, retired soldiers, vocational capabilities

Procedia PDF Downloads 102
279 Computational Team Dynamics and Interaction Patterns in New Product Development Teams

Authors: Shankaran Sitarama

Abstract:

New Product Development (NPD) is invariably a team effort and involves effective teamwork. An NPD team has members from different disciplines coming together and working through the different phases, all the way from the conceptual design phase to production and product roll-out. Creativity and innovation are key factors in successful NPD. Team members going through the different phases of NPD interact and work closely, yet challenge each other during the design phases to brainstorm ideas and later converge to work together. These two traits require teams to exercise divergent and convergent thinking simultaneously, and a good balance between them is needed. The team dynamics invariably produce conflicts among team members. While some amount of conflict (ideational conflict) is desirable in NPD teams for group creativity, relational conflicts (discords among members) can be detrimental to teamwork. Team communication truly reflects these tensions and team dynamics. In this research, the team communication (emails) between members of NPD teams is analyzed. The email communication is processed through a semantic analysis algorithm, Latent Semantic Analysis (LSA), to analyze the content of communication, followed by a semantic similarity analysis to arrive at a social network graph that depicts the communication amongst team members based on the content of their communication. The amount of communication (content, not frequency of communication) defines the interaction strength between members, and the social network adjacency matrix is thus obtained for the team. Standard social network analysis techniques based on the Adjacency Matrix (AM) and a Dichotomized Adjacency Matrix (DAM) based on network density yield network graphs and network metrics such as centrality. The social network graphs are then rendered for visual representation using a Metric Multi-Dimensional Scaling (MMDS) algorithm for node placement, and arcs connecting the nodes (representing team members) are drawn. The distance between nodes in the placement represents the tie strength between members: stronger tie strengths render nodes closer. The overall visual representation of the social network graph provides a clear picture of the team's interactions. This research reveals four distinct patterns of team interaction that are clearly identifiable in the visual representation of the social network graph and have clearly defined computational schemes: the Central Member Pattern (CMP), the Subgroup and Aloof Member Pattern (SAP), the Isolate Member Pattern (IMP), and the Pendant Member Pattern (PMP). Each of these patterns has a team dynamics implication in terms of the conflict level in the team. For instance, the isolate member pattern points clearly to a near-breakdown in communication with the member and hence a possibly high conflict level, whereas the subgroup or aloof member pattern points to non-uniform information flow in the team and a moderate conflict level. These pattern classifications of teams were then compared and correlated with the real level of conflict in the teams, as indicated by the team members through elaborate self-evaluation, team reflection, and feedback forms, and the results show a good correlation.
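A minimal sketch of the computational pipeline described above, under toy assumptions: each member's sent-email text is reduced with LSA (TF-IDF plus truncated SVD), cosine similarity supplies the adjacency matrix of tie strengths, a density-driven cutoff yields the dichotomized adjacency matrix, and networkx centrality plus a metric MDS layout approximate the network rendering. The five members and their email snippets are invented; "erin" is constructed to surface as an isolate.

```python
# LSA over email text -> similarity adjacency -> DAM -> centrality + MDS layout.
import numpy as np
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.manifold import MDS

# One aggregated "document" of sent-email text per team member (toy data).
emails = {
    "alice": "design review tolerance stack cad model revision",
    "bob":   "tolerance stack analysis supplier quote cad model",
    "carol": "marketing launch plan pricing promo schedule",
    "dave":  "supplier quote lead time tolerance review",
    "erin":  "vacation plans unrelated note",            # likely isolate
}
names = list(emails)

# LSA: TF-IDF followed by truncated SVD into a low-rank semantic space.
tfidf = TfidfVectorizer().fit_transform(emails.values())
lsa = TruncatedSVD(n_components=3, random_state=0).fit_transform(tfidf)

S = cosine_similarity(lsa)            # semantic similarity between members
A = S.copy()
np.fill_diagonal(A, 0)                # adjacency matrix of tie strengths

# Dichotomized adjacency matrix (DAM): keep ties above a density-driven cutoff.
DAM = (A > np.quantile(A[A > 0], 0.6)).astype(int)

G = nx.relabel_nodes(nx.from_numpy_array(DAM), dict(enumerate(names)))
print("degree centrality:", nx.degree_centrality(G))

# Metric MDS node placement: semantically close members land near each other.
D = 1 - S
np.fill_diagonal(D, 0)
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
for name, (px, py) in zip(names, coords):
    print(f"{name}: ({px:+.2f}, {py:+.2f})")
```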

Keywords: team dynamics, team communication, team interactions, social network analysis (SNA), latent semantic analysis (LSA), new product development (NPD), NPD teams

Procedia PDF Downloads 49
278 Artificial Intelligence in Management Simulators

Authors: Nuno Biga

Abstract:

Artificial Intelligence (AI) allows machines to interpret information and learn from context analysis, giving them the ability to make predictions adjusted to each specific situation. In addition to learning by performing deterministic and probabilistic calculations, the 'artificial brain' also learns through the information and data provided by those who train it, namely its users. The "Assisted-BIGAMES" version of the Accident & Emergency (A&E) simulator introduces the concept of a "Virtual Assistant" (VA) that provides users with useful suggestions, namely to: a) relocate workstations in order to shorten travelled distances and minimize the stress of those involved; b) identify in real time the bottleneck(s) in the operations system so that they can be acted upon quickly; c) identify resources that should be polyvalent so that the system can be more efficient; d) identify the specific processes in which it may be advantageous to establish partnerships with other teams; and e) assess possible solutions based on the suggested KPIs, allowing action monitoring to guide the (re)definition of future strategies. This paper is built on the BIGAMES© simulator and presents the conceptual AI model developed in a pilot project. Each Virtual Assisted BIGAME is a management simulator developed by the author that guides operational and strategic decision making, providing users with useful information in the form of management recommendations that make it possible to predict the likely outcome of alternative strategic management actions. The pilot project incorporates results from 12 editions of the BIGAME A&E that took place between 2017 and 2022 at AESE Business School, based on a compilation of data that allows causal relationships to be established between decisions taken and results obtained. The systemic analysis and interpretation of this information is materialised in the Assisted-BIGAMES through a computer application called the "BIGAMES Virtual Assistant" that players can use during the game. Each participant continually asks what decisions to make during the game in order to win the competition; the role of each team's VA is to guide the players towards more effective decision making by presenting recommendations based on AI methods. It is important to note that the VA's suggestions can be accepted or rejected by the managers of each team, and, as participants gain a better understanding of the game, they will more easily dispense with the VA's recommendations and rely more on their own experience, capability, and knowledge to support their decisions. Preliminary results show that the introduction of the VA speeds up learning of the decision-making process. The facilitator (Serious Game Controller) is responsible for supporting the players with further analysis, and the recommended action may or may not be aligned with the VA's previous recommendations. All the information should be jointly analysed and assessed by each player, who is expected to add "Emotional Intelligence", a component absent from the machine learning process.
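The paper's VA derives its recommendations from machine learning over twelve editions of game data; purely as an illustrative contrast, the sketch below shows how the simplest version of suggestion (b), real-time bottleneck identification, could be expressed as a hand-written rule over workstation utilisation. Station names, the threshold, and the advice text are hypothetical, not part of the BIGAMES application.

```python
# A hand-written baseline for one VA suggestion type: flag the bottleneck
# station from utilisation and queue length, then recommend an action.
from dataclasses import dataclass

@dataclass
class Station:
    name: str
    busy_time: float      # minutes busy in the observation window
    window: float         # length of the observation window (minutes)
    queue_len: int        # patients currently waiting

    @property
    def utilisation(self) -> float:
        return self.busy_time / self.window

def bottleneck_advice(stations, util_threshold=0.85):
    """Return a recommendation for the most loaded station, if any."""
    worst = max(stations, key=lambda s: (s.utilisation, s.queue_len))
    if worst.utilisation < util_threshold:
        return "No bottleneck detected; keep current resource allocation."
    return (f"Bottleneck at '{worst.name}' "
            f"(utilisation {worst.utilisation:.0%}, queue {worst.queue_len}); "
            f"consider reallocating a polyvalent resource to this station.")

stations = [Station("triage", 52, 60, 3),
            Station("x-ray", 58, 60, 7),
            Station("treatment", 40, 60, 1)]
print(bottleneck_advice(stations))   # flags 'x-ray'
```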

Keywords: artificial intelligence (AI), gamification, key performance indicators (KPI), machine learning, management simulators, serious games, virtual assistant

Procedia PDF Downloads 81
277 Popular Modern Devotional Prints: The Construction of Identity between the Visual and Viewer in Public Interaction Spaces

Authors: Muhammad Asghar, Muhammad Ali, Farwah Batool

Abstract:

Despite the general belief in Islam that figural representations should be avoided, a position particularly propagated by the Deobandis, a religious group influenced by Salafi and Wahhabi ideas, public interaction spaces such as shops and offices are decorated with popular, mass-produced, modern devotional prints. This study focuses on popular visual culture and its display in public interaction places such as shops, and discusses how people establish relationships with images. The method adopted was essentially ethnographic: to describe as precisely and completely as possible the phenomena under study, using the language and conceptual categories of the interlocutors themselves. The study is enriched by ethnographic field research conducted from October to December 2015 in the major cities of Punjab and their surroundings, where we explored how looking upon images performs religious identity within public space. It examines patterns of aesthetics and taste in the shops of common people whose sensibilities have not been refined by exposure to narrative or fine arts. Furthermore, we question general beliefs and opinions in the context of popular practices and the ways in which people relate to these prints. The interpretations and analyses presented in this study illuminate how people create meaning through the display of such items of material culture in the immediate settings of their spaces. The study also demonstrates how popular Islam is practiced, transformed and understood through the display of popular representations: figures of piety such as Sufi saints, or their shrines, are important to many believers and thus occupy important places in their shops. The findings are supported by empirical evidence and based on interviews with shopkeepers, owners and office employees. Looking upon these popular modern devotional prints keeps people's reverence for the personages alive. Because of their sacred themes, the prints effect a relationship between the saint and the beholder and serve to symbolize and reinforce belief, becoming powerful loci of emotional attachment. Collectively, such devotional prints satisfy a local taste, helping people establish contact with God through the saints' intercession in order to receive protection and benediction, and help with spiritual, mental and material problems. By putting these facets of belief together, we gain insight into both the subjective and cognitive role that icons of saints play in the lives of believers. Their veneration through ingeniously contrived modern means of production contributes significantly to an understanding of how such imagery promotes a powerful belief in Sufi saints, which in turn indicates how popular Islam is practiced and understood at the grassroots level.

Keywords: ethnographic field research, popular visual culture, protected space, religious identity

Procedia PDF Downloads 207
276 Identification, Synthesis, and Biological Evaluation of the Major Human Metabolite of NLRP3 Inflammasome Inhibitor MCC950

Authors: Manohar Salla, Mark S. Butler, Ruby Pelingon, Geraldine Kaeslin, Daniel E. Croker, Janet C. Reid, Jong Min Baek, Paul V. Bernhardt, Elizabeth M. J. Gillam, Matthew A. Cooper, Avril A. B. Robertson

Abstract:

MCC950 is a potent and selective inhibitor of the NOD-like receptor pyrin domain-containing protein 3 (NLRP3) inflammasome that shows early promise for the treatment of inflammatory diseases. The identification of the major metabolites of a lead molecule is an important step in the drug development process: it provides information about the metabolically labile sites in the molecule, helping medicinal chemists design metabolically stable molecules. To identify the major metabolites of MCC950, the compound was incubated with human liver microsomes; subsequent analysis by (+)- and (−)-QTOF-ESI-MS/MS revealed a major metabolite formed by hydroxylation on the 1,2,3,5,6,7-hexahydro-s-indacene moiety of MCC950. This major metabolite can lose two water molecules, and three possible regioisomers were synthesized. Co-elution of the major metabolite with each of the synthesized compounds using HPLC-ESI-SRM-MS/MS revealed the structure of the metabolite as (±)-N-((1-hydroxy-1,2,3,5,6,7-hexahydro-s-indacen-4-yl)carbamoyl)-4-(2-hydroxypropan-2-yl)furan-2-sulfonamide. Subsequent synthesis of the individual enantiomers and co-elution in HPLC-ESI-SRM-MS/MS on a chiral column showed the metabolite to be R-(+)-N-((1-hydroxy-1,2,3,5,6,7-hexahydro-s-indacen-4-yl)carbamoyl)-4-(2-hydroxypropan-2-yl)furan-2-sulfonamide. To identify the cytochrome P450 enzyme(s) responsible for the formation of the major metabolite, MCC950 was incubated with a panel of cytochrome P450 enzymes; the results indicated that CYP1A2, CYP2A6, CYP2B6, CYP2C9, CYP2C18, CYP2C19, CYP2J2 and CYP3A4 are the most likely contributors. The biological activity of the major metabolite and of the other synthesized regioisomers was investigated by screening for NLRP3 inflammasome inhibitory activity and cytotoxicity. The major metabolite had 170-fold lower inhibitory activity (IC50 1238 nM) than MCC950 (IC50 7.5 nM). Interestingly, one regioisomer showed nanomolar inhibitory activity (IC50 232 nM). However, no evidence of cytotoxicity was observed for any of the synthesized compounds when tested in human embryonic kidney 293 (HEK293) cells and human liver hepatocellular carcinoma G2 (HepG2) cells. These key findings give insight into the SAR of the hexahydroindacene moiety of MCC950 and reveal a metabolic soft spot that could be blocked by chemical modification.

Keywords: Cytochrome P450, inflammasome, MCC950, metabolite, microsome, NLRP3

Procedia PDF Downloads 225
275 Blending Synchronous with Asynchronous Learning Tools: Students’ Experiences and Preferences for Online Learning Environment in a Resource-Constrained Higher Education Situations in Uganda

Authors: Stephen Kyakulumbye, Vivian Kobusingye

Abstract:

Globally, COVID-19 has had adverse effects on all sectors, with particularly debilitating effects on education. After reactive lockdowns, education institutions that could continue teaching and learning had to do so at a distance, mediated by digital technologies. In Uganda, the Ministry of Education issued emergency COVID-19 Online Distance E-learning (ODeL) guidelines. Despite such guidelines, academic institutions in Uganda and similarly resource-constrained developing contexts were caught off guard and ill-prepared to move from face-to-face learning to online distance learning. Most institutions that migrated did so spontaneously, with no deliberate tools, systems, strategies, or software to support active, meaningful, and engaging learning; in our experience, most shifted to Zoom and WhatsApp and taught online in real time rather than blending synchronous and asynchronous tools. This paper reports students' experiences of blending synchronous and asynchronous content-creation and learning tools within a technologically resource-constrained environment in Uganda. The conceptual, case-based findings, drawing on experience at Uganda Christian University (UCU), point to the design of learning activities with certain characteristics: enhancing synchronous learning technologies with asynchronous ones to mitigate system breakdowns, moving from passive to active learning, and strengthening the types of presence (social, cognitive and facilitatory). The paper, both empirical and experiential in nature, draws on the online experiences of third-year Bachelor of Business Administration students who were taught using asynchronous text, audio, and video created with OBS Studio (Open Broadcaster Software) and compressed with HandBrake, both open-source tools chosen to mitigate disk space and bandwidth constraints. Synchronous online engagements with students blended Zoom and BigBlueButton, ensuring that students had an alternative in case one platform failed under heavy real-time traffic. Students generally report that, compared with their previous face-to-face lectures, the pre-recorded lectures on YouTube gave them an opportunity to reflect on content in a self-paced manner, which later enabled them to engage actively during the live Zoom and/or BigBlueButton discussions and presentations. The main recommendation is that lecturers and teachers in resource-constrained environments with limited internet and digital devices should harness this approach to offer students access to learning content in a self-paced manner, enabling reflective, active learning and higher-order thinking.

Keywords: synchronous learning, asynchronous learning, active learning, reflective learning, resource-constrained environment

Procedia PDF Downloads 108
274 A Risk-Based Modeling Approach for Successful Adoption of CAATTs in Audits: An Exploratory Study Applied to Israeli Accountancy Firms

Authors: Alon Cohen, Jeffrey Kantor, Shalom Levy

Abstract:

Technology adoption models are extensively used in the literature to explore the drivers and inhibitors affecting the adoption of Computer Assisted Audit Techniques and Tools (CAATTs). Recent studies have suggested additional factors that may affect technology adoption by CPA firms. However, the adoption of CAATTs by financial auditors differs from the adoption of technologies in other industries, owing to the unique characteristics of the auditing process, expressed in the audit risk elements and the risk-based auditing approach encoded in the auditing standards. Since these audit risk factors are not part of the existing models used to explain technology adoption, those models do not fully correspond to the specific needs and requirements of the auditing domain. The overarching objective of this qualitative research is to fill the gap in the literature that results from using generic technology adoption models. Following a pretest, and based on semi-structured in-depth interviews with 16 Israeli CPA firms of different sizes, this study reveals determinants related to audit risk factors that influence the adoption of CAATTs in audits and proposes a new modeling approach for successful CAATTs adoption. The findings emphasize several important aspects: (1) while large CPA firms have developed their own internal guidelines for assessing the audit risk components, other CPA firms do not follow a formal, validated methodology to evaluate these risks; (2) large firms incorporate a variety of CAATTs, including self-developed advanced tools, whereas small and mid-sized CPA firms incorporate standard CAATTs and still need to better understand what CAATTs can offer and how they can contribute to audit quality; (3) the top management of mid-sized and small CPA firms should be more proactive and better informed about CAATTs' capabilities and contributions to audits; and (4) all CPA firms consider professionalism a major challenge that must be constantly managed to ensure optimal CAATTs operation. The study extends existing knowledge of CAATTs adoption by approaching it from a risk-based auditing perspective. It suggests a new model for CAATTs adoption that incorporates the influencing audit risk factors auditors should examine when considering adoption. Since the model can be used in various audit scenarios and supports strategic, risk-based decisions, it maximizes the considerable potential of CAATTs for audit quality. The results and insights can be useful to CPA firms, internal auditors, CAATTs developers and regulators, and may motivate audit standard-setters to issue updated guidelines on CAATTs adoption in audits.

Keywords: audit risk, CAATTs, financial auditing, information technology, technology adoption models

Procedia PDF Downloads 46